Category Archives: Social psychology

Crime in Slow Motion

The research I’m about to present resonates with a personal experience of mine. Three years ago, I served on a jury that acquitted Cheswick, PA, councilman Jonathan Skedel on a charge of assaulting Joe Ferrero, president of the Cheswick Volunteer Fire Department. (I was stunned when the prosecutor allowed a retired college professor whose field is social psychology to sit on the jury.) The charge resulted from a fistfight between the two men in which Ferrero suffered facial injuries requiring dental surgery. The fight took place in the parking lot of a physical therapy clinic and the entire episode was captured by one of our ubiquitous surveillance cameras.

The video was played several times during the trial, both at real speed and in slow motion. In his summation, the prosecutor paused the video just before Mr. Skedel delivered the punch which injured Mr. Ferrero, and stated that Mr. Skedel could have stopped the fight at that point, but instead decided to assault Mr. Ferrero.

During the jury’s deliberations, I was disturbed to discover that some of my fellow jurors accepted the prosecutor’s definition of the situation. I tried my best to argue—with limited success—that pausing the video was an artificial intervention in what was, in reality, a continuous episode that provided little opportunity for conscious deliberation by either man. The jury eventually acquitted Mr. Skedel, but this was probably due to the majority’s belief that both men had acted equally badly, and it was unfair to single out one of them for prosecution.

Playing crime scene videos in slow motion, or pausing them at critical points, is common practice in jury trials, and the effects of these practices should be investigated. The former practice was the subject of four experiments by Dr. Eugene Caruso of the University of Chicago and his colleagues, who compared the effects of watching a video either in slow motion or at regular speed. Their slow motion played 2.25 times slower than regular speed. The researchers measured participants’ estimates of how much time had passed, and their judgments of the intentionality of the defendant’s behavior.

Three of these experiments used a surveillance video from a Philadelphia trial in which the defendant, John Lewis, was convicted of first degree murder for shooting a man during a convenience store robbery. Here it is (in slow motion).

They measured the intentionality of the act because the real jury had to decide whether the defendant was guilty of first degree murder, which is premeditated, or second degree murder, which is not.

Study 1 showed that participants in the slow motion condition estimated that more time had passed than those in the real time condition, and saw the defendant’s behavior as more intentional. Further analysis showed that their judgments of intention were mediated by their estimates of how much time had passed. The researchers refer to this effect of slow motion on perceived intentionality as the intentionality bias. It occurs because the participants mistakenly infer that the defendant had more time to think before acting than he actually had. Study 2 replicated this finding with a video of a professional football tackle involving violent contact. (You might want to remember this the next time you watch a slow motion replay during a sports event.)
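For readers unfamiliar with mediation analysis, here is a minimal sketch of the logic in Python, using simulated data rather than the authors’ actual numbers. The point is that the effect of the slow motion condition on perceived intent shrinks once the time estimate is statistically controlled.

```python
# Minimal sketch of the mediation logic with simulated data
# (illustrative only; not the authors' analysis or their numbers).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
slow_motion = rng.integers(0, 2, n)                        # 0 = regular speed, 1 = slow motion
time_estimate = 5 + 2 * slow_motion + rng.normal(0, 1, n)  # mediator: perceived elapsed time
intent = 3 + 1.5 * time_estimate + rng.normal(0, 1, n)     # outcome: perceived intentionality

# Total effect of viewing condition on perceived intent
total = sm.OLS(intent, sm.add_constant(slow_motion)).fit()

# Direct effect, controlling for the mediator (time estimate)
X = sm.add_constant(np.column_stack([slow_motion, time_estimate]))
direct = sm.OLS(intent, X).fit()

print(total.params[1])   # sizable condition effect
print(direct.params[1])  # shrinks toward zero once time is controlled: mediation
```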

Mr. Lewis’s lawyers argued on appeal that showing the slow motion video had biased the jurors, causing them to see his actions as more intentional than they actually were. The judges rejected this argument because, they said, the jurors were shown the video at regular speed as well as slow motion, and because the amount of elapsed time was stamped on the video.

The researchers effectively demolished both of these arguments. Study 3 added a “time salient” condition in which participants were reminded that they could see how much time had elapsed from the time stamp on the videotape (which was present in all conditions). This reduced the amount of intentionality bias produced by slow motion, but did not eliminate it. Finally, Study 4 included a condition in which participants were shown the video twice, first at regular speed and again in slow motion. This too reduced the magnitude of the intentionality bias but did not eliminate it.

Summarizing the data, the researchers calculated that, prior to deliberation, juries randomly composed of Study 1 participants would be almost four times as likely to unanimously believe that the killing was premeditated in the slow motion condition.
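The arithmetic behind that “four times” figure is worth seeing, because even a modest bias at the level of the individual juror compounds across twelve independent jurors. Here is a sketch with made-up verdict rates (the actual rates are in the paper):

```python
# How an individual-level bias compounds into a jury-level one.
# The rates below are placeholders, not the study's actual numbers.

def p_unanimous(p_individual: float, jury_size: int = 12) -> float:
    """Probability that all jurors independently reach the same verdict."""
    return p_individual ** jury_size

p_regular = 0.40  # assumed rate of a "premeditated" judgment, regular speed
p_slow = 0.45     # assumed rate in the slow motion condition

ratio = p_unanimous(p_slow) / p_unanimous(p_regular)
print(f"{ratio:.1f}x")  # (0.45 / 0.40) ** 12 is roughly 4
```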

Unfortunately, Dr. Caruso and his colleagues did not include a condition in which the video was paused immediately before the critical action took place. My guess is that such a condition would have further increased the intentionality bias, since it stretches the length of the presentation.

The use of slow motion is often justified on the grounds that it provides a “better” look at an event, and this may be true in some instances. However, when intentionality is at issue, slow motion also produces a biased causal attribution for the event. These studies are probably too late to help Mr. Lewis, who was sentenced to death and is awaiting execution.

You may also be interested in reading:

A Downside of Police Body Cameras

Self-Censorship

Suppose you were completing an online survey and encountered the following warning:

The next section of the survey asks for your honest opinions about some controversial political issues. While we make every attempt to ensure your opinions are kept confidential, it is important to keep in mind that the National Security Agency does monitor the online activities of individual citizens, and these actions are beyond the study’s control.

That statement is absolutely true, but how often do we think about it? And if we do think about it, will it make any difference in our responses to the survey?

Social psychologists have been studying conformity for 80 years. Conformity refers to a change in a person’s attitude or behavior due to real or imagined pressure from another person or group. In the 1940s, using a perceptual task, Solomon Asch demonstrated how many people conform to the incorrect judgments of others. His research was followed by a series of studies documenting many factors that affect the conformity rate; for example, we conform more if we think the other group members are experts, if we like them, etc.

In the Asch conformity experiments, participants were asked which line, A, B or C, is equal to the line at the left.

There are two main reasons for conformity. People use other group members as sources of information and sometimes conclude that the others are right and they are wrong. In other words, they privately accept the group’s judgment. Social psychologists call this informational social influence. In other cases, people privately believe their own judgment is correct, but they conform to gain the approval (or avoid the disapproval) of other group members. In this case, there is public compliance without private acceptance. This is called normative social influence. One way to disentangle these two forces is to do an experiment in which some participants respond publicly, while others think they are responding privately. Conformity will usually be greater in the public condition, and the difference between the two conditions represents normative social influence. However, conformity also will be greater than zero in the private condition, and this represents informational social influence.
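The decomposition just described is simple arithmetic. A sketch, with hypothetical conformity rates:

```python
# Separating the two sources of conformity (rates are hypothetical).
conformity_public = 0.35    # rate of conforming errors when answering publicly
conformity_private = 0.12   # rate when participants think they answer privately

informational = conformity_private                  # private acceptance
normative = conformity_public - conformity_private  # public compliance only
print(f"informational = {informational:.2f}, normative = {normative:.2f}")
```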

The spiral of silence theory of German political scientist Elisabeth Noelle-Neumann describes the consequences of conformity for society. Due to fear of social isolation, people who think they hold minority positions will tend to remain silent in social situations, while people who think they are in the majority will be confident and vocal. Over time, the perceived consensus will drift toward the majority position, as everyone overestimates the percentage of people who hold that view. For example, if atheists are reluctant to reveal their beliefs to survey researchers, the public will underestimate the number of atheists and atheists will feel more marginalized. In extreme cases, there is evidence of pluralistic ignorance, in which the majority privately rejects a position, but mistakenly assumes that others accept it, and therefore goes along with it.

A recent study by Elizabeth Stoycheff of Wayne State University measured normative social influence on the internet. It contrasted a condition in which participants were reminded that they were under surveillance with one in which they were not. The 255 participants were a representative sample of Americans recruited through Survey Systems International in January 2015. They were told they were participating in a study of attitudes toward the ongoing American bombing of Iraq and Syria. After answering questions about their political attitudes, personality traits and media use, half of them were given the above message reminding them of the possibility of government surveillance. The other half didn’t receive the message. Next they encountered a Facebook post which passed along, without comment, a fictitious, neutral ABC News article about American bombing of ISIS targets in Iraq.

The measure of self-censorship was the average of their responses to four questions: their likelihood of sharing the post, liking the post, commenting on the post, or writing a new post on the same topic. They were also asked whether they approved of U. S. airstrikes in Iraq, and to estimate the average American’s attitude toward the airstrikes. The difference between these two measures determined what the author called their climate of opinion—how much they thought their attitude deviated from the majority viewpoint. Finally, participants were asked whether they thought government surveillance of the internet was justified.
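In code, the two composite measures might look like this. This is only a sketch; the variable names and the sign convention for the climate measure are my assumptions, not Stoycheff’s:

```python
import numpy as np

def willingness_to_speak(share, like, comment, new_post):
    """Self-censorship measure: average of the four speaking-out items."""
    return np.mean([share, like, comment, new_post])

def climate_of_opinion(own_attitude, perceived_average_attitude):
    """How far the participant's attitude sits from the perceived majority."""
    return own_attitude - perceived_average_attitude  # sign convention assumed

print(willingness_to_speak(2, 4, 3, 1))  # e.g., items rated on 1-7 scales
print(climate_of_opinion(5, 3))          # positive = thinks majority agrees
```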

There would be evidence of self-censorship if those participants who were reminded of surveillance were more likely to speak out when they thought the climate of opinion was friendly and less likely to speak when they thought it was hostile. Although some secondary sources have implied that this is what Stoycheff found, the actual results are more complicated than that. She divided people into three groups depending on their attitude toward surveillance: Those who thought it was justified, those who merely tolerated it, and those who thought it was unjustified. The results are shown below.

Those who thought surveillance was unjustified showed no evidence of self-censorship. They were slightly less likely to speak when under surveillance, but their likelihood of speaking was unaffected by the climate of opinion. Those who believe that government spying on citizens is unacceptable apparently refuse to be silenced even when they know the opinion climate is hostile to their views and they are reminded that they are under surveillance. Stoycheff reports that these people are also higher in political interest than the other participants.

However, those who tolerated surveillance, and especially those who thought it was justified (“because [they] have nothing to hide”), showed evidence of self-censorship. They were more likely to speak out when they thought they were in the majority, and less likely to speak out when they thought they were in the minority. They conform in two ways. First, they acquiesce to government spying; second, they censor their opinions by telling other people only what they think those people want to hear.

Conformists cheat the group or society by withholding whatever information or good judgment they possess. But as Stoycheff notes, “Democracy thrives on a diversity of ideas, and self-censorship starves it.” Better outcomes will come to a group or society that creates incentives for people to reveal dissenting information. The First Amendment is an important safeguard when conformity is demanded by the government, but freedom of speech may not be sufficient if people decide that they have nothing to say.

You may also be interested in reading:

Chomsky, Greenwald and Snowden on Privacy

Are Terrorists Getting What They Want?

An Embarrassment of Riches

For the first time, not one but two filmmakers have made serious attempts to portray research in social psychology. Experimenter, written and directed by Michael Almereyda, is about Stanley Milgram’s 1961-62 obedience studies, and The Stanford Prison Experiment, written by Tim Talbott and directed by Kyle Patrick Alvarez, recreates Philip Zimbardo’s 1971 prison simulation. Please take a moment and read these two blog posts (Milgram here and Zimbardo here) which I wrote before I saw the films. They contain background information about the studies and the official trailers of the two films.

There are important similarities between these two research programs. Both support situationism, the school of psychology which claims that human behavior is largely determined by its immediate social environment rather than by personal qualities of the behaving individual. Both Milgram and Zimbardo have suggested that their research can help to explain wartime atrocities such as the torture of prisoners and the mass killings of the Holocaust. The dramatic behavioral changes that occurred in these experiments are surprising to most people, and the studies are sometimes summarily rejected for this reason. Both studies were controversial, with critics maintaining that it was unethical to subject unwitting volunteers to the psychological stress that they generated. Neither would be allowed by today’s institutional review boards. They represent, for some of us, a distant golden age when social psychology dealt with more important social questions. (Finally, in an interesting coincidence, Stanley Milgram and Phil Zimbardo both graduated from James Monroe High School in the Bronx in 1950. They were acquaintances, but not close friends.)

There are also similarities between the films themselves. Both are independent productions obviously made on a shoestring budget. They both premiered at the 2015 Sundance Film Festival. To their credit, both filmmakers meticulously re-created the original experiments. Sasha Milgram, Stanley’s widow, was a consultant to Experimenter, and Phil Zimbardo played an active role in The Stanford Prison Experiment‘s production. Both films received favorable reviews but almost no nationwide distribution, and as a result they were financially unsuccessful. The Stanford Prison Experiment grossed $644,000 in its first three months, and Experimenter only $155,000 in two months. It will probably be a long time before we see another movie about one of those boring social psychologists.

In spite of these similarities, the films are quite different. The Stanford Prison Experiment attempts to portray the study as realistically as possible. Experimenter is more abstract, and is ultimately the more interesting of the two. For example, while both films show the researchers observing experimental participants from behind one-way mirrors, Almereyda seems to use mirrors as a metaphor to comment on social psychology as a profession.

The Stanford Prison Experiment covers the time from when the participants were recruited to their debriefing the day after the experiment ended. Most of the film, like the experiment itself, takes place in a small, enclosed space, with lots of in-your-face closeups. Alvarez’s intent seems to have been to induce claustrophobia, so viewers can share the experience of incarceration. Here is a scene in which one of the prisoners is placed in solitary confinement (a closet) for refusing to eat his sausages.

In spite of Zimbardo’s participation in the production, the film contains some none-too-subtle criticisms of him. As portrayed by Billy Crudup, he resembles the devil, a look that Zimbardo himself may have sought. Early in the experiment, he appears to incite the guards to behave more provocatively—a clear violation of research methodology. Although the guards were told that physical aggression was forbidden, he ignores a guard’s act of violence reported to him by his graduate assistants. Although he stops the experiment on the sixth day at the insistence of his girlfriend (later, wife) Christina Maslach, the film leads viewers to conclude that he was negligent in not ending it sooner. The filmmakers fail to dramatize his reasons for not discontinuing the study—his commitments to his graduate students, his department and university, and his funding sources, all of whom were expecting tangible results from all the time and effort that went into the study.

The first half hour of Experimenter is a realistic re-creation of the obedience experiments. Here is one of Milgram’s debriefings in which he first attempts to confront the participant with the ethical implications of his behavior, but then allows him to evade responsibility by showing him that the victim is unharmed.

Milgram is ambivalent toward his participants. His situationism makes him sympathetic to their plight, as illustrated by this quote from his book, Obedience to Authority.

Sitting back in one’s armchair, it is easy to condemn the actions of the obedient subjects. But those who condemn the subjects measure them against the standard of their own ability to formulate high-minded moral prescriptions. That is hardly a fair standard. Many of the subjects, at the level of stated opinion, feel quite as strongly as any of us about the moral requirement of refraining from action against a helpless victim. They, too, in general terms know what ought to be done and can state their values when the occasion arises. This has little, if anything, to do with their actual behavior under the pressure of circumstances.

Much of the rest of Experimenter reminded me of Thornton Wilder’s play, Our Town, in which the narrator speaks directly to the audience and introduces scenes, some of which take place in front of deliberately artificial-looking sets. In Experimenter, Milgram (played by Peter Sarsgaard) is the narrator, and his narration tends to distance the audience from the events being depicted. Here is a scene of Stanley and Sasha (Winona Ryder) sitting in a fake car with a black-and-white photograph as background, reading a New York Times article about the obedience studies.

Some of the narration consists of recognizable paraphrases of statements from Milgram’s book and articles. They emphasize not only his intellectualism but also his sense of ironic detachment from his research. As portrayed by Almereyda, he applies this detachment to his personal life as well. Critics have debated the meaning of the elephant in the room. (I’m serious; there’s a real elephant there, and nobody notices.) Its first appearance seems to signify the Holocaust. The second time it wanders in, Milgram deadpans, “1984 was also the year in which I died.” He died of a heart attack in a hospital emergency room while Sasha filled out medical forms. Almereyda seems to suggest that he may have died because his wife was unwilling to disobey authority.

Experimenter covers the time from the obedience studies until Milgram’s death. This is a problem for Almereyda since Milgram’s greatest accomplishment occurred early in life. He notes that Milgram’s life was anti-climactic, but then so is the film. Much of it concerns other people’s reactions to the obedience studies, beginning with his failure to get tenure at Harvard, and including his frustrating experience with a TV play, The Tenth Level, that sensationalized his research.

Milgram was probably the most creative of all social psychologists. Some of his later contributions, such as the lost-letter technique and the small world problem (“six degrees of separation”), are presented clearly. Not so, his research on urban psychology. Although a couple of his demonstrations are shown, they are presented out of context. Milgram attributed many of the peculiarities of urban life to information overload, a point which could have been clarified by inserting a few sentences from his 1970 paper, “The Experience of Living in Cities.” His research on cyranoids was not included. These unpublished studies ask the question, “If someone secretly controlled what you said, would anyone notice?” Their omission was a missed opportunity for Almereyda, since you could argue that they illustrate what was, or should have been, one of the dominant themes of the film.

I hope my insider criticisms won’t discourage anyone from seeking out these two films. I strongly recommend them both, and I hope my colleagues in social psychology will encourage their students to learn from them.

Recommended reading:

Milgram, Stanley (1974).  Obedience to Authority: An Experimental View.

Blass, Thomas (2004).  The Man Who Shocked the World: The Life and Legacy of Stanley Milgram.

Zimbardo, Philip G. (2007).  The Lucifer Effect: Understanding How Good People Turn Evil.

You may also be interested in reading:

Advance Planning

Social Psychology on Film, Take 2

The Dirty Dozen of 2015

The Invisible Hand

We live in a market economy. We are frequently exposed to reminders of money. Does living under capitalism change our behavior? In a classic paper, social psychologists Margaret Clark and Judson Mills distinguished between communal relationships such as those that exist between family members and friends, and exchange relationships such as those that occur in business. Different norms apply to these two types of relationships. For example, people in an exchange relationship keep track of each other’s inputs into a joint task, while people in a communal relationship keep track of each other’s needs.

Several studies suggest that leading participants to think about money changes their behavior in predictable ways. These studies use cognitive priming to create subtle reminders of money. For example, participants may be asked to unscramble words into meaningful sentences. In one condition, all the sentences just happen to be about money, while in another condition they are about something else. In general, thinking about money increases achievement on difficult tasks, but decreases altruism or helping behavior.

In the latest contribution to this research, Agata Gasiorowska and her colleagues report four experiments done with Polish children aged 3 to 6. The priming manipulation was a sorting task. The children in the money condition were asked to sort 25 coins into three different denominations. Those in the control group sorted nonmonetary objects, such as buttons or hard candies.

Two of the experiments involved motivation and performance. In one of them, children who had handled money were more likely to complete a difficult labyrinth puzzle than those in the control group. In the second, those in the money condition spent a longer time working at what was essentially an insoluble task, a jigsaw puzzle intended for older children.

The other two studies involved willingness to help another child. In the third experiment, children were given an opportunity to help by bringing red crayons from across the room to another child. Those who had sorted money brought fewer crayons than those in the control group. The final study measured self-interested behavior as well as altruism. As a reward for being in the study, the children were allowed to choose up to six stickers for themselves. Those who had handled money took more stickers. Then the children were asked if they would donate some of their stickers to another child who had not participated in the study. Those in the money condition donated fewer of their stickers. The results are shown below.

For each percentage of stickers donated, the graph shows the percentage of children in that condition who donated at least that percentage of their stickers. It should be noted that sorting candies put the children in a better mood than sorting buttons or coins, but mood was unrelated to helping in this experiment.
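That kind of graph is a complementary cumulative distribution, and it is easy to reconstruct from raw donation counts. A sketch with invented data:

```python
# Complementary cumulative distribution of donations (data invented).
import numpy as np

donations = np.array([0, 1, 1, 2, 3, 0, 2, 4, 1, 0])  # stickers donated, out of 6
for k in range(7):
    share = (donations >= k).mean()
    print(f"donated at least {k}: {share:.0%}")
```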

These experiments show that thinking about money affects the behavior of 3 to 6-year-old children in ways that are similar to its effects on adults. These kids had only a limited understanding of money. For example, they were unable to identify, at better than chance, which coin would buy the most candy. Nevertheless, they were aware enough of the function of money for it to change their behavior.

One of the authors of the study, Kathleen Vohs, proposes that the unifying thread in all these money studies is that thinking about money causes people to place a greater value on self-sufficiency. In another of her studies, adults primed with thoughts of money were more likely to choose to work alone rather than with another participant. If it’s good to be self-sufficient, this could explain why people in need are seen as less deserving of help.

Sociologist Robert Putnam, in his book Bowling Alone, presents data suggesting that over the last 50 years, Americans have engaged in fewer group and community activities and more solitary ones, with the result that we are less cooperative and trusting. Ironically, Putnam uses a market metaphor to summarize his theory. He says the disintegration of communal relationships reduces social capital, giving society fewer resources that can be used for the public good in times of need.

Michael Sandel, a political philosopher, argues that we have gone from having a market economy to being a market society. Public goods are increasingly privatized and virtually everything is for sale if the price is right. He summarizes his critique in this TED talk.

Since most of us have never lived under any other economic system, we are largely unaware of how capitalism affects our behavior. However, some of us spend more time handling and thinking about money than others. In one study, college students majoring in economics behaved less cooperatively in a bargaining game than students majoring in other fields. Studies consistently show that poor people are more generous and helpful than rich people.

These studies have something to appeal to people of all political persuasions. Conservatives will no doubt be pleased to learn that thinking about money encourages hard work and achievement. On the other hand, the finding that the market society replaces helpfulness with selfishness confirms an important part of the liberal critique of capitalism.

You may also be interested in reading:

More Bad News For Religion

On Obama’s Speech

Power and Corruption, Part 1

Don’t Worry, Be Happy?

One of the core beliefs of positive psychology, also known as the psychology of happiness, has been seriously challenged. A major reason for the popularity of positive psychology is its proponents’ claim that happiness leads to improved health and greater longevity. Not so, according to a new study by a research team headed by Dr. Bette Liu of Oxford University, published in the medical journal The Lancet.

The study shows the difficulty of drawing causal inferences from correlational data. The majority of previous studies of the happiness-health hypothesis are correlational: researchers measured both the participants’ happiness and their health at the same time and found a positive relationship. However, a correlation between A and B could mean that A causes B, B causes A, or both A and B are jointly caused by some third variable, C. In other words, the previous studies have at least two problems.

  1. Directionality. Rather than happiness causing good health, it could be that good health is the reason people are happy. Is happiness a cause of good health, or an effect?
  2. Third variables. There are an infinite number of other variables which might be correlated with happiness and health and might be causing both. An obvious possibility is social class: poverty could be making people unhappy and also making it difficult for them to lead healthy lives or obtain adequate health care. (A toy simulation of this problem appears below.)
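To see how a third variable can manufacture a correlation all by itself, here is a toy simulation in which poverty drives both unhappiness and poor health, while happiness and health have no direct causal connection at all:

```python
# Toy simulation of the third-variable problem.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
poverty = rng.normal(0, 1, n)                      # confounder C
happiness = -0.6 * poverty + rng.normal(0, 1, n)   # A depends only on C
health = -0.6 * poverty + rng.normal(0, 1, n)      # B depends only on C

# Happiness and health correlate (about .26 here) despite no causal link.
print(np.corrcoef(happiness, health)[0, 1])
```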

The Liu data come from the Million Women Study conducted in the United Kingdom. Participants were recruited between 1996 and 2001 and were tested three years after their recruitment. At this baseline measurement session, they were asked whether they suffered from any of a list of common health problems, and to rate their health as “excellent,” “good,” “fair” or “poor.” Then they were asked, “How often do you feel happy?” The alternatives were “most of the time,” “usually,” “sometimes,” and “rarely/never.” Measures were also taken of how often they felt stressed, relaxed, and in control.

Data were also collected for 13 demographic and lifestyle variables: age, region, area deprivation (a measure of the wealth of their census area), education, whether living with a partner, number of children, body mass index, exercise, smoking, alcohol consumption, hours of sleep, religiosity, and participation in other community groups. In 2012, it was determined whether each woman had died and, if so, the cause. The average duration of the study, from testing to outcome, was 9.6 years. Not all women completed the baseline measurements, and those who suffered from serious health problems at that time were eliminated, leaving a total of about 720,000 participants.

Participants were combined into three groups: happy most of the time, usually happy, and unhappy. About 4% of the women had died by 2012. Controlling only for age, the researchers found a strong relationship between happiness and all-cause mortality. This replicates previous studies. However, poor health at baseline was strongly related to unhappiness. When self-rated health was statistically controlled, the relationship between happiness and mortality was no longer statistically reliable. When all 13 demographic and lifestyle variables—some of which were significantly related to mortality—were controlled, the relationship almost completely disappeared. Happiness was also unrelated to heart disease mortality and cancer mortality once baseline health was controlled.

When they controlled for baseline health, the same results were obtained substituting the measures of feeling stressed, relaxed and in control for the happiness measure. These four analyses are illustrated by these graphs. Notice the almost flat lines hovering around RR = 1. A rate ratio (RR) of 1 indicates that a person in this group was no more or less likely to die than anyone else in the sample.

[Figure: rate ratio (RR) curves for the happiness, stress, relaxation, and control analyses]
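If you are not familiar with rate ratios, here is a quick sketch of the arithmetic, with invented counts:

```python
# Rate ratio: a group's death rate relative to the comparison rate.
deaths_unhappy, years_unhappy = 120, 30_000   # invented counts
deaths_happy, years_happy = 400, 100_000      # invented counts

rr = (deaths_unhappy / years_unhappy) / (deaths_happy / years_happy)
print(f"RR = {rr:.2f}")  # 1.00: the unhappy group dies at the same rate
```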

In summary, the results are consistent with the reverse causality hypothesis: Good health causes happiness, rather than the reverse. As one of the authors, Dr. Richard Peto, said, “The claim that [unhappiness] is an important cause of mortality is just nonsense. . . . Many still believe that stress or unhappiness can directly cause disease, but they are confusing cause and effect.”

Negative results are usually not considered a sufficient reason to reject a hypothesis, because many things can go wrong that can cause a study to fail even when the hypothesis is true. However, the Million Women Study must be taken seriously due to its large sample size and long duration. Certainly it would be better if the study had included men and citizens of other countries. However, there is no obvious theoretical reason to think that the happiness-health relationship holds only for men and not women, and previous studies of the effect size for men and women are inconsistent.

Although the authors statistically controlled 13 demographic and lifestyle variables, it is impossible to control all possible confounding variables. With these negative results, a critic would have to argue that some third variable is masking the relationship between happiness and health. That is, there would have to be a third variable that is positively related to happiness but that increases mortality and therefore counteracts the expected positive effect of happiness on health.

A comment published along with the study criticized their simple, one-item measure of happiness. However, many previous studies have used the same or a similar measure. More importantly, when baseline health was not controlled, the authors replicated the results of previous studies, which suggests that their measure is adequate. Nevertheless, I anticipate that some positive psychologists will speak philosophically about some deeper meaning of happiness, insisting that whatever they mean by happiness is still a cause of good health, despite these negative results.

There are probably hundreds of thousands of professionals—not just clinical psychologists, but pop psychology practitioners from A (art therapists) to Z (well, yoga instructors)—who promise their clients they will be happier as a result of their treatment, and who implicitly or explicitly promise better health as an indirect result. They will either ignore this study or scrutinize it carefully for flaws. It should be fun to watch.

You may also be interested in reading:

On Obama’s Speech

Deep Background

Theories of causal attribution in social psychology distinguish between proximal and distal causes of events. Proximal causes are close to the event in time and space while distal causes are further removed from it. Proximal causes usually include the intentional acts of persons as well as immediate situational influences on them. Distal causes include the institutions, social structure and physical environment within which behavior is embedded. Distal and proximal causes combine to form a causal chain in which the more distal causes lead to the more proximal ones.

Distal causes are sometimes called ultimate causes. This reflects more than simply a judgment that they are important. It implies that distal causes are more permanent, while proximal causes are to some extent substitutable for one another. For example, a person who is under chronic economic stress due to poverty (a distal cause) may respond aggressively to a variety of frustrating situations (proximal causes). Eliminating some of these frustrations may do little to reduce overall aggression.

Research on causal attribution suggests that proximal causes are more easily recognized and rated by participants as more important than distal causes, and that voluntary acts of individuals are regarded as the most causally significant. This preference for intentional acts follows from the fundamental attribution error—the tendency to give greater weight to personal causes of behavior and to minimize the importance of situational or environmental causes.

Given this research, it is not surprising that the public blames terrorist acts primarily on their perpetrators and places a high priority on detecting and eliminating potential terrorists. However, if distal causes of terrorism are not addressed, we face the possibility of an inexhaustible supply of terrorists, as new recruits volunteer to take the places of those who are captured or killed. Fortunately, researchers are exploring some of the more distal causes of terrorism.

Politics, or Why They Hate Us

Robert Pape, a political scientist at the University of Chicago and author of Dying to Win: The Strategic Logic of Suicide Terrorism, studied all 4,600 suicide terrorist attacks that have occurred in the world since 1980. His information comes from interviews with relatives and colleagues of the perpetrators, news reports, and the databases of other groups that study terrorism. He reports that almost all terrorist attacks are part of a campaign directed by a militant secular organization whose goal is to compel other countries to withdraw their military forces from territory they regard as their homeland.

What 95% of all suicide attacks have in common . . . is not religion, but a specific strategic motivation to respond to a military intervention, often specifically a military occupation, of territory that the terrorists view as their homeland or prize greatly. From Lebanon and the West Bank in the 80s and 90s, to Iraq and Afghanistan, and up through the Paris suicide attacks we’ve just experienced in the last days, military intervention—and specifically when the military intervention is occupying territory—that’s what prompts suicide terrorism more than anything else.

Pape rules out religion as the ultimate cause since many suicide terrorists, such as those from the Tamil Tigers in Sri Lanka, were not religious. The leadership of ISIS consists of former Iraqi military leaders who served under Saddam Hussein. However, Islam is not irrelevant. Terrorist groups such as al Qaeda and ISIS use Islam as a recruitment tool and as a way to get recruits to overcome their fear of death.

The arguments that terrorist attacks such as the Paris massacre are intended to prompt France to increase its bombing of Syria, or to persuade the French people to persecute Muslims in France (thereby recruiting more local terrorists), are not inconsistent with Pape’s thesis. He refers to these as short-term goals which are intended to increase the costs of French intervention in the Middle East, and ultimately to persuade foreign governments to withdraw from the Persian Gulf.

Global Warming

Some climate scientists have suggested that there is a causal chain that runs from climate change, through drought, to migration from rural or urban areas, to political instability in the Middle East, particularly in Syria. A study published in March by Colin Kelley of the University of California at Santa Barbara and his colleagues addresses the first link in this causal chain. The authors argue that, although droughts are common in the Middle East, the drought that occurred in 2007-2010 was unprecedented in its severity in recent history. This drought matched computer simulations of the effects of increased greenhouse gas emissions on the region. The simulations predicted both hotter temperatures and a weakening of westerly winds bringing moisture from the Mediterranean, both of which occurred.

The method used in the study was to generate computer simulations of climate in the region both with and without climate change, and compare them to what actually happened. They conclude that climate change made the drought “two to three times more likely” than natural variability alone. While I can follow their argument, I don’t have the knowledge to evaluate it.

This thesis is similar to the arguments of some U. S. military analysts that climate change acts as a “threat multiplier” that increases instability in various regions of the world. However, Kelley sees climate change as an ultimate cause of the Syrian War, rather than just a catalyst. His paper is part of a larger scholarly literature linking global warming to interpersonal and political conflict.

Inequality

Frenchman Thomas Piketty, author of the best-selling Capital in the Twenty-First Century, proposed in a blog post published by Le Monde that income inequality is a major cause of Middle East terrorism. Since the post is in French, I am relying on an article by Jim Tankersley of the Washington Post. He describes Piketty’s theory as “controversial,” since it explicitly blames the U. S. and Europe for their victimization by terrorists.

By Middle East, Piketty means the area between Egypt and Iran, which of course includes Syria. This region contains six corrupt oil monarchies—Saudi Arabia, Kuwait, Bahrain, Oman, Qatar and the United Arab Emirates—all of which survive due to military support from the U. S. and Europe. Within those countries, a small minority controls most of the wealth, while the majority are kept in “semi-slavery.” Collectively, these monarchies control almost 60% of the wealth of the region but contain only 16% of its population. The remaining Arab countries—Iran, Iraq, Syria, Jordan, Lebanon and Yemen—are much poorer. These countries, described by Piketty as a “powder keg” of terrorism, have a history of political instability.

In a 2014 paper, Alvaredo and Piketty attempted to estimate income inequality in the Middle East, a task made more difficult by the poor quality of the region’s economic statistics. They estimated (“under reasonable assumptions”) that the top 10% controls over 60% of income in the region and the top 1% controls over 25%. This estimate is compared below to the income shares of the top 1% in five other countries for which more accurate statistics are available:

  • Sweden: 8.67%
  • France: 8.94%
  • Great Britain: 12.4%
  • Germany: 13.13%
  • United States: 22.83%
  • Middle East: 26.2%

Yes, folks, income inequality in the Middle East is even greater than in the United States! (Who would have thought, 35 years ago, that we would become the comparison group against which a dysfunctional level of inequality is measured?)

As you’ve no doubt noticed, all three of these analyses ultimately blame Middle Eastern terrorism and the war in Syria primarily on the United States and Europe. Removing or mitigating these three distal causes requires that we decide to leave the fossil fuels of the Middle East in the ground, withdraw our military forces from the region, and promote education and social development for the majority of the people in the Middle East.

You may also be interested in reading:

The Muslim Clock Strikes

More Bad News for Religion

In May, I reported on the Pew Research Center’s 2014 Religious Landscape Study, a survey of a quota sample of 35,000 adults, with a margin of error of plus or minus .6%. The first installment of their results concentrated on the size and demographic characteristics of various religious groups. The big news was that Americans with no religious affiliation (the “nones”) increased from 16% in 2007 to 23% in 2014, while those calling themselves Christians dropped from 78% to 71%. The biggest increase in the percentage of nones occurred among Millennials—people born after 1980.

Pew has published a second installment of results from the survey, focusing on religious beliefs and practices. The share of Americans who say they believe in God has declined from 92% in 2007 to 89% in 2014, while those who claim to be “absolutely certain” that God exists dropped from 71% to 63%. These declines are most pronounced among younger adults. This chart breaks down a number of superstitious beliefs and practices by age. All of them have declined since 2007.

[Chart: in many ways, younger Americans are less religious than older Americans]

Pew also looked at the political beliefs of religious and nonreligious participants. Acceptance of homosexuality has increased dramatically among both religious and nonreligious participants, while support for abortion is relatively unchanged. For the first time, the nones are now the largest single group (28%) among Democrats. Evangelical Protestants are the largest group (38%) of Republicans. Not surprisingly given their political affiliations, religious people are more likely than nones to oppose government aid to the poor, to oppose stricter environmental regulations, and to see increased immigration as a change for the worse. Belief in evolution differs sharply between affiliated (55%) and nonaffiliated people (82%), and is nearly universal among atheists (95%) and agnostics (96%).

By and large, Americans see religion as a force for good in the society. Eighty-nine percent say that churches “bring people together and strengthen community bonds,” 87% say they “play an important role in helping the poor and needy,” and 75% say they “protect and strengthen morality in society.” However, some of these claims are becoming harder to defend in light of recent research. There is strong evidence that American religious people are higher in racism than nonreligious Americans. A recent study looks at some related moral behaviors.

Altruism refers to behavior that benefits others at some cost to oneself. Although there are studies that suggest that religious people report more charitable giving than nonreligious people, these self-reports are suspect since religious people are more likely to engage in socially desirable responding, a tendency to over-report one’s good behavior and under-report the bad. On the other hand, the research is fairly clear that religious people are more punitive in their evaluations of bad behavior than nonreligious people. For example, religiously affiliated whites are more likely to support the death penalty than unaffiliated whites. (Large majorities of black and Hispanic Americans oppose the death penalty regardless of religious affiliation.)

Dr. Jean Decety of the University of Chicago and his colleagues studied moral behavior among a broad and diverse sample of 1,170 children aged 5-12 in six countries (Canada, China, Jordan, South Africa, Turkey, and the US). Children were assigned to the religious affiliation reported by their parents. They were 24% Christian, 43% Muslim, and 28% nonreligious. Other religions were not reported often enough to include in the statistical analysis.

Altruism was measured using the Dictator Game, in which children were allowed to divide an attractive resource—in this case, ten stickers—between themselves and a peer. The measure is the number of stickers shared with others. Religiously affiliated children were less generous than nonaffiliated children, with no significant difference in generosity between Christians and Muslims. Importantly, the negative association between religion and altruism was greater among the older children (aged 8-12), suggesting that as children come to understand their family’s beliefs better, the differences between those from religious and nonreligious families increase.


To measure punitiveness, the authors had children watch videos depicting mild interpersonal harms and asked them to evaluate the “meanness” of the behavior and to suggest a level of punishment for the perpetrator. Religious children saw these behaviors as more “mean” and suggested greater punishment than nonreligious children. Muslim children evaluated the behaviors more negatively than Christian children.

The authors also asked the parents of these children to rate them on empathy and sensitivity to justice. In contrast to the actual behavior of the children, the religious parents rated their children as higher in empathy than the nonreligious parents. They also rated their children as more sensitive to justice. This could be another instance of socially desirable responding by the religious parents.

If these results, as well as the differences in prejudice and discrimination, were more widely known, people might be less likely to see religion as a force for good in society and less likely to favor exempting religious institutions from taxation.

You may also be interested in reading:

And Then There Were Nones

Power and Corruption, Part 1

Making a Mockery of the Batson Rule

Even when a jury pool is selected from the community by a reasonably random method, prospective jurors are questioned in a process known as voir dire, during which both the prosecution and the defense can object to jurors. A potential juror can be eliminated either by a challenge for cause, such as being acquainted with the defendant, or by a limited number of peremptory challenges, in which the attorney does not have to specify a reason. The number of peremptory challenges permitted varies among the states.

Historically, peremptory challenges have been used by prosecutors to create all-white juries in cases involving black defendants. However, in Batson v. Kentucky (1986), the Supreme Court ruled that using peremptory challenges to exclude jurors based solely on their race violates the equal protection clause of the Fourteenth Amendment. The Batson rule states that whenever the prosecution or defense excludes a minority group member, it must specify a race-neutral reason. However, there is widespread consensus that this procedure has failed to eliminate racial discrimination, since judges accept a wide variety of “race-neutral” excuses for disqualifying black members of the jury pool.

Here are excerpts from a 1996 (post-Batson) training video instructing young prosecutors on how to select a jury. This blatant endorsement of prosecutorial misconduct was produced by former Philadelphia District Attorney Ron Castille, who went on to become Chief Justice of the Pennsylvania Supreme Court.

Racial discrimination in jury selection is arguably more important today than in 1986, given the large differences in attitudes between whites and African-Americans toward the police and the criminal justice system. For example, in a July 2015 New York Times poll, 77% of black respondents, but only 44% of whites, thought that the criminal justice system is biased against blacks. Clearly, black and white jurors approach criminal cases from very different perspectives. Laboratory research suggests that racially diverse juries exchange a wider range of information and make fewer errors than all-white juries.

Yesterday, the Supremes heard oral arguments in Foster v. Chatman, a blatant case of racial discrimination in jury selection. Timothy Foster, a black man, was convicted and sentenced to death for killing a white woman in 1987 by an all-white jury in Rome, Georgia. All four black potential jurors were disqualified by the prosecution using peremptory challenges. In notes that recently surfaced, it was found that prosecutors circled the names of the prospective black jurors in green and labeled them B#1, B#2, etc. They were ranked in order of acceptability “in case it comes down to having to pick one of the black jurors.” It did not come to that. The judge accepted a variety of “race-neutral” reasons, including rejecting one 34-year-old black woman for being too close in age to the defendant, who was 19, even though they did not challenge eight white potential jurors aged 35 or younger (including one man who was 21). In the trial itself, the prosecutor urged the jury to sentence Foster to death in order to send a message to “deter other people out there in the projects.”

There is abundant evidence from field studies conducted after the Batson decision showing that racial discrimination in jury selection still exists. For example, Grosso and O’Brien examined 173 capital cases in North Carolina between 1987 and 2010, involving over 7400 potential jurors. Prosecutors struck 52.6% of potential black jurors and 25.7% of potential white jurors. In cases with a black defendant, the strike rates were 60% for blacks and 21.3% for whites. A black prospective juror was 2.48 times more likely to be excluded than a white even after statistically controlling for the most common race-neutral reasons given for challenging a potential juror.
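The raw disparity can be computed directly from the strike rates quoted above; Grosso and O’Brien’s 2.48 figure comes from a model that additionally controls for the common race-neutral reasons:

```python
# Raw ratio of strike rates from the figures quoted above.
strike_rate_black = 0.526
strike_rate_white = 0.257
print(f"{strike_rate_black / strike_rate_white:.2f}x")  # about 2.05 unadjusted
```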

A laboratory experiment by Norton and Sommers (2007) illustrates the flexibility with which people can rationalize racially discriminatory decisions. Participants (college students, law students and attorneys) were asked to assume the role of prosecutor in a criminal case with a black defendant. They were told they had one peremptory challenge left, and to choose between two prospective jurors—a journalist who had investigated police misconduct and an advertising executive who expressed skepticism about statistical evidence to be used by the prosecution. For half the participants, the journalist was said to be African-American and the advertiser white, while for the remainder of the participants the races were reversed. The black juror candidate was challenged 63% of the time. When participants were asked why they struck the person they did, only 7% mentioned race, while 96% mentioned either the journalist’s investigation of police misconduct or the ad man’s skepticism about statistics. More importantly, both justifications were more likely to be cited as critical when they were associated with the black prospective juror than with the white prospective juror.

Today’s news reports suggest that even the more conservative Supremes were sympathetic to the defense’s arguments in Foster v. Chatman. However, the Court could decide the case very narrowly by simply overturning Foster’s conviction. It would be more interesting if their decision were to establish some new principle to minimize the abuse of peremptory challenges. It’s unlikely that these nine justices will establish a minority “quota” against which the fairness of juries can be assessed. However, an argument could be made for severely limiting peremptory challenges, or dispensing with them altogether, on the grounds that they merely provide opportunities for attorneys to express their conscious or implicit biases. If they have a legitimate reason for challenging a juror, let them present it to the judge for evaluation. Otherwise, let the juror be seated.

A beneficial side effect of eliminating peremptory challenges would be to put out of business those expensive “scientific” jury consultants who help lawyers choose a “friendly” jury. To the extent that they are actually helpful, this is yet another advantage possessed by wealthy defendants.

If the Supremes fail to eliminate peremptory challenges, then this case has implications for the fairness of the death penalty.

You may also be interested in reading:

Outrage

A Theory in Search of Evidence

Outrage

I run across a new study documenting discrimination against a minority group—usually African-Americans—almost every day. They are so commonplace that I seldom write about them, even though I know the cumulative effect of discrimination is devastating to its victims. However, since most of these studies are not controlled experiments, critics can usually offer alternative explanations that blame the victim. For example, if we find that black kids are expelled from schools at a much higher rate than white kids, a critic can always charge that they misbehave more often or that their misbehavior is more serious. While it’s sometimes possible to collect additional data that makes these explanations unlikely, they are hard to refute definitively.

I don’t think that reservation applies to a recent study by Dr. Monika Goyal and her colleagues in the Journal of the American Medical Association. It involves willingness to prescribe pain medication to black and white children suffering from appendicitis.

The data come from the National Hospital Ambulatory Medical Care Survey, a national probability survey of visits to hospital emergency departments between 2003 and 2010. The unwitting participants were about 940,000 children (mean age = 13.5) admitted with a diagnosis of appendicitis. The children were categorized as white, black or other. The main outcome measure was whether they received analgesic medication for their pain, and if so whether it was an opiate—generally acknowledged to be more effective—or a nonopiate, such as ibuprofen or acetaminophen. The effects of several control variables were statistically removed before analyzing the data: age, gender, ethnicity, triage acuity level, insurance status, geographic region, type of emergency department, year, and (most importantly) pain score on the 10-point Stanford Pain Scale.
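Statistical control of this kind is typically done with a regression model. Here is a minimal sketch of the idea using a logistic regression on synthetic data; the variable names are mine, and this is not the authors’ actual model:

```python
# Sketch of covariate adjustment: regress opioid receipt on race while
# holding controls constant (synthetic data; variable names assumed).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1_000
df = pd.DataFrame({
    "opioid": rng.integers(0, 2, n),       # received an opioid? (invented)
    "black": rng.integers(0, 2, n),        # race indicator (invented)
    "pain_score": rng.integers(0, 11, n),  # 10-point pain scale
    "age": rng.uniform(4, 18, n),
})

model = smf.logit("opioid ~ black + pain_score + age", data=df).fit(disp=0)
print(np.exp(model.params["black"]))  # adjusted odds ratio for race (~1 here)
```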

Overall, 56.8% of the children received some type of pain medication and 41.3% received at least one opiate. These percentages are lower than is medically recommended. Not surprisingly, the higher the pain score, the greater the likelihood of receiving an analgesic.

[Table: analgesia by race and pain score]

The table shows the distribution of analgesia by race, holding pain level constant. The black-white difference in receiving any analgesia was not statistically significant; however, whites were more likely to receive a more effective opioid analgesic than blacks reporting the same pain level. (In case you were wondering, the analysis of ethnicity showed no significant discrimination against Hispanics.)

[Figure: analgesia by race at moderate and severe pain levels]

The data were further analyzed by looking at different levels of pain. Severe pain was defined as between “7” and “10” on the pain scale, while moderate pain was between “4” and “6.” Black and white children in severe pain were equally likely to get some pain medication, but whites were more likely to get opiates. Greater discrimination occurred among children with moderate pain. Black children were not only less likely to get opiates, they were also less likely to get anything at all. In other words, there are higher thresholds for both treating black children for pain, and for treating their pain with opiates.

The authors point out that previous ER studies have found that blacks of all ages and with various medical conditions were less likely to receive analgesics, but these studies can be explained away with victim-blaming rationalizations. For example, it was proposed that, since blacks were less likely to have health insurance, they used the emergency room for less serious conditions. However, all of these children had the same illness, and its severity was held constant. It has also been proposed that doctors are less willing to trust black patients with opiates due to stereotypes about drug misuse. However, the current study did not involve prescriptions, and none of these children were sent home. Presumably, they all received appendectomies as soon as possible.

Since this study was published, it has been suggested that the findings reflect hospital policies rather than decisions by individual doctors. Maybe inner city hospitals that serve a higher percentage of black patients discourage their doctors from prescribing analgesics, especially opiates. It probably doesn’t matter to these kids whether they are denied pain relief by a person with a stethoscope or a person in a suit, although these two hypotheses do suggest different remedies.

In trying to understand this finding, I find myself drawn to some of the most depressing studies in all of social psychology—those involving dehumanization. Dehumanization refers to perceiving and treating another person as non-human—for example, as if he or she were an animal. Dehumanization is sometimes invoked as an explanation for extreme abuses, such as enslavement, torture and genocide. Ordinarily, when you see children in pain, you want to relieve their suffering if possible. Failure to do so suggests dehumanization of the victim. Studies show what appears to be dehumanization of black children (relative to white children) as early as age 10.

Social psychologist Jennifer Eberhardt and her colleagues have done studies suggesting that, among white Americans, there is an unconscious association between black people and apes (the “Negro-ape metaphor”). To understand her studies, you must know about subliminal priming. A subliminal stimulus is an image presented very rapidly, below the threshold of awareness. Studies show that subliminal primes improve the recognition of objects in the same or similar categories. Eberhardt has found that subliminally priming participants with images of black people improves their ability to recognize pictures of apes, and vice versa.

In one of her studies, participants were subliminally primed with images of either apes or large cats (lions, tigers, etc.) and shown a video of a policeman severely beating a suspect who they were informed was either black or white. Participants primed with ape images were more likely to see the beating of the black man as justified. This did not occur when they were primed with images of big cats, or when the suspect was said to be white.

Eberhardt did a content analysis of news articles showing that reporters were more likely to use ape metaphors when referring to convicted black murderers than convicted white murderers. Furthermore, those killers described as apelike were more likely to be executed by the state.

I suspect that dehumanization is one cause of police officers’ greater willingness to shoot and kill black suspects than white suspects in similar situations. Philip Atiba Goff and his colleagues were able to measure the strength of the black-ape association among police officers from a large urban department. The researchers had anonymous access to the officers’ personnel files, including their previous uses of force. The more strongly the officers associated black people with apes, the more frequently they had used force against black children, relative to children of other races, during their careers.

The destroyers are merely men enforcing the whims of our country, correctly interpreting its heritage and legacy. But all our phrasing—race relations, racial chasm, racial justice, racial profiling, white privilege, even white supremacy—serves to obscure that racism is a visceral experience, that it dislodges brains, blocks airways, rips muscle, extracts organs, cracks bones, breaks teeth. You must never look away from this. You must always remember that the sociology, the history, the economics, the graphs, the charts, the regressions all land, with great violence, on the body.

Ta-Nehisi Coates, Between the World and Me (p. 10)

[Image: anonymous e-mail circulated among Florida Republicans]

It might also be a good idea to take a closer look at those political cartoons depicting President Obama as an ape.

We can only hope the publication of the Goyal study in such a prominent medical journal shames the profession into correcting this type of discrimination against black children. It is unacceptable.

The Muslim Clock Strikes

Ahmed Mohamed, a 14-year-old high school student and self-described science nerd from Irving, TX, took a homemade clock to school. He showed it to his science teacher, who approved. But when it accidentally beeped in his English class and he showed it to that teacher, she reported that he had a bomb. The police were called, he was removed from school and arrested, fingerprints and a mug shot were taken, and he was not permitted to contact his parents for several hours. Although he told everyone who questioned him that it was only a clock, he was suspended for three days for bringing a fake bomb to school. Irving police spokesman James McLellan explained, “We attempted to question the juvenile about what it was and he would simply only tell us that it was a clock.” Apparently, that was not the right answer.

[Image: “Ahmed the terrorist”]

Irving police chief Larry Boyd justified their overreaction by saying, “You just can’t take things like that to school.” A blogger compiled a list of seven other (presumably White) kids who brought homemade clocks to school and were not arrested. The incident raises obvious questions about racial profiling in school disciplinary cases. (Ahmed’s family is from Sudan, so he is Black as well as Muslim.) We know from dozens of social psychological studies that ambiguous actions are interpreted differently depending on whether they come from a member of a liked or a disliked group. I’ve chosen some examples that involve actual or potential violence, since that was the issue in Ahmed’s case.

In one of Allport and Postman’s 1947 studies of rumor transmission, the initial participants were shown a drawing of two men standing in a subway—a White man holding a razor and an African-American man holding nothing at all. The first person was asked to describe it to a second person who had not seen the picture, who described it to a third person, and so on. By the end of the chain of six or seven participants, the razor had jumped to the Black man’s hand almost half the time.

In an experiment by Birt Duncan, White participants were shown a videotape of an argument between a White man and a Black man. At the end of the argument, one man stomps out of the room and, in doing so, may or may not shove the other man aside. (The camera angle makes this deliberately ambiguous.) There are four versions of the video, covering all four combinations of Black and White perpetrator (the man who may have done the shoving) and victim (the man who may have been shoved). Viewers were asked whether an act of violence had occurred. The incident was more likely to be labeled violent when the perpetrator was Black and the victim was White: with a Black perpetrator and a White victim, 73% of the audience saw the incident as violent; with a White perpetrator and a Black victim, only 13% did.

I’ve written before about studies by Joshua Correll and others of the “police officer’s dilemma,” a simulation in which participants were shown slides of Black and White men standing in public places holding either a gun or an innocuous object, such as a cell phone or a soda can. The participants had half a second to press one of two keys, labeled “shoot” or “don’t shoot.” Results showed that Black men were more likely to be “shot” than White men, both when they were armed and when they were not.
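For readers curious how such a simulation works, here is a rough sketch of a single trial in Python, using the PsychoPy library. Only the half-second response window comes from the description above; the image file, the key assignments and all other details are my own assumptions, not features of Correll’s actual experiments.

    # A rough sketch of one shoot/don't-shoot trial. The stimulus image
    # and key mapping are hypothetical.
    from psychopy import visual, core, event

    win = visual.Window(size=(1024, 768), color="grey", units="pix")

    # Show a man in a public place holding either a gun or an innocuous object.
    stim = visual.ImageStim(win, image="suspect_01.png")  # hypothetical file
    stim.draw()
    win.flip()

    clock = core.Clock()
    keys = event.waitKeys(
        maxWait=0.5,           # the half-second response window
        keyList=["s", "d"],    # "s" = shoot, "d" = don't shoot
        timeStamped=clock,
    )
    win.flip()  # clear the screen

    if keys is None:
        print("No response within the window")
    else:
        key, rt = keys[0]
        print("Response: %s, reaction time: %.3f s" % (key, rt))

Averaging error rates and reaction times over many such trials, crossed by the race of the man and whether he is armed, is what yields the bias Correll reports.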

Glenn Greenwald writes that Ahmed’s ordeal and other examples of Islamophobia are an almost inevitable result of 14 years of fear-mongering and official harassment of Muslims, encouraged for political gain by U.S. politicians who have been waging wars against Islamic countries for three decades.

At a town meeting in New Hampshire, the following exchange occurred between Republican front-runner Donald Trump and a man in the audience.

  • Man: “We have a problem in this country, it’s called Muslims. We know our current president is one. You know, he’s not even an American. Birth certificate, man.”
  • Trump: “Right. We need this question? This first question?”
  • Man: “But anyway, we have training camps growing where they want to kill us.”
  • Trump: “Uh-huh.”
  • Man: “That’s my question: When can we get rid of them?”
  • Trump: “We’re going to be looking at a lot of different things. You know, a lot of people are saying that, and a lot of people are saying that bad things are happening out there. We’re going to look at that, and plenty of other things.”

Presumably, some of those “other things” involve people who speak with a Spanish accent. Will Trump pay a political price for his failure to correct the statement that President Obama is a Muslim, and his implicit promise to deport Muslims? So far, the media have been reporting Trump’s xenophobia in a matter-of-fact way, without calling attention to historical parallels or the negative consequences of encouraging fear and hatred. Of course, the corporate media are owned by wealthy people who continue to profit from the long-term migration of bigots into the Republican party.

Update (9/19/15):

In their coverage of this Q and A, the corporate media have emphasized Trump’s failure to challenge the statement that President Obama is a Muslim. The rest of the exchange has either gone unmentioned, or the media have accepted a Trump spokesperson’s assertion that his answer referred to “training camps” rather than to Muslims generally. You can judge for yourself.

https://www.youtube.com/watch?v=bTNHZfWMihw

However, since these training camps are part of a right wing conspiracy theory and have never been shown to exist, I don’t see how it’s to Trump’s credit that he is looking into how to get rid of them.