Tag Archives: implicit bias

Implicit Bias Against Atheists?

Consider the following problem:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which alternative is more probable?

A. Linda is a bank teller.

B. Linda is a bank teller and is active in the feminist movement.

“A” is the correct answer. Since there are undoubtedly some bank tellers who are not feminists, “B” cannot be more probable than “A”. To answer “B” is to commit the conjunction fallacy, since the conjunction of two events (bank teller and feminist) cannot be more probable than one of them (bank teller) alone. We commit this error because we associate the other qualities mentioned in the description with being a feminist.
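The inequality behind the correct answer can be checked with a short simulation. This is only an illustrative sketch; the base rates below are made up, since P(A and B) ≤ P(A) holds no matter what rates are used.

```python
import random

random.seed(42)

N = 100_000
tellers = 0
teller_feminists = 0

for _ in range(N):
    # Made-up base rates; the conclusion does not depend on them.
    is_teller = random.random() < 0.10
    is_feminist = random.random() < 0.60
    if is_teller:
        tellers += 1
        if is_feminist:
            teller_feminists += 1

p_a = tellers / N            # P(bank teller)
p_ab = teller_feminists / N  # P(bank teller AND feminist)

print(f"P(teller)              = {p_a:.3f}")
print(f"P(teller and feminist) = {p_ab:.3f}")

# The conjunction can never be more probable than either conjunct.
assert p_ab <= p_a
```

However the description of Linda changes our intuitions, no choice of probabilities can make the conjunction win.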

Will Gervais of the University of Kentucky and his colleagues used the conjunction fallacy to measure what they call “extreme intuitive moral prejudice against atheists.” Participants were 3,256 people from the United States and 12 other countries. (See the chart below for the countries). They read a description of a man who tortured animals as a child. As an adult, he engaged in several acts of violence, ending with the murder and mutilation of five homeless people. Half the participants from each country were asked:

Which alternative is more probable?

A. He is a teacher.

B. He is a teacher who is a religious believer.

The other participants were asked:

Which alternative is more probable?

A. He is a teacher.

B. He is a teacher who does not believe in god(s).

“B” is always the wrong answer, but the authors infer that if more people give this incorrect answer when the target is described as not believing in a god than when he is described as a religious believer, then the participants are (collectively) biased against atheists. Presumably, the respondents believe serial murderers are more likely to be atheists than religious people. Here are the results.

The chart shows the probability of a participant answering “B” when the target is an atheist compared to when he is religious, while statistically controlling for the participant’s gender, age, socioeconomic status and belief in god(s). There was bias against atheists in 12 of the 13 countries, the exception being Finland. Overall, people are about twice as likely to commit the conjunction fallacy when the target is described as an atheist (61%) as when he is described as religious (28%).

What is the effect of the respondents’ own belief in god(s) on answers to these questions? In the chart above, the individual’s certainty of the existence of a god increases from left to right. People at all levels of religious belief show prejudice against atheists, including atheists themselves—that is, people at the left who answered that the probability of a god’s existence is zero.

The authors did several follow-up studies. Using the same research method, they found that:

  • People are more likely to assume that a person who does not believe in god(s) is a serial murderer than a person who does not believe in evolution, the accuracy of horoscopes, the safety of vaccines, or the reality of global warming.
  • People are more likely to assume that a priest described as having molested young boys for decades is a priest who does not believe in god than a priest who does believe in god.

The assumption that morality depends on religious belief seems to be quite widespread, since it was obtained in religiously diverse cultures, including Christian, Buddhist, Hindu and Muslim societies. This association between atheism and bad behavior is all the more impressive given the lack of empirical evidence for a moral effect of religious beliefs.

On the other hand, 28% of the respondents who were given that choice saw the target as more likely to be a murderer when he was described as a religious believer than when his religiosity was not specified. This suggests that a minority of respondents associate religiosity with violence.

The authors describe their results as demonstrating an “intuitive” prejudice against atheists. They don’t indicate whether an intuitive belief operates consciously or without conscious intention. However, this prejudice seems to have some of the characteristics of an unconscious or implicit bias. It was measured using a fairly subtle technique. Participants were never asked to directly compare atheists with religious believers (although when the target was described as just a teacher, participants may have made the default assumption that he was religious). Furthermore, it is a bias shared by atheists themselves, suggesting that participants are repeating a popular cultural assumption, rather than reporting a belief that they have thoughtfully considered.

You may also be interested in reading:

The Implicit Association Test: Racial Bias on Cruise Control

Teaching Bias, Part 1

A Darker Side of Politics

Why “Bad Dudes” Look So Bad

A 2016 Washington Post analysis showed that Black Americans are 2.5 times as likely to be shot and killed by police officers as White Americans, and that unarmed Blacks are 5 times as likely to be shot dead as unarmed Whites. While there are many explanations for this finding, there is little support for the knee-jerk conservative response that attributes this racial disparity to the fact that Blacks commit more crimes. An analysis of the U. S. Police Shooting Database at the county level found no relationship between the racial bias in police shootings and either the overall crime rate or the race-specific crime rate. Thus, this racial bias is not explainable as a response to local crime rates.

When police officers shoot an unarmed Black teenager or adult, they are not likely to be convicted or even prosecuted if they claim to have felt themselves threatened by the victim. This suggests that it’s important to look at factors that affect whether police officers feel threatened. A study by Phillip Goff and others found that participants overestimated the ages of teenaged Black boys by 4.5 years compared to White or Latino boys, and rated them as less innocent than White or Latino boys when they committed identical crimes. While age may be related to perceived threat, the present study by John Paul Wilson of Montclair State University and his colleagues is more relevant, since it looks at the relationship between race and the perceived physical size and strength of young men.

The researchers were extremely thorough. They conducted seven studies involving over 950 online participants. Unless otherwise specified, participants were shown color facial photographs of 45 Black and 45 White high school football players who were balanced for overall height and weight. In the first study, the Black athletes were judged to be taller and heavier than the White athletes. Furthermore, when asked to match each photo with one of the bodies shown below, they judged the young Black men to be more muscular, or, as they put it, more “formidable.”

In a second study, participants were asked to imagine that they were in a fight with the person in the photograph, and were asked how capable he would be of physically harming them. The young Black men were seen as capable of inflicting greater harm.

In the third study, the authors examined the possibility that racial prejudice might predict these physical size and harm judgments. A fairly obvious measure of prejudice was used. Participants were asked to complete “feeling thermometers” indicating their favorability toward White and Black people. This measure of prejudice was only weakly associated with the participants’ judgments of Black-White differences in size, and not at all with Black-White differences in perceived harm capability.

Up to this point, Black participants were excluded. However, the fourth study compared Black and White participants. Both Blacks and Whites saw the young Black men as more muscular, though the effect was larger for Whites. Only White participants saw the Black men as more capable of inflicting harm. Apparently Black participants subscribe to the size stereotype, but not to the stereotype about threat.

The fifth study was an attempt to apply these results to the dilemmas faced by police officers. Once again, both Blacks and Whites participated. They were asked to imagine that the young men in the photographs had behaved aggressively but were unarmed. How appropriate would it have been for the police to use force? White participants saw the police as more justified in using force against the young Black men than against the young White men. For the Black participants, there was no difference.

Previous research had shown that Black men who have an Afrocentric appearance—that is, who have dark skin and facial structures typical of African-Americans—are treated differently than Black men who are less prototypical. For example, in a laboratory simulation, participants are more likely to “shoot” a Black man if he has Afrocentric features, and a Black man convicted of murder is more likely to be sentenced to death if he is prototypical. The sixth study showed that young Black men whose facial features are prototypical are seen as more formidable and the police are seen as more justified in using force against them. Furthermore, this is true even when participants are shown photos of young White men. That is, White men with darker skin and facial features resembling Black men are seen as more muscular than other White men, and participants believe the police are more justified in using force against them.

In the final study, participants were shown the exact same photographs of men’s bodies with the head cropped off, but they were given additional information indicating the man was either White or Black. The photos were color-inverted to make the man’s race difficult to detect. The man’s race was indicated either by a Black or White face said to be the man in the photo, or a stereotypically Black or White first name. Results indicated that the very same bodies were seen as taller and heavier when the man was presumed to be Black than when he was presumed to be White.

You might be wondering whether Black and White men actually differ in size. Data from the Centers for Disease Control and Prevention show that the average Black and White male has exactly the same weight, and that Whites are on average 1 cm taller. Therefore, when participants see Black men as larger, they are not generalizing from their real-world experience.

These studies are important in explaining why police officers feel more threatened by young Black men than young White men, and why jurors are more likely to see the killing of young Blacks as justified. It may help to explain why no charges were brought against a Milwaukee police officer who shot Dontre Hamilton 14 times. The officer described Hamilton as “muscular” and “most definitely would have overpowered me or pretty much any officer I can think of.” Hamilton was 5’7” and weighed 169 lbs.

It is important to realize that the results of these studies are not readily explained by conscious race prejudice. This size estimation bias is probably automatic and unconscious, and is most likely to affect behavior when a police officer must make a split-second decision. The fact that officers are likely to be found not guilty of using excessive force against a Black victim if they testify that they felt threatened is troubling, since it suggests that implicit racial bias can be used successfully as a defense when charged with a violent crime.

You may also be interested in reading:

Publicizing “Bad Dudes”

Teaching Bias, Part 1

Making a Mockery of the Batson Rule

Teaching Bias, Part 2

Before continuing, please read Part 1 of this article.

Since people are usually not aware of their nonverbal behavior, nonverbal bias is a common feature of everyday life. As a result, families and friends routinely teach children racial and ethnic preferences without intending to. These biases are also taught through the mass media. A 2009 series of studies by Max Weisbuch and his colleagues, done with college students, demonstrates the teaching of implicit racial bias by television.

These researchers recorded 90 10-sec segments from 11 popular television programs in which White characters interacted with either White or Black targets. The clips were edited to eliminate the soundtrack and to mask the White or Black target to whom the character was talking. Twenty-three judges rated how positively the targets were treated. The (unseen) White targets were perceived as being treated more favorably than the (unseen) Black targets. This study established the existence of nonverbal racial bias on television. It seems unlikely that the actors and directors of these programs were aware that they were transmitting bias. These 11 shows had an average weekly audience of 9 million people.

The remaining studies were designed to test whether nonverbal race bias affects the viewer. In the second study, the 11 programs in Study 1 were scored according to the amount of race bias in the clips. The participants were asked which of these programs they watched regularly. It was found that watchers of the more biased programs showed a greater preference for Whites on the Implicit Association Test (IAT), a standard measure of implicit racial bias. (See this previous post for an explanation of the IAT.)

Since this is a correlational study, it does not demonstrate that exposure to biased programs causes prejudiced attitudes. An alternative explanation is that viewers prefer TV programs that reinforce their pre-existing attitudes. The remaining two studies, however, were true experiments in which participants were randomly assigned to be exposed to different televised content.

In these two experiments, participants were shown one of two silent videos constructed from clips used in Study 1. The pro-White tape featured White targets receiving positive nonverbal signals and Black targets being treated more negatively. The pro-Black tape featured favorable treatment of Black targets and unfavorable treatment of Whites. The participants were then tested for implicit racial bias. In Study 3, the IAT was used as the measure of bias. As expected, those who had seen the pro-White video showed a greater preference for Whites than those who had seen the pro-Black video.

Study 4 involved a different measure of implicit racial bias, an affective priming task. This task measures whether subliminal exposure to photos of White and Black faces speeds up the recognition of positive or negative images. Subliminal means below the level of awareness. Photos are presented on a computer so quickly that they are not consciously perceived. Nevertheless, they influence behavior. The premise, well established through previous research, is that you respond more quickly to an image if it is preceded by another that elicits a similar emotional response. Therefore, if you are subliminally exposed to a photo of a liked person, you can recognize a positive object, e.g., a puppy, more quickly, while exposure to a disliked person allows you to identify a negative object, e.g., a rattlesnake, more quickly.
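To make the logic of the priming measure concrete, here is a minimal sketch of how such a score might be computed. The reaction times below are invented for illustration; they are not data from the study, and the scoring formula is a generic one, not necessarily the authors’ exact analysis.

```python
from statistics import mean

# Hypothetical reaction times (ms), keyed by (prime race, target valence).
# Faster responses to positive targets after White primes than after
# Black primes would indicate an implicit pro-White association.
rts = {
    ("white", "positive"): [512, 498, 505, 520],
    ("white", "negative"): [560, 571, 549, 566],
    ("black", "positive"): [548, 555, 541, 560],
    ("black", "negative"): [530, 525, 538, 522],
}

def priming_score(prime):
    """Negative-minus-positive RT: larger means faster positive responses."""
    return mean(rts[(prime, "negative")]) - mean(rts[(prime, "positive")])

# A positive difference means positive images were recognized faster
# after White primes than after Black primes.
bias = priming_score("white") - priming_score("black")
print(f"pro-White priming advantage: {bias:.1f} ms")
```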

This experiment was strengthened by some additional controls not present in Study 3. In addition to pro-White and pro-Black videos, there was a race-neutral control video. Photos of White, Black and Asian-Americans were used as subliminal primes. The results are shown below.

A higher number on the vertical axis indicates a faster response to that prime. The people who had seen the pro-White video showed faster positive associations to White faces (compared to Black faces), while those who had seen the pro-Black video showed faster positive associations to Black faces (compared to White faces). The control video had the same effect on both Black and White associations. Asian faces had no priming effect.

The studies cited in these posts make it clear that we don’t have to be explicitly taught to like or dislike members of different racial or ethnic groups. Our social environment contains nonverbal cues which encourage the reproduction of prejudice and discrimination from one generation to the next.

You may also be interested in reading:

What Does a Welfare Recipient Look Like?

Racial Profiling in Preschool

A Darker Side of Politics

Teaching Bias, Part 1

You may be surprised to hear that White children show evidence of bias against African-Americans as early as age 3. How does this happen? Since there is evidence that the implicit biases of adults leak out through their nonverbal behavior, it seems reasonable that children pick up these cues from their parents and older acquaintances. A new study by Allison Skinner and her colleagues shows how exposure to positive or negative nonverbal cues can create social biases in preschool children. The studies are simple, but they are awkward to explain, so please bear with me.

In the first experiment, 67 4- and 5-year-old pre-school children watched a video in which two adult female actors each exhibited nonverbal bias toward two adult female targets. The targets were identified by the colors of their shirts, red or black. Although the actors used exactly the same scripts when talking to the two targets, one target received positive nonverbal signals (i.e., smiling, warm tone of voice, leaning in) and the other received negative signals (scowling, cold tone, leaning away). Since these were two different women, the actual identity of the targets who received the warmer and colder treatment was counterbalanced; that is, each woman received positive and negative treatment an equal number of times over the course of the experiment.

After the video, the researchers gave the children four tasks designed to measure which target they preferred. The first was a simple preference question asking which woman they liked better. For the second, they were given an opportunity to behave prosocially. They were asked to which target the experimenter should give a toy. The two remaining tasks were opportunities to imitate one of the two targets. In the third task, they had to choose which of two labels to give to a toy which the two targets had called by different names. In the fourth, they had to choose one of two actions, ways to use a cone-shaped object, which the two targets had used differently.

The children showed a preference for the target who received the positive nonverbal treatment on three of the four tasks—all but the action imitation. A summary measure of the number of times out of four they showed favoritism toward the preferred target was statistically significant.

The researchers were less interested in demonstrating favoritism toward specific individuals than in the development of favoritism toward groups of people. In a second experiment, they measured whether the preferences demonstrated in Study 1 would generalize to other members of the target’s group. The two targets were introduced as members of the red group and the black group, matching the colors of their shirts. After the video, the children were given three tasks—preference, prosocial behavior and label imitation. The results replicated those of the first study.

Then the children were introduced to two new adult women as targets, said to be members of the red and black groups (wearing shirts of the appropriate colors), who were best friends of the previous two women. They were asked which friend they liked better and were asked to imitate the actions of one of the two friends. The results showed greater liking for and more imitation of the friends of the preferred target on the video. In other words, the favoritism toward the preferred target (and against the non-preferred target) generalized to other members of their groups.

This study is a demonstration experiment. To prove that they understand how a widget works, researchers will show that they can create a widget in the laboratory. The widget in this case is group favoritism. We should not be put off by the fact that the groups are artificial, defined only by the colors of their shirts. Suppose the researchers had used members of real groups, such as White and African-American women, as their targets. In that case, the researchers would not have created group biases, since 4- and 5-year-olds already have racial attitudes. For evidence of how pre-existing attitudes can be strengthened or weakened by the way targets are treated, please read Part 2 of this post.

You may also be interested in reading:

What Does a Welfare Recipient Look Like?

Racial Profiling in Preschool

A Darker Side of Politics

Racial Profiling in Preschool

Data from the U. S. Department of Justice, Office of Civil Rights, shows that African-American children, especially boys, are suspended or expelled from preschools at a higher rate than White children. For example, while 20% of preschool boys are Black, 45% of the boys suspended are Black. However, this is not proof of racial discrimination, since a skeptic could argue that, even at this young age, Black children are more likely to misbehave.

A new study by Walter Gilliam and his colleagues at the Yale University Child Study Center takes an experimental approach to this issue by holding the behavior of Black and White children constant and observing how teachers respond. Participants were 132 preschool teachers recruited at an annual conference. Sixty-seven percent of the teachers were White and 22% were Black. They took part in two studies.

In the first study, participants were shown a 6-minute video of four preschool children—a Black boy, a Black girl, a White boy and a White girl—seated around a table. The teachers were asked to watch for “challenging behavior,” but in fact the video did not contain any misbehavior. A computerized eye-tracking device was used to measure the amount of time the teachers spent watching each child. At the conclusion, the teachers were asked to report which of the four children required the most attention.

The eye tracking results showed that the participants spent more time looking at boys than girls, and more time looking at Black children than White children. In addition, the time spent gazing at the Black boy was significantly greater than would have been expected on the basis of his combined race and gender. The race of the teacher made no difference in this study.

The title of the paper frames the research as a study of implicit bias, and media reports of the study have followed suit. The authors define implicit bias as the “automatic and unconscious stereotypes that drive people to behave and make decisions in certain ways.” However, the teachers’ conscious appraisal of which child they paid the most attention to appeared to match the eye-tracking results fairly closely, as shown in the chart below. Apparently the teachers were well aware that they were paying more attention to the Black boy.


I mention this because the term “implicit bias” is sometimes used to deny personal responsibility for one’s own and others’ discriminatory behavior on the grounds that it is unconscious. By labeling this as a study of implicit bias, the authors may have given their teacher-participants less blame for their behavior than they actually deserved.

In my title, I described these results as similar to racial profiling. Racial profiling targets people based on stereotypes about their race, as when the police stop and frisk Black teenagers having no evidence that they are committing crimes. Like the police, these teachers were scanning for misbehavior, and they responded by giving special attention to African-American boys. (An editorial writer for the New York Times drew this same analogy.)

These same participants also took part in a second experiment. In this study, they were asked to read a vignette describing a preschool child who repeatedly engaged in disruptive behavior. The child’s race and gender were manipulated by changing the child’s name (DeShawn, Jake, Latoya or Emily). Half the participants in each race and gender condition also read background information suggesting that the child lived with a single mother who was under a great deal of stress. The others were not given background information. The teachers were then asked to rate the severity of the child’s behavior and to recommend whether the child should be suspended or expelled.

The following results were found for ratings of the severity of the behavior.

  • The same behavior was rated as more seriously disruptive when the child was White than when he or she was Black.
  • Giving teachers background information increased the ratings of the severity of the behavior.
  • The Black teachers rated the behavior as more serious than the White teachers.
  • The background information increased the perceived severity of the behavior when the teacher was of a different race than the child, but the teachers responded more sympathetically to it when the teacher and the child were of the same race.

With regard to suspension or expulsion, the only finding was that Black teachers were more likely to recommend these options.

The results of the second study are not a good fit with the Department of Justice data, since the teachers appear to be discriminating against the White children. The researchers explain this by suggesting that these teachers expected the Black children to be disruptive, but held the White children to a higher standard. Therefore, the identical behavior was seen as more serious when attributed to a White child.

My guess is that had the same behavior been rated more disruptive when the child was Black, the results would have been interpreted in a straightforward manner as discrimination against African-Americans. However, since the results were unexpected, a more complex explanation was presented. This explanation may be correct, of course. There is some evidence for “shifting standards” with respect to race. However, the authors could have strengthened their argument with a followup study measuring teachers’ expectations about the misbehavior of Black and White children and the extent to which the behavior described in their vignette violated those expectations.

Since the Black teachers were stricter overall, it appears that increasing the representation of Black teachers will not by itself reduce the number of suspensions and expulsions.

Some additional perspective on this issue is provided by a set of two experiments by Jason Okonofua and Jennifer Eberhardt. Their participants, grade school teachers, read a description of either a White or Black boy in middle school who committed two infractions—one class disruption and one act of insubordination. After each incident, they were asked how severely the child should be disciplined.

There was no difference in the punishment recommended for Black and White boys after the first infraction. As shown in the table, the recommended disciplinary action increased in severity after the second infraction, but it did so more for the Black boy than for the White boy. (In the table, “feeling troubled” refers to a combined measure of the severity of the misbehavior and the extent to which it would hinder and irritate a teacher.)

Apparently, the teachers were more likely to infer a disposition to misbehave from two bad actions when the child was African-American than when he was White.

You may also be interested in reading:

White Prejudice Affects Black Death Rates

Outrage

Asian-American Achievement as a Self-Fulfilling Prophecy

A Darker Side of Politics

Regular readers of this blog will know of my interest in the political decisions—often referred to as Richard Nixon’s “Southern strategy”—that have resulted in an association between racism and membership in the Republican party. During their political campaigns, Republicans (and sometimes Democrats) use “dog whistle politics”—racially coded appeals that automatically activate the negative stereotypes of their increasingly prejudiced audience.

There is now a fairly extensive literature in social psychology demonstrating that white people respond more negatively to images of dark-skinned African-Americans than those with lighter skin. For example, one experiment found that participants assigned more negative traits and fewer positive traits to dark-skinned blacks than to light-skinned blacks. Another study showed that, among blacks convicted of murder, those with darker skins were more likely to receive the death penalty.

There are persistent rumors that Barack Obama’s skin tone has been manipulated in campaign advertisements. For example, in 2008, Hillary Clinton’s campaign was accused of doctoring images of Obama to make him appear blacker, although it’s not clear whether this was deliberate. A new set of studies by Solomon Messing and his colleagues analyzes images of Obama from the 2008 presidential campaign against John McCain.

Working from a complete library of television commercials aired by both candidates, the researchers electronically measured the brightness of the faces in all 534 still images, 259 of Obama and 275 of McCain. The advertisements were independently coded for content by judges who were unaware of the purpose of the study. The researchers looked at whether each image appeared in an attack ad, and whether the ad tried to associate the candidate with criminal activity. Two differences emerged. Obama’s skin tone was darker in commercials linking him with criminal activity—see example below—than in all other images of Obama.

https://www.youtube.com/watch?v=ONfJ7YSXE5w

In fact, 86% of the photos in these ads were among the darkest 25% of all Obama photos. Secondly, in attack ads produced by the McCain campaign, images of Obama grew darker toward the end of the campaign, even as their own images of McCain grew lighter.
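The kind of measurement the researchers describe—electronically averaging the brightness of the pixels in a face region—can be sketched in a few lines. The luma weights below are the standard ITU-R BT.601 coefficients, and the toy image is my own illustration, not the authors’ actual procedure.

```python
# Average luminance of a (hypothetical) face region in an RGB image.

def luminance(pixel):
    """Perceptual brightness of one (r, g, b) pixel, BT.601 weights."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def mean_face_brightness(image, face_box):
    """image: 2-D list of (r, g, b) tuples; face_box: (top, left, bottom, right)."""
    top, left, bottom, right = face_box
    face_pixels = [image[y][x]
                   for y in range(top, bottom)
                   for x in range(left, right)]
    return sum(luminance(p) for p in face_pixels) / len(face_pixels)

# Toy 2x2 "image": two darker and two lighter gray pixels.
img = [[(50, 50, 50), (60, 60, 60)],
       [(200, 200, 200), (210, 210, 210)]]
print(mean_face_brightness(img, (0, 0, 2, 2)))  # mean gray level of the region
```

Ranking every still of a candidate by a score like this is what makes a claim such as “86% of the photos were among the darkest 25%” checkable rather than a matter of impression.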

The authors did two follow-up studies to determine whether darker images of Obama activated more negative reactions to black people than lighter images of Obama. They wanted to show that darkening the skin of a familiar black man, whom they refer to as “counterstereotypical,” would have the same effect as the darker faces of the unknown persons used in previous studies. In one experiment, participants viewed one of the Obama images below and completed a stereotype activation task in which they were asked to fill in the blanks of incomplete words such as “L A _ _” and “_ _ O R.” The darker image of Obama on the right elicited more stereotypical completions—“lazy” and “poor,” in these cases—than the lighter image.

The second study was more complicated, involving subliminal priming, but it too found that a variety of darker images of Obama yield more negative reactions than lighter images of Obama.

It’s not clear from these studies what the McCain campaign actually did in 2008. Did they deliberately darken some images of Obama, or did they merely select darker images? If the latter, did they select images because of their darkness, or were they merely trying to choose images that made him “look bad,” without thinking about why? The fact that these darker images appeared in ads attempting to link Obama with criminality, however, suggests that whatever they did was not accidental.

These campaign ads appeared on television seven years ago. The pace of social psychological research—including the publication lag—is often quite slow. The two follow-up studies probably accounted for most of the delay. Although they allowed the authors to tie up some loose ends, it could be argued that they were unnecessary, since they largely replicated previous studies. The delay was unfortunate, since the analysis of the ads didn’t appear in print until Obama was no longer running for office and the corporate media could treat it as old news. Sometimes postponing the release of information is almost as effective as completely suppressing it.

Of course, there will be other black candidates and many more opportunities for dog whistle politics.

You may also be interested in reading:

Guarding the Hen House

The World According to the Donald

Another Dog Whistle

Making a Mockery of the Batson Rule

Even when a jury pool is selected from the community by a reasonably random method, prospective jurors are questioned in a process known as voir dire, during which both the prosecution and the defense can object to jurors. A potential juror can be eliminated either by a challenge for cause, such as being acquainted with the defendant, or by a limited number of peremptory challenges, in which the attorney does not have to specify a reason. The number of peremptory challenges permitted varies among the states.

Historically, peremptory challenges have been used by prosecutors to create all-white juries in cases involving black defendants. However, in Batson v. Kentucky (1986), the Supreme Court ruled that using peremptory challenges to exclude jurors based solely on their race violates the equal protection clause of the Fourteenth Amendment. The Batson rule states that whenever the prosecution or defense excludes a minority group member, it must specify a race-neutral reason. However, there is widespread consensus that this procedure has failed to eliminate racial discrimination, since judges accept a wide variety of “race-neutral” excuses for disqualifying black members of the jury pool.

Here are excerpts from a 1996 (post-Batson) training video instructing young prosecutors on how to select a jury. This blatant endorsement of prosecutorial misconduct was produced by former Philadelphia District Attorney Ron Castille, who went on to become Chief Justice of the Pennsylvania Supreme Court.

Racial discrimination in jury selection is arguably more important today than in 1986, given the large differences in attitudes between whites and African-Americans toward the police and the criminal justice system. For example, in a July 2015 New York Times poll, 77% of black respondents, but only 44% of whites, thought that the criminal justice system is biased against blacks. Clearly, black and white jurors approach criminal cases from very different perspectives. Laboratory research suggests that racially diverse juries exchange a wider range of information and make fewer errors than all-white juries.

Yesterday, the Supremes heard oral arguments in Foster v. Chatman, a blatant case of racial discrimination in jury selection. Timothy Foster, a black man, was convicted and sentenced to death by an all-white jury in Rome, Georgia, for killing a white woman in 1987. All four black potential jurors were disqualified by the prosecution using peremptory challenges. Prosecutors’ notes that recently surfaced show that they circled the names of the prospective black jurors in green and labeled them B#1, B#2, and so on. They were ranked in order of acceptability “in case it comes down to having to pick one of the black jurors.” It did not come to that. The judge accepted a variety of “race-neutral” reasons, including rejecting one 34-year-old black woman for being too close in age to the defendant, who was 19, even though the prosecution did not challenge eight white potential jurors aged 35 or younger (including one man who was 21). In the trial itself, the prosecutor urged the jury to sentence Foster to death in order to send a message to “deter other people out there in the projects.”

There is abundant evidence from field studies conducted after the Batson decision showing that racial discrimination in jury selection still exists. For example, Grosso and O’Brien examined 173 capital cases in North Carolina between 1987 and 2010, involving more than 7,400 potential jurors. Prosecutors struck 52.6% of potential black jurors and 25.7% of potential white jurors. In cases with a black defendant, the strike rates were 60% for blacks and 21.3% for whites. A black prospective juror was 2.48 times more likely to be excluded than a white even after statistically controlling for the most common race-neutral reasons given for challenging a potential juror.
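The size of that disparity can be checked directly from the reported strike rates. The brief sketch below computes the unadjusted ratios implied by the 52.6% and 25.7% figures; note that the 2.48 estimate reported by Grosso and O’Brien is the disparity remaining *after* controlling for race-neutral factors, so it is not expected to match these raw numbers.

```python
# Unadjusted disparity implied by the reported strike rates
# (52.6% of black and 25.7% of white prospective jurors struck).
black_rate = 0.526
white_rate = 0.257

# Risk ratio: how many times more often black jurors were struck.
risk_ratio = black_rate / white_rate

# Odds ratio: ratio of the odds of being struck for each group.
odds_ratio = (black_rate / (1 - black_rate)) / (white_rate / (1 - white_rate))

print(f"unadjusted risk ratio: {risk_ratio:.2f}")  # about 2.05
print(f"unadjusted odds ratio: {odds_ratio:.2f}")  # about 3.21
```

Either way the raw numbers are computed, the disparity is of the same order as the study’s covariate-adjusted estimate.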

A laboratory experiment by Norton and Sommers (2007) illustrates the flexibility with which people can rationalize racially discriminatory decisions. Participants (college students, law students and attorneys) were asked to assume the role of prosecutor in a criminal case with a black defendant. They were told they had one peremptory challenge left, and to choose between two prospective jurors—a journalist who had investigated police misconduct and an advertising executive who expressed skepticism about statistical evidence to be used by the prosecution. For half the participants, the journalist was said to be African-American and the advertiser white, while for the remainder of the participants the races were reversed. The black juror candidate was challenged 63% of the time. When participants were asked why they struck the person they did, only 7% mentioned race, while 96% mentioned either the journalist’s investigation of police misconduct or the ad man’s skepticism about statistics. More importantly, both justifications were more likely to be cited as critical when they were associated with the black prospective juror than with the white prospective juror.

Today’s news reports suggest that even the more conservative Supremes were sympathetic to the defense’s arguments in Foster v. Chatman. However, the Court could decide the case very narrowly by simply overturning Foster’s conviction. It would be more interesting if their decision were to establish some new principle to minimize the abuse of peremptory challenges. It’s unlikely that these nine justices will establish a minority “quota” against which the fairness of juries can be assessed. However, an argument could be made for severely limiting peremptory challenges, or dispensing with them altogether, on the grounds that they merely provide opportunities for attorneys to express their conscious or implicit biases. If they have a legitimate reason for challenging a juror, let them present it to the judge for evaluation. Otherwise, let the juror be seated.

A beneficial side effect of eliminating peremptory challenges would be to put out of business those expensive “scientific” jury consultants who help lawyers choose a “friendly” jury. To the extent that they are actually helpful, this is yet another advantage possessed by wealthy defendants.

If the Supremes fail to eliminate peremptory challenges, then this case has implications for the fairness of the death penalty.

You may also be interested in reading:

Outrage

A Theory in Search of Evidence