Monthly Archives: November 2015

Popping Placebos

The greatest wonder drug we know about is the placebo. A placebo is an inert substance, such as a sugar pill, that has no direct physiological effect. Placebos can cause research participants to report improvements in a variety of physical and mental conditions. For this reason, tests of the effectiveness of new drugs or medical treatments must include not only treatment and no-treatment conditions but also a placebo condition. While the size of placebo effects varies, placebos can account for well over half the difference between treatment and no-treatment groups, especially with subjective outcomes such as pain or depression. Placebo effects are part of a broader class of self-fulfilling prophecies, in which the expectation that some event will occur sets in motion processes that result in its actually occurring.

Placebo effects are often underestimated, since clinical trials seldom use active placebos. An active placebo is one that has a noticeable physiological effect that is irrelevant to the condition being measured. It is used to convince patients that they are receiving a real drug rather than a placebo. Of course, if patients figure out that they are getting a placebo, they may stop expecting to improve, in effect reassigning themselves to a no-treatment condition. Studies show that active placebos are more effective than passive or inert placebos.


A new study by Kate Faasse and her colleagues in Health Psychology shows how subtle placebo effects can be. The participants were 81 New Zealand undergraduates who reported frequent headaches. They were given four doses of medication to treat their next four headaches. Two doses were labeled “Nurofen,” a common New Zealand brand name, while the other two were labeled “generic ibuprofen.” Within each labeling condition, one dose was active ibuprofen while the other was a placebo. Therefore, the study contained four conditions: branded active, generic active, branded placebo, and generic placebo. To avoid order effects, each participant was asked to take the doses in an assigned random order.
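
If it helps to see the design spelled out, here is a minimal sketch in Python of how the 2 x 2 crossing of label (branded vs. generic) and content (active vs. placebo), plus the per-participant random ordering, could be generated. This is only an illustration of the design under my own assumptions (the function name and seeding scheme are hypothetical), not the researchers’ actual procedure.

```python
import random

# The 2 x 2 crossing of label (branded vs. generic) and content
# (active ibuprofen vs. placebo) yields the study's four conditions.
LABELS = ["Nurofen", "generic ibuprofen"]
CONTENTS = ["active", "placebo"]
CONDITIONS = [(label, content) for label in LABELS for content in CONTENTS]

def assign_order(participant_id, seed=2015):
    """Return the four conditions in a random order for one participant.

    Every participant receives every condition (a within-subjects design);
    randomizing the order across participants controls for order effects.
    """
    rng = random.Random(seed + participant_id)  # reproducible per participant
    order = list(CONDITIONS)
    rng.shuffle(order)
    return order

# Example: dose orders for the first 3 of the 81 participants.
for pid in range(3):
    print(pid, assign_order(pid))
```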

The students filled out a standard 6-point pain scale before taking each pill and again one hour later. Results showed that real ibuprofen reduced pain more than the placebo did. When participants received ibuprofen, it was equally effective no matter how it was labeled. However, the branded placebo was more effective in relieving pain than the generic placebo. In fact, branded placebos did not differ in effectiveness from real ibuprofen. Apparently these college students mistakenly believed that brand-name drugs are more effective than generics.

This experiment had a within-subjects design; that is, participants received all four treatments in random order. This increases the statistical power of the data analysis, but it creates other problems, since it allows the participants to compare the four conditions to one another. They probably assumed the researchers were comparing the effectiveness of brand-name and generic headache remedies. It’s not clear to me whether the greater reported pain relief in the branded placebo condition was due to participants’ faith in brand names or to their guess about what the researchers hoped to find.

This study is similar to an experiment by Alberto Espay and others published earlier this year. Twelve people with “moderate to severe” Parkinson’s disease were given two different placebos—two identical injections of a saline solution—in random order. They were told that one of them was an expensive new drug costing $1,500 per dose, while the other cost only $100 per dose. Before and after each injection, participants completed three tests of motor skills used to measure the severity of Parkinson’s disease. While both placebos improved performance on the tests, the expensive placebo was more effective than the cheaper one.

Research shows that placebos can cause real, measurable physiological changes in the brain. Some have attributed the placebo effect to classical conditioning, in which the physiological response to effective drugs is generalized to ineffective ones such as placebos. However, the present results would seem to require a cognitive explanation. Classical conditioning also has difficulty explaining placebo effects that don’t involve habitual behaviors, such as the pain relief and increased mobility reported by patients who received sham knee surgery!

The effectiveness of placebos raises ethical questions. Should doctors be permitted to prescribe placebos? Since telling patients they are getting a placebo would reduce, but not completely eliminate, its effectiveness, should doctors be allowed to conceal from patients the fact that their treatment is a placebo? How much is our society willing to tolerate willful deception of patients by health care providers? (How much does it tolerate already?)

A literal reading of the results of these two studies suggests not only that doctors should prescribe placebos, but also that expensive placebos are more effective than cheaper ones. How much should drug companies and health care providers be allowed to charge for placebos? Of course, given what we know about placebos, the American public is already paying a considerable sum for both prescription and over-the-counter drugs whose effectiveness is partially or completely explained by the placebo effect.

You may also be interested in reading:

Asian-American Achievement as a Self-Fulfilling Prophecy

More Bad News for Religion

In May, I reported on the Pew Research Center’s 2014 Religious Landscape Study, a survey of a quota sample of 35,000 adults, with a margin of error of plus or minus 0.6%. The first installment of their results concentrated on the size and demographic characteristics of various religious groups. The big news was that Americans with no religious affiliation (the “nones”) increased from 16% in 2007 to 23% in 2014, while those calling themselves Christians dropped from 78% to 71%. The biggest increase in the percentage of nones occurred among Millennials—people born after 1980.
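
As a rough sanity check, that margin of error is close to what the textbook formula for a proportion gives at 95% confidence, if we assume (for illustration only) simple random sampling, which a weighted quota sample like Pew’s only approximates:

```python
import math

n = 35_000   # Pew's sample size
p = 0.5      # worst-case proportion, which maximizes the margin of error
z = 1.96     # critical value for a 95% confidence level

margin = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {margin:.2%}")  # about +/- 0.52%
```

The published figure of 0.6% is slightly larger, presumably because it also accounts for the design effects of sampling and weighting.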

Pew has published a second installment of results from the survey, focusing on religious beliefs and practices. The share of Americans who say they believe in God has declined from 92% in 2007 to 89% in 2014, while those who claim to be “absolutely certain” that God exists dropped from 71% to 63%. These declines are most pronounced among younger adults. This chart breaks down a number of superstitious beliefs and practices by age. All of them have declined since 2007.

[Chart from Pew Research Center: “In many ways, younger Americans are less religious than older Americans.”]

Pew also looked at the political beliefs of religious and nonreligious participants. Acceptance of homosexuality has increased dramatically among both religious and nonreligious participants, while support for abortion is relatively unchanged. For the first time, the nones are now the largest single group (28%) among Democrats. Evangelical Protestants are the largest group (38%) of Republicans. Not surprisingly given their political affiliations, religious people are more likely than nones to oppose government aid to the poor, to oppose stricter environmental regulations, and to see increased immigration as a change for the worse. Belief in evolution differs sharply between affiliated (55%) and nonaffiliated people (82%), and is nearly universal among atheists (95%) and agnostics (96%).

By and large, Americans see religion as a force for good in the society. Eighty-nine percent say that churches “bring people together and strengthen community bonds,” 87% say they “play an important role in helping the poor and needy,” and 75% say they “protect and strengthen morality in society.” However, some of these claims are becoming harder to defend in light of recent research. There is strong evidence that religious Americans score higher on measures of racism than nonreligious Americans. A recent study looks at some related moral behaviors.

Altruism refers to behavior that benefits others at some cost to oneself. Although some studies suggest that religious people report more charitable giving than nonreligious people, these self-reports are suspect, since religious people are more likely to engage in socially desirable responding, a tendency to over-report one’s good behavior and under-report the bad. On the other hand, the research is fairly clear that religious people are more punitive in their evaluations of bad behavior than nonreligious people. For example, religiously affiliated whites are more likely to support the death penalty than unaffiliated whites. (Large majorities of black and Hispanic Americans oppose the death penalty regardless of religious affiliation.)

Dr. Jean Decety of the University of Chicago and his colleagues studied moral behavior among a broad and diverse sample of 1,170 children aged 5-12 in six countries (Canada, China, Jordan, South Africa, Turkey, and the US). Children were assigned to the religious affiliation reported by their parents. They were 24% Christian, 43% Muslim, and 28% nonreligious. Other religions were not reported often enough to include in the statistical analysis.

Altruism was measured using the Dictator Game, in which children were allowed to divide an attractive resource—in this case, ten stickers—between themselves and a peer. The measure of altruism is the number of stickers shared with the peer. Religiously affiliated children were less generous than nonaffiliated children, with no significant difference in generosity between Christians and Muslims. Importantly, the negative association between religion and altruism was greater among the older children (aged 8-12), suggesting that as children come to understand their family’s beliefs better, the differences between those from religious and nonreligious families increase.


To measure punitiveness, the authors had children watch videos depicting mild interpersonal harms and asked them to evaluate the “meanness” of the behavior and to suggest a level of punishment for the perpetrator. Religious children saw these behaviors as more “mean” and suggested greater punishment than nonreligious children. Muslim children evaluated the behaviors more negatively than Christian children.

The authors also asked the parents of these children to rate them on empathy and sensitivity to justice. In contrast to the children’s actual behavior, religious parents rated their children higher in empathy than nonreligious parents rated theirs. They also rated their children as more sensitive to justice. This could be another instance of socially desirable responding by the religious parents.

If these results, as well as the differences in prejudice and discrimination, were more widely known, people might be less likely to see religion as a force for good in society and less likely to favor exempting religious institutions from taxation.

You may also be interested in reading:

And Then There Were Nones

Power and Corruption, Part 1

Making a Mockery of the Batson Rule

Even when a jury pool is selected from the community by a reasonably random method, prospective jurors are questioned in a process known as voir dire, during which both the prosecution and the defense can object to jurors. A potential juror can be eliminated either by a challenge for cause, such as being acquainted with the defendant, or by a limited number of peremptory challenges, in which the attorney does not have to specify a reason. The number of peremptory challenges permitted varies among the states.

Historically, peremptory challenges have been used by prosecutors to create all-white juries in cases involving black defendants. However, in Batson v. Kentucky (1986), the Supreme Court ruled that using peremptory challenges to exclude jurors based solely on their race violates the equal protection clause of the Fourteenth Amendment. The Batson rule states that whenever the prosecution or defense excludes a minority group member, it must specify a race-neutral reason. However, there is widespread consensus that this procedure has failed to eliminate racial discrimination, since judges accept a wide variety of “race-neutral” excuses for disqualifying black members of the jury pool.

Here are excerpts from a 1996 (post-Batson) training video instructing young prosecutors on how to select a jury. This blatant endorsement of prosecutorial misconduct was produced by former Philadelphia District Attorney Ron Castille, who went on to become Chief Justice of the Pennsylvania Supreme Court.

Racial discrimination in jury selection is arguably more important today than in 1986, given the large differences in attitudes between whites and African-Americans toward the police and the criminal justice system. For example, in a July 2015 New York Times poll, 77% of black respondents, but only 44% of whites, thought that the criminal justice system is biased against blacks. Clearly, black and white jurors approach criminal cases from very different perspectives. Laboratory research suggests that racially diverse juries exchange a wider range of information and make fewer errors than all-white juries.

Yesterday, the Supremes heard oral arguments in Foster v. Chatman, a blatant case of racial discrimination in jury selection. Timothy Foster, a black man, was convicted and sentenced to death by an all-white jury in Rome, Georgia, for killing a white woman in 1987. All four black potential jurors were disqualified by the prosecution using peremptory challenges. In notes that recently surfaced, it was found that prosecutors circled the names of the prospective black jurors in green and labeled them B#1, B#2, etc. They were ranked in order of acceptability “in case it comes down to having to pick one of the black jurors.” It did not come to that. The judge accepted a variety of “race-neutral” reasons, including the claim that one 34-year-old black woman was too close in age to the defendant, who was 19, even though the prosecution did not challenge eight white potential jurors aged 35 or younger (including one man who was 21). In the trial itself, the prosecutor urged the jury to sentence Foster to death in order to send a message to “deter other people out there in the projects.”

There is abundant evidence from field studies conducted after the Batson decision showing that racial discrimination in jury selection still exists. For example, Grosso and O’Brien examined 173 capital cases in North Carolina between 1987 and 2010, involving over 7,400 potential jurors. Prosecutors struck 52.6% of potential black jurors and 25.7% of potential white jurors. In cases with a black defendant, the strike rates were 60% for blacks and 21.3% for whites. A black prospective juror was 2.48 times more likely to be excluded than a white one, even after statistically controlling for the most common race-neutral reasons given for challenging a potential juror.
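
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python of the raw disparities. It is not Grosso and O’Brien’s actual analysis: their 2.48 figure comes from a regression controlling for race-neutral factors, which cannot be recomputed from these percentages alone.

```python
# Raw strike rates reported by Grosso and O'Brien.
black_all, white_all = 0.526, 0.257   # all capital cases
black_bd, white_bd = 0.60, 0.213      # cases with a black defendant

print(f"All cases: {black_all / white_all:.2f}x")            # ~2.05x
print(f"Black-defendant cases: {black_bd / white_bd:.2f}x")  # ~2.82x

# The published 2.48x is the disparity that remains after statistically
# controlling for common race-neutral justifications; the ratios above
# are unadjusted.
```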

A laboratory experiment by Norton and Sommers (2007) illustrates the flexibility with which people can rationalize racially discriminatory decisions. Participants (college students, law students, and attorneys) were asked to assume the role of prosecutor in a criminal case with a black defendant. They were told they had one peremptory challenge left, and to choose between two prospective jurors—a journalist who had investigated police misconduct and an advertising executive who expressed skepticism about statistical evidence to be used by the prosecution. For half the participants, the journalist was said to be African-American and the advertiser white, while for the remainder of the participants the races were reversed. The black juror candidate was challenged 63% of the time. When participants were asked why they struck the person they did, only 7% mentioned race, while 96% mentioned either the journalist’s investigation of police misconduct or the ad man’s skepticism about statistics. More importantly, both justifications were more likely to be cited as critical when they were associated with the black prospective juror than with the white prospective juror.

Today’s news reports suggest that even the more conservative Supremes were sympathetic to the defense’s arguments in Foster v. Chatman. However, the Court could decide the case very narrowly by simply overturning Foster’s conviction. It would be more interesting if their decision were to establish some new principle to minimize the abuse of peremptory challenges. It’s unlikely that these nine justices will establish a minority “quota” against which the fairness of juries can be assessed. However, an argument could be made for severely limiting peremptory challenges, or dispensing with them altogether, on the grounds that they merely provide opportunities for attorneys to express their conscious or implicit biases. If they have a legitimate reason for challenging a juror, let them present it to the judge for evaluation. Otherwise, let the juror be seated.

A beneficial side effect of eliminating peremptory challenges would be to put out of business those expensive “scientific” jury consultants who help lawyers choose a “friendly” jury. To the extent that they are actually helpful, this is yet another advantage possessed by wealthy defendants.

If the Supremes fail to eliminate peremptory challenges, this case also has implications for the fairness of the death penalty, since capital juries are assembled through the same strike process that was abused here.

You may also be interested in reading:

Outrage

A Theory in Search of Evidence