Monthly Archives: December 2016

Did Ebola Influence the 2014 Elections?

Republicans did very well on Election Day 2014, gaining control of the Senate for the first time in eight years and increasing their majority in the House of Representatives. Most pundits attributed these results to low turnout by Democrats in a non-presidential election year and to President Obama’s poor approval ratings, due primarily to the disastrous rollout of the Affordable Care Act the previous year. But a recent paper by Alec Beall and two other psychologists at the University of British Columbia suggests that breaking news about the Ebola epidemic also played a significant role in the election outcome.

Their paper contains two studies, both of which are interrupted time series designs. In this design, data that are routinely collected are examined to see if they change after a specific event. In the first study, they analyzed the aggregate results of all polls conducted between September 1 and November 1, 2014, that asked respondents whether they intended to vote for a Democrat or a Republican in their upcoming House election. The “interruption” occurred when the Centers for Disease Control and Prevention announced the first Ebola case in the U. S. on September 30. The research question was whether the poll results changed from before to after that date.

The above results show support for the Republican candidate minus support for the Democratic candidate in the month and the week before and after the Ebola story broke. In both cases, the temporal trends were significantly different from before to after September 30. The before and after lines had different slopes, and the shift was in favor of the Republican candidates. The authors also collected data from Google on the daily search volume for the term “Ebola,” and found that it too was positively related to Republican voting intentions.
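For readers curious about the mechanics, here's a minimal sketch of the interrupted time series logic: fit a separate trend line before and after the interruption and compare the slopes. The polling margins below are synthetic numbers invented for illustration, not the paper's actual data.

```python
import random

def ols_slope_intercept(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic daily "Republican minus Democratic" polling margins.
# Days 0-29 precede the September 30 announcement (flat trend);
# days 30-59 follow it (upward trend), plus random noise.
random.seed(1)
before = [(d, 0.5 + random.gauss(0, 0.3)) for d in range(30)]
after = [(d, 0.5 + 0.08 * (d - 30) + random.gauss(0, 0.3))
         for d in range(30, 60)]

_, slope_before = ols_slope_intercept(*zip(*before))
_, slope_after = ols_slope_intercept(*zip(*after))

print(f"slope before: {slope_before:+.3f} points/day")
print(f"slope after:  {slope_after:+.3f} points/day")
```

A significant difference between the two slopes, with the post-interruption line tilting toward the Republicans, is the pattern the authors report.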

Beall and his colleagues examined two possible alternative explanations—concern about terrorism and the economy. They measured daily search volume for the term “ISIS,” and checked the Dow Jones Industrial Average, which was dropping at the time. Interest in ISIS was (surprisingly) negatively related to Republican voting intentions, and the stock market had no significant effect.

In their second study, the authors looked at the 34 Senate races. They computed 34 state-specific polling averages by subtracting Democratic voting intentions from Republican intentions. Then they subtracted the September results from the October results. Thus, a higher number would indicate a shift toward the Republican candidate. The aggregate results showed a significant increase in Republican voting intentions after September 30.
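The second study's shift score is simple arithmetic. Here's a sketch with hypothetical state margins (the state names and numbers are made up for illustration):

```python
# Hypothetical polling averages: Republican minus Democratic margin,
# in percentage points, for four invented state races.
september = {"KS": 4.0, "CO": -1.0, "NC": 0.5, "IA": 1.0}
october = {"KS": 6.5, "CO": -0.5, "NC": 2.0, "IA": 3.0}

# October minus September: a positive shift means movement
# toward the Republican candidate.
shift = {state: october[state] - september[state] for state in september}
mean_shift = sum(shift.values()) / len(shift)

print(shift)
print(f"mean shift toward Republicans: {mean_shift:+.2f} points")
```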

However, not all states shifted in the same direction. Using Cook’s Partisan Voter Index, they determined whether each state had voted more for Republicans or Democrats in recent years. Then they analyzed the data separately for “red” and “blue” states. The results are shown below.

The changes were in the direction of the state’s dominant political party. In the red states, the Republican candidate did better after September 30. In the blue states, the Ebola scare seemed to help the Democrat, although the effect was smaller. This could also be interpreted as a shift toward the favorite, since candidates who were leading before September 30 tended to do even better after that date.

This study is part of a small but growing body of research showing that external threats that frighten the population tend to work to the advantage of conservative political candidates. In a previous post, I reported on a British study which indicated that the 2005 London bombings increased prejudice toward Muslims. More to the point is a 2004 study in which reminding participants of the 9/11 terrorist attack on the World Trade Center increased support for President George W. Bush in his campaign against John Kerry. These studies are consistent with older research suggesting that social threats are associated with an increase in authoritarianism in the U. S. population. Authoritarian attitudes are characterized by obedience to authority, hostility toward minority groups and a high degree of conformity to social norms.

Surprisingly, Beall and his colleagues did not mention terror management theory as a way of understanding their results. According to this theory, human awareness of the inevitability of death—called mortality salience—creates existential terror and the need to manage this terror. One way people manage terror is through defensive efforts to validate their own cultural world views—those attitudes that give their lives meaning and purpose. Previous research suggests that mortality salience results primarily in conservative shifts in attitudes, including support for harsher punishment for moral transgressors, increased attachment to charismatic leaders, and increases in religiosity and patriotism. (A charismatic leader is one whose influence depends on citizen identification with the leader or the nation-state, as in “Make America great again.”) The Bush v. Kerry study mentioned in the preceding paragraph was intended to be a test of terror management theory.

One of the effects of saturation coverage of the Ebola epidemic was to remind people of the possibility of their own death and that of loved ones. The results of the 2014 House elections are consistent with a terror management interpretation. The Senate results do not contradict the theory, since there was an overall shift in favor of Republican candidates, but they add an additional detail. In states that usually voted Democratic, the Ebola scare increased support for Democrats. If mortality salience causes people to reaffirm their cultural world views, this could have produced a shift toward liberalism in states in which the majority of citizens held progressive attitudes.

Research findings such as these suggest the possibility that political parties and the corporate media might strategically exaggerate threats in order to influence the outcomes of elections. Willer found that government-issued terror alerts between 2001 and 2004 were associated with higher approval ratings of President Bush. Tom Ridge, Secretary of Homeland Security at the time, later admitted that he was pressured by the White House to increase the threat level before the 2004 election. Since that time, it has become routine for Republicans to emphasize threats to the public’s well-being more than Democrats, and evidence from the 2016 presidential debates suggests that the media gave greater attention to Republican issues.

Republicans made Ebola an issue in the 2014 election, claiming that President Obama was failing to adequately protect public health and arguing that he should close the borders and not allow Americans suffering from the virus back into the country for treatment. In retrospect, news coverage of the threat of Ebola appears to have created unnecessary panic. Analysis of the motives of the media decision makers is complicated by the knowledge that they also exaggerate threats because they believe that increasing public fear leads to higher ratings. Media Matters for America presented data showing that coverage of Ebola plummeted immediately after the 2014 election was over (see below). However, I know of no “smoking gun” showing that the corporate media deliberately created panic in order to help Republican candidates.

You may also be interested in reading:

Are Terrorists Getting What They Want?

Framing the Debates

Trump’s Trump Card

Climate Spirals

Here’s one of those animated charts that helps us to see things that might otherwise be difficult to visualize. It’s an animated version of the “hockey stick” graph, showing the increase in global temperature since 1850. This animation is under copyright by British climate scientist Dr. Ed Hawkins, and he grants permission to reproduce it provided he is given proper credit.

The year 1850 is chosen as the starting point since it was the approximate beginning of the Industrial Revolution. A change of 1.5 degrees Celsius equals 2.7 degrees Fahrenheit. Notice how the 2016 line stands apart from recent years, particularly during the first half of the year, when the temperature reached 1.5 degrees Celsius above baseline for the first time. In less than a week, 2016 will officially become the hottest year on record. Here’s how it compares to recent years.

When the lines in this spiral stop overlapping one another and begin to diverge noticeably, that is an indication that global temperature is increasing exponentially, rather than at a linear rate, as had previously been assumed. Exponential growth can lead to rapid change in a short period of time.

The primary reason for these temperature increases is the accumulation of greenhouse gases in the upper atmosphere. The most important greenhouse gas is carbon dioxide, and this second Hawkins animation shows its accumulation in parts per million.

This March, carbon dioxide reached 400 ppm for the first time, and it will continue to increase. 350 ppm is considered a “safe” level of carbon dioxide.

If humanity wishes to preserve a planet similar to that on which civilization developed and to which life on Earth is adapted, paleoclimate evidence and ongoing climate change suggest that CO2 will need to be reduced . . . to at most 350 ppm.

Dr. James Hansen

Although the world’s CO2 emissions have stabilized in recent years, that’s not the same as dropping to zero. CO2 continues to pile up in the atmosphere. The only way CO2 can be reduced is to stop using fossil fuels.

The Trump administration has threatened to eliminate NASA’s roughly $2 billion per year budget for Earth science, which is the world’s major source of data on climate change, including the information in these charts. Maybe the theory is that what we don’t know can’t hurt us.

You may also be interested in reading:

The Cost of Climate Inaction

Community Solar

Here’s a bit of holiday cheer: a video about a community solar network that is bringing electricity and WiFi to an off-grid rural village in Bangladesh.

The question raised by this bottom-up approach is whether enough people can be helped to make a difference. The video says the sponsor, ME SOLshare, Ltd., plans to bring installations like this one to 1 million people by 2021. (The current population of Bangladesh is 170 million.)

You may also be interested in reading:

The Way of Ta’u

The Cost of Climate Inaction

The Stroking Community, Part 2

Please read The Stroking Community, Part 1 before continuing.

The Grading Leniency Assumption

The evidence for the bias assumption questions the validity of SETs, but it does not, by itself, explain grade inflation. The grading leniency assumption adds that college teachers try to obtain favorable evaluations by assigning higher grades and by reducing course workloads. Stroebe cites three surveys that show that a majority of faculty believe that higher grades and lower workloads result in higher SETs. One survey published in 1980 found that 38% of faculty respondents admitted lowering the difficulty level of their courses as a result of SETs. (Unfortunately, I’m not aware of any more recent survey that asked this question.)

It should be noted, of course, that faculty may not be aware of having changed their behavior, or they may think they have done it for other reasons. One common reason given for watering down courses is that contemporary students are unprepared for college-level work. (One former colleague, for example, said, “You have to meet students at a place where they feel comfortable.” Unfortunately, that “place” gets closer to the downtown bars with each passing year.)

Indirect evidence for the grading leniency assumption comes from student behavior. Greenwald and Gillmore note that students would ordinarily be expected to report working harder in courses in which they expect to get a higher grade. However, in a study of over 500 classes, students reported doing less work in those courses in which they expected to get a higher grade, a finding which is readily explained by the grading leniency assumption.

Finally, there are studies of the effects of grades on future course enrollment. Some universities publish average grades by course and instructor at the university’s website, and it is possible to determine through computer signatures whether students have accessed this information. In two studies, consulting past grading data predicted future choices of courses and sections, with the sections with higher grades being preferred by about 2 to 1. In one of these studies, this preference for easier courses was greater among low ability students than high ability students.

It should be noted that lowering the students’ workload not only improves faculty evaluations, it also lowers the faculty’s own workload. There are fewer of those time-consuming term papers and essay exams to grade. Instead, teachers can give the multiple-choice exams that are considerately provided free of charge by the textbook publisher.

The faculty members with the most to lose in the current environment are those who attempt to maintain high academic standards and are punished for their integrity with low student evaluations. If they don’t have tenure, they could be fired. And even if they do have tenure, they are likely to be under considerable pressure from administrators to improve their evaluations.

Grade Inflation

Here’s another chart to remind you of how bad grade inflation has gotten. It shows the change over time in the frequency of letter grades.

Grade inflation is an unintended consequence of universities’ reliance on student evaluations. Can it be considered a good thing? Kohn proposes that grades serve three functions: sorting, motivation and feedback. If grades gradually lose their meaning, they become less useful as sorting criteria for employers and graduate schools and less useful as feedback to students. The students most harmed are the hard-working, high ability students who would have gotten A’s in the absence of grade inflation. They are no longer able to distinguish themselves from their more mediocre colleagues. Leading average students to believe they are doing better than they actually are could lead to unpleasant shocks after they graduate.

The motivational function of grading assumes that the rewards and punishments provided by grades induce students to work harder and learn more. But the picture that emerges from the course selection studies is one of students attempting to obtain higher grades without working for them. Stroebe suggests that grade inflation is most likely to demotivate high ability students, who might decide that studying is not worth the effort if they wind up with the same grades as their less deserving classmates.

It’s hard to see how grade inflation can be reversed. The Wellesley solution of mandating lower grades holds some promise, but only if it is adopted by almost all similar universities at about the same time. If some universities attempt to control grade inflation while others do not, their students will be at a competitive disadvantage when applying for jobs or to graduate school. Princeton initiated a similar program, but abandoned it after peer colleges failed to follow suit. There was some concern that controlling grade inflation might discourage students from enrolling at Princeton.

A shorter-term solution is suggested by Greenwald and Gillmore. They propose that SETs be statistically corrected for the average grade in the class. Although their method is complicated, the gist of it is that if the distribution of grades in a class is lenient, SETs are reduced. If the distribution is strict, the instructor receives a bonus. Although this makes good sense to me, it’s hard to imagine a university faculty agreeing to it.
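Greenwald and Gillmore's actual correction is more elaborate than I can reproduce here, but the gist can be sketched as follows. The penalty weight and campus mean below are made-up numbers, not values from their method:

```python
def adjusted_set(raw_set, class_mean_grade, campus_mean_grade=3.0,
                 weight=0.5):
    """Illustrative leniency correction (not the authors' actual formula):
    deduct from the raw evaluation in proportion to how far the class's
    average grade sits above the campus average; a class graded more
    strictly than average earns the instructor a bonus instead."""
    return raw_set - weight * (class_mean_grade - campus_mean_grade)

# A lenient grader (class GPA 3.6) has the rating reduced;
# a strict grader (class GPA 2.6) has it boosted.
print(adjusted_set(4.5, 3.6))
print(adjusted_set(4.0, 2.6))
```

The design choice is transparent: the adjustment removes the incentive to buy high ratings with high grades, which is exactly why faculty might resist it.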

The implications of this research are depressing. Students and professors are rewarding one another for working less hard. They are caught in a social trap in which short-term positive reinforcement serves to maintain behavior that has long-term negative consequences for themselves, the university and the society. Meanwhile, colleges and universities, already under financial stress, are decaying from the inside out because they are failing to meet their most basic obligation—that of helping and requiring students to learn.

You may also be interested in reading:

The Stroking Community, Part 1

Asian-American Achievement as a Self-Fulfilling Prophecy

Racial Profiling in Preschool

The Stroking Community, Part 1

Grade inflation has been a fact of life at American universities for several decades. College grades are measured on a 4-point scale (A = 4, B = 3, C = 2, D = 1, F = 0). Since the 1980s, grades at a large sample of colleges and universities have increased on average by .10 to .15 points per decade. The overall grade point average now stands at about 3.15.

This would seem to imply that students have either gotten smarter or are working harder. However, verbal SAT scores of incoming students have declined sharply during this period, while math scores have remained relatively stable. There has also been a decline in the amount of time students report that they spend studying. On average, college students now claim to study only 12 to 14 hours per week. Assuming 16 hours of class time, that amounts to a work week of less than 30 hours.

More disturbing is the research of Richard Arum and Josipa Roksa. They administered the Collegiate Learning Assessment, a cognitive test measuring critical thinking, complex reasoning and writing, to 2300 students at 24 universities in their first semester and at the end of their sophomore year. They found only limited improvement (.18 of a standard deviation, on average), and no improvement at all among 45% of the students. Of the behaviors they measured, only time spent studying was associated with cognitive gains.

Beginning in the 1980s, colleges and universities entered what is sometimes called the “student-as-consumer” era. Almost all of them began routinely administering student evaluations of teaching (SETs), and basing decisions about tenure and promotion of faculty members in part on their SETs. Social psychologist Wolfgang Stroebe, in an article entitled “Why Good Teaching Evaluations May Reward Bad Teaching,” argues that SETs are responsible for some of the grade inflation. Stroebe has organized the research on SETs around two hypotheses which he calls the bias assumption and the grading leniency assumption.

The Bias Assumption

It has long been known that higher student grades are associated with better evaluations, both within and between classes. That is, within a class, the students with the highest grades give the instructor the most favorable evaluations. When you compare different classes, those with the highest average grades also have the highest average SETs. A recent meta-analysis found that grades account for about 10% of the variability in teaching evaluations.

Since these data are correlational, their meaning is ambiguous. They were initially interpreted to mean that teaching effectiveness influences both grades and evaluations. If so, SETs are a valid measure of instructional quality. Stroebe’s bias assumption states that students give favorable evaluations in appreciation for having less work to do and higher grades, and that this is an important source of bias which undermines the validity of SETs.

Over the years, this debate has been a source of animosity among college faculty. It is probably the case that SET believers receive more favorable evaluations than SET skeptics. SET believers sometimes accuse SET skeptics of making excuses for their poor student evaluations, while skeptics suggest that believers are in denial about the possibility that their high ratings are obtained in part by ingratiating themselves with their students.

The obvious—but unethical—way to test the bias hypothesis is to manipulate students’ grades in order to see what effect this has on SETs. Back in the days before ethical review of research with human subjects became routine, there were a few studies that temporarily gave students false feedback about their grades. They found that grades did affect evaluations. For example, in one study, students in two large sections of General Psychology taught by the same instructor were graded on slightly different scales. The instructor received better evaluations in the section with the more generous grading scale.

There are several other research findings that, while correlational, are consistent with the bias hypothesis.

  • In the early 2000s, Wellesley College, concerned about grade inflation, instituted a policy requiring that average grades in introductory courses be no higher than 3.33. This resulted in an immediate decline in grades. Average SETs declined significantly in the affected courses and departments.
  • Greenwald and Gillmore found that the grade a student expected affected not only ratings of teaching effectiveness, but also had significant effects on logically irrelevant factors such as ratings of the instructor’s handwriting, the audibility of his or her voice, and the quality of the classroom. This suggests that there is a general halo effect surrounding lenient instructors.
  • The website Rate My Professors (RMP) contains 15 million ratings of 1.4 million professors at 7000 colleges. Professors are rated on easiness, helpfulness, clarity, “hotness” and overall quality. Easiness—a question that is seldom asked on institutional evaluations—is defined on the website as the ability to get a high grade without working hard. RMP ratings closely match the institutional SET scores of the same professors. The two dimensions most highly correlated with overall quality are easiness (r = .62) and hotness (r = .64). Obviously, the professor’s physical attractiveness is another threat to the validity of student evaluations that is deserving of study.

In my judgment, the best test of whether teachers with high evaluations are really better teachers are those studies which examine the effects of SETs in one course on performance in a followup course. For example, do students who give their Calculus I instructor a high rating do better in a Calculus II course taught by a different instructor? Stroebe found five studies using this research design. Three of them reported that those students who gave their instructors high SETs in the first course did more poorly in the followup course, one of them found no difference, and the fifth reported a mixture of negative and null effects depending on the item.

Merely finding no relationship between SETs in Course 1 and grades in Course 2 raises questions about the validity of SETs. The negative relationship found in the majority of these studies has a more radical implication. It implies that students learn less from those teachers to whom they give high evaluations. One of these studies found, however, that ratings of grading strictness in the first course were positively related to performance in the second.

It’s important to note that Stroebe does not claim that SETs are totally invalid as measures of teaching effectiveness, but only that they are strongly biased. Poor student evaluations can serve as a warning that faculty are not meeting their obligations. One recent study found a non-linear relationship between SETs and its measure of student learning. The students learned the most from professors whose SETs were near the middle of the distribution. They learned the least from those whose evaluations were the lowest and the highest.

There are a number of possible explanations for the bias hypothesis. One is simple reciprocity. When a professor does something nice for a student, the student returns the favor with a positive evaluation. SETs give students who are unhappy with their grades an opportunity to exact their revenge. A second explanation for the negative ratings given by students with lower grades is attributional bias. The self-serving attribution bias predicts that we maintain our self-esteem by taking personal credit for our successful behaviors but blaming our failures on external causes, such as poor teaching or unfair grading by the professor.

Please continue reading Part 2.

You may also be interested in reading:

The Stroking Community, Part 2

Asian-American Achievement as a Self-Fulfilling Prophecy

Racial Profiling in Preschool

Reforms as Experiments

It is one of the happy incidents of the federal system that a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.

Justice Louis Brandeis, New State Ice Co. v. Liebmann

It looks as though we are about to once again embark on a national program of deregulation, tax cuts for the wealthy, and austerity for everyone else. So how has that worked out so far? Economist Robert Reich explains.

Social psychologist Donald Campbell, in a 1969 paper entitled “Reforms as Experiments,” argued that we ought to try out various social policies, carefully evaluating the results, and repeat only those that are successful. Of course, Reich’s comparison is not really an experiment. California, Kansas and Texas were not randomly assigned to conditions of austerity or public investment, and even if they had been, there were many pre-existing differences between the three states. Nevertheless, prevailing evidence argues strongly against conservative economics and in favor of public investment as a long-term strategy.

You may also be interested in reading:

The Cost of Climate Inaction

The Invisible Hand

Bullshit: A Footnote

A year ago, I wrote a short piece entitled “Bullshit,” about research using Gordon Pennycook’s Bullshit Receptivity Scale (BSR). The BSR measures willingness to see as profound ten syntactically correct but meaningless statements, such as “Imagination is inside exponential space time events.” The scale also includes ten mundane but meaningful statements (“A wet person does not fear the rain”) to correct for the tendency to rate every statement as profound. Pennycook defines bullshit sensitivity as the difference between the ratings of the ten pseudo-profound bullshit statements and the ten mundane statements.
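The two difference scores are easy to compute. Here's a sketch using hypothetical ratings for a single respondent (the numbers are invented; the real scale uses 5-point profundity ratings of twenty specific statements):

```python
def mean(ratings):
    return sum(ratings) / len(ratings)

def bullshit_receptivity(pseudo_profound):
    """Mean profundity rating of the ten pseudo-profound items."""
    return mean(pseudo_profound)

def bullshit_sensitivity(pseudo_profound, mundane):
    """The difference score described above: ratings of the
    pseudo-profound items relative to ratings of the mundane items,
    which corrects for rating everything as profound."""
    return mean(pseudo_profound) - mean(mundane)

# Hypothetical 1-5 ratings for one respondent, ten items of each type.
pseudo = [4, 5, 3, 4, 4, 5, 3, 4, 4, 4]   # pseudo-profound statements
plain = [2, 3, 2, 2, 3, 2, 2, 3, 2, 2]    # mundane statements
print(bullshit_receptivity(pseudo))
print(bullshit_sensitivity(pseudo, plain))
```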

In January 2016, two German psychologists, Stefan Pfattheicher and Simon Schindler, asked 196 American volunteers recruited on the internet to complete the BSR. Participants also rated, on 5-point scales, their favorability toward six American presidential candidates: Ted Cruz, Marco Rubio, Donald Trump, Hillary Clinton, Martin O’Malley and Bernie Sanders. Finally, they rated themselves on a 7-point scale of liberalism-conservatism.

Above are the correlations between scores on the BSR and the political attitude measures. The darker yellow bars are the most important, since they are the correlations with bullshit sensitivity, which control for agreement with the mundane statements. Favorable ratings of the three Republican candidates and of conservatism were all positively related to bullshit receptivity. In other words, conservatives appear to be more easily impressed by bullshit. Democratic partisans, on the other hand, were not as susceptible to bullshit.

These are correlations. They do not mean that conservatism causes bullshit receptivity, or vice versa. However, they do suggest that conservatives may be more likely to accept statements as profound without thinking carefully about what they actually mean.

The Need For Cognition Scale measures people’s tendency to engage in and enjoy critical thinking. (One of the items reads, “I only think as hard as I have to.”) In an interview, social psychologist John Jost reported the results of a not-yet-published review of 40 studies in which 25 of them found a significant tendency for conservatives to be lower in need for cognition.

To be fair, I should report that Dan Kahan, in a highly publicized study, found no differences between liberals and conservatives on the Cognitive Reflection Test, a measure of a person’s ability to resist seemingly obvious, but wrong, conclusions. (“If it takes 5 machines 5 minutes to make 5 widgets, how long does it take 100 machines to make 100 widgets?” The answer is not 100 minutes.) However, Jost claims that 11 other studies showed that liberals outperform conservatives on the Cognitive Reflection Test.

These studies may be relevant to current concerns about Americans’ susceptibility to fake news and the possibility that we are living in a “post-truth” era. The Oxford Dictionary has chosen post-truth, defined as a condition “in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief,” as its 2016 word of the year. Last week, a man blasted a Washington pizza shop with an assault rifle after reading a fake news story that the shop was the home of a child sex ring being run by Hillary Clinton.

The editors of BuzzFeed News analyzed 1,145 stories forwarded through Facebook but originating in three left-wing (Addicting Information, Occupy Democrats and The Other 98%), three right-wing (Eagle Rising, Freedom Daily and Right Wing News), and three mainstream (CNN, ABC and Politico) sources of political news. The fact that these stories were forwarded suggests that the person who did so was impressed by them. Two people independently rated each story as mostly true, mostly false, or a mixture of true and false statements. Differences of opinion were resolved by a third reader. The results showed more fake news at the right-wing sites.

The study is flawed. There is no assurance that the nine chosen sites are representative of all sites within the three categories, and the authors don’t say how they knew a story was true or false. Nevertheless, convergent evidence from different sources seems to point to the same conclusion: conservatives are more willing consumers of bullshit, including fake news stories.

Most articles about fake news end with the recommendation that mainstream journalists be more aggressive in identifying false claims made by politicians and pundits. However, surveys show that conservatives are more likely than liberals to distrust mainstream news sources. Mr. Trump may have neutralized this approach by telling his followers that the mainstream media peddle bullshit—which, in fact, they sometimes do.

You may also be interested in reading:

Bullshit

Framing the Debates

Guarding the Hen House

What Does a Welfare Recipient Look Like?

Economic inequality in the United States is at record levels. In surveys, Americans say they would prefer a more equal distribution of wealth. However, the majority consistently votes against public assistance programs that redistribute wealth. Political scientist Martin Gilens, in his 1999 book Why Americans Hate Welfare, attributes this primarily to racial prejudice. Gilens examined the photographs that accompanied stories about poverty in the news magazines Time, Newsweek and U. S. News. African-Americans accounted for 62% of the poor people shown in the photos. On the ABC, CBS and NBC nightly news programs, 65% of poor people shown in reports on poverty were Black. In reality, as of 2010, 32% of welfare recipients were Black, 32% were White and 30% were Hispanic.

Gilens also did an experiment in which a “welfare mother” was identified as either White or Black. Participants who read about a Black welfare recipient were more opposed to welfare than those reading of a White recipient. The implication of Gilens’ research is that White Americans’ disdain for welfare is explained in part by racial prejudice. Americans hate welfare because they overestimate the percentage of recipients who are African-Americans. However, there is a missing link in this analysis. Gilens implies, but does not show, that Americans are influenced by these misleading media reports—that is, that the average American’s mental image of a welfare recipient is a Black person.

A research team headed by Jazmin Brown-Iannuzzi of the University of Kentucky sought to measure their participants’ mental representations of a typical welfare recipient using an unusual technique. I’m not sure I completely understand it without seeing a demonstration, but the image generation phase of their study goes something like this: First, they constructed a computer-generated “base face,” a composite of a Black man, a Black woman, a White man and a White woman. Then, on each of 400 trials, the computer introduced noise that altered the base image in two opposite directions. The participants were asked to choose which of these two altered faces most resembled a welfare recipient and which one resembled a non-welfare recipient. (Race was never mentioned.) The computer then generated a composite image of a typical welfare recipient and a typical non-welfare recipient, based on all the responses of all the participants.
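The core of this reverse-correlation technique can be sketched in a few lines. Everything below is a toy illustration, assuming a simple image-as-array representation: the array sizes, the random chooser, and all variable names are mine, not the authors’.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the computer-generated "base face" (a real study would
# use an actual grayscale face image; 64x64 is an arbitrary size here).
base_face = rng.normal(size=(64, 64))

chosen_noise = []
for _ in range(400):
    noise = rng.normal(size=(64, 64))
    # On each trial the participant sees base_face + noise and
    # base_face - noise, and picks the one that looks more like a
    # welfare recipient.  Here a random chooser stands in for a person.
    pick = rng.choice([1, -1])
    chosen_noise.append(pick * noise)

# The composite "classification image" is the base face plus the average
# of all the chosen noise patterns: features consistently selected across
# trials survive the averaging, while unselected noise cancels out.
classification_image = base_face + np.mean(chosen_noise, axis=0)
```

With a real participant instead of the random chooser, systematic preferences (say, for darker skin tone in the “welfare recipient” choice) would accumulate in `classification_image`, which is what the composite faces in the study visualize.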

This was done twice, with 118 college students participating in Study 1 and 238 internet volunteers in Study 2. The composite faces from the two studies are similar and are shown below. Although the composite faces of the welfare recipients look like African-Americans, I presume this was less apparent to the participants as they made their 400 decisions.

During the second phase of these two studies, 90 different participants were shown one of the composite faces and were asked to rate the person on a number of different dimensions. No mention of welfare was made to these participants. The raters judged the welfare recipient composites as more likely to be African-American (rather than White) than the non-welfare recipient composites. The welfare recipients were also rated more negatively on 11 different traits, including lazier, more incompetent, more hostile, less likeable and less human(!). These studies fill in the missing link in Gilens’ research. The average person’s mental image of a typical welfare recipient is of an African-American.

Finally, Brown-Iannuzzi and her colleagues did a third study, an experiment in which 229 internet volunteers were shown one of the composite images—either a welfare recipient or a non-welfare recipient—and asked a number of questions. The critical items were whether they would support giving this person food stamps and cash assistance. The other questions repeated some of the ratings used in the previous studies. Here are the results. This study replicates the Gilens experiment mentioned in the second paragraph.

In summary, the first two studies showed that when asked to imagine a typical welfare recipient, people generate a mental image of an African-American, while their mental image of a non-welfare recipient is that of a White person. The third study demonstrated that when shown these composite images, other participants were less supportive of giving welfare to the typical welfare-recipient composites than to the non-welfare-recipient composites.

The authors also did a mediational analysis to see which variables mediated between the composite images and the decision to support or not support giving welfare to that person. The data were consistent with the following causal chain (see below): The image leads first to an inference that the person is either Black or White. This, in turn, leads to a judgment of how deserving the person is. (Black people are judged less deserving.) Finally, the judgment of deservingness leads to the decision of whether to support giving welfare to the person.
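To make that causal chain concrete, here is a toy simulation of it. The coefficients and variable names are invented; only the direction of each link (welfare image → perceived Blackness → lower judged deservingness → lower support) follows the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # simulated participants

# 1 = shown the "welfare recipient" composite, 0 = the non-recipient one.
welfare_image = rng.integers(0, 2, n)

# Each arrow in the chain is a linear link plus noise (coefficients made up).
perceived_black = 0.8 * welfare_image + rng.normal(0, 0.5, n)
deservingness = -0.7 * perceived_black + rng.normal(0, 0.5, n)
support = 0.9 * deservingness + rng.normal(0, 0.5, n)

# The indirect path shows up as lower average support for welfare when
# the "welfare recipient" composite was shown.
gap = support[welfare_image == 1].mean() - support[welfare_image == 0].mean()
```

In a real mediational analysis the question is whether the image→support effect shrinks once the intermediate judgments are statistically controlled; the simulation just shows how an effect can flow entirely through the two mediators.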

We are going through a period of extreme racialization of politics. Americans’ racial attitudes influence their opinions about other political issues that may or may not be related to race. In some cases, survey participants’ racial attitudes determine their attitude toward a policy merely because they believe President Obama does or does not support the policy. Not only do racial attitudes appear to have been the strongest predictor of support for Donald Trump, they mattered more in the election of Trump than in that of Obama.

Nowhere is racialization more evident than in attitudes toward financial relief for the poor. People support income redistribution in principle, but they overestimate the percentage of poor people who are Black. As a result, their racial prejudice discourages them from supporting income redistribution policies.

You may also be interested in reading:

Old-Fashioned Racism

The Singer, Not the Song

Racialization and “Student-Athletes”

“Here I Am. Do You See Me?”

Maybe because of the continuing increase in economic inequality in the United States, social psychologists are taking a greater interest in social class differences in behavior. I have previously written about studies showing that upper class people are less likely to help a person in need than lower class individuals, and are more likely to engage in unethical behavior—behavior that is potentially harmful to others.

A new article by Pia Dietze and Eric Knowles of New York University suggests an explanation for these differences: Upper class individuals regard others as less motivationally relevant—that is, less “potentially rewarding, threatening or otherwise worth attending to”—than lower class members do. If that is the case, then members of the upper classes should pay less attention to other people they meet in public places.

In the first of three studies, Dietze and Knowles asked 61 college students to take a walk around the streets of Manhattan “testing” Google Glass, a device that fits over the right eye and records what the wearer is looking at. Six independent judges watched these videos and measured the participants’ social gazes—the number and duration of their looks at the people they passed. The students were asked to classify themselves as either poor, working class, middle class, upper-middle class or upper class. These five labels were treated as a 5-point continuous scale.

Results showed that the number of social gazes did not differ by social class, perhaps indicating that it is necessary to at least glance at passers-by to successfully navigate the sidewalk. However, as predicted, the higher the self-reported social class of the participants, the longer the time they spent looking at the people they passed.

Is this only because other people are less “motivationally relevant” to upper class participants? After reading this study, I thought about sociologist Erving Goffman’s concept of civil inattention. Goffman said that when we pass strangers, we glance at them briefly, but then quickly look away, in order to avoid the appearance of staring at them. Are upper class children more likely to have been taught that it’s not polite to stare? Fortunately, the other two studies the authors report don’t involve face-to-face interaction and are not subject to this alternative explanation.

In the second study, 158 participants were asked to look at several visually diverse street scenes while fitted with an eye-tracking device that recorded which part of the scene they looked at, and for how long. The authors recorded the time spent looking at both people and things (cars, buildings, etc.) in the environment. Time spent looking at things did not differ significantly by social class, but participants who classified themselves in the lower classes spent more time looking at people. This is illustrated in the chart below, which compares working class and upper-middle class participants. (Study 2a involved 41 New York City scenes, while Study 2b added an equal number of scenes from London and San Francisco.)

In the last study, 397 paid internet volunteers participated in a flicker task. On each trial, participants were shown two rapidly alternating slides consisting of pictures of a person’s face and five other objects. On some trials, the two slides were identical, but on others, one of the six pictures—either the person or one of the five things—was different. Participants were asked to press a key as quickly as possible indicating whether the slides were the same or different, and the computer measured how rapidly participants responded. It was expected that lower class participants would be better at detecting changes among the people, but not among the things. This hypothesis was confirmed.

Although the flicker task has no obvious relevance to everyday life, the fact that the lower class participants detected changes in the faces more rapidly than the upper class participants suggests that they were more likely to be looking at the faces, rather than some other part of the slide. The fact that the differences were in milliseconds—a millisecond is a thousandth of a second—suggests that this is an automatic response rather than one that is under conscious control.

The chart above is from an article by Michael Kraus and two colleagues summarizing research on class differences in behavior. The present studies deal with cognition. Lower class people’s cognition is said to be contextual because it is directed at the social environment, probably because their lives are controlled more by outside forces, such as bosses and government policies. Upper class people are more likely to be paying attention to themselves and their own thoughts. It is hypothesized that this explains the differences in prosocial (helpful) behavior among the lower classes vs. selfish behavior among the upper classes that I noted in the opening paragraph. It may also help to explain class differences in political party affiliation and voting behavior, as long as voters are not confused or misled about which policies the candidates actually favor.

You may also be interested in reading:

Class Act

Me First

Racial Profiling in Preschool

The Way of Ta’u

If human life is to survive on this planet, we must switch to 100% renewable energy very soon—yesterday, if possible. There is now a demonstration project that can help us to visualize this possibility.

Ta’u is the largest island in the Manu’a Group of American Samoa, a U. S. territory in the South Pacific Ocean. It’s about 17 square miles and has 790 inhabitants. It previously generated electricity by shipping in 109,500 gallons of diesel fuel every year. Not only was this expensive, but there were occasional interruptions of the supply due to rough seas.

Ta’u now runs on nearly 100% solar energy due to the installation of a 1.4 megawatt microgrid consisting of 5,328 SolarCity solar panels and 60 Tesla Powerpacks, which are batteries for energy storage. This not only gives them enough energy to supply the island’s needs 24/7, it provides enough storage capacity to last for three days without sunlight—a rare occurrence—and recharges in seven hours. Here’s a promotional video from Tesla advertising the project.

The $8 million project was funded by the American Samoa Economic Development Authority, the Department of the Interior and the Environmental Protection Agency. Solar power is almost free once the system is installed. (Three full-time workers are required for plant maintenance.) Unfortunately, I couldn’t find an estimate of the current yearly cost of their diesel fuel, so I can’t tell how long it will take to recover its cost.
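For what it’s worth, a back-of-the-envelope payback estimate is easy to set up once a diesel price is assumed. The price per gallon below is my assumption, not a figure from any source; diesel delivered by ship to a remote island typically costs considerably more, which would shorten the payback.

```python
# Back-of-the-envelope payback estimate (illustrative only: the diesel
# price is an assumed value, not reported anywhere in the article).
project_cost = 8_000_000          # dollars (from the article)
gallons_per_year = 109_500        # diesel previously shipped in annually
assumed_price_per_gallon = 3.00   # dollars per gallon; my assumption

annual_fuel_savings = gallons_per_year * assumed_price_per_gallon
payback_years = project_cost / annual_fuel_savings
```

At the assumed $3.00 per gallon, the annual savings come to $328,500 and the project pays for itself in roughly 24 years, ignoring maintenance wages on one side and delivery costs and supply interruptions on the other.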

Obviously, transforming this small demonstration project to a larger power grid poses all kinds of infrastructure problems, but they are problems that are soluble in principle, and at a much lower cost than the fossil fuel companies would lead us to believe. We have no choice but to do it.

You may also be interested in reading:

The Cost of Climate Inaction