The New Yorker
May 16, 2014  
I Don’t Want to Be Right
By _Maria Konnikova_ 
(http://www.newyorker.com/contributors/maria-konnikova) 

Last month, Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple question: Could various pro-vaccination campaigns change parental attitudes toward vaccines? Each household received one of four messages: a leaflet from the Centers for Disease Control and Prevention stating that there had been no evidence linking the measles, mumps, and rubella (M.M.R.) vaccine to autism; a leaflet from the Vaccine Information Statement on the dangers of the diseases that the M.M.R. vaccine prevents; photographs of children who had suffered from the diseases; and a dramatic story from the Centers for Disease Control and Prevention about an infant who almost died of measles. A control group did not receive any information at all. The goal was to test whether facts, science, emotions, or stories could make people change their minds.
The result was dramatic: a whole lot of nothing. None of the interventions worked. The first leaflet—focussed on a lack of evidence _connecting vaccines and autism_ (http://www.newyorker.com/online/blogs/elements/2013/07/jenny-mccarthys-dangerous-views.html)—seemed to reduce misperceptions about the link, but it did nothing to affect intentions to vaccinate. It even decreased intent among parents who held the most negative attitudes toward vaccines, a phenomenon known as the _backfire effect_ (http://journals.lww.com/lww-medicalcare/Abstract/2013/02000/The_Hazards_of_Correcting_Myths_About_Health_Care.2.aspx). The other two interventions fared even worse: the images of sick children increased the belief that vaccines cause autism, while the dramatic narrative somehow managed to increase beliefs about the dangers of vaccines. “It’s depressing,” Nyhan said. “We were definitely depressed,” he repeated, after a pause.
Nyhan’s interest in false beliefs dates back to early 2000, when he was a senior at Swarthmore. It was the middle of a messy Presidential campaign, and he was studying the intricacies of political science. “The 2000 campaign was something of a fact-free zone,” he said. Along with two classmates, Nyhan decided to try to create a forum dedicated to debunking political lies. The result was _Spinsanity_ (http://www.spinsanity.org/), a fact-checking site that presaged venues like _PolitiFact_ (http://www.politifact.com/) and the Annenberg Public Policy Center’s FactCheck.org. For four years, the trio plugged along. Their work was popular—it was syndicated by Salon and the Philadelphia Inquirer, and it led to a best-selling _book_ (http://www.amazon.com/All-Presidents-Spin-George-Media/dp/0743262514/)—but the errors persisted. And so Nyhan, who had already enrolled in a doctoral program in political science at Duke, left Spinsanity behind to focus on what he now sees as the more pressing issue: If factual correction is ineffective, how can you make people change their misperceptions? The 2014 vaccine study was part of a series of experiments designed to answer the question.
Until recently, attempts to correct false beliefs haven’t had much success. Stephan Lewandowsky, a psychologist at the University of Bristol whose _research into misinformation_ (http://www.ncbi.nlm.nih.gov/pubmed/15733198) began around the same time as Nyhan’s, conducted a _review_ (http://psi.sagepub.com/content/13/3/106.full) of the misperception literature through 2012. He found much speculation, but, apart from his own work and the studies that Nyhan was conducting, there was little empirical research. In the past few years, Nyhan has tried to address this gap by using real-life scenarios and news in his studies: the controversy surrounding weapons of mass destruction in Iraq, the questioning of Obama’s birth certificate, and anti-G.M.O. activism. Traditional work in this area has focussed on fictional stories told in laboratory settings, but Nyhan believes that looking at real debates is the best way to learn how persistently incorrect views of the world can be corrected.
One thing he learned early on is that not all errors are created equal. Not all false information goes on to become a false belief—that is, a more lasting state of incorrect knowledge—and not all false beliefs are difficult to correct. Take astronomy. If someone asked you to explain the relationship between the Earth and the sun, you might say something wrong: perhaps that the sun revolves around the Earth, rising in the east and setting in the west. A friend who understands astronomy may correct you. It’s no big deal; you simply change your belief.
But imagine living in the time of Galileo, when understandings of the Earth-sun relationship were completely different, and when that view was tied closely to ideas of the nature of the world, the self, and religion. What would happen if Galileo tried to correct your belief? The process isn’t nearly as simple. The crucial difference between then and now, of course, is the importance of the misperception. When there’s no immediate threat to our understanding of the world, we change our beliefs. It’s when that change contradicts something we’ve long held as important that problems occur.
In those scenarios, attempts at correction can indeed be tricky. In a _study_ (http://dl.acm.org/citation.cfm?id=2441895) from 2013, Kelly Garrett and Brian Weeks looked to see if political misinformation—specifically, details about who is and is not allowed to access your electronic health records—that was corrected immediately would be any less resilient than information that was allowed to go uncontested for a while. At first, it appeared as though the correction did cause some people to change their false beliefs. But, when the researchers took a closer look, they found that the only people who had changed their views were those who were ideologically predisposed to disbelieve the fact in question. If someone held a contrary attitude, the correction not only didn’t work—it made the subject more distrustful of the source. A climate-change _study_ (http://crx.sagepub.com/content/39/6/701) from 2012 found a similar effect. Strong partisanship affected how a story about climate change was processed, even if the story was apolitical in nature, such as an article about the possible health ramifications of a disease like West Nile virus, a potential side effect of climate change. If information doesn’t square with someone’s prior beliefs, he discards the beliefs if they’re weak and discards the information if the beliefs are strong.
Even when we think we’ve properly corrected a false belief, the original exposure often continues to _influence_ (http://www.ncbi.nlm.nih.gov/pubmed/21359617) our memory and thoughts. In a series of studies, Lewandowsky and his colleagues at the University of Western Australia asked university students to read the report of a liquor-store robbery that had ostensibly taken place in Australia’s Northern Territory. Everyone read the same report, but in some cases racial information about the perpetrators was included and in others it wasn’t. In one scenario, the students were led to believe that the suspects were Caucasian, and in another that they were Aboriginal. At the end of the report, the racial information either was or wasn’t retracted. Participants were then asked to take part in an unrelated computer task for half an hour. After that, they were asked a number of factual questions (“What sort of car was found abandoned?”) and inference questions (“Who do you think the attackers were?”). After the students answered all of the questions, they were given a scale to assess their racial attitudes toward Aboriginals.
Everyone’s memory worked correctly: the students could all recall the details of the crime and could report precisely what information was or wasn’t retracted. But the students who scored highest on racial prejudice continued to rely on the racial misinformation that identified the perpetrators as Aboriginals, even though they knew it had been corrected. They answered the factual questions accurately, stating that the information about race was false, and yet they still relied on race in their inference responses, saying that the attackers were likely Aboriginal or that the store owner likely had trouble understanding them because they were Aboriginal. This was, in other words, a laboratory case of the very dynamic that Nyhan identified: strongly held beliefs continued to influence judgment, despite correction attempts—even with a supposedly conscious awareness of what was happening.
In a follow-up, Lewandowsky presented a scenario that was similar to the original experiment, except now the Aboriginal was a hero who disarmed the would-be robber. This time, it was students who had scored lowest in racial prejudice who persisted in their reliance on false information, in spite of any attempt at correction. In their subsequent recollections, they mentioned race more frequently, and incorrectly, even though they knew that piece of information had been retracted. False beliefs, it turns out, have little to do with one’s stated political affiliations and far more to do with self-identity: What kind of person am I, and what kind of person do I want to be? All ideologies are similarly affected.
It was the realization that persistently false beliefs stem from issues closely tied to our conception of self that prompted Nyhan and his colleagues to look at less traditional methods of rectifying misinformation. Rather than correcting or augmenting facts, they decided to target people’s beliefs about themselves. In a series of studies that they’ve just submitted for publication, the Dartmouth team approached false-belief correction from a self-affirmation angle, an approach that had previously been used for fighting prejudice and low self-esteem. The theory, _pioneered_ (http://psycnet.apa.org/psycinfo/2000-07436-018) by Claude Steele, suggests that, when people feel their sense of self threatened by the outside world, they are strongly motivated to correct the misperception, be it by reasoning away the inconsistency or by modifying their behavior. For example, when women are asked to state their gender before taking a math or science test, they end up performing worse than if no such statement appears, conforming their behavior to societal beliefs about female math-and-science ability. To address this so-called stereotype threat, Steele proposes an exercise in self-affirmation: either write down or say aloud positive moments from your past that reaffirm your sense of self and are related to the threat in question. Steele’s research suggests that affirmation makes people far more resilient and high performing, be it on an S.A.T., an I.Q. test, or at a book-club meeting.
Normally, self-affirmation is reserved for instances in which identity is threatened in direct ways: race, gender, age, weight, and the like. Here, Nyhan decided to apply it in an unrelated context: Could recalling a time when you felt good about yourself make you more broad-minded about highly politicized issues, like the Iraq surge or global warming? As it turns out, it could. On all issues, attitudes became more accurate with self-affirmation, and remained just as inaccurate without it. That effect held even when no additional information was presented—that is, when people were simply asked the same questions twice, before and after the self-affirmation.
Still, as Nyhan is the first to admit, it’s hardly a solution that can be applied easily outside the lab. “People don’t just go around writing essays about a time they felt good about themselves,” he said. And who knows how long the effect lasts—it’s not as though we often think good thoughts and then go on to debate climate change.
But, despite its unwieldiness, the theory may still be useful. Facts and evidence, for one, may not be the answer everyone thinks they are: they simply aren’t that effective, given how selectively they are processed and interpreted. Instead, why not focus on presenting issues in a way that keeps broader notions out of it—messages that are not political, not ideological, not in any way a reflection of who you are?
Take the example of _the burgeoning raw-milk movement_ (http://www.newyorker.com/reporting/2012/04/30/120430fa_fact_goodyear). So far, it’s a relatively fringe phenomenon, but if it spreads it threatens to undo the health benefits of more than a century of pasteurization. The C.D.C. calls raw milk “one of the world’s most dangerous food products,” noting that improperly handled raw milk is responsible for almost three times as many hospitalizations as any other food-borne disease source. And yet raw-milk activists are becoming increasingly vocal—and the supposed health benefits of raw milk are gaining increased support. To prevent the idea from spreading even further, Nyhan advises, advocates of pasteurization shouldn’t dwell on the misperceptions, lest they “inadvertently draw more attention to the counterclaim.” Instead, they should create messaging that self-consciously avoids any broader issues of identity, pointing out, for example, that pasteurized milk has kept children healthy for a hundred years.
I asked Nyhan if a similar approach would work with vaccines. He wasn’t sure—for the present moment, at least. “We may be past that point with vaccines,” he told me. “For now, while the issue is already so personalized in such a public way, it’s hard to find anything that will work.” The message that could be useful for raw milk, he pointed out, cuts another way in the current vaccine narrative: the diseases are bad, but people now believe that the vaccines, unlike pasteurized milk, are dangerous. The longer the narrative remains co-opted by prominent figures with little to no actual medical expertise—the _Jenny McCarthys_ (http://www.newyorker.com/online/blogs/elements/2013/07/jenny-mccarthys-dangerous-views.html) of the world—the more difficult it becomes to find a unified, non-ideological theme. The message can’t change unless the perceived consensus among figures we see as opinion and thought leaders changes first.
And that, ultimately, is the final, big piece of the puzzle: the cross-party, cross-platform unification of the country’s élites, those we perceive as opinion leaders, can make it possible for messages to spread broadly. The campaign against smoking is one of the most successful public-interest fact-checking operations in history. But, if smoking were just for Republicans or Democrats, change would have been far less likely. It’s only after ideology is put to the side that a message itself can change, so that it becomes decoupled from notions of self-perception.
Vaccines, fortunately, aren’t political. “They’re not inherently linked to ideology,” Nyhan said. “And that’s good. That means we can get to a consensus.” Ignoring vaccination, after all, can make people of every political party, and every religion, just as sick.