Science 2.0
 
 
Republican And Democratic Brains  Debunked For (Hopefully) The Last Time
By _Hank Campbell_ (http://www.science20.com/profile/hank_campbell)  |  
September 3rd 2013 

 
It's easy to forget that there was once a time when a lot of hype resulted from claims that magnetic resonance imaging (MRI) showed biological differences between political brains - it was open season on the opposition by people who understood biology even less than they understood psychology.

To dredge the water-logged corpse up again: the scholars behind a paper (eventually published years later _in PLoS One_ (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0052970), with a few modifications) said conservatives were found to be more 'afraid' in a risk-taking task, and that is why they like more socially authoritarian policies. This didn't make sense to anyone who knows which party actually uses a lot more social authoritarianism to force society to obey its desires of the week; it isn't Kansas City banning everything, it's places like San Francisco and New York City.
Prior papers said _everyone is motivated by fear_ (http://www.science20.com/news_releases/what_makes_you_liberal_or_conservative_fear_in_both_cases_says_study), not just Republicans, and a later one determined that liberals are just being politically correct - _when they get drunk and lose their inhibitions, they become more conservative_ (http://www.science20.com/science_20/can_getting_drunk_make_you_more_conservative-88373). But with science media and social science academia overwhelmingly voting one way, there was invoked a delightful jumble of motivated reasoning, identity-protective cognition, naïve realism and a bunch of other science-y-sounding terms for what your dad probably told you, bereft of any psychology degree at all: 'people believe what they want to believe'. So if a psychology paper said Republicans are scared of risk, well, the skeptical filters were turned off by people who like to believe that stuff.

Biologists _dismissed the claims on scientific grounds_ (http://www.science20.com/search/apachesolr_search/conservative%20brains) while others just argued where they had a chance to be right - about the interpretation of the insults, I mean results. If you argue about interpretation, you can never really be wrong; it is the same subjectivity I ridiculed in _Undermine Science By Redefining It_ (http://www.science20.com/science_20/undermine_science_redefining_it-119202), where people will choose their own definition or lump science in with their morally relative issue.

But no one bothered with the methodology - _Dan Kahan, Professor of Law and Psychology at Yale Law School and proponent of his Cultural Cognition hypothesis, does just that_ (http://www.culturalcognition.net/blog/2013/4/30/deja-voodoo-the-puzzling-reemergence-of-invalid-neuroscience.html). He wonders why, many years after the problems of too many papers combining fMRI with simple, basic errors of causal inference were exposed, undermining the credibility of an alarming number of papers using MRI, anyone would still make the exact same errors: things like voodoo correlations and opportunistic observation.

As explained, they selected observations of activating “voxels” in the amygdala of Republican subjects precisely because those voxels—as opposed to others that Schreiber et al. then ignored in “further analysis”—were “activating” in the manner that they were searching for in a large expanse of the brain. They then reported the resulting high correlation between these observed voxel activations and Republican party self-identification as a test for “predicting” subjects’ party affiliations—one that “significantly out-performs the longstanding parental model, correctly predicting 82.9% of the observed choices of party.”

This is bogus. Unless one “use[s] an independent dataset” to validate the predictive power of “the selected . . . voxels” detected in this way, Kriegeskorte et al. explain in their Nature Neuroscience paper, no valid inferences can be drawn. None.
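The selection error Kahan and Kriegeskorte et al. describe is easy to demonstrate with simulated data. The sketch below (a hypothetical illustration with made-up noise data, not the Schreiber et al. analysis) picks the "voxels" most correlated with a party label out of pure noise, then scores a predictor on that same data - accuracy looks impressive - before validating on an independent dataset, where performance collapses to chance:

```python
# Hypothetical sketch of circular ("non-independent") voxel selection.
# All data here is pure noise; any apparent predictive power is an artifact.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 60, 5000
labels = rng.integers(0, 2, n_subjects)           # toy party labels: 0 or 1
voxels = rng.normal(size=(n_subjects, n_voxels))  # noise: no real signal

# Step 1: select the 10 voxels most correlated with the labels (peeking).
corrs = np.array([np.corrcoef(voxels[:, j], labels)[0, 1]
                  for j in range(n_voxels)])
top = np.argsort(np.abs(corrs))[-10:]

# Step 2 (circular): "predict" party from the selected voxels on the SAME data.
score = voxels[:, top] @ np.sign(corrs[top])
circular_acc = np.mean((score > np.median(score)) == labels)

# Step 3 (valid): apply the same voxels and weights to an independent sample,
# as Kriegeskorte et al. require.
new_labels = rng.integers(0, 2, n_subjects)
new_voxels = rng.normal(size=(n_subjects, n_voxels))
new_score = new_voxels[:, top] @ np.sign(corrs[top])
holdout_acc = np.mean((new_score > np.median(new_score)) == new_labels)

print(f"circular accuracy:         {circular_acc:.2f}")  # well above 0.5
print(f"independent-data accuracy: {holdout_acc:.2f}")   # near chance
```

With thousands of candidate voxels and a few dozen subjects, some voxels will correlate strongly with any label purely by chance; testing on the same data that chose them guarantees an inflated accuracy figure, which is why an independent validation set is the minimum requirement for a "predicting party" claim.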
It doesn't matter. As I said, for people with motivated reasoning, seeing social science and humanities scholars behind "Red Brain, Blue Brain" in the title was enough to know what they were getting. As of this writing, it has almost 28,000 reads and almost 1,900 shares.


If you want to believe so, this graphic shows Republicans are more scared of the unknown than open-minded, super-smart Democrats. Credit and link: _doi:10.1371/journal.pone.0052970_ (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0052970)

Andrew Gelman, professor of statistics and political science and director of the Applied Statistics Center at Columbia University, _is even harder on the paper in the context of talking about why post-publication peer review is often no better than the pre-publication kind_ (http://andrewgelman.com/2013/09/01/post-publication-peer-review-how-it-sometimes-really-works/), saying in a comment on Kahan's article:

Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on Arxiv. PLOS-One publishes some good things (so does Arxiv) but it’s the place people place papers that can’t be placed. We can deduce that the paper was rejected by Science, Nature, various other biology journals, and maybe some political science journals as well.

I’m not saying you shouldn’t criticize the paper in question, but you can’t really demand better from a paper published in a bottom-feeder journal.

Again, just because something’s in a crap journal, doesn’t mean it’s crap; I’ve published lots of papers in unselective, low-prestige outlets. But it’s certainly no surprise if a paper published in a low-grade journal happens to be crap. They publish the things nobody else will touch.

Post-publication peer review, Gelman notes, is no better because peers often just link to things without really analyzing the data and methods. I took his criticisms of PLoS One at face value (the other PLoS journals are not having the same issue he alleges, as many others have noted) because he generally knows his stuff, and he isn't saying more prestigious journals don't put out rubbish articles as well - but that this paper was rejected by other journals over a period of years before being accepted in PLoS One could be telling. It could be telling us the credit card cleared and the paper survived the four items an editor has to check off to approve something. But I can't say that happened for sure, so Gelman has a fine point about post-publication peer review being a little light also.
