New Scientist

Does trusting your gut make you unscientific?
18:00 13 April 2012

 
Andrew Pontzen, astrophysics postdoc

I've been popping up at a few science festivals recently, discussing the _evidence for dark matter_ (http://www.cheltenhamfestivals.com/find-events/science/s3-dark-matters) with particle physicist Tom Whyntie. After a session at Cambridge Science Festival, UK, a man in a duffle coat approached me to explain that Einstein was wrong - that the universe is not, in fact, expanding. (Actually, a static universe would have pleased Einstein greatly, but that's beside the point.) As evidence, he tried to give me his book on reinterpreting the redshift of galaxies. When I politely refused, he accused me of being "unscientific".
The implication is that being "scientific" means completely digesting and testing every idea before deciding whether it's right or wrong. But sometimes we have to make fast decisions based on prejudice, or we'll never get anything done. Is that OK, or does it fundamentally undermine what we're trying to achieve?
Let's consider another recent challenge to Einstein: faster-than-light neutrinos. Forget what _we now know_ (http://www.newscientist.com/article/mg21328544.700-neutrino-speed-errors-dash-exotic-physics-dreams.html) - even from the moment those results were announced, it was widely considered that they were unlikely to be right. But while the smartest people calculated the contradictions with results from _neutrino astronomy_ (http://neutrinoscience.blogspot.co.uk/2011/09/supernova-neutrinos-in-1983-and-1987.html) or the _disastrous implications_ (http://www.newscientist.com/article/dn21515-lights-speed-limit-is-safe-for-now.html) for basic particle physics, the more widespread reaction - _mine included_ (https://twitter.com/#!/apontzen/status/116975105950756864) - was simple scepticism.
We can model this scepticism mathematically. The process of making rational judgements in the presence of uncertainty and incomplete knowledge is beautifully handled by Bayesian statistics. It's all about assigning numerical "confidence levels". If you assign 100 per cent confidence to true statements, and 0 per cent confidence to false statements, Bayesian statistics automatically boils down to normal logical reasoning. But in science we never have such clear situations; we have scales of plausibility.
So, based on the reams of historical evidence, you might start by assigning 99 per cent confidence to the assertion that "Einstein was right and nothing goes faster than the speed of light". That means you are pretty confident it has to be true, but you allow a generous 1 per cent margin of doubt.
Then someone claims to have measured neutrinos breaking the speed limit. How confident are you that the results from a typical, carefully conducted experiment are correct? Ninety per cent confident? That would mean that, typically, you take experimental results at face value.
But this particular neutrino experiment has contradicted your 99-per-cent-confident "speed limit" principle. Inserting all these previously settled-on numbers into Bayes's theorem, you find that it's roughly 10 times more likely that the neutrino experiment has gone awry than that the speed of light is genuinely broken. The confidence in a prejudice (despite being slightly eroded) has trumped the confidence in other people's experiments.
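The arithmetic behind that "roughly 10 times" can be sketched in a few lines. The numbers (99 per cent and 90 per cent) are the ones above; the simple two-hypothesis model - either Einstein is right and the experiment erred, or Einstein is wrong and the experiment is correct - is my own minimal framing of the calculation:

```python
# Back-of-envelope Bayesian update for the faster-than-light neutrino claim.
# Confidence values are those quoted in the text; the two-hypothesis
# framing is a simplifying assumption.

p_einstein = 0.99   # prior: "nothing goes faster than light"
p_exp_ok = 0.90     # confidence that a careful experiment is correct

# The two ways a faster-than-light result could appear:
p_error = p_einstein * (1 - p_exp_ok)   # Einstein right, experiment flawed
p_ftl = (1 - p_einstein) * p_exp_ok     # Einstein wrong, experiment right

odds = p_error / p_ftl                  # ~11: error is ~10x more likely
posterior = p_error / (p_error + p_ftl) # confidence in Einstein, slightly eroded

print(f"odds that the experiment erred: {odds:.1f} to 1")
print(f"updated confidence in Einstein: {posterior:.2f}")
```

Running this gives odds of about 11 to 1 in favour of experimental error, and the 99 per cent prior drops only to about 92 per cent - eroded, but still firmly in charge.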
Mathematics is hardly necessary in this case, but the point is that there is a watertight way to model the judgements taking place in our minds. We don't need to know anything about the details of the experiment. We can self-consistently use our existing knowledge to make a snap judgement about how likely the new result is.
But does the existence of a coherent mathematical framework tell us that rejecting ideas - without even looking at them - is acceptable? Not really. It's a _useful tool_ (http://www.newscientist.com/article/mg20727725.700-cosmologys-not-broken-so-why-try-to-fix-it.html) as situations get more complicated, but it's still only modelling what we do in our minds. In particular, if we start with wrong prior beliefs, we'll end up making unreasonable snap judgements. For example, I could be blinded by my physics education, fooled into being far too confident about statements of orthodoxy.
Once we admit that we really do operate in this world of personal belief, what makes us "scientific"? Essentially that we never assign any statement 100 per cent confidence (which would make it unassailable). We try to hold beliefs that can, given enough evidence, be overridden. For instance, an additional, independent faster-than-light neutrino result would have beaten the 99 per cent Einstein-was-right assertion. We can then live with snap judgements and trust that other people will keep presenting us with new evidence if we really are wrong.
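Why exactly is 100 per cent confidence unassailable? A toy repeated-update sketch (my illustration, not from the article) makes it concrete: under Bayes's theorem a prior of exactly 1 survives any amount of contrary evidence, while a 99 per cent prior is eventually overturned:

```python
# Toy illustration: a prior of exactly 1.0 cannot be moved by evidence,
# whereas a 99 per cent prior can. The likelihood values are arbitrary
# choices representing observations that strongly favour "hypothesis false".

def bayes_update(prior, p_data_if_true, p_data_if_false):
    """One Bayesian update of belief in a hypothesis after an observation."""
    numerator = prior * p_data_if_true
    return numerator / (numerator + (1 - prior) * p_data_if_false)

p = 1.0                          # dogmatic certainty
for _ in range(10):              # ten observations favouring "false"
    p = bayes_update(p, 0.01, 0.99)
print(p)                         # still exactly 1.0: unassailable

p = 0.99                         # scientific near-certainty
for _ in range(10):
    p = bayes_update(p, 0.01, 0.99)
print(p)                         # vanishingly small: the belief was overridden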
I don't think I managed to convey all this very well to the man in the duffle coat in Cambridge. But then, he seemed to hold the unerodible prior belief that all scientists are lazily arrogant with 100 per cent confidence. So, by my definition, he's the unscientific one. Only if I'd said that, I might have appeared arrogant.
