[silk] Fwd: UK judge bans Bayesian logic

2011-10-03 Thread Suresh Ramasubramanian
Sorry for breaking this thread, sending along a couple of emails forwarded on 
with permission from another list where this was being discussed.

Email #1 below

--srs(iPad)

Begin forwarded message:

 ----- Forwarded by Suresh Ramasubramanian1/India/IBM on 10/04/2011 07:47 AM -----
 
 
 From: Joe St Sauver j...@oregon.uoregon.edu 
 To: Suresh Ramasubramanian1/India/IBM@IBMIN 
 Date: 10/03/2011 06:53 PM 
 Subject: Re: Fw: [silk] UK judge bans Bayesian logic 
 
 
 
 Hi Suresh,
 
 Thanks for passing along the pointer to the Guardian article:
 
 # 
 http://www.guardian.co.uk/law/2011/oct/02/formula-justice-bayes-theorem-miscarriage
 
 The funny thing is that the bench in this case has actually weighed in,
 perhaps unintentionally, on a long-standing debate in the statistical
 community -- folks may or may not know this, but there are different
 camps among statisticians, much as there often are in many other
 academic disciplines.
 
 For example, in the Decision Sciences Department at UO, the chair (and my
 dissertation advisor) was an adherent of Fisher; another faculty member,
 very well regarded, was a passionate and well published advocate of Bayes.
 
 If folks are curious what all the fuss is about, I'd recommend the 
 article "Why Isn't Everyone a Bayesian?" by B. Efron, from the February 
 1986 American Statistician. A copy of that article is available online at 
 http://www.isye.gatech.edu/~brani/isyebayes/bank/EfronWhyEveryone.pdf
 
 If that article is too opaque, and you'd just like to see if you yourself
 are a latent Bayesian, consider the classic Monty Hall game show --
 for those of you who might never have seen it, Monty would select a member
 of the audience and offer them the opportunity to pick one of three doors.
 
 Behind one door, there might be a terrific prize, such as a new car.
 
 Behind another door, there might be a gag prize such as a life-size crude 
 ceramic billy goat, the perfect kitsch addition for your living room, eh?
 
 And then, behind the third door, there's some other prize, which might
 be pretty cool, or pretty lame, it would vary, but usually be something
 like a major appliance.
 
 Then again, sometimes Monty might have two lame prizes. 
 
 The contestant gets to pick one door. At that point, what is his or her
 chance of winning the high dollar value good prize, e.g., the car? 
 (most folks would say, 1-in-3)
 
 To make things more interesting, Monty would remind the contestant that
 while there's a terrific prize behind one of the doors, they might not
 have picked it. He'd then offer them cash-in-hand, if they want to
 take the money and run. 
 
 To help the contestant, Monty would also open one door. Since Monty knew
 which door actually had the top prize, he'd never open that one. You
 might see, instead, a nice washer and dryer set, or maybe the goat.
 You'd never see the car (if there was a car).
 
 And now we come to the question that determines if you're a Fisherian
 or a Bayesian at heart: 
 
 *what's the probability that the contestant will win the car NOW that
 Monty has opened one door?*
 
 Remember, there are two choices left, one of which has the car, one of
 which does not. 
 
 Fisher and his fans would say, obviously, 1-in-2, or 50%.
 
 Bayes and his adherents would say, no, the correct answer is 2-in-3, or
 66%.
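 
 [Aside, not from Joe's original mail: the 2-in-3 answer is just Bayes'
 theorem applied to how Monty behaves. The Python below is a hypothetical
 sketch; it assumes the contestant picked door 1 and Monty opened door 3,
 and enumerates the posterior for each door under those assumptions.]
 
     # Hypothetical illustration -- the door labels and the "picked door 1,
     # Monty opened door 3" scenario are assumptions for this sketch.
     prior = {1: 1/3, 2: 1/3, 3: 1/3}            # P(car behind door d)
 
     def p_monty_opens_3(car_door, picked=1):
         """P(Monty opens door 3 | car location), contestant having picked door 1."""
         if car_door == picked:
             return 1/2        # car behind the pick: Monty opens door 2 or 3 at random
         if car_door == 3:
             return 0.0        # Monty never reveals the car
         return 1.0            # car behind door 2: door 3 is Monty's only option
 
     evidence = sum(prior[d] * p_monty_opens_3(d) for d in prior)
     posterior = {d: prior[d] * p_monty_opens_3(d) / evidence for d in prior}
     print(posterior)          # door 1: 1/3, door 2: 2/3, door 3: 0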
 
 If you find yourself leaning toward Bayes, let me ask you an additional
 question: assume the audience member is given the chance to *switch*
 their choice, and pick the other unopened door. Should they? Would it 
 matter? If Fisher is right, both doors have an equal chance of being 
 right, and there's no reason why the person should switch.
 
 What would Bayesians say? :-;
 
 http://en.wikipedia.org/wiki/Monty_Hall_problem#Bayes.27_theorem
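 
 [Another aside added here, not Joe's: if you'd rather settle the switching
 question empirically than philosophically, a short Monte Carlo run of the
 game shows "stay" winning about 1/3 of the time and "switch" about 2/3.
 This is a hypothetical sketch of the game, not anything from the article.]
 
     # Hypothetical Monte Carlo sketch of the Monty Hall game.
     import random
 
     def play_round(switch):
         doors = [0, 1, 2]
         car = random.choice(doors)
         pick = random.choice(doors)
         # Monty opens a door that is neither the contestant's pick nor the car.
         opened = random.choice([d for d in doors if d != pick and d != car])
         if switch:
             pick = next(d for d in doors if d != pick and d != opened)
         return pick == car
 
     trials = 100_000
     for strategy, label in ((False, "stay"), (True, "switch")):
         wins = sum(play_round(strategy) for _ in range(trials))
         print(label, wins / trials)    # stay ~0.33, switch ~0.67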
 
 Regards,
 
 Joe


[silk] Fwd: UK judge bans Bayesian logic

2011-10-03 Thread Suresh Ramasubramanian


--srs(iPad)

Begin forwarded message:

 ----- Forwarded by Suresh Ramasubramanian1/India/IBM on 10/04/2011 07:48 AM -----
 
 
 From: Dave CROCKER dcroc...@bbiw.net 
 To: Joe St Sauver j...@oregon.uoregon.edu 
 Cc: Suresh Ramasubramanian1/India/IBM@IBMIN 
 Date: 10/03/2011 08:18 PM 
 Subject: Re: Fw: [silk] UK judge bans Bayesian logic 
 
 
 On 10/3/2011 5:26 AM, Joe St Sauver wrote:
  If that article is too opaque, and you'd just like to see if you yourself
  are a latent Bayesian, consider the classic Monty Hall game show -- for those
  of you who might never have seen it, Monty would select a member of the
  audience and offer them the opportunity to pick one of three doors.
 
 
 I've long held two views that diverge from much of what is used for
 behavior-related research:
 
1.  Sophisticated statistics are appropriate only when there is massively
 good data that is extremely well understood.  Since that's rare, most use of
 statistics should be simple and obvious and use algorithms that are relatively
 INsensitive.
 
2.  The framework or methodology for approaching an analysis is far more
 important than the statistical algorithm.  For example, from the Guardian 
 article:
 
  When Sally Clark was convicted in 1999 of smothering her two children,
  jurors and judges bought into the claim that the odds of siblings dying by
  cot death was too unlikely for her to be innocent. In fact, it was
  statistically more rare for a mother to kill both her children.
 
 That highlights a methodology error in the original work, and it's one that
 is fundamental.  The original trial took a statistic in isolation rather than
 asking about comparable choices and /their/ numbers.
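 
 [To make that concrete -- this sketch and its numbers are added here, and the
 rates are purely made-up placeholders, not figures from the case or the
 Guardian piece: the question isn't "how rare is a double cot death?" in
 isolation, but how that explanation compares with the competing one, given
 two unexplained deaths.]
 
     # Hypothetical, illustrative rates only -- NOT the actual statistics
     # from the Sally Clark case.
     p_double_cot_death = 1 / 1_000_000    # P(two infant deaths by natural causes), assumed
     p_double_murder    = 1 / 5_000_000    # P(mother kills both children), assumed
 
     # The flawed reasoning stops at "p_double_cot_death is tiny".  The
     # comparable-choices question is which explanation is more plausible,
     # given that one of the two happened:
     posterior_innocent = p_double_cot_death / (p_double_cot_death + p_double_murder)
     print(posterior_innocent)   # ~0.83 with these made-up numbers: the "rare"
                                 # cot-death explanation is still far more likely
                                 # than the even rarer double-murder explanation.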
 
 (One of the engineers who worked on the original HP hand calculator in the
 early 70s wrote an article about its impact.  He cited an experience with a
 banker, when he and some friends were trying to get a loan for an airplane
 purchase and they haggled with the banker over some of the numbers.  The
 engineer pulled out his brand new (and extremely rare) calculator, pushed a
 few buttons, showed the result to the banker, and the banker caved on the
 negotiation, without questioning any of the underlying details.)
 
 For most behavioral analysis, we simply do not know enough about the
 surrounding environment or the population to be as precise as many statistics
 tools imply.  And too frequently that surrounding analytic framework has a
 deep flaw that isn't even within sight of those deciding whether to accept
 the statistical numbers.
 
 Two anecdotes in this vein...
 
 Back when I was still in school mode, I twice got into quite a bit of trouble 
 for my simplistic attitude.
 
 Just after dropping out of undergrad, I interviewed with the folks at
 Engelbart's SRI project, for a kind of user support job.  (These are the
 folks that invented the mouse and office automation, and otherwise laid the
 foundation for the work that was done at Xerox PARC and then Apple.)  I had
 been dealing with them for a couple of years, so this was a friendly
 interview, until... at lunch with the guy I knew, and the guy who worked for
 him who would be my boss, the latter described the challenges of developing
 a good survey instrument to assess user 'needs'.  In a fit of massive
 political stupidity, I noted that I had been told that such things were
 indeed hard to do well, but that in the interim, couldn't he just /ask/
 users what they wanted?  He immediately stiffened and -- I swear he started
 looking down his nose at me -- said that that would be methodologically
 naive.  I looked at his boss, whose shrug made it obvious he knew the guy
 would not tolerate my working for him.  We were done.  On the other hand,
 it was my first taste of Anchor Steam Beer.
 
 And then when working at Rand, there was some spectacularly good information
 processing / cognitive psychology work being done by 3 very hot researchers.
 (The term cognitive psych was not yet in vogue for info proc work; these guys
 were trailblazers on the psych side and were /very/ well regarded in the
 field, with an impressive publication record.)  To get a raise at Rand, you
 needed to publish Rand Reports, no matter what outside publications you had.
 So they assembled their hottest published papers into a compendium.  Rand
 Reports are refereed, and they asked me to be a reviewer.  There were few
 folk at Rand with a psych and computer science background, especially with
 any background in info processing psych.  Unfortunately I was back in school
 by then and taking a multivariate stat course, and the prof had just made us
 do an 'error' paper, where the term was not about the error part of stats
 algorithms but about methodological errors.  In assigning the task -- we had
 to find an example in the literature of our field, in my case that was
 Human/Mass