--srs(iPad)

Begin forwarded message:

> ----- Forwarded by Suresh Ramasubramanian1/India/IBM on 10/04/2011 07:48 AM 
> ----- 
> 
> From:        Dave CROCKER <[email protected]> 
> To:        Joe St Sauver <[email protected]>, 
> Cc:        Suresh Ramasubramanian1/India/IBM@IBMIN 
> Date:        10/03/2011 08:18 PM 
> Subject:        Re: Fw: [silk] UK judge bans Bayesian logic 
> 
> 
> On 10/3/2011 5:26 AM, Joe St Sauver wrote:
> > If that article is too opaque, and you'd just like to see if you yourself are
> > a "latent Bayesian," consider the classic Monty Hall game show -- for those
> > of you who might never have seen it, Monty would select a member of the
> > audience and offer them the opportunity to pick one of three doors.
> 
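The Monty Hall setup above can be checked with a quick simulation (a sketch, not part of the original mail; the door labels and trial count are arbitrary). Switching wins about 2/3 of the time; staying wins only about 1/3:

```python
import random

def play(switch, trials=100_000):
    """Simulate Monty Hall: pick a door, Monty opens a goat door, optionally switch."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens a door that hides a goat and is not the contestant's pick.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

stay = play(switch=False)   # about 1/3
swap = play(switch=True)    # about 2/3
```

(When the first pick is the car, Monty has two goat doors to choose from; which one he opens doesn't change the probabilities, so the sketch just takes the first.)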
> 
> I've long held two views that diverge from much of what is used for
> behavior-related research:
> 
>    1.  Sophisticated statistics are appropriate only when there is massively
> good data that is extremely well understood.  Since that's rare, most use of
> statistics should be simple and obvious and use algorithms that are relatively
> insensitive.
> 
>    2.  The framework or methodology for approaching an analysis is far more
> important than the statistical algorithm.  For example, from the Guardian 
> article:
> 
> > When Sally Clark was convicted in 1999 of smothering her two children, jurors
> > and judges bought into the claim that the odds of siblings dying by cot death
> > was too unlikely for her to be innocent. In fact, it was statistically more
> > rare for a mother to kill both her children.
> 
> That highlights a methodology error in the original work, and it's one that is
> fundamental.  The original trial took a statistic in isolation rather than
> asking about comparable choices and /their/ numbers.
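The comparison being described can be made concrete with Bayes' rule. The numbers below are purely illustrative (invented for the sketch, not the actual case figures): the point is that the court must weigh the two rare explanations of the same evidence against each other, not marvel at the rarity of one of them.

```python
# Prosecutor's-fallacy sketch with ILLUSTRATIVE priors (not real case figures).
p_double_cot_death = 1e-7   # assumed: two natural infant deaths in one family
p_double_murder    = 1e-8   # assumed: a mother murdering both her children

# Given that both children died, the posterior probability of innocence is the
# innocent explanation's share of all explanations of the same evidence:
p_innocent = p_double_cot_death / (p_double_cot_death + p_double_murder)
# Both events are rare, yet under these priors the "rare" innocent
# explanation is about ten times more likely than the alternative.
```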
> 
> (One of the engineers who worked on the original HP hand calculator in the early
> 70s wrote an article about its impact.  He cited an experience with a banker,
> when he and some friends were trying to get a loan for an airplane purchase and
> they haggled with the banker over some of the numbers.  The engineer pulled out
> his brand new (and extremely rare) calculator, pushed a few buttons, showed the
> result to the banker, and the banker caved on the negotiation without
> questioning any of the underlying details.)
> 
> For most behavioral analysis, we simply do not know enough about the surrounding
> environment or the population to be as precise as many statistics tools imply.
> And too frequently that surrounding analytic framework has a deep flaw that
> isn't even within sight of those deciding whether to accept the statistical
> numbers.
> 
> Two anecdotes in this vein...
> 
> Back when I was still in school mode, I twice got into quite a bit of trouble 
> for my simplistic attitude.
> 
> Just after dropping out of undergrad, I interviewed with the folks at
> Engelbart's SRI project, for a kind of user support job.  (These are the folks
> that invented the mouse, office automation, and otherwise laid the foundation
> for the work that was done at Xerox PARC and then Apple.)  I had been dealing
> with them for a couple of years, so this was a friendly interview, until... at
> lunch with the guy I knew, and the guy who worked for him who would be my boss,
> the latter described the challenges of developing a good survey instrument to
> assess user 'needs'.  In a fit of massive political stupidity, I noted that I
> had been told that such things were indeed hard to do well, but that in the
> interim, couldn't he just /ask/ users what they wanted?  He immediately
> stiffened and -- I swear he started looking down his nose at me -- he said
> that would be methodologically naive.  I looked at his boss, who shrugged with
> an obvious meaning: he knew the guy would not tolerate my working for him.
> We were done.  On the other hand, it was my first taste of Anchor Steam Beer.
> 
> And then when working at Rand, there was some spectacularly good information
> processing / cognitive psychology work being done by 3 very hot researchers.
> (The term cognitive psych was not yet in vogue for info proc work; these guys
> were trailblazers on the psych side and were /very/ well regarded in the field,
> with an impressive publication record.)  To get a raise at Rand, you needed to
> publish "Rand Reports", no matter what outside publications you had.  So they
> assembled their hottest published papers into a compendium.  Rand Reports are
> refereed, and they asked me to be a reviewer.  There were few folk at Rand with
> a psych and computer science background, especially with any background in info
> processing psych.  Unfortunately I was back in school by then and taking a
> multivariate stat course, and the prof had just made us do an 'error' paper,
> where the term was not about the error part of stats algorithms but about
> methodological errors.  In assigning the task -- we had to find an example in
> the literature of our field, in my case that was Human/Mass communications --
> we were told that some errors were so common we were not allowed to use them.
> In particular, repeated application of ANOVA (univariate analysis of variance)
> to the same sample set was excluded.  ANOVA is hyper-sensitive, and the main
> purpose of the multivariate version is to de-tune its sensitivity.  I did,
> indeed, find it frequently in the published literature.  I also found it in the
> draft Rand Report.  After asking an independent psych researcher and an
> independent stats expert to confirm that this was an egregious error, I cited
> it in my review.  The authors stopped talking to me.
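The reason repeated ANOVA on one sample set is banned is the multiple-comparisons effect: each test run at alpha = 0.05 adds another chance of a false positive. A minimal sketch of the familywise error rate, assuming the tests were independent (repeated tests on one sample are actually correlated, which changes the numbers but not the direction):

```python
def familywise_error(m, alpha=0.05):
    """Chance of at least one false positive across m independent tests,
    each run at the given per-test significance level."""
    return 1 - (1 - alpha) ** m

for m in (1, 5, 10, 20):
    print(m, round(familywise_error(m), 3))
```

At 10 tests the chance of at least one spurious "significant" result is already about 40%, even though every individual test looks respectable at the 5% level.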
> 
> Happy Monday.
> 
> d/
> 
> -- 
> 
>   Dave Crocker
>   Brandenburg InternetWorking
>   bbiw.net
