Sorry to delurk with a massive rant, but I love this field and Pearl's work, and I've spent the last 18 months being denied my doctorate because I use too much maths for a Psych department.
>Anyone else have opinions on why his ideas haven't caught on more
>generally?

There are two connected problems (sorry, this area of statistics is my field and raison d'être, so bear with me) as to why Pearl's work isn't universal.

Foremost, causality research as a statistical problem started with medical research and with psychology/educational research. One can include the early econometric work as well, but at the time simultaneous equation systems for endogenous processes were largely theoretical publications and derivations, until Jöreskog's 1969 paper on maximum likelihood estimation under multivariate normality. Neither of those fields is particularly mathematically inclined, and Pearl's graph-theoretic and discrete arguments get complicated quickly. One cannot simply plug numbers into an equation and be finished, which is also why SPSS is still the golden child of Psychology departments. Bayesian statistics (I recommend taking a look at the free software package JASP) falls prey to the same issue: estimating a model in a full Stan framework involves thinking about and defining each variable, the distributional structure of its measurements, and its probability density function. Pearl has exactly the same issue in that his work goes deeper than the answer provided by a regression coefficient's p-value.

Unfortunately, most published social-science work is grossly invalid (I recommend Les Hayduk's book and work on SEM, and his vitriolic diatribes against relative fit measures in favour of the likelihood ratio test, which is actually the best case; he can be a bit singularly focused, but he is correct), and the p-values that are commonly reported are only valid conditional upon that falsehood. It is very hard to convince people they are wrong, even on basic truisms.
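To make the "can't just plug in an equation" point concrete, here is a toy back-door adjustment on simulated data. This is entirely my own sketch; the variable names, effect sizes, and seed are invented for illustration, not taken from Pearl's text. The naive association between X and Y is inflated by a confounder Z, while averaging the stratum-specific contrasts over P(Z), i.e. P(y | do(x)) = sum_z P(y | x, z) P(z), recovers the causal effect:

```python
# Toy back-door adjustment (my own sketch): Z confounds X -> Y.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.binomial(1, 0.5, n)                        # confounder
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))    # treatment depends on Z
# True causal effect of X on P(Y=1) is +0.1; Z contributes +0.4 on its own.
y = rng.binomial(1, 0.1 + 0.1 * x + 0.4 * z)

# Naive (associational) contrast: inflated by confounding through Z.
naive = y[x == 1].mean() - y[x == 0].mean()

# Back-door adjustment: stratify on Z, then average contrasts over P(Z).
adjusted = sum(
    (y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean())
    * (z == v).mean()
    for v in (0, 1)
)

print(f"naive contrast:    {naive:.3f}")    # ~0.34, not the causal effect
print(f"adjusted contrast: {adjusted:.3f}") # ~0.10, the true effect
```

Even in this two-binary-variable case, one already has to know the graph to decide that Z (and only Z) must be adjusted for; that decision is exactly the part that no regression output hands you.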
Take a basic truism like the non-equivalence of factor analysis and principal components analysis, the latter of which SPSS still labels as factor analysis in its menu options, despite decades of research. For example, consider Bob Altemeyer's freely available book "The Authoritarians", on his update of the California F-scale into the Right-Wing Authoritarianism (RWA) scale. He performs PCA for measurement-structure assessment, calls it FA, and his justification for three components is that they retain 85% of the variance. This ignores the fact that Likert-scale items are intrinsically discrete, while the decomposition of a covariance structure, as normally estimated, is only defined for continuous spaces.

The reason for this side note is that Psychology in particular is extremely change-resistant. Many of the propensity score analysis concepts (i.e., Don Rubin's work) have been automated for 'applied' researchers, so they don't feel any need to worry about the data. I was literally just reviewing someone's grant application, which one of his students shared, and I wanted to go punch the researcher: it was that poorly conducted and applied, and it didn't reflect any of the theoretical requirements for validity. Note as well that the justification in Rubin's dissertation was built on Bayesian assertions and requirements, but those parts, and the meaning of the propensity scores themselves, were dropped.

Easy-to-use software is the second issue, and it's tied to why Bayes in general doesn't pervade intro statistics courses. I've had faculty in psych PhD programmes who never took undergraduate calculus but felt they knew better how to use a technique, even when the requirement was literally stated in the introductory text: Jöreskog's 1969 treatment of confirmatory factor analysis made explicit that ordinal scales can never be continuous, and so CFA as derived there should never be applied to such items.
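A quick numerical sketch of the ordinal-items point (my own construction; the true correlation and cut points are arbitrary choices, not from Jöreskog): discretizing bivariate-normal responses into 5-point Likert items attenuates the very Pearson correlation that the usual covariance decomposition is estimated from.

```python
# Discretizing continuous responses into Likert categories attenuates
# the Pearson correlation (my own illustrative simulation).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
rho = 0.6
# Bivariate-normal latent responses with true correlation 0.6.
latent = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Chop each variable into 5 ordered categories (Likert-style).
cuts = [-1.5, -0.5, 0.5, 1.5]
likert = np.digitize(latent, cuts)  # integer codes 0..4

r_latent = np.corrcoef(latent.T)[0, 1]
r_likert = np.corrcoef(likert.T)[0, 1]
print(f"latent r: {r_latent:.3f}")  # ~0.60
print(f"Likert r: {r_likert:.3f}")  # attenuated, roughly 0.55 here
```

The bias only gets worse with fewer categories or asymmetric thresholds, which is why treating the observed Pearson covariance matrix of ordinal items as if it were the latent one is not a harmless shortcut.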
It's my favourite example because it's literally the most trivially wrong and yet universally published easy technique that qualifies one as an 'advanced quantitative expert'.

On 22 August 2018 05:00:39 GMT-04:00, Charles Haynes <[email protected]> wrote:
>Pearl has been spruiking his causality formalisms for years, but they
>don't seem to have caught on despite widespread dissemination of the
>ideas. I've read them and my reaction was "hm, interesting" rather than
>"oh! I see how this could be useful"
>
>Anyone else have opinions on why his ideas haven't caught on more
>generally?
>
>-- Charles
>
>On Wed., 22 Aug. 2018, 5:28 am Bharat Shetty, <[email protected]> wrote:
>
>> Sharing an intriguing interview with Judea Pearl related to his book
>> "The Book of Why", a book that I have been reading and enjoying.
>>
>> "In his new book, Pearl, now 81, elaborates a vision for how truly
>> intelligent machines would think. The key, he argues, is to replace
>> reasoning by association with causal reasoning. Instead of the mere
>> ability to correlate fever and malaria, machines need the capacity to
>> reason that malaria causes fever. Once this kind of causal framework
>> is in place, it becomes possible for machines to ask counterfactual
>> questions — to inquire how the causal relationships would change
>> given some kind of intervention — which Pearl views as the
>> cornerstone of scientific thought. Pearl also proposes a formal
>> language in which to make this kind of thinking possible — a
>> 21st-century version of the Bayesian framework that allowed machines
>> to think probabilistically.
>>
>> Pearl expects that causal reasoning could provide machines with
>> human-level intelligence. They’d be able to communicate with humans
>> more effectively and even, he explains, achieve status as moral
>> entities with a capacity for free will — and for evil."
>>
>> https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/
>>
>> PS: If there are similar mind-bending and worldview changing books,
>> holler about them at me.
>>
>> Regards,
>> - Bharat
>>
>> -- Violence is the last refuge of the incompetent.
