First, stepping back, https://youtu.be/ajGX7odA87k provides some examples of my 
view that ML and AI involve too much magical thinking. That jibes with some of 
the points in the Quanta essay. I'm especially sensitive to this because of my 
days in AI, including a stint in the MIT clinical decision making group (CDMG) 
over four decades ago. The focus there wasn't just on computing but also on 
understanding how doctors approached problems. Humans don't do a great job 
either.

But when I see 

"Three decades ago, a prime challenge in artificial intelligence research was 
to program machines to associate a potential cause to a set of observable 
conditions. Pearl figured out how to do that using a scheme called Bayesian 
networks. Bayesian networks made it practical for machines to say that, given a 
patient who returned from Africa with a fever and body aches, the most likely 
explanation was malaria. In 2011 Pearl won the Turing Award, computer science’s 
highest honor, in large part for this work."

I'm wary because in the CDMG we recognized that Bayesian approaches didn't 
work when there wasn't a well-defined space of choices. But causal reasoning is 
also a problem when there isn't enough information. I can understand the 
attraction of a WTF approach to ML/AI (I call it splat -- throwing the problem 
against a wall and reading the shards).
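To make that concrete, here's a toy version of the Quanta malaria example in 
Python. It's a minimal sketch: the priors, the symptom likelihoods, and the 
three-disease hypothesis list are all numbers I invented for illustration, not 
medical data, and it makes the usual simplification that the evidence is 
conditionally independent given the disease:

    # Toy "diagnosis" by Bayes' rule. Every number here is invented
    # for illustration; nothing below is real medical data.
    priors = {"malaria": 0.01, "flu": 0.05, "other": 0.94}  # P(disease)
    likelihood = {                                          # P(evidence | disease)
        "malaria": {"fever": 0.95, "aches": 0.90, "travel": 0.90},
        "flu":     {"fever": 0.80, "aches": 0.70, "travel": 0.02},
        "other":   {"fever": 0.10, "aches": 0.20, "travel": 0.02},
    }

    def posterior(evidence):
        # Score each hypothesis: prior times the product of evidence
        # likelihoods (evidence assumed independent given the disease),
        # then normalize so the scores sum to 1.
        scores = {}
        for disease, prior in priors.items():
            p = prior
            for e in evidence:
                p *= likelihood[disease][e]
            scores[disease] = p
        total = sum(scores.values())
        return {d: p / total for d, p in scores.items()}

    # Fever, body aches, and recent travel to Africa, as in the quote.
    print(posterior(["fever", "aches", "travel"]))  # malaria ~ 0.89

Note what the sketch quietly requires: a hard-coded list of candidate 
diseases. That enumeration is exactly the well-defined space of choices we 
found we couldn't count on in the CDMG; without it the arithmetic has nowhere 
to start.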

So, yeah, it would be nice to be able to understand why ... whether we're three 
years old or eighty. Yet we still don't know why the chicken crossed the road 
-- we understand some of the ways but not the ultimate why.
