The Man Who Would Teach Machines to Think

"...Take Deep Blue, the IBM supercomputer that bested the chess grandmaster 
Garry Kasparov. Deep Blue won by brute force. For each legal move it could 
make at a given point in the game, it would consider its opponent’s 
responses, its own responses to those responses, and so on for six or more 
steps down the line. With a fast evaluation function, it would calculate a 
score for each possible position, and then make the move that led to the 
best score. What allowed Deep Blue to beat the world’s best humans was raw 
computational power. It could evaluate up to 330 million positions a 
second, while Kasparov could evaluate only a few dozen before having to 
make a decision. 
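The search the article describes is classic depth-limited minimax: expand each legal move, recurse on the opponent's best replies for a fixed number of plies, score the frontier with a fast evaluation function, and play the move with the best guaranteed score. Here is a minimal sketch of that idea over a toy game tree (nested lists standing in for positions, integers for leaf scores) — an illustration of the technique, not Deep Blue's actual implementation:

```python
# Depth-limited minimax over a toy game tree.
# Interior nodes are lists of child positions; leaves are integer scores.
# This is a generic sketch of the search strategy, not chess-specific code.

def evaluate(node):
    """Toy 'fast evaluation function': a leaf is its own score; a
    position cut off mid-search scores as the mean of its children."""
    if isinstance(node, int):
        return node
    return sum(evaluate(c) for c in node) / len(node)

def minimax(node, depth, maximizing):
    """Best achievable score from `node`, assuming the maximizing
    player and the minimizing opponent both play optimally."""
    if depth == 0 or isinstance(node, int):
        return evaluate(node)
    scores = [minimax(child, depth - 1, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

def best_move(tree, depth):
    """The program's turn: pick the move (child index) whose subtree
    guarantees the highest score against the opponent's best replies."""
    scores = [minimax(child, depth - 1, maximizing=False) for child in tree]
    return max(range(len(scores)), key=scores.__getitem__)
```

For example, `best_move([[3, 5], [2, 9], [0, 1]], depth=2)` picks move 0: the opponent would answer [2, 9] with 2 and [0, 1] with 0, but can do no better than 3 against the first move. Deep Blue's edge was doing this over hundreds of millions of real chess positions per second, not any difference in the underlying idea.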

Hofstadter wanted to ask: Why conquer a task if there’s no insight to be 
had from the victory? “Okay,” he says, “Deep Blue plays very good chess—so 
what? Does that tell you something about how *we* play chess? No. Does it 
tell you about how Kasparov envisions, understands a chessboard?” A brand 
of AI that didn’t try to answer such questions—however impressive it might 
have been—was, in Hofstadter’s mind, a diversion. He distanced himself from 
the field almost as soon as he became a part of it. “To me, as a fledgling 
AI person,” he says, “it was self-evident that I did not want to get 
involved in that trickery. It was obvious: I don’t want to be involved in 
passing off some fancy program’s behavior for intelligence when I know that 
it has nothing to do with intelligence. And I don’t know why more people 
aren’t that way...”

This is precisely my argument against John Clark's position.

Another quote I will be stealing:

"Airplanes don’t flap their wings; why should computers think?"

You received this message because you are subscribed to the Google Groups 
"Everything List" group.