On Thursday, October 24, 2013 7:16:55 PM UTC-4, stathisp wrote:
>
> On 25 October 2013 03:39, Craig Weinberg <whats...@gmail.com> 
> wrote: 
> > 
> http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
>  
> > 
> > The Man Who Would Teach Machines to Think 
> > 
> > "...Take Deep Blue, the IBM supercomputer that bested the chess 
> > grandmaster Garry Kasparov. Deep Blue won by brute force. For each legal 
> > move it could make at a given point in the game, it would consider its 
> > opponent’s responses, its own responses to those responses, and so on 
> > for six or more steps down the line. With a fast evaluation function, it 
> > would calculate a score for each possible position, and then make the 
> > move that led to the best score. What allowed Deep Blue to beat the 
> > world’s best humans was raw computational power. It could evaluate up to 
> > 330 million positions a second, while Kasparov could evaluate only a few 
> > dozen before having to make a decision. 
> > 
> > Hofstadter wanted to ask: Why conquer a task if there’s no insight to be 
> > had from the victory? “Okay,” he says, “Deep Blue plays very good 
> > chess—so what? Does that tell you something about how we play chess? No. 
> > Does it tell you about how Kasparov envisions, understands a 
> > chessboard?” A brand of AI that didn’t try to answer such 
> > questions—however impressive it might have been—was, in Hofstadter’s 
> > mind, a diversion. He distanced himself from the field almost as soon as 
> > he became a part of it. “To me, as a fledgling AI person,” he says, “it 
> > was self-evident that I did not want to get involved in that trickery. 
> > It was obvious: I don’t want to be involved in passing off some fancy 
> > program’s behavior for intelligence when I know that it has nothing to 
> > do with intelligence. And I don’t know why more people aren’t that 
> > way...” 
> > 
> > This is precisely my argument against John Clark's position. 
> > 
> > Another quote I will be stealing: 
> > 
> > "Airplanes don’t flap their wings; why should computers think?" 
>
> You could say that human chess players just take in visual data, 
> process it in a series of biological relays, then send electrical 
> signals to muscles that move the pieces around. This is what an alien 
> scientist would observe. That's not thinking! That's not 
> understanding! 
>

Right, but since we understand that such an alien observation would be in 
error, we must give our own experience the benefit of the doubt. The 
computer deserves no such benefit of the doubt, since there is no question 
that it has been assembled intentionally from controllable parts. When we 
see a ventriloquist with a dummy, we do not seriously entertain the 
possibility that we are mistaken about which one is really the 
ventriloquist, or that the two are equivalent to each other. 

When we look at natural presences, like atoms or galaxies, the scope of 
their persistence is so far beyond any human relation that they do deserve 
the benefit of the doubt. We have no reason to believe that they were 
assembled by anything other than themselves. The fact that we are made of 
atoms, and atoms are made from stars, is another point in their favor, 
whereas no living organism that we have encountered is made of inorganic 
atoms, or of pure mathematics, or can survive by consuming only inorganic 
atoms or mathematics.

Craig


>
> -- 
> Stathis Papaioannou 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.