Charles,

I don't think I've misunderstood what Turing was proposing. At least not any more than the thousands of other people who have written about Turing and his test over the decades:

http://en.wikipedia.org/wiki/Turing_test
http://www.zompist.com/turing.html (Twelve reasons to toss the Turing test)
http://plato.stanford.edu/entries/turing-test/ (read the entire article)
http://tekhnema.free.fr/3Lasseguearticle.htm (What Kind of Turing Test Did Turing Have in Mind?)

What I think Turing had in mind was a test of an artificial intelligence's ability to fool a human into thinking s/he was "talking to" another human. If the computer program claiming to be intelligent couldn't simulate a human successfully enough to fool an actual human, it failed the "imitation test." Therefore, the Turing test is a test to see if a computer program (artificial intelligence) can imitate human intelligence well enough to fool a human. This is what is meant by the term "Turing-indistinguishable (*from a human*)."

Clearly, Turing's test is capable *only* of judging human-like artificial intelligences. Yet, there are other forms of intelligence that humans have created and can, in the future, create. IMHO, as long as we continue to hold up the Turing test (whatever flavor you like) as the gold standard of what is or is not successful AGI, we will continue to make little progress. It's not that it's a WRONG TEST (although you will find people who argue strenuously that it is, in fact, wrong). It's that it tests the WRONG THING. Unless, that is, you plan on building a Turing-indistinguishable AGI. In which case I wish you luck, but have little hope for the success of your endeavors.

I propose dropping the Turing test as a means to test the efficacy of artificial general intelligence. I don't really care if the AGI I build can imitate a human successfully because I'm not setting out to create a human-like AGI. I'm setting out to build a human-compatible AGI that will be empathetic to humans but that will far surpass their intellectual capabilities in short order. There's a HUGE difference.

I truly believe that successful AGI will be to human intelligence what the Boeing 747 is to a bird. They both fly, but that's pretty much where the similarities end. The bird is an evolved, natural flier. The 747 is a product of the evolved human brain, inspired by the natural fliers, but it is, itself, a very un-bird-like, artificial flier. I say thank your lucky stars there was nobody like Alan Turing around in the latter part of the 19th century proposing that, to be deemed a successful artificial flying machine, the candidate machine would have to fool real birds into thinking it was another bird. If humans had continued to try to "imitate" a bird in order to achieve human flight, we'd still be taking ocean liners to Europe.

I strongly believe the first successful AGI will have very little in common with human intelligence. It will be better at many things beneficial to humanity, it will do those things faster, and it will be able to create its own, improved, replacement. I believe this so much that I am betting the rest of my life on it.

Cheers,

Brad


Charles Hixson wrote:
Brad Paulsen wrote:
...
Sigh. Your point of view is heavily biased by the unspoken assumption that AGI must be Turing-indistinguishable from humans. That it must be AGHI. This is not necessarily a bad idea, it's just the wrong idea given our (lack of) understanding of general intelligence. Turing was a genius, no doubt about that. But many geniuses have been wrong. Turing was tragically wrong in proposing (and AI researchers/engineers terribly naive in accepting) his infamous "imitation test," a simple test that has, almost single-handedly, kept AGI from becoming a reality for over fifty years. The idea that "AGI won't be real AGI unless it is embodied" is a natural extension of Turing's imitation test and, therefore, inherits all of its wrongness.
...
Cheers,

Brad
You have misunderstood what Turing was proposing. He was claiming that if a computer could act in the proposed manner, you would be forced to concede that it was intelligent, not the converse. I have seen no indication that he believed there was any requirement that a computer be able to pass the Turing test in order to be considered intelligent.


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com


