It was actually intended as a kind of "thought experiment" about what the criterion for attributing intelligence to machines might be, not an actual test. Then Loebner started running an actual (though limited) "Turing Test" competition in (I think) Boston about 15 years ago. For a while it was taken quite seriously (and even the best program routinely did very poorly on it), but Loebner himself was so erratic an individual that the AI and CogSci "heavyweights" who attended in the early years have since abandoned it.

Computer programs still don't fool many people much of the time (except sometimes when the domain of the discussion is extremely limited). Most of the "artificial intelligentsia" don't actually take the Turing Test to be the "proper" criterion of machine intelligence anymore (though no alternative consensus has emerged either).

Regards,
--
Christopher D. Green
Department of Psychology
York University
Toronto, Ontario, Canada
M3J 1P3

e-mail: [EMAIL PROTECTED]
phone: 416-736-5115 ext. 66164
fax: 416-736-5814
http://www.yorku.ca/christo/
============================
Marie Helweg-Larsen wrote:
My students are reading Stanovich's How to Think Straight about Psychology. Stanovich describes the Turing proposal (end of Chp 3) and the basic test: Can a human communicating with a computer and communicating with a human being (in another room) tell who is the computer and who is the human? However, Stanovich never reveals the result of the Turing test. So did the test show that people could not reliably tell who the computer was and who the human was?
On a related note, what is the current state of AI on this issue? Can humans in general be fooled into thinking computers are human?
Marie


