By the way, I don't think much of Levesque's criterion either. I think we will eventually write programs that can game those questions, but they will not be nearly as smart as a chimp, which could not answer those questions. Actually, the book proposal I was asked to look at concerned establishing criteria for determining whether an entity is intelligent. I have not investigated this matter myself; perhaps this book has something to say.

Richard E. Neapolitan, Professor
Division of Biomedical Informatics
Department of Preventive Medicine
Northwestern University Feinberg School of Medicine
750 N. Lake Shore Drive, 11th Floor
Chicago, Illinois 60611

On 8/24/2013 1:46 PM, Kathryn Blackmond Laskey wrote:
Rich et al -

Here is a counter-argument...

http://www.newyorker.com/online/blogs/elements/2013/08/why-cant-my-computer-understand-me.html


On Aug 20, 2013, at 9:22 AM, Richard E. Neapolitan wrote:

Dear Colleagues,
One of my publishers asked me to review a proposal for a book. The theme of the book is predicated on the statement that "it is widely believed that in the next 10 to 100 years scientists will succeed in creating human-level artificial general intelligence." There is no research that gives me any reason to believe this. In my recent AI textbook I took the stance that we have essentially failed at this endeavor. Does anyone know of any research that would justify such a statement?
Thanks,
Rich
--
Richard E. Neapolitan, Ph.D., Professor
Division of Health and Biomedical Informatics
Department of Preventive Medicine
Northwestern University Feinberg School of Medicine
750 N. Lake Shore Drive, 11th floor
Chicago IL 60611


_______________________________________________
uai mailing list
[email protected]
https://secure.engr.oregonstate.edu/mailman/listinfo/uai