On 6/4/07, Papiewski, John <[EMAIL PROTECTED]> wrote:
(...)
I disagree.  If even a half-baked, partial, buggy, slow simulation of a
human mind were available, the captains of industry would jump on it in
a second.
(...)
Do you remember when no business had an automated answering service?  That
transition took only a few years.
(...)

Considering previous messages from Matt, I think that when he mentions
"simulation of a human mind" he means an entity possessing not only
human intelligence, but also human feelings and motivations. That, I
agree, would look "uneconomical" in the sense that it would have the
same problems as a human worker - boredom, getting pissed off, going
on strike, and so on. (Not to mention the ethical problem of keeping a
human-equivalent intelligence in what would probably amount to slavery.)
Maybe a "profitable" AI would just do the work it is supposed to do
with the same efficiency as a human, never complain, and never
manifest the slightest hint of emotion.

