Mark,

Could you specify some of those good reasons? (I.e., why isn't a
sufficiently large/fast von Neumann architecture a sufficient substrate
for a sufficiently complex mind to be conscious and feel -- or, at
least, to believe itself to be conscious and to believe itself to feel?)

For being [or believing itself to be] conscious - no, I don't see a
problem with coding that.

For feelings - like pain - there is a problem, but I don't feel like
explaining it bit by bit over many emails. There are books and articles
on this topic. Let me just emphasize that I'm talking about pain that
really *hurts* (note: with some drugs, you can alter the sensation of
pain so that patients still report pain of the same intensity - they
just no longer mind it). There are also levels to the qualitative
aspect of pain, and other complications, which make it harder to cover
the topic well. Start with Dennett's essay "Why You Can't Make a
Computer That Feels Pain" (in Brainstorms) if you are really
interested. BTW, people have argued about this stuff for years (just
like those never-ending AI-definition exchanges). I guess we'd better
spend our time on more practical AGI stuff (like KR, UI & problem
solving).

Jiri

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e