--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:
> For example, in
> fifty years, I think it is quite possible we will be able to say with some
> confidence whether certain machine intelligences we design are conscious or not,
> and whether their pain is as real as the pain of another type of animal, such
> as a chimpanzee, dog, bird, reptile, fly, or amoeba.

No, we won't, because people are free to decide what makes pain "real". The 
question is not resolved even for simple systems that are completely understood, 
for example, the 302-neuron nervous system of C. elegans. If it can be trained 
by reinforcement learning, is that "real" pain? What about autobliss? It learns 
to avoid negative reinforcement and it says "ouch". Do you really think that if 
we build AGI in the likeness of a human mind, stick it with a pin, and it says 
"ouch", we will finally have an answer to the question of whether machines are 
conscious?
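To make the point concrete: a system that "learns to avoid negative reinforcement and says ouch" needs almost no machinery at all. The sketch below is a hypothetical toy, not the actual autobliss program; the choice of XOR as the target function, the epsilon-greedy policy, and the learning rate are all assumptions for illustration.

```python
import random

random.seed(0)

# Q[(a, b)][out]: estimated value of answering `out` on input (a, b)
Q = {(a, b): {0: 0.0, 1: 0.0} for a in (0, 1) for b in (0, 1)}

def target(a, b):
    return a ^ b  # the logic function to be learned (XOR, chosen arbitrarily)

ouches = 0
for _ in range(2000):
    a, b = random.choice((0, 1)), random.choice((0, 1))
    # epsilon-greedy action selection: mostly exploit, occasionally explore
    if random.random() < 0.1:
        out = random.choice((0, 1))
    else:
        out = max(Q[(a, b)], key=Q[(a, b)].get)
    reward = 1.0 if out == target(a, b) else -1.0  # negative reinforcement
    if reward < 0:
        ouches += 1  # the agent's "ouch"
    # move the value estimate toward the observed reward
    Q[(a, b)][out] += 0.5 * (reward - Q[(a, b)][out])

# after training, the greedy policy reproduces the target function
learned = {k: max(v, key=v.get) for k, v in Q.items()}
```

A few dozen lines, and the thing demonstrably "avoids pain" in the operational sense; whether that counts as real pain is exactly the question the program cannot answer for us.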

And there is no reason to believe the question will be easier in the future. 
One hundred years ago there was little controversy over animal rights, 
euthanasia, abortion, or capital punishment. Do you think that the addition of 
intelligent robots will make the boundary between human and non-human any 
sharper?

-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com