Matt Mahoney wrote:
--- On Wed, 11/5/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

In the future (perhaps the near future) it will be possible
to create systems that will have their own consciousness.

*Appear* to have consciousness, or do you have a test?

Yes.

But the test depends on an understanding of the system architecture.

The question of whether a test is possible at all depends on there being a coherent theory behind the idea of consciousness.

Stepping back for the moment, the entire question of ethics
depends crucially on your theory of how consciousness
arises.

We talk about such things as if we can answer the question of why it is OK to 
stomp on a roach but not a puppy by studying the brains of roaches and puppies.

It is not possible to look at the brains and decide whether or not it is okay to stomp, but we can decide whether or not the brain has a level of consciousness significant enough to be comparable to ours.

That is vital information in making a reasoned judgement of stompworthiness.

For the record, I am treading carefully.  As far as what
happens in my lab, I will explicitly put in place measures
to ensure that AGI systems that do have a chance of
reasonably high levels of consciousness will have the
fullest possible ethical protections.  I cannot speak for
anyone else, but that is my policy.

Now I am curious. Given a program P, what is your lab's criteria for 
determining whether P is conscious?

Complicated.

I'll get right back to ya on that ;-)




Richard Loosemore



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com