--- On Wed, 11/5/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> In the future (perhaps the near future) it will be possible
> to create systems that will have their own consciousness. 

*Appear* to have consciousness, or do you have a test?

> Stepping back for the moment, the entire question of ethics
> depends crucially on your theory of how consciousness
> arises.

We talk about such things as if we can answer the question of why it is OK to 
stomp on a roach but not a puppy by studying the brains of roaches and puppies.

> For the record, I am treading carefully.  As far as what
> happens in my lab, I will explicitly put in place measures
> to ensure that AGI systems that do have a chance of
> reasonably high levels of consciousness will have the
> fullest possible ethical protections.  I cannot speak for
> anyone else, but that is my policy.

Now I am curious. Given a program P, what are your lab's criteria for
determining whether P is conscious?

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
