Hi Ben,

I just read Chalmers' article and yours.

You concluded your article with:

> In artificial intelligence terms, the present theory suggests that if
> an AI program is constructed so that its dynamics give rise to a
> constant stream of patterns that are novel and significant (measured
> relative to the system itself), then this program will report
> experiences of awareness and consciousness somewhat similar to those
> that humans report.

This is a useful statement because it is testable at some stage, once AI exists that can hold complex conversations.

By the way, would it be true that a "novel and significant" pattern is one that by definition triggers the AI's attention system?  If so, then that is a common point in both our speculations.

I think I've nearly exhausted the value of my speculations for the moment.  My intuition is that qualia are going to be different in intelligences that do *not* have long evolutionary histories of being social, compared to those that do have such histories.  (A species could be currently non-social, but if it has evolved from antecedents that have gone through a social phase, then my guess is that it would experience qualia more like social species do, i.e. the capacity for experiencing qualia is likely to be retained to some degree.)

My guess is that structured processes will be discovered in brains that account for the subjective experience of qualia - and that qualia will not be experienced without some appropriate system for qualia generation, i.e. pattern recognition by an AI will not be enough by itself to give rise to the experience of qualia.  But this intuition is so speculative and so poorly grounded on my part that it probably doesn't warrant comment from others! :)  So I might leave it there and just wait to see what people come up with in the future.

Cheers, Philip

