--- On Tue, 10/14/08, Colin Hales <[EMAIL PROTECTED]> wrote:

> The only reason for not connecting consciousness with AGI is a
> situation where one can see no mechanism or role for it. That inability
> is no proof there is none... and I have both, to the point of having a
> patent in progress. Yes, I know it's only my claim at the moment... but
> it's behind why I believe the links to machine consciousness are not
> optional, despite the cultural state/history of the field at the moment
> being less than perfect and folks cautiously sidling around
> consciousness like it was a bomb under their budgets.

Colin, I read the in-press paper that you were kind enough to send me. For 
those who have not seen it, it is a well written, comprehensive survey of 
research in machine consciousness. It does not take a position on whether 
consciousness plays an essential role in AGI. (I understand that taking a 
controversial position probably would have resulted in rejection).

With regard to COMP, I assume you define COMP to be the position that 
everything the mind does is, in principle, computable. If I understand your 
position, consciousness does play a critical role in AGI, but we don't yet know 
what it is. Therefore we need to find out through scientific research and then 
duplicate the process (if possible) in a machine before that machine can 
achieve AGI.

Here and in your paper, you have not defined what consciousness is. Most 
philosophical arguments can be traced to disagreements about the meanings of 
words. In your paper you say that consciousness means having phenomenal states, 
but you don't define what a phenomenal state is.

Without a definition, we default to what we think it means. "Everybody knows" 
what consciousness is. It is something that all living humans have. We 
associate consciousness with properties of humans: having a name, a face, and 
emotions; communicating in natural language; learning; behaving the way we 
expect people to behave; looking human. Thus, we ascribe partial degrees of 
consciousness (with appropriate ethical treatment) to animals, video game 
characters, human-shaped robots, and teddy bears.

To argue your position, you need to nail down a definition of consciousness. 
But that is hard. For example, you could define consciousness as having goals. 
So if a dog wants to go for a walk, it is conscious. But then a thermostat 
wants to keep the room at a set temperature, and a linear regression algorithm 
wants to find the best straight line fit to a set of points.
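
To make that concrete, here is a toy sketch (my own, purely illustrative; the 
function names are mine): if "wanting" just means acting to reduce an error 
signal, then the thermostat and the regression both qualify.

# thermostat_step and best_fit_line are hypothetical names of mine.

def thermostat_step(current_temp, set_point):
    # The thermostat "wants" current_temp == set_point: it acts to
    # shrink the error.
    error = set_point - current_temp
    return "heat on" if error > 0 else "heat off"

def best_fit_line(xs, ys):
    # Least squares "wants" the line minimizing squared error.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx           # (slope, intercept)

print(thermostat_step(18.0, 21.0))          # -> heat on
print(best_fit_line([0, 1, 2], [1, 3, 5]))  # -> (2.0, 1.0)

Nothing in those few lines looks like a candidate for consciousness, yet under 
the "goals" definition both functions want something.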

You could define consciousness as the ability to experience pleasure and pain. 
But then you need a test to distinguish experience from mere reaction, or else 
I could argue that simple reinforcement learners like 
http://www.mattmahoney.net/autobliss.txt experience pain. It boils down to how 
you define "experience".

You could define consciousness as being aware of your own thoughts. But again, 
you must define "aware". We distinguish conscious or episodic memories, such as 
my recalling yesterday something that happened last month, from unconscious or 
procedural memories, such as the learned skill of coordinating my leg muscles 
while walking. Studies can show that conscious memories are stored in the 
hippocampus and the higher layers of the cerebral cortex, while unconscious 
memories are stored in the cerebellum, but that is not really helpful for AGI 
design. The important distinction is that we remember remembering conscious 
memories but not unconscious ones. Reading from conscious memory also writes 
into it. But I can simulate this process in simple programs, for example, a 
database that logs its read transactions, as in the sketch below.
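
Here is what I mean, with names of my own invention: a store whose reads leave 
traces that later reads can retrieve, so the system "remembers remembering":

# EpisodicStore is a hypothetical name of mine, purely illustrative.

class EpisodicStore:
    def __init__(self):
        self.events = []  # append-only log; reads write into it too

    def write(self, item):
        self.events.append(("stored", item))

    def read(self, item):
        recalled = [e for e in self.events if e[1] == item]
        self.events.append(("recalled", item))  # the read leaves a trace
        return recalled

m = EpisodicStore()
m.write("what happened last month")
m.read("what happened last month")         # first recall
print(m.read("what happened last month"))  # includes the earlier recall

By the "remember remembering" criterion this trivial log remembers its own 
reads, which suggests the criterion alone is not enough.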

So if you can nail down a definition of consciousness without pointing to a 
human, I am willing to listen. Otherwise we default to the possibility of 
building AGI on COMP principles and then ascribing consciousness to it, since 
it behaves just like a human.

-- Matt Mahoney, [EMAIL PROTECTED]


