Matt Mahoney wrote:
--- On Tue, 10/14/08, Colin Hales <[EMAIL PROTECTED]> wrote:

The only reason for not connecting consciousness with AGI is a
situation where one can see no mechanism or role for it. That inability
is no proof there is none... and I have both, to the point of having a
patent in progress. Yes, I know it's only my claim at the moment, but
it's behind why I believe the links to machine consciousness are not
optional, despite the cultural state/history of the field at the moment
being less than perfect and folks cautiously sidling around
consciousness as if it were a bomb under their budgets.

Colin, I read the paper (currently in publication) that you were so kind to send me. For 
those who have not seen it, it is a well-written, comprehensive survey of 
research in machine consciousness. It does not take a position on whether 
consciousness plays an essential role in AGI. (I understand that taking a 
controversial position would probably have resulted in rejection.)

With regard to COMP, I assume you define COMP to be the position that 
everything the mind does is, in principle, computable. If I understand your 
position, consciousness does play a critical role in AGI. However, we don't 
know what it is. Therefore we need to find out by using scientific research, 
then duplicate that process (if possible) in a machine before it can achieve 
AGI.

Here and in your paper, you have not defined what consciousness is. Most 
philosophical arguments can be traced to disagreements about the meanings of 
words. In your paper you say that consciousness means having phenomenal states, 
but you don't define what a phenomenal state is.

Without a definition, we default to what we think it means. "Everybody knows" 
what consciousness is. It is something that all living humans have. We associate 
consciousness with human properties: having a name, a face, and emotions; communicating 
in natural language; learning; behaving in the ways we expect people to behave; looking 
human. Thus, we ascribe partial degrees of consciousness (with appropriate ethical 
treatment) to animals, video game characters, human-shaped robots, and teddy bears.

To argue your position, you need to nail down a definition of consciousness. 
But that is hard. For example, you could define consciousness as having goals. 
So if a dog wants to go for a walk, it is conscious. But then a thermostat 
wants to keep the room at a set temperature, and a linear regression algorithm 
wants to find the best straight line fit to a set of points.
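
To make that concrete, here is a toy sketch (just an illustration, not taken from any real controller or library) of both "goal-seekers" in a few lines of Python:

# A thermostat and a least-squares line fit: two systems that pursue a
# "goal" perfectly well, and that nobody would call conscious.

def thermostat(current_temp, set_point=20.0):
    """Return a heating command that drives the room toward the set point."""
    return "heat_on" if current_temp < set_point else "heat_off"

def best_fit_line(points):
    """Ordinary least squares: 'wants' the line minimizing squared error."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

print(thermostat(18.5))                         # heat_on
print(best_fit_line([(0, 1), (1, 3), (2, 5)]))  # (2.0, 1.0)

Both systems reach their "goals" reliably, yet neither tempts anyone to ascribe experience to them, which is why "having goals" alone does not work as a definition.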

You could define consciousness as the ability to experience pleasure and pain. But then 
you need a test to distinguish experience from mere reaction, or else I could argue that 
simple reinforcement learners like http://www.mattmahoney.net/autobliss.txt experience 
pain. It boils down to how you define "experience".
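
To see what "mere reaction" looks like, here is a stripped-down toy learner, far simpler than autobliss and purely illustrative: a table of action scores nudged up or down by a reward signal, with the negative signal playing the role of "pain".

import random

class TinyLearner:
    """Keeps a score per action and repeats whatever was rewarded; nothing more."""
    def __init__(self, actions):
        self.value = {a: 0.0 for a in actions}

    def act(self):
        # Mostly exploit the best-scoring action, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def feel(self, action, reward):
        # "Pain" is just a negative number nudging the score down.
        self.value[action] += 0.1 * (reward - self.value[action])

learner = TinyLearner(["press_lever", "do_nothing"])
for _ in range(200):
    a = learner.act()
    r = 1.0 if a == "press_lever" else -1.0   # reward one action, "hurt" the other
    learner.feel(a, r)
print(learner.value)  # press_lever ends up strongly preferred

The learner ends up avoiding the "painful" action, yet there is nothing inside it but a score table, so calling that avoidance an experience of pain needs more than the behavior itself.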

You could define consciousness as being aware of your own thoughts. But again, you must 
define "aware". We distinguish conscious or episodic memories, such as when I 
recalled yesterday something that happened last month, from unconscious or procedural 
memories, such as the learned skill of coordinating my leg muscles while walking. We can 
do studies showing that conscious memories are stored in the hippocampus and higher 
layers of the cerebral cortex, while unconscious memories are stored in the cerebellum. But 
that is not really helpful for AGI design. The important distinction is that we remember 
remembering conscious memories but not unconscious ones. Reading from conscious memory also 
writes into it. But I can simulate this process in simple programs, for example, a 
database that logs transactions.
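
A toy illustration of that transaction-log analogy (a sketch only, not any particular database product): a store whose recall operation writes a record of the recall back into the same log, so a later query can retrieve the record of an earlier recall.

import time

class EpisodicStore:
    """A memory log where every read is itself written back as a new entry."""
    def __init__(self):
        self.log = []

    def store(self, event):
        self.log.append((time.time(), "event", event))

    def recall(self, keyword):
        hits = [e for e in self.log if keyword in str(e[2])]
        # The act of reading writes a new entry recording the read.
        self.log.append((time.time(), "recall",
                         "recalled '%s' (%d hits)" % (keyword, len(hits))))
        return hits

m = EpisodicStore()
m.store("walked the dog last month")
m.recall("dog")        # first recall
print(m.recall("recalled"))   # finds the record of the first recall: remembering remembering

The second query returns the log entry created by the first one, which is the whole "remember remembering" property reproduced in a few lines, with no temptation to call the program conscious.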

So if you can nail down a definition of consciousness without pointing to a 
human, I am willing to listen. Otherwise we default to the possibility of 
building AGI on COMP principles and then ascribing consciousness to it since it 
behaves just like a human.

-- Matt Mahoney, [EMAIL PROTECTED]

I am way past merely defining anything. I know what phenomenal fields are constructed of: virtual Nambu-Goldstone bosons. Brain material is best regarded as a radically anisotropic quasi-fluid undergoing massive phase changes on multiple time scales. The problem is one of thermodynamics, not abstract computation. Duplicating the boson generation inorganically and applying that process to the regulatory mechanisms of learning is exactly what I plan for my AGI chips. The virtual particles were named "Qualeons" by some weird guy here that I was talking to one day. I forget his name. I'd better find that out! I digress. :-)

It would take three PhD dissertations to cover everything from quantum mechanics to psychology. You have to be a polymath. And to see how they explain consciousness you need to internalise 'dual aspect science', from which perspective it's all obvious. I have to change the whole of science from single-aspect to dual-aspect to make it understood. Which really, really sucks.

So you'll just have to wait. Sorry. I also have patent/IP issues. Just rest assured that there is a very solid non-computational (non-linear electrodynamics) route to AGI, and that testing against human and COMP benchmarks is the route to empirical proof. I intend the empirical testing to deliver the goods, not words. But bits and pieces will come forth, slowly. If it were a simple story it would be done already. Sitting around defining things won't get it built.

cheers
colin hales



