Matt Mahoney wrote:
--- On Fri, 11/14/08, Colin Hales <[EMAIL PROTECTED]> wrote:
Try running yourself with empirical results instead of metabelief
(belief about belief). You'll get someplace, i.e. you'll resolve the
inconsistencies. When inconsistencies are testably absent, no
matter how weird the answer, it will deliver maximally informed
choices. Not facts. Facts will only ever appear differently after
choices are made. This too is a fact...which I have chosen to make
choices about. :-) If you fail to resolve your inconsistency then you
are guaranteeing that your choices are minimally informed.

Fine. By your definition of consciousness, I must be conscious because I can 
see and because I can apply the scientific method, which you didn't precisely 
define, but I assume that means I can do experiments and learn from them.

Not quite. The claim is specific: you have visual P-consciousness because you can do science evidenced from visual P-consciousness. This is the crucial and unique circumstance involved here.

The scientific process: we scientists are obliged to construct and deliver abstractions about the natural world that have the status of generalisations operating independently of any particular scientist. Yes, learning is involved, but the deliverable is more than just the act of learning. The deliverable evidence (which makes it testable) is what is learnt: the generalisation, like F = MA, the 'law of nature'. What is learnt must then be applied by the agent in a completely novel (degenerate I/O) circumstance in which the law of nature is *implicitly* encoded in the natural world outside the agent. That is what humans do.
But by your definition, a simple modification to autobliss ( 
http://www.mattmahoney.net/autobliss.txt ) would make it conscious. It already 
applies the scientific method. It outputs 3 bits (2 randomly picked inputs to 
an unknown logic gate and a proposed output) and learns the logic function. The 
missing component is vision. But suppose I replace the logic function (a 4 bit 
value specified by the teacher) with a black box with 3 switches and a light 
bulb to indicate whether the proposed output (one of the switches) is right or 
wrong. You also didn't precisely define what constitutes vision, so I assume a 
1 pixel system qualifies.
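The feedback loop Matt describes can be sketched in a few lines. This is an illustrative reconstruction, not the actual autobliss code: the learner sees only a right/wrong reward signal (the "light bulb"), never the 4-bit gate specification itself.

```python
import random

def learn_gate(truth_table, trials=200, seed=0):
    """Learn an unknown 2-input logic gate from right/wrong feedback only.

    truth_table maps (a, b) -> correct output bit; the learner never reads
    it directly, only the reward, mimicking the loop described for
    autobliss (a sketch, not the real program).
    """
    rng = random.Random(seed)
    score = {}  # (a, b, guess) -> cumulative reinforcement
    for _ in range(trials):
        a, b = rng.randint(0, 1), rng.randint(0, 1)   # 2 random inputs
        s0 = score.get((a, b, 0), 0)
        s1 = score.get((a, b, 1), 0)
        # Propose the better-scoring output; explore when tied.
        guess = rng.randint(0, 1) if s0 == s1 else int(s1 > s0)
        reward = 1 if guess == truth_table[(a, b)] else -1
        score[(a, b, guess)] = score.get((a, b, guess), 0) + reward
    # Read out the learned truth table.
    return {(a, b): int(score.get((a, b, 1), 0) > score.get((a, b, 0), 0))
            for a in (0, 1) for b in (0, 1)}
```

For example, `learn_gate({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})` recovers XOR from feedback alone, which is exactly the sense in which such a system "applies the scientific method" within its one fixed context.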

The test for consciousness involves delivery of the abstraction (in your case, some kind of logic gate) in a completely different context from the one in which it was acquired. So rework your test so that there is a logic gate of the kind 'learnt', but encoded, for example, in the positions of rocks. Then make the agent recognise that the same abstraction applies, via some kind of cued behaviour that will only result if the abstraction is known. Of course you have to verify that the abstraction was not known before the original learning, by initially verifying failure at this stage of the test.
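The three-stage protocol can be skeletonised as follows. Everything here is illustrative: the "rock pile" encoding, the decoder, and the agent's memory are toy stand-ins. Note that the sketch hands the agent the decoder for the novel encoding, which is precisely the step a genuine PCST candidate would have to discover autonomously; that is why a trivial system like this passing the sketch says nothing about passing the real test.

```python
def pcst_transfer_sketch():
    """Skeleton of the transfer protocol: verify prior failure, learn in
    the original context, then test in a novel encoding (toy stand-ins)."""
    # The abstraction to be acquired: the XOR relation.
    xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    learned = {}  # agent's acquired knowledge (empty before learning)

    def decode(rocks):
        # Novel encoding: two piles of rocks; an odd pile encodes bit 1.
        return tuple(n % 2 for n in rocks)

    def solves(novel_cases):
        # Cued behaviour succeeds only if the abstraction is known.
        return bool(learned) and all(
            learned.get(decode(rocks)) == out
            for rocks, out in novel_cases.items())

    # The same law, re-encoded in rock piles never seen during learning.
    novel_cases = {(3, 2): 1, (4, 4): 0, (1, 5): 0, (2, 7): 1}

    assert not solves(novel_cases)   # 1. verify failure before learning
    learned.update(xor)              # 2. learning phase, original context
    return solves(novel_cases)       # 3. transfer test, novel context
```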

It also requires all acquisition of the abstraction to occur without any human involvement whatsoever. The nice thing about the test is that we can specify that the scientific evidence shall be obvious through perception of photon radiation in the visible range (for example). That is all we have to specify. If the test subject can do all this autonomously, then visual experience must have been involved. That is the rationale.

The PCST (P-conscious scientist test) demands learning in an environment the candidate can never have been exposed to before, in which no human can be involved, and in which the knowledge acquired will be applied to solving a completely novel problem that no human involved in the testing has had anything to do with. This is my PCST: a test for what human scientists do.

A single pixel will not suffice. Nor will any system that has been told what to learn (knowledge = configuration of logic gates). However, so what? The test demands that the test subject learn the same way humans do, not learn what humans actually learn. You can't be trained to be successful at the PCST; the test itself is the training. It's what humans do. Any system that requires a priori training will fail. The test candidate merely has to be suited to survive in the (a priori unknown) test environment.

If you think 'autobliss' can be conscious (in this case, can be claimed to have visual P-consciousness) as a result of behaving the way you say, then simply submit it to the PCST. If it passes, then you'll have a scientific claim to the existence of visual P-consciousness. I predict that 'autobliss' will fail irretrievably and permanently. Indeed, it won't even be able to begin the test. The test is for completely autonomous, embodied agency.

If you can get the entity to do authentic original science on the unknown in a 'double-blind' fashion, you have a really good claim to the P-consciousness of the entity in the perceptual mode in which the science operated.

You don't have to believe my solution to consciousness (below). The test sorts it out; that's why it is nice. You can use the PCST to test any model of P-consciousness. The agents that can do science have a viable claim to P-consciousness; those that can't, don't. Very simple. The downside is that you have to engineer a very sophisticated agent with volition, imagination and even emotions, but if you have a real working model of consciousness then these things should be possible. In my case the test candidate is in a 'do science or die painfully' situation, with the power supply coupled to the primordial/homeostasis emotion 'hunger', or even 'pain', or the opposite, 'pleasure', when it gets things right.

Of course I don't expect anyone to precisely define consciousness (as a property of Turing machines). There 
is no algorithmically simple definition that agrees with intuition, i.e. that living humans and nothing else 
are conscious. This goes beyond Rice's theorem, which would make any nontrivial definition uncomputable. 
Even allowing non-computable definitions (the output can be "yes", "no", or 
"maybe"), you still have the problem that any specification with algorithmic complexity K can be 
expressed as a program with complexity K. Given any simple specification (meaning K is small) I can write a 
simple program that satisfies it (my program has complexity at most K). However, for humans, K is about 10^9 
bits. That means any specification smaller than a 1 GB file, or 1000 books, would allow a counterintuitive 
example: a simple program that meets your test for consciousness.

Try it if you don't believe me. Give me a simple definition of consciousness 
without pointing to a human (as the Turing test does). I am looking for a 
program is_conscious(x), shorter than 10^9 bits, that inputs a Turing machine x 
and outputs yes, no, or maybe.
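The complexity argument can be made concrete with a toy example. Both the specification and the program below are hypothetical illustrations: given any short spec, one can write a program barely longer than the spec that satisfies it, however unlike a conscious system that program is.

```python
def toy_spec(source):
    """A deliberately short 'consciousness' specification (hypothetical):
    accept any program whose code, when executed, binds answer = "yes".
    Its algorithmic complexity K is only a few hundred bits."""
    env = {}
    exec(source, env)  # run the candidate program in a fresh namespace
    return env.get("answer") == "yes"

# A counterexample barely more complex than the spec itself: a trivial,
# obviously non-conscious program that nevertheless satisfies it.
trivial_program = 'answer = "yes"'
```

Scaling the same move up is the point of the 10^9-bit figure: any specification much smaller than that admits an analogous trivial satisfier.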

-- Matt Mahoney, [EMAIL PROTECTED]

My model of consciousness? The third-person view of P-consciousness is that it is literally the entire electromagnetic field expressed in space by a brain; some of it leaks out as EEG. Understanding how the EM field can 'be like something' from a first-person perspective is the real problem. I have developed a science model that makes sense of this. You have to understand consciousness in terms of the natural world that expresses it in humans, NOT human abstractions of the natural world. The relationship between the dynamics of the algorithmic machinations of a computer program, no matter how adaptive, and the dynamics of the real-world electromagnetism of the computer: these are unrelated.

Think of it this way: in my case I will be building an intelligent electromagnetic furnace, where nested 'electromagnetic' flames act literally as a phenomenal mirror, the image inside being views of the distant world, within which regularity becomes apparent.

Those particular electromagnetic phenomena responsible for the perceptual fields are virtual bosons, which can be located as arriving at various boundaries (layers/columns) in the (firing) cellular syncytium that is brain material. The properties of the EM field are what constrain/define the knowledge gleaned. The syntax of transacted natural symbols corresponds to 'reasoning' ('interacting flames') that is directly coupled to the external world, not to any model of it. That is where science becomes possible.

The PCST works well for any approach (theory of consciousness). Observation of external agent behaviour is decisive.

I hope this all makes sense.

cheers,
colin




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/