Colin said: The PCT is “Do science on your own or die slowly and painfully”. No human is involved at any point. We video what is going on inside and just sit and watch and wait. If the robot exits the test rig, it passes the test.
The PCT prescribes zero human involvement and radical novelty. The test subject will be put in multiple environments utterly unlike anything it can ever have encountered before. The test subject is checked for ignorance of the target message, which is encoded indirectly in a strange environment and involves no classification that any human involved in the robot design or the test execution had a hand in making. The decoded message (= ‘learning’) is demonstrated by passing the third phase of the test, based on the ‘learning’ acquired in a completely different phase-2 context. The phase-3 visible signs of the message are utterly different: a completely different sensory modality, for example. This is done in the phase-1 environment previously used to test that the robot was ignorant of the knowledge contained in the encoded message. Testers and robot designers are not involved in what goes on inside the rig. I.e., this is real empirical double-blinded testing.

--------------

Are you saying that the ability to learn through education is not a test of -conscious- intelligence?!?

Jim Bromer

On Fri, Jan 9, 2015 at 6:10 PM, Colin Geoffrey Hales via AGI <[email protected]> wrote:

> -----Original Message-----
> From: Matt Mahoney via AGI [mailto:[email protected]]
> Sent: Saturday, 10 January 2015 6:37 AM
> To: AGI
> Subject: Re: [agi] How to create an uploader.
>
> On Fri, Jan 9, 2015 at 2:27 AM, Colin Geoffrey Hales via AGI <[email protected]> wrote:
>
> >> Human science is critically dependent on consciousness.
>
> In chapter 12 you reject the Turing test and some stronger variants as a test for consciousness. Then you propose a test based on doing science. In your test, a robot has to learn a rule or pattern to get a reward. Then it is tested again in a different environment to see if it can apply the rule that it learned. If I understand you correctly, then passing this test implies that the robot is P-conscious.
>
> This seems like a standard machine learning problem.
> I have written such programs, without the robotic capabilities, in data compression programs. A program has to learn rules of nature that describe data about the environment. The data might be English text, an image, audio, DNA, or some other description of a natural phenomenon. A good model will successfully predict the symbols in the description string and demonstrate this capability by encoding them using fewer bits for the more likely outcomes. Furthermore, it will demonstrate that it can apply the rules that it learned to a new environment. Given two strings x and y, it demonstrates this by C(x + y) < C(x) + C(y), where C() is the compressed size. The compressor takes what it learned from x and applies it to improve its ability to predict y. Most compression algorithms will do this on many types of data.
>
> I know you don't believe that a program like zip or rar or 7zip is conscious. But then again, you wrote a whole book and have not been able to define P-consciousness. You simply assert that it is necessary to do science. I don't see how you can make an assertion without being able to even give a precise statement of your assertion.
>
> --
> -- Matt Mahoney, [email protected]
>
> #1 ------------------------
>
> The problem here is that I know you won’t/can’t see the difference consciousness makes, and I can.
>
> “You simply assert that it is necessary to do science. I don't see how you can make an assertion without being able to even give a precise statement of your assertion.”
>
> I measure it. I don’t have to precisely state anything more than an empirical fact.
>
> Scientific behaviour in humans degrades with the loss of consciousness. End of story. Accept what this means, because you are a scientist. You don’t have to know what consciousness is to encounter the apparent lack of it. Just close your eyes and try to read a voltmeter.
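Matt's transfer criterion, C(x + y) < C(x) + C(y), is easy to check empirically with an off-the-shelf compressor. A minimal sketch in Python, using zlib; the two sample strings and the helper name `C` are illustrative inventions, not taken from the thread:

```python
import zlib

# Hypothetical sample data: two distinct strings sharing vocabulary and style.
x = b"The quick brown fox jumps over the lazy dog. " * 40
y = b"A lazy dog watches the quick brown fox jump away. " * 40

def C(data: bytes) -> int:
    """Compressed size in bytes (zlib at maximum compression level)."""
    return len(zlib.compress(data, 9))

# What the compressor 'learned' from x (its dictionary of repeated substrings)
# transfers to y, so the concatenation compresses better than the two parts
# compressed separately.
print(C(x + y) < C(x) + C(y))  # True for data with shared regularities
```

One caveat on the design: compressing separately also pays the fixed per-stream header overhead twice, so the bare inequality can hold even for unrelated data; the interesting signal is the size of the gap, not the inequality alone.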
>
> Everything in machine learning classification systems is grounded in the presupposed consciousness of _you_, the designer. Your consciousness. You decided what things are, what vision is, what audition is, what to look for. What ‘novelty’ is. You get to decide what ‘success’ is. There is no original science being done here by that software, because none of it is actually novel in the way nature presents novelty to scientists.
>
> #2----------------------------
>
> But so what?
>
> The PCT is “Do science on your own or die slowly and painfully”. No human is involved at any point. We video what is going on inside and just sit and watch and wait. If the robot exits the test rig, it passes the test.
>
> The PCT prescribes zero human involvement and radical novelty. The test subject will be put in multiple environments utterly unlike anything it can ever have encountered before. The test subject is checked for ignorance of the target message, which is encoded indirectly in a strange environment and involves no classification that any human involved in the robot design or the test execution had a hand in making. The decoded message (= ‘learning’) is demonstrated by passing the third phase of the test, based on the ‘learning’ acquired in a completely different phase-2 context. The phase-3 visible signs of the message are utterly different: a completely different sensory modality, for example. This is done in the phase-1 environment previously used to test that the robot was ignorant of the knowledge contained in the encoded message. Testers and robot designers are not involved in what goes on inside the rig.
>
> I.e., this is real empirical double-blinded testing.
>
> So it doesn’t matter what I think or what you think. We can sort it out by doing the PCT.
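Colin's three-phase protocol reduces, at its most schematic, to an ordering check over an observed run. A minimal sketch in Python; every name here (`Phase`, `pct_passed`, the event strings) is invented for illustration, since the thread specifies the rig only in prose:

```python
from enum import Enum, auto

class Phase(Enum):
    IGNORANCE_CHECK = auto()  # Phase 1: confirm ignorance of the encoded target message
    LEARNING = auto()         # Phase 2: message encoded indirectly in a radically
                              # novel environment
    DEMONSTRATION = auto()    # Phase 3: apply the learning via a different sensory
                              # modality, back in the phase-1 environment

def pct_passed(event_log):
    """Pass iff the three phases occur in order and the robot subsequently
    exits the rig. No human intervenes; the log stands in for recorded video."""
    required = [p.name for p in Phase] + ["EXIT_RIG"]
    events = iter(event_log)
    # Subsequence check: each required step must appear in the log, in order.
    return all(any(e == step for e in events) for step in required)

print(pct_passed(["IGNORANCE_CHECK", "LEARNING", "DEMONSTRATION", "EXIT_RIG"]))  # True
print(pct_passed(["IGNORANCE_CHECK", "LEARNING", "EXIT_RIG"]))  # False
```

This captures only the phase ordering and the behavioural pass criterion (exiting the rig); everything substantive — the radical novelty of the environments and the absence of human-made classifications — lives in how the phases are physically built, not in this check.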
>
> If a robot passes the PCT _then_ we get to make an informed judgement whether or not the robot is conscious, and then about the role of consciousness (and embodiment, for that matter) in a capacity to do science and be intelligent, insofar as science is a product of intelligence.
>
> Not before.
>
> The Turing test is not science on machine intelligence. It can only be claimed to detect the capacity for a human to be fooled by a verbal puppet. Time to dump it as contributing anything to science, other than as a historical artefact and possibly as evidence of aspects of human psychology in computer science practice.
>
> The PCT test designs should be done. Can be done. Now. It should be the definitive test. I would accept what the PCT tells me about the presence/absence and role of consciousness in the robots that pass/fail such a test.
>
> So should anyone in robot AGI.
>
> Maybe I should rename the test? After all, no particular predisposition about consciousness is needed for the test to be defined and built and entered. Any robot can enter the test. A human too.
>
> My prediction is that the PCT will be failed by any robot that is not conscious. I happen to think that consciousness is detected by the test. And yes, humans like Matt Mahoney can do the PCT and indeed must do it (to calibrate it), and IMO are proved conscious by it!
>
> But what I/you think is irrelevant. The test itself is how you make progress in deciding. I defer to that test result. So should you.
>
> It’s past due time to do that. We need tests designed. We need a proper testing body that is independent of any/all robot test subjects. Yes, the robot development and the test are ‘large hadron collider’ hard. Tough for us.
>
> We need to make this AGI project real science. Currently it is not.
>
> I could write this up as an AGI conference paper proposition based on Ch 12, attached, but stripped of all my consciousness predispositions and renamed accordingly. Call it the ‘Artificial Scientist Test’ (AST) or ‘AGIT’ or something. Is that of any interest? Implementing an instance of a physical test rig and procedure would be a great, fun PhD project and would actually get us past the Turing test.
>
> Clearly I have blathered on too long again. Coffee time. After which I might reassess my enthusiasm for the conference paper. No more long rambling posts. Sorry.
>
> Cheers
>
> Colin
