Hi John,

I am glad you seem to have gotten something out of the book: appreciated. ☺

All I can hope is to light a fire under a broader AGI perspective and hope it 
takes. There are a bunch of strangely configured cultural states of affairs 
surrounding the science of AGI and in science itself.

Technology is over-prepared for replicant AGI. It’ll happen a lot quicker than 
you think; we’ve already had 60+ years of fabrication science. In a 
paradoxical way, Moore’s Law is right in its association with AGI origins ... 
not because it created more powerful computers for emulant AI aimed at AGI, but 
because it created the beginnings of suitable substrates for replicant AGI. 
That’s my prediction of the ultimate view that will be taken of it. Time will 
tell, of course.

As long as someone turns up one day with my AGI replicant chips in a robot, 
demands it do a PCT in competition with an emulant robot, and everyone knows 
what I’m on about .... from an informed perspective ... I’ll be satisfied.

Cheers
Colin


From: John Rose via AGI [mailto:[email protected]]
Sent: Thursday, 15 January 2015 12:09 AM
To: AGI
Subject: RE: [agi] How to create an uploader

Weelllll...

Replication is an approach. I’ve gone the route of emulation, or not really 
emulation but of taking pieces and parts of what the brain does, though I’m 
not in the camp that says AGI = Human Intelligence.

There are many benefits to going in the direction of emulation. Also, I’m not 
in the camp that says there have been 60 years of AI failures; that’s a 
self-serving proposition that many make. I choose not to elaborate there...


I would go the replication route if I had the means, but being a software 
engineer and seeing what is going on with computers, the internet, 
mathematics and collective human intelligence, I feel that combining all 
of that will give us AGI and a form of consciousness that will be more than ample.

Not talking about uploading here though... your replication might be required 
for accurate uploading.

So...

My FSM model: the reason I talk and think about it is that it might work for a 
simple robot consciousness or some software agent. It has immediate, 
implementable utility, whereas replication is a huge project. I can produce a 
functional FSM-baked consciousness now in software. It’s far from a human-level 
consciousness, but I agree with Matt that that’s not required for AGI.

For full-blown AGI maybe replication would be quicker; I haven’t thought about 
it. But I know that software-based AGI is within reach. Perhaps we should call 
it AGISE, for AGI Software Engineering.

BTW your chapters are quite interesting and informative reading ☺

John

From: Colin Geoffrey Hales via AGI [mailto:[email protected]]
Sent: Tuesday, January 13, 2015 5:35 PM
To: AGI
Subject: RE: [agi] How to create an uploader


Hi,



I don’t think you have come to grips with what I am proposing.



In my proposition there are no models. There is no maths. No simulation. No 
mimicry (‘emulation’). No computing whatever. No programming. No such thing as 
a thread. No ‘finite state machine’ model. No CPU. It’s not an analog computer. 
There is no modelling done in hardware or software.



I propose to restore the accidentally neglected alternative. The other 
side of the science options: build the physics of the brain. Literally. Like we 
built a plane that flies. Fire that burns. Wheels that roll. Artificial kidneys 
that filter. Artificial hearts that pump.



This is what I call replication.



In my case the hardware will have an EEG and MEG electromagnetic field and will 
produce action potentials naturally. Not because of any model. But because the 
same physics is there. Literally.



I am not saying that computer-based AGI is wrong, not worth trying or 
uninformative!



I am saying that it is unique in the history of science that we 
expect a computed model of X to literally be an X. No precedent whatever. No 
natural law priming that expectation. Nobody in the history of science _ever_ 
had that expectation before. Except now, since the 1950s, in a brand new 
computer ‘science’ born having never done replication before.



The two approaches:



(1)   EMULATION. Computing a model by hardware or software, analog or digital.

(2)   REPLICATION. Replicating the physics.



(1) is the newer approach, only available en masse since about 1950.

(2) is centuries old. It was all we had until about 1950.



(1) is the flight simulator.

(2) is the artificially flying plane.



(1) is numerical exploration of a Higgs Boson in the standard model of physics

(2) is building the biggest machine in the history of science to _make_ a Higgs 
Boson.



(1) Original physics all gone. Assumes no essential physics exists.

(2) Original (essential) physics is retained.



(1) is experimental theoretical science (numerical exploration of an 
abstraction).

(2) is empirical science.



Do you see what’s happened? It is totally weird, confined entirely to computer 
science, and unique in the entire history of science that for the brain, (2) 
has not even started yet.



(2) for the brain, is Artificial General Intelligence (AGI). Guaranteed.

(1) is not, for the same reason a flight simulator is not flight..... until 
proven otherwise .... by replication.



You may disagree. Fine.



Q. How do you prove it?

A. By REPLICATION



Q. How do you find what physics is essential and what is not?

A. By REPLICATION



Q. What has been missing since day 1?

A. REPLICATION



There’s no way out of this logic. Replication is needed to test whether there 
is any essential physics of the brain. To test whether AGI can be done by computer 
you must replicate what the brain does so you can compare the emulant and the 
replicant. Only then do you know, _scientifically_, whether the computed model and 
the replication are the same under all contexts.



That comparison is what is being done in my test (Chapter 12) when you put a 
(1) robot in the PCT test and then put a (2) robot in the PCT test and then see 
if there’s a difference.



But if you never replicate ..... and there is essential physics ... what 
happens?



60+ years of rather odd, ‘receding rainbow’ AGI failure that instead produces a 
plethora of brilliant narrow-AI successes and an endless expectation that all 
that is required is more powerful computers. All done by a community that is 
completely unaware that a fundamental blind spot has been accidentally erected, 
preventing knowledge of the full scope of the problem.



This only ever happens with brains. It’s the ‘Chinese puzzle from hell’. Two 
science disciplines, wet neuroscience and computer science: one has the answer 
to the other’s puzzle, and neither knows it.... except for me, muggins, Colin, 
who has done both, plus the physics, and can see it by accidental happenstance of 
career. Neuro folk 100% replicate, don’t care about the AGI problem, and 
treat computing as an emulation tool. Computer folk have the AGI 
problem but have never done the replication needed to see that computers are 
only half of the problem, and have only computing as a tool.



Chapter 14, attached, summarises it. I hope to get this situation into the 
literature soon. It’s been in review since February last year.



Cheers



Colin







-----Original Message-----
From: John Rose via AGI [mailto:[email protected]]
Sent: Wednesday, 14 January 2015 12:09 AM
To: AGI
Subject: RE: [agi] How to create an uploader



> -----Original Message-----
> From: Steve Richfield via AGI [mailto:[email protected]]
>
> Again, everything I have seen shows "consciousness" to be a post-hoc
> emergent property of a process that is VERY different than it appears
> to be. Think hundreds of threads that can NOT be done one-at-a-time,
> except maybe in a time sharing sort of way, because some threads may
> never finish, some might cancel others, etc. Further, there is plenty
> of biological evidence supporting bidirectional computations, which
> are incredibly inefficient to simulate on present-day computers (except
> analog computers).





There are many ways to look at it. It could be a finite state machine model 
where consciousness is the cumulative "moving average" of the contexts of the 
FSMs, with many threads running many FSMs whose contexts are interlinked. In 
your chess example this would mean the FSMs are solving many chess moves 
simultaneously in the background, and the post-hoc emergent consciousness 
is notified asynchronously of threaded results as they bubble up.



That's not my preferred model, just an impromptu example.
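Just to make the impromptu example concrete, here is a very loose sketch in Python. Everything in it (the `FSM` class, the random transitions, the numeric "contexts") is invented for illustration; it only shows the shape of the idea: many FSMs stepping in their own threads, publishing contexts asynchronously to a queue, and a top layer folding those contexts into a cumulative moving average.

```python
import queue
import random
import threading
import time

class FSM:
    """A toy finite state machine whose current state serves as its 'context'."""
    def __init__(self, name, states):
        self.name = name
        self.states = states
        self.state = states[0]

    def step(self):
        # Transition to a random next state (a stand-in for real FSM logic,
        # e.g. evaluating one candidate chess line).
        self.state = random.choice(self.states)
        return self.state

def run_fsm(fsm, results, steps=5):
    """Run one FSM in its own thread, publishing each new context as it occurs."""
    for _ in range(steps):
        results.put((fsm.name, fsm.step()))
        time.sleep(0.001)

# Several FSMs, each exploring its own line in the background.
results = queue.Queue()
fsms = [FSM(f"line-{i}", states=[0, 1, 2, 3]) for i in range(4)]
threads = [threading.Thread(target=run_fsm, args=(f, results)) for f in fsms]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The 'consciousness' layer: a cumulative moving average over all contexts
# that bubble up asynchronously from the worker FSMs.
avg, n = 0.0, 0
while not results.empty():
    _, context = results.get()
    n += 1
    avg += (context - avg) / n   # incremental running mean

print(f"observed {n} context updates, moving average = {avg:.2f}")
```

A real version would presumably keep the averaging layer running concurrently (consuming the queue as results arrive rather than after `join`), and the contexts would be interlinked structures rather than plain numbers; the sketch only shows the thread-plus-queue plumbing.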



Why is bidirectional computation more efficient on analog computers? Which type 
of bidirectional computation are you referring to?



John






AGI | Archives <https://www.listbox.com/member/archive/303/=now> | 
Modify Your Subscription <https://www.listbox.com/member/?&;> | 
Powered by Listbox <http://www.listbox.com>






