Colin,

I've read much of your book, and it is not difficult to see your
point.

My main objections to trying to literally copy the brain's physics are
the great difficulty of doing so and that I think consciousness is
faulty anyway.  Human thinking, to me, is no gold standard of
intelligence; it is only one form.

Secondly, I think you neglect the issue that different means can
produce the same ends.  This is one of those empirically true,
universal metaphysical rules as old as Socrates.  The oft-cited
cliché is the bird flapping its wings to do the same thing an
airplane does.  You can't just ignore that and paint this as either
we require "true" consciousness to produce real AI or we fail on the
grounds that we don't have true consciousness.  You are tacitly
assuming that without the same means you cannot reach the same
ends.  However, this is not so in many areas of reality.

You say AI has failed in the last 60 years, but plainly that is not
true.  If you mean full-blown intelligence, that is true, but you can't
ignore the advances, like near-complete static scene recognition.

Mike A

On 1/13/15, Colin Geoffrey Hales via AGI <[email protected]> wrote:
> Hi,
>
>
>
> I don’t think you have come to grips with what I am proposing.
>
>
>
> In my proposition there are no models. There is no maths. No simulation. No
> mimicry (‘emulation’). No computing whatever. No programming. No such thing
> as a thread. No ‘finite state machine’ model. No CPU. It’s not an analog
> computer. There is no modelling done in hardware or software.
>
>
>
> I propose to restore the accidentally neglected alternative, the other side
> of the science options: build the physics of the brain.
> Literally. Like we built a plane that flies. Fire that burns. Wheels that
> roll. Artificial kidneys that filter. Artificial hearts that pump.
>
>
>
> This is what I call replication.
>
>
>
> In my case the hardware will have an EEG and MEG electromagnetic field and
> will produce action potentials naturally. Not because of any model. But
> because the same physics is there. Literally.
>
>
>
> I am not saying that computer-based AGI is wrong, not worth trying or
> uninformative!
>
>
>
> I am saying that it is unique in the history of science that we expect a
> computed model of X to literally be an X. There is no precedent whatever, no
> natural law priming that expectation. Nobody in the history of science
> _ever_ had that expectation before. Except now, since the 1950s, in a brand
> new computer ‘science’ born having never done (2) before.
>
>
>
> The two approaches:
>
>
>
> (1)   EMULATION. Computing a model by hardware or software, analog or
> digital.
>
> (2)   REPLICATION. Replicating the physics.
>
>
>
> (1) is the newer approach, only available en masse since circa 1950.
>
> (2) is centuries old: all we had until circa 1950.
>
>
>
> (1) is the flight simulator.
>
> (2) is the artificially flying plane.
>
>
>
> (1) is numerical exploration of a Higgs Boson in the standard model of
> physics
>
> (2) is building the biggest machine in the history of science to _make_ a
> Higgs Boson.
>
>
>
> (1) Original physics all gone. Assumes no essential physics exists.
>
> (2) Original (essential) physics is retained.
>
>
>
> (1) is experimental theoretical science (numerical exploration of an
> abstraction).
>
> (2) is empirical science.
>
>
>
> Do you see what’s happened? It is totally weird, confined entirely to
> computer science, and unique in the entire history of science that for the
> brain, (2) has not even started yet.
>
>
>
> (2) for the brain, is Artificial General Intelligence (AGI). Guaranteed.
>
> (1) is not, for the same reason a flight simulator is not flight... until
> proven otherwise... by replication.
>
>
>
> You may disagree. Fine.
>
>
>
> Q. How do you prove it?
>
> A. By REPLICATION
>
>
>
> Q. How do you find what physics is essential and what is not?
>
> A. By REPLICATION
>
>
>
> Q. What has been missing since day 1?
>
> A. REPLICATION
>
>
>
> There’s no way out of this logic. Replication is needed to test whether
> there’s no essential physics of the brain. To test whether AGI can be done
> by computer you must replicate what the brain does so you can compare the
> emulant and the replicant. Only then do you know, _scientifically_, that the
> computed model and the replication are the same under all contexts.
>
>
>
> That comparison is what is being done in my test (Chapter 12) when you put a
> (1) robot in the PCT test and then put a (2) robot in the PCT test and then
> see if there’s a difference.
>
>
>
> But if you never replicate ..... and there is essential physics ... what
> happens?
>
>
>
> 60+ years of rather odd, ‘receding rainbow’ AGI failure that instead
> produces a plethora of brilliant narrow-AI successes and an endless
> expectation that all that is required is more powerful computers. All done
> by a community that is completely unaware that a fundamental blind spot was
> accidentally erected, preventing knowledge of the full scope of the
> problem.
>
>
>
> This only ever happens with brains. It’s the ‘Chinese puzzle from hell’. Two
> science disciplines: wet neuroscience and computer science. One has the
> answer to the other’s puzzle, and neither knows it... except for me, muggins,
> Colin, who has done both, and the physics, and can see it by accidental
> happenstance of career. Neuro folk who 100% replicate, don’t care about
> the AGI problem, and for whom computing is an emulation tool. Computer folk
> who have the AGI problem and have never done the replication needed to see
> that computers are only half of the problem, but who only have computing as a
> tool.
>
>
>
> Chapter 14, attached, summarises it. I hope to get this situation into the
> literature soon. It’s been in review since February last year.
>
>
>
> Cheers
>
>
>
> Colin
>
>
>
>
>
>
>
> -----Original Message-----
> From: John Rose via AGI [mailto:[email protected]]
> Sent: Wednesday, 14 January 2015 12:09 AM
> To: AGI
> Subject: RE: [agi] How to create an uploader
>
>
>
>> -----Original Message-----
>> From: Steve Richfield via AGI [mailto:[email protected]]
>>
>> Again, everything I have seen shows "consciousness" to be a post-hoc
>> emergent property of a process that is VERY different than it appears
>> to be. Think hundreds of threads that can NOT be done one-at-a-time,
>> except maybe in a time-sharing sort of way, because some threads may
>> never finish, some might cancel others, etc. Further, there is plenty
>> of biological evidence supporting bidirectional computations, which
>> are incredibly inefficient to simulate on present-day computers (except
>> analog computers).
>
>
>
>
>
> There are many ways to look at it. It could be a finite state machine (FSM)
> model where consciousness is the cumulative "moving average" of the contexts
> of the FSMs, with many threads running many FSMs whose contexts are
> interlinked. In your chess example, this would mean the FSMs are solving
> many chess moves simultaneously in the background, and the post-hoc
> emergent consciousness is notified asynchronously of threaded results as
> they bubble up.
>
>
>
> That's not my preferred model, just an impromptu example.
>
>
>
> Why is bidirectional computation more efficient on analog computers? Which
> type of bidirectional computation are you referring to?
>
>
>
> John
>
>
>
>
>
>
>
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
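P.S. John's impromptu FSM idea further down the thread could be sketched
very loosely in code. This is only an illustration of the shape of the
idea; every name and number here is invented, and a real model would be
far more involved:

```python
# Loose sketch of the impromptu model: several FSMs run on their own
# threads, each grinding through a sub-problem; "consciousness" is just
# the cumulative moving average of their contexts, and finished results
# bubble up asynchronously through a queue. All names are hypothetical.
import queue
import threading


class FSM:
    """A trivial finite state machine that 'searches' a sub-problem."""

    def __init__(self, fsm_id, steps):
        self.fsm_id = fsm_id
        self.state = 0          # the current state doubles as the "context"
        self.steps = steps

    def run(self, results):
        for _ in range(self.steps):
            self.state += 1     # advance the machine one transition
        results.put((self.fsm_id, self.state))  # bubble the result up


def run_fsms(step_counts):
    """Run one FSM per thread; return the fsm_ids in completion order
    plus the cumulative moving average of their final contexts."""
    results = queue.Queue()
    threads = [threading.Thread(target=FSM(i, n).run, args=(results,))
               for i, n in enumerate(step_counts)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    bubbled, avg = [], 0.0
    for k in range(1, len(step_counts) + 1):
        fsm_id, context = results.get()      # asynchronous arrival order
        bubbled.append(fsm_id)
        avg += (context - avg) / k           # cumulative moving average
    return bubbled, avg


if __name__ == "__main__":
    order, consciousness = run_fsms([3, 5, 7])
    print(order, consciousness)  # average of contexts 3, 5, 7 -> 5.0
```

Here each FSM's final state stands in for its "context", the moving
average plays the role of the cumulative consciousness value, and the
queue is the asynchronous bubbling-up of threaded results.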

