Hi,

You have to understand memory in neurons as a totally autonomous, entirely
dynamic storage and recall mechanism with a vast repertoire of contents
established (write) and recalled (read) by the same neurons. Those neurons
have two orthogonally acting, mutually adaptive signalling systems that
give small cohorts of neurons a vast storage capacity, one that explodes
geometrically when cohorts interact with cohorts in a nested hierarchy (EM
fields inside EM fields inside ...). The two systems are:

1) the variable interconnectivity of neurons doing their standard (spiking)
signalling via synapses and axons, and
2) the EM field system impressed on space by those same neurons, by that
same signalling (a toy sketch of this two-way coupling follows the list).
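
To make the two-channel structure concrete, here is a deliberately crude
toy: a standard Kuramoto mean-field model in Python. Every name and
parameter in it is my illustrative choice, not the brain's mechanism, and
being a simulation it discards the very physics I am arguing for. It only
shows the shape of the loop: each unit's 'spiking' phase contributes to a
shared field, and that same field feeds back to pull on every unit's phase
and frequency.

    import numpy as np

    rng = np.random.default_rng(1)
    N, steps, dt = 50, 2000, 0.01
    K = 1.5                               # field -> neuron coupling strength (illustrative)
    theta = rng.uniform(0, 2 * np.pi, N)  # channel 1: each unit's 'spiking' phase
    omega = rng.normal(1.0, 0.1, N)       # intrinsic firing rates

    for _ in range(steps):
        # channel 2: the collective 'field' the units impress on shared space
        z = np.mean(np.exp(1j * theta))
        r, psi = np.abs(z), np.angle(z)
        # channel 1 feels channel 2: phases are pulled toward the field
        theta += dt * (omega + K * r * np.sin(psi - theta))

    # coherence near 1.0 means the field has entrained the population
    print("final coherence r =", round(float(np.abs(np.mean(np.exp(1j * theta)))), 3))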

In 'being' the EM field system, the brain has exposure to the surrounding
world (grounding in that world). That is, the two signalling mechanisms
share a linkage to ground truth that superposes different dynamics onto the
same set of neurons. This second (2) signalling mechanism brings a
spectacular geometric increase in the memory storage capacity of that same
set of neurons. The EM part of the causality (via the Lorentz force) makes
the spiking signalling vary in phase and frequency, and that variation
constitutes recognition (read) and stabilizes a particular memory (write)
simultaneously. Associative memory is the result. The lookup key is in the
EM field itself. This works perfectly well with imagined (internally
originated) experiential content. Exposure to a consistent experiential
context (repetition) reinforces the memory. This system of signalling does
it all in the brain. It is writing this email.
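
For the associative-memory claim, the nearest conventional stand-in is the
classic Hopfield network: the same units do the writing (Hebbian storage)
and the reading (cued recall), and a partial or corrupted key still
retrieves the whole memory. The Python toy below is, again, only an
analogue under my own framing; the pattern sizes and names are my
illustrative choices, and in the brain the lookup key lives in the EM field
itself rather than in a vector handed over by a programmer.

    import numpy as np

    rng = np.random.default_rng(0)

    def store(patterns):
        # 'write': Hebbian outer-product learning over +/-1 patterns
        P = np.array(patterns)
        W = P.T @ P / P.shape[1]
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W, cue, steps=20):
        # 'read': iterate threshold updates until the state settles
        s = cue.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    N = 64
    memories = [rng.choice([-1, 1], size=N) for _ in range(3)]
    W = store(memories)
    key = memories[0].copy()
    key[rng.choice(N, size=10, replace=False)] *= -1  # corrupt the lookup key
    out = recall(W, key)
    print("overlap with stored memory:", int(out @ memories[0]), "of", N)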

If you throw out the specific EM physics of signalling mechanism 2), then
the EM role is gone. Grounding in the external world is gone. The memories
are unstable. Throw it out and the system is nonfunctional. In those parts
of the EM field system that are unconscious, the EM field's spatiotemporal
behaviour does not superpose in the temporally integrated way that
generates conscious content, and the function of those neurons is left to
adapt merely by signalling mechanism 1).

In relation to your question "why can't you do it with computers?":

*The answer is you can!*

But what you do is permanently enrol external agency (or an explicit
mechanism that does not exist in the natural system) in establishing novel
memories and maintaining existing ones. A lack of groundedness and a
fragility ensue ... exactly what we find in the well documented failure
modes of the AI world using computers. The failures are inelegant at onset
and possibly irrecoverable, because it is the physics connected to the
delivery of experience that holds the signalling in an accurate, stable and
repeatable dynamic, one that builds in the conditions of the external world
and integrates a response to its variability. You also end up with the loss
of an extremely powerful memory content system ... a loss that demands
massive computational resources to redress even the smallest glitches in
the signal processing.

There is a reason why the performance of a 20 W meat processor outdoes both
digital and analogue (neuromorphic) computers. It's the EM basis of the
brain's information handling system, the very thing both these technologies
dump from the get-go in a 'substrate independence' cult based on a
falsehood. The brain is not Turing computable because it has access to
information content that is intrinsic to the physics. Simulating a model of
the physics loses that information content, because it is precisely and
only about what the system does not know. The signal processing of the
brain is carried out by one thing: EM fields of a specific (brain
organization) kind (1 & 2 above). There is only one substrate that nature
uses: EM fields. Even a steam computer is made of EM fields. It is the very
particular organization of EM in brains that is the 'secret sauce', and it
is right in front of everyone.

If you use the EM field physics in the form of a computer, the connection
to the external world, and the causality that responds to it, is gone. It
has to be replaced by the designer, explicitly. Which is fine, if you can
tolerate that. But it's not AGI in the sense of being an artificial version
of natural general intelligence. The generality (the G) is exhibited by the
smallest of creatures. It is how biology (with a nervous system) encounters
novelty in a more survivable way.

This is a subtle failure: it is a loss of autonomy. *Autonomous* novelty
handling. That's what goes missing. The G in AGI is lost. That's the thing
that everyone's after. It literally defines the way AI has been failing all
along.

As I have said here a bunch of times....

If this were the Wright brothers, and the physics of lift were thrown out
(a flight simulator), then flight would be gone.
In exactly the same way, if the EM physics of neuron signalling is thrown
out, you 100% lose the autonomous handling of novelty.

Use computers? Fine. Just know what it is that you're enrolling in your
future: the permanent, ongoing, exogenous retrofitting of all novelty (that
which the system has never encountered). Note that an exogenously applied
model of novelty handling is not autonomous novelty handling. The 'knowing
you don't know something' moment is the moment the machine gets to a viable
recovery. A system that incorporates consciousness can do this because some
aspects of the character of the novelty itself are intrinsically available
to it.

I am now officially off Jim's hook. :-)

cheers
colin


On Sun, Feb 20, 2022 at 11:34 AM <[email protected]> wrote:

> On Saturday, February 19, 2022, at 6:22 PM, Colin Hales wrote:
>
> In making AGI one must replicate the EM field system as part of the
> replication of the signal processing. AGI done this way has no models, no
> software and does not use computers.
>
>
> @Colin Hales, if we consider my below understanding and your proposal, can
> you help me understand better?
>
> AGI requires a brain to store memories - from past experiences. Upon
> seeing future problems, it recognizes them by technically "matching" them
> to past memories, which allows it to kind-of know what to predict next. If
> it had no memories to match, no inherited reflexes to use, and so on, it
> would have no clue how to react to a future stimulus. It wouldn't react to
> it in any sort of way that would "benefit it". It'd be randomness, causing
> death. So the whole human body is all about "storing" information that
> helped us survive, and "reading" that information upon using it for a
> recognizable input. The brain also "thinks" even when all is quiet around
> it; this is the brain trying to improve on its pattern matching for the
> future.
>
> Some examples of memories being "matched" are these below, and it only
> knows the memory "walk fast":
> wwaallkk
> WALK
> W a L K
> run
> to go forth
> klaw
> waiulk
> "sound" of walking
> etc
> and, touching the back of your tongue initiates a stored gag reflex
>
> Where does EM fit into this matching? And why can't we do memory matching
> in computers? It looks like it works fine... See Google's Pallete.
