Steve,

 

Any thoughts? You have flooded me with thoughts! 

 

I have been thinking for months now: why do neurons need so many
synapses? I know that neurons need to establish short connections in
order to minimize the energy used in storing information, which in turn
results in an entropy decrease and self-organization of the information (more
on this in my Schroedinger's cat post). 

 

But to make short connections, the neurons need some way to compare them, so
they have to make many connections, test them by sending a signal, and kill
the ones that are too slow (which would fit very well with Hebbian learning).
That's why they start from 50,000 (the figure keeps growing - I thought it
was 10,000) and end up with 200. 
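A toy sketch of that pruning idea, just to make it concrete. The connection counts are the figures quoted above; the latency range, the random draw, and the function name are all invented for illustration:

```python
import random

def prune_by_latency(n_candidates=50_000, n_keep=200, seed=0):
    """Toy model of the idea above: grow many candidate connections,
    time a test signal over each, and kill all but the fastest few."""
    rng = random.Random(seed)
    # Hypothetical signal latency (ms) for each candidate connection.
    latencies = [rng.uniform(0.1, 10.0) for _ in range(n_candidates)]
    # Keep the n_keep fastest; the slow (energy-costly) ones are pruned.
    survivors = sorted(range(n_candidates), key=latencies.__getitem__)[:n_keep]
    return survivors, latencies

survivors, latencies = prune_by_latency()
print(len(survivors))  # 200
```

The point is only that selection needs a surplus to select from: you cannot keep the 200 fastest connections unless you first grow and measure many more than 200.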

 

What do you think?

 

Where did you get those numbers? Could you point me to a published
reference? This is important information for my work.

 

Self-organization and learning are different. Learning is acquiring
information. Self-organization is removing uncertainty from information you
have already acquired. However, in another sense, self-organization is indeed
learning, because it derives new facts - the self-organized structures - from
known facts - what you have just learned. This is inference, and that's why I
call it Emergent Inference (it could also be called Self-Organizing
Inference). You may also note that an entity that can represent knowledge and
perform inference is known as a mathematical logic, so I am proposing EI as a
new mathematical logic. 

 

Sergio

 

 

 

 

From: Steve Richfield [mailto:[email protected]] 
Sent: Monday, August 13, 2012 11:26 AM
To: AGI
Subject: [agi] A wrong presumption?

 

AI/AGI has long presumed that we first go through a self-organizing phase,
followed by a learning phase. Here, I question that very basic presumption.

Suppose for a moment that there is no "learning phase" (except perhaps in
some small areas that have been identified as learning-related), and that
nearly everything is done by self-organization. I know, sounds screwy, but
follow me for a moment...

The Neural Network folks (under earlier names: Neurological Simulation,
Perceptron, Harmon Neuron, etc.) have been working since von Neumann's day to
find a good unsupervised learning algorithm given a fixed wiring - long
enough to start questioning their underlying assumption of a fixed wiring.

Self-organization is fundamentally a much more powerful sort of learning,
where pretty much everything is a potential input and not just the things
you happen to be hooked up to right now. Further, you can select how much
time delay you want in the input, which may be important for pipelining,
just as it is in computers. In the case of central nervous system neurons,
they have ~50,000 synapses, but only ~200 are active. However, there are
MANY more sites for potential future synapses, that might (said with
absolutely NO supporting evidence) under the right conditions develop into
synapses. Research has been concentrating on how to make the 200 active
synapses do the job, rather than on how to select from among the 50,000
which 200 would make the job easy to do. 


The distinction between learning and self-organization seems to disappear if
you simply connect everything to everything else, which is actually seen in
some of the lower life forms. This is possible at larger scales in a
computer than in biology. Sure, this adds another 2-3 (or more) orders of
magnitude to the problem, but let's first solve the problem as it is, before
we start working on a more efficient solution.
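A quick back-of-the-envelope check on that "2-3 orders of magnitude" figure. The neuron count N below is a placeholder, not a measurement; k is the ~200 active synapses quoted above:

```python
# All-to-all wiring vs. the sparse wiring biology settles on.
# N is a placeholder value; k is the active-synapse figure quoted above.
N = 100_000            # neurons in a hypothetical model
k = 200                # active synapses per neuron
sparse = N * k         # 20,000,000 connections
dense = N * (N - 1)    # 9,999,900,000 connections
print(round(dense / sparse))  # 500, i.e. between 2 and 3 orders of magnitude
```

The ratio is just (N-1)/k, so the overhead of all-to-all connectivity grows linearly with the network size - consistent with the 2-3 orders of magnitude claimed above for networks of this scale.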

Imagine for a moment a simple process that goes on everywhere neurons come
into contact, one that senses a temporal relationship between the activity of
one neuron and the subsequent positive or negative reinforcement of the
other. Where such a relationship exists, there is probably some way that the
early-arriving information could be used to improve the operation of the
neuron being reinforced. Once such a prospect was found, a synapse might
develop and adjust its operation to provide the appropriate adjustment.
Perhaps the ~49,800 apparently unused synapses developed this way, but
were never able to find a function that improved operation.
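One way to caricature that contact-sensing process in code. The spike times, the reinforcement window, and the threshold are all invented for illustration - nothing here is taken from biology:

```python
def grow_synapses(pre_spikes, reinforcement, window=2, threshold=0.6):
    """Toy model: form a synapse wherever one neuron's activity reliably
    precedes reinforcement of the other within `window` time steps."""
    synapses = []
    for neuron, spikes in pre_spikes.items():
        # Count spikes that are followed by reinforcement soon after.
        hits = sum(
            1 for t in spikes
            if any(reinforcement[t + d]
                   for d in range(1, window + 1) if t + d < len(reinforcement))
        )
        if spikes and hits / len(spikes) >= threshold:
            synapses.append(neuron)  # predictive contact -> synapse forms
    return synapses

# Neuron "A" fires just before every reinforcement; "B" fires at other times.
pre_spikes = {"A": [1, 5, 9], "B": [2, 7]}
reinforcement = [False] * 12
for t in (2, 6, 10):
    reinforcement[t] = True
print(grow_synapses(pre_spikes, reinforcement))  # ['A']
```

Only the contact whose activity predicts reinforcement matures into a synapse; the non-predictive contact is exactly the "never found a function that improved operation" case above.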

Of course there are probably LOTS of other prospective ways that things
might work. My goal here is to kick people out of the mental rut that goes
along with a "learning" mentality, and start thinking about these
possibilities.

This suggests a tentative abandonment of "learning", and a shift in effort
toward self-organization to do the job of supposed learning.

I thought of this while reflecting on my recent glaucoma cure, where it
became obvious that simple changes to my glasses were making sweeping
changes in the organization of my visual system despite my age - enough to
reverse ongoing physical changes that would have eventually led to the loss
of vision in my right eye. This isn't simple "learning", but something much
more powerful.

Any thoughts?

Steve




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
