Sergio,

On Mon, Aug 13, 2012 at 1:24 PM, Sergio Pissanetzky
<[email protected]> wrote:

>  Any thoughts? You have flooded me with thoughts!
>

That was my goal - to kick people's thinking out of their present ruts.

>
> I have been thinking for months now, why is it that neurons need so many
> synapses? And I know that neurons need to establish short connections in
> order to minimize energy used in the storage of information, which in turn
> results in entropy decrease and self-organization of the information (more
> on this in my Schroedinger's cat post).
>

I suspect that they need the *correct* delays to preserve time
coherence. A neuron's output represents (as nearly as is practical)
something at a particular relative moment in time, e.g. 1/2 second ago, 2
seconds projected into the future, etc. To accomplish this, they must
adjust the delays of their inputs so that everything comes together for the
SAME moment in time. This closely parallels the techniques used in
"pipelined" supercomputers, where delays are carefully adjusted. When CRAY
computers first came out, everyone noticed the many excessively long wires
in them. Their lengths corresponded to the number of clock cycles it took
signals to travel from one end to the other, and each wire had to be the
correct length or the computer wouldn't work. I suspect that neurons are
much the same.
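
A toy numeric sketch of this delay-matching idea (purely illustrative; the function name and numbers are my own assumptions, not a biological model): each input has its own inherent delay, and the neuron adds a compensating delay per input so that everything arrives describing the same moment.

```python
# Illustrative sketch only: align inputs that have different inherent
# delays so they all refer to the same moment in time.

def compensating_delays(input_delays, target_latency):
    """Extra delay to add to each input so all signals arrive together.

    input_delays: inherent delay of each input (e.g. in ms)
    target_latency: the total latency the output should represent
    """
    if target_latency < max(input_delays):
        raise ValueError("target latency cannot beat the slowest input")
    return [target_latency - d for d in input_delays]

# Three inputs arriving 5, 20 and 12 ms late, aligned at 25 ms total:
print(compensating_delays([5, 20, 12], 25))  # [20, 5, 13]
```

Like the long CRAY wires, each added delay is simply whatever "length" brings its signal in on the same clock as the slowest input.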

Note that time coherence would be AUTOMATICALLY adjusted if neurons
selected inputs that changed just BEFORE feedback information arrived, as
these inputs would contain properly timed information to produce correct
outputs.
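
That selection rule can be sketched in code (a toy illustration; the window size, threshold value, and function names are my assumptions, nothing measured): keep only those candidate inputs whose activity reliably changes just before the feedback signal arrives.

```python
# Illustrative sketch only: among many candidate inputs, keep those
# that were active within `window` steps BEFORE each reinforcement
# event, in at least `threshold` of the events.

def select_candidates(activity, reinforcement, window=3, threshold=0.75):
    """Return indices of inputs that reliably precede reinforcement.

    activity: list of per-input spike trains (lists of 0/1)
    reinforcement: a 0/1 train of reinforcement events
    """
    events = [t for t, r in enumerate(reinforcement) if r]
    if not events:
        return []
    keep = []
    for i, train in enumerate(activity):
        # Count events preceded by activity on this input.
        hits = sum(any(train[max(0, t - window):t]) for t in events)
        if hits / len(events) >= threshold:
            keep.append(i)
    return keep

# Input 0 fires just before each reinforcement; input 1 fires elsewhere.
act = [[0, 1, 0, 0, 1, 0, 0],
       [1, 0, 0, 0, 0, 0, 1]]
reinf = [0, 0, 1, 0, 0, 1, 0]
print(select_candidates(act, reinf))  # [0]
```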

>
> But to make short connections, the neurons need some way to compare them,
> so they have to make many connections, test them by sending a signal and
> kill the ones that are too slow (which would go very well with Hebbian
> learning). That's why they start from 50,000 (it keeps growing, I thought
> it was 10,000) and end up with 200.
>
> What do you think?
>
> Where did you get those numbers? Would you please have a reference to a
> publication? This is important information for my work.
>

I got those numbers from William Calvin during discussions when I worked
for him ~40 years ago at the U.W. Department of Neurological Surgery,
before he became a famous neuroscience author. I suspect that those numbers
came from Calvin's own observations and calculations. I could probably
track him down and ask him.

Note that MOST of what is "known" in the neurosciences does NOT appear in
print!!! They have their own strange sort of "ethics" that is almost the
exact opposite of Physics. Physics is a battle of competing models, while
the neurosciences punish those who advance models before they are "proven",
hence, no models.

However, when you get one of these guys into a long off-the-record
conversation and start talking about what they have actually seen but can't
prove (remember, neuroscience IS the study of irreproducible results,
because you can never exactly repeat an experiment), you start to realize
that things are NOTHING like you read in the literature.

My own view of this is: Without models you can't advance potentially useful
hypotheses, and without these hypotheses you can't practice the Scientific
Method. Hence, the neurosciences are not (yet) a "science". My advice to
funding agencies both public and private is not to fund ANYTHING that lacks
a comprehensive tentative model that was used to form the hypothesis being
tested.

>
> Selforg and learning are different. Learning is acquiring info. Selforg is
> removing uncertainty from info you have already acquired. However, in
> another sense, selforg is indeed learning, because it derives new facts -
> the self-organized structures - from known facts - what you have just
> learned. This is inference, and that's why I call it Emergent Inference, or
> it could also be Self-organizing Inference. You may also note that an
> entity that can represent knowledge and has an inference mechanism is
> known as a mathematical logic, so I am proposing EI as a new math logic.
>

"Modern" mathematical notation and computer languages have lost some of
their history. Earlier computers with architectures far more advanced than
PCs (there were LOTS of these) had "indirect addressing": instead of an
instruction naming its operand's location directly, the named location
held a pointer to the operand anywhere in memory. Then, some computers
like the GE/Honeywell 600/6000 series mainframes allowed memory words
themselves to contain a flag requesting additional levels of indirect
addressing, so an instruction might hop from one location to another in
search of its operand. With these
sorts of architectures, operands did NOT have to be constrained to
particular structures or arrays. This was one of the powerful pieces at the
heart of the early MULTICS and other systems.
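
A toy sketch of that multi-level indirection (the 15-bit address field, flag bit, and tiny memory here are illustrative assumptions, not the actual 600/6000 word format):

```python
# Illustrative sketch only: multi-level indirect addressing, loosely in
# the spirit of the GE/Honeywell 600/6000 machines described above.

INDIRECT = 0x8000      # flag bit: "keep chasing the chain"
ADDR_MASK = 0x7FFF     # low 15 bits hold an address

def fetch_operand(mem, addr):
    """Follow indirection chains until a word without the flag is found."""
    word = mem[addr]
    while word & INDIRECT:
        word = mem[word & ADDR_MASK]
    return word

mem = [0] * 32
mem[7] = 42              # the actual operand
mem[3] = INDIRECT | 7    # an indirect word pointing at word 7
mem[1] = INDIRECT | 3    # two levels deep
print(fetch_operand(mem, 1))  # 42
```

Because the flag can appear at any level, an operand can sit anywhere in memory at the end of an arbitrary pointer chain, which is exactly the freedom from fixed structures and arrays described above.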

I suspect that people think in "layers" and "columns" in part because they
correspond to programming structures like arrays, which are in turn an
artifact of the crude processors we now use. I suspect that we need to shed
such baggage and stop being thought-constrained by the CPUs we now use.

Our mathematical notation and our CPUs need to be able to refer to ANYTHING
that provides useful and timely input.

Steve
================

>
>  *From:* Steve Richfield [mailto:[email protected]]
> *Sent:* Monday, August 13, 2012 11:26 AM
> *To:* AGI
> *Subject:* [agi] A wrong presumption?
>
>
> AI/AGI has long presumed that we first go through a self-organizing phase,
> followed by a learning phase. Here, I question that very basic presumption.
>
> Suppose for a moment that there is no "learning phase" (except perhaps in
> some small areas that have been identified as learning-related), and that
> nearly everything is done by self-organization. I know, sounds screwy, but
> follow me for a moment...
>
> The Neural Network (previously known as Neurological Simulation,
> Perceptron, Harmon Neuron, etc.) folks have been working since Von Neumann
> trying to find a good unsupervised learning algorithm given a fixed wiring
> - long enough to question their underlying assumption of a fixed wiring.
>
> Self-organization is fundamentally a much more powerful sort of learning,
> where pretty much everything is a potential input and not just the things
> you happen to be hooked up to right now. Further, you can select how much
> time delay you want in the input, which may be important for pipelining,
> just as it is in computers. In the case of central nervous system neurons,
> they have ~50,000 synapses, but only ~200 are active. However, there are
> MANY more sites for potential future synapses that might (said with
> absolutely NO supporting evidence) under the right conditions develop into
> synapses. Research has been concentrating on how to make the 200 active
> synapses do the job, rather than on how to select, from among the 50,000,
> which 200 would make the job easy to do.
>
>
> The distinction between learning and self-organization seems to disappear
> if you simply connect everything to everything else, which is actually seen
> in some of the lower life forms. A computer can do this for more complex
> cases than biology can. Sure, this adds another 2-3 (or more) orders of
> magnitude to the problem, but let's first solve the problem as it is,
> before we start working on a more efficient solution.
>
> Imagine for a moment a simple process that goes on everywhere neurons come
> into contact, that senses a temporal relationship between the activity of
> one, followed by the positive or negative reinforcement of the other. Where
> such a relationship exists, there is probably some way that the
> early-arriving information could be used to improve the operation of the
> neuron being reinforced. Once such a prospect is found, a synapse might
> develop and adjust its operation to provide the appropriate adjustment.
> Perhaps the 49,800 apparently unused synapses have developed this way, but
> were never able to find a function that improved operation.
>
> Of course there are probably LOTS of other prospective ways that things
> might work. My goal here is to kick people out of the mental rut that goes
> along with a "learning" mentality, and start thinking about these
> possibilities.
>
> This suggests a tentative abandonment of "learning", and a shift in effort
> toward self-organization to do the job of supposed learning.
>
> I thought of this while reflecting on my recent glaucoma cure, where it
> became obvious that simple changes to my glasses were making sweeping
> changes in the *organization* of my visual system despite my age - enough
> to reverse ongoing physical changes that would have eventually led to the
> loss of vision in my right eye. This isn't simple "learning", but something
> much more powerful.
>
> Any thoughts?
>
> Steve



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
