Edward,

I'm sorry for the obscurity of my message. I tried to omit some of the
background that seemed irrelevant, but probably it isn't. I'll try to
describe my point more systematically.

I assume the following low-level model of brain operation (it's more
about terminology needed to communicate intuition, as I'm saying nothing
new). Each spike (or the beginning of a burst) constitutes an event.
Such an event creates multiple 'echo' events when the spike (or the
subsequent spikes of a burst) is delivered to other neurons through
synapses. From the point of view of the neuron that receives a spike,
the spike informs it about an event (that spike being fired) which
happened some time in the past. Based on the collection of such 'echo'
events it observes, the neuron decides whether to create its own event.
Synapses likewise decide whether to transmit information about an event.
So each neuron defines an identifier of an event, and events happen in
time.

When a particular combination of synapses delivers spikes, the neuron
fires. This event 'summarizes' information about certain combinations of
events that caused those spikes and that happened in the past. It
represents the fact that the original events happened a specific amount
of time ago. Since most of the selection of which spikes get delivered
happens in synapses, a single neuron can represent multiple more or less
independent groups of past events.

I call the identifier of a neuron a 'concept', which is narrower than
what this word usually refers to, so it's a kind of 'subsymbolic'
concept. A real concept can be represented by many neurons, including
collections that tend to activate together (polychronously), so that
they can interact with any other concept even though the brain is not
fully connected.

When a neuron fires a spike, I say that the corresponding concept
activates. A concept is only active for a moment, so the state of the
brain is the collection of concept activations that happened in the very
recent past, whose corresponding spikes are not yet delivered.
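The event model above can be sketched as a toy discrete-event
simulation. To be clear, the window, threshold, and delays below are
illustrative assumptions, not neurophysiology: a spike is an event, each
synapse re-delivers it after its axonal delay, and the target neuron
fires when enough deliveries land within a short coincidence window.

```python
import heapq
from collections import defaultdict

COINCIDENCE_WINDOW = 1.0   # ms; assumed coincidence time
THRESHOLD = 2              # deliveries needed within the window (assumed)

def simulate(connections, initial_spikes, t_max=100.0):
    """connections: {src: [(dst, delay_ms), ...]};
    initial_spikes: [(time_ms, neuron)]. Returns list of firing events."""
    events = list(initial_spikes)      # (time, neuron) spike events
    heapq.heapify(events)
    recent = defaultdict(list)         # neuron -> recent delivery times
    fired = []
    while events:
        t, n = heapq.heappop(events)
        if t > t_max:
            break
        fired.append((t, n))
        for dst, delay in connections.get(n, []):
            t_arr = t + delay          # 'echo' event: delivery at dst
            recent[dst] = [x for x in recent[dst]
                           if t_arr - x <= COINCIDENCE_WINDOW]
            recent[dst].append(t_arr)
            if len(recent[dst]) >= THRESHOLD:
                recent[dst] = []       # dst fires its own event
                heapq.heappush(events, (t_arr, dst))
    return fired

# Neuron C fires only when spikes from A and B arrive together:
conn = {'A': [('C', 5.0)], 'B': [('C', 3.0)]}
print(simulate(conn, [(0.0, 'A'), (2.0, 'B')]))
# -> [(0.0, 'A'), (2.0, 'B'), (5.0, 'C')]
```

C's firing at t=5 'summarizes' the fact that A fired 5ms ago and B fired
3ms ago, which is the sense in which a neuron represents a group of past
events.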

In this setting, the memory/knowledge of the brain can be roughly
measured by the number of synapses that deliver events that
significantly contribute to behavior (minus redundancy). Let's call this
number K (I associate it with Matt's 10^9). The total number of synapses
(S) is significantly greater (say, S=10^15). I argue that G=S/K should
be big enough to implement local learning (and thus a K that is too big
is also unlikely).
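In these terms, with the round numbers assumed above:

```python
# Assumed round numbers from the discussion above:
K = 10**9    # synapses that significantly contribute to behavior
S = 10**15   # total synapses
G = S // K   # overhead factor available for local learning
print(G)     # -> 1000000, i.e. a gap of 10^6
```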

A neuron and its incoming synapses learn a concept only when they
observe a specific repeating pattern of events from the neurons to which
they are connected. They can only observe collections of events from
those neurons, and these collections have fixed time shifts for each
event, defined by axonal delays. So, if a particular collection of
events created with specific delays constitutes a concept, it will only
be learned if there's a neuron receiving those events with the same
delays. Some redundancy in the representation of concepts makes it more
likely that a collection of them constituting a new compound concept
will be caught by some neuron, but the number of watchers still has to
be big enough.
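The 'caught by some neuron' condition behaves like a birthday problem,
which a small Monte Carlo can illustrate (the network size and spike
counts below are toy values, not the estimates used later): with n
spikes from each of two concepts scattered uniformly over X neurons, a
common target becomes likely once n^2 is on the order of X.

```python
import random

def collision_probability(n, X, trials=2000):
    """Estimate P(some neuron receives spikes from both concepts)."""
    hits = 0
    for _ in range(trials):
        a = {random.randrange(X) for _ in range(n)}  # targets of concept A
        b = {random.randrange(X) for _ in range(n)}  # targets of concept B
        if a & b:                                    # some neuron sees both
            hits += 1
    return hits / trials

X = 10**6                                # toy network size
print(collision_probability(100, X))     # n^2/X = 0.01: collisions rare
print(collision_probability(1000, X))    # n^2/X = 1: collisions likely
```

Analytically P ~ 1 - exp(-n^2/X), so the crossover sits where n^2 ~ X,
which is the condition used in the estimate below.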

Say each functional concept (a bit in the total amount of memory) is
represented by R synapses and M neurons. When a certain pattern of
concepts is observed, it creates a repeatable sequence of events. Say
the pattern is one concept being followed by another with a fixed delay.
An observation of each concept corresponds to approximately M events of
firing of the neurons comprising that concept. Let's assume that
received spikes can contribute to the same spiking event if their
arrival times differ by no more than 1/L of the average delay time. Each
of the M events in the sequence creates F spikes through F outgoing
synapses (F=10^4). So this pattern will be learned if there is a neuron
that receives at least, say, 2 spikes from different concepts within 1/L
of the average axonal delay time. At each moment there are M*F/L
randomly sampled neurons for each concept, out of X neurons total in the
network, so the probability of two of them arriving at the same neuron
is on the order of 1 when (M*F/L)^2 is on the order of X. Calculating:
X=10^11, L=(10^2)/2 (I'm not sure about this one: average delays are up
to about 50ms, and I assume 1ms as the coincidence time), F=10^4, so
M=(3*10^5)*((10^2)/2)/(10^4)=10^3, or X/M=10^8 neurons representing
unique concepts (where each neuron can represent multiple groups of
incoming events). With 1% of synapses active, that corresponds to 10^10
synapses without redundancy, give or take an order of magnitude.
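Plugging the assumed numbers into the formulas above (all inputs are the
email's order-of-magnitude guesses):

```python
import math

X = 1e11        # neurons in the network (assumed)
F = 1e4         # outgoing synapses per neuron (assumed)
L = 100 / 2     # delay resolution: ~50ms max delay / ~1ms window (assumed)

# Coincidence is likely when (M*F/L)^2 ~ X, so solve for M:
M = math.sqrt(X) * L / F          # neurons needed per concept
concepts = X / M                  # neurons representing unique concepts
active_fraction = 0.01            # ~1% of synapses are active (assumed)
synapses = concepts * F * active_fraction

print(f"M ~ 10^{round(math.log10(M))}")                   # -> M ~ 10^3
print(f"concepts ~ 10^{round(math.log10(concepts))}")     # -> concepts ~ 10^8
print(f"synapses ~ 10^{round(math.log10(synapses))}")     # -> synapses ~ 10^10
```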

So, this estimate gives a number close to 10^9, even though all 10^15
synapses are required...


On 10/19/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
>
> Nesov>> I'm not sure 10^9 is far off, because much more can be required
> for domain-independent association/correlation catching between
> (subsymbolic) concepts implemented by groups of synapses(*). Gap of 10^6 is
> probably about right for this purpose, I can't see how it would be possible
> with, say, gap of only 10^2.
>
> EWP>> I don't know what you are referring to here.  It is not clear
> whether you are supporting or attacking the notion that 10^9 bits is enough
> to store what the brain represents.  So, if possible, please inform me in
> language that is a little more accessible to those not intimate with what
> you are referring to.
>
>
>
> EWP>> Assuming you are supporting the notion that 10^9 bits is enough, as
> your first clause suggests, then the only thing I can think of that might
> map into what you are saying is the concept of cell assemblies.  The
> neurons in cell assemblies each take part in many cell assemblies, of say
> 10K neurons, and thus each of its synaptic weights is actually averaged over
> its roles in multiple different cell assemblies.  In modeling this on a
> computer, this would mean a given variable in a model neuron would take part
> in the representation of many concepts.  I have read implications that
> this could lead to significant representational efficiencies.
>
>
>
> EWP>> Many people seem to think such multiplexing cell assemblies are used
> in our brain.  I have read arguments that a system with a given number of
> neurons can represent many more patterns using one cell assembly per pattern
> than if it used one neuron per pattern.  (Resulting in the
> representational efficiencies I referred to above.)   I haven't seen any
> mathematical explanation of this (if you know of any I would be interested in
> reading it).  But it seems to me the more you multiplex a neuron the more
> cross-talk becomes an issue, particularly since different neurons in
> different areas of the cortex would tend to represent different connections
> for each of the different concepts they represent, meaning that the
> distribution of cross-talk between representations would not be
> statistically evenly spread across neurons or synapses, but could instead be
> unevenly concentrated at certain synapses, increasing the likelihood of
> cross-talk becoming a problem.
>
>
>
> EWP>> Since many brain scientists say the brain uses this technique, and
> assuming it creates representational capacities which multiply the
> representational capacity of each synapse, then the issue is why would the
> cortex have at least 3x10^12 active synapses (I have heard from multiple
> sources that only about 1% of the average 10K synapse per neurons are built
> up enough to be really active, the other 99% are potential connections
> waiting to happen).  Each synapse stores multiple variables, equal to,
> say, at least 2 to 8 bytes, and each, through its location in a complex
> connection space, represents at least a 3 to 7 byte address.  So, if a
> synapse represents, say, 10 bytes, the brain has 3x10^13 bytes (3x10^14
> bits) of representational capacity.  And if there is some magic increase
> in representational capacity due to the use of cell assemblies, that would
> only further increase the number of bits the brain is arguably capable of
> representing.
>
>
>
> EWP>> The energy demands of the large human brain are a major evolutionary
> liability.  It consumes more energy than any other organ.  For example "The
> average newborn's brain consumes an amazing 75-per cent of an infant's daily
> energy needs."1  This large liability can only be justified, in
> evolutionary terms, by the survival benefits the brain's intelligence
> provides.  So those roughly 3x10^13 bytes or more of representational
> capacity in the human cortex have had to earn their evolutionary keep.  And
> that's not even mentioning the larger number of neurons in other parts of
> the brain.
>
>
>
> EWP>> Yes, the brain may be an inefficient design, but we have no strong
> reason to believe its efficiency would be less than one or two orders of
> magnitude worse than the representational scheme envisioned by the cognitive
> scientist who said the brain only stores 10^9 bits.
>
>
>
> EWP>> So, I still don't see how anyone can defend the notion that the
> human brain represents only 10^9 bits.
>
>
> Nesov>> New concepts/correlations/associations can be established between
> events (spikes) that are not initially aligned in any way, including
> different delays in time (through axonal delays and spiking sequences), so
> to catch regularities when and where they happen to appear, big enough
> amount of synapse groups should be there 'on watch'.
>
>
>
> EWP>> Again I don't know what you are referring to here.  I understand
> that timing is important to neuronal patterns, but it seems that such added
> temporal complexity would only increase the number of bits required for a
> computer to model the information the brain holds.
>
>
>
> EWP>> 1. http://www.eurekalert.org/pub_releases/2006-02/nsae-tsf021706.php
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>
>  -----Original Message-----
> *From:* Vladimir Nesov [mailto:[EMAIL PROTECTED]
> *Sent:* Friday, October 19, 2007 5:28 AM
> *To:* [email protected]
> *Subject:* Re: [agi] Poll
>
> Edward,
>
> Does your estimate consider only amount of information required for
> *representation*, or it also includes additional processing elements
> required in neural setting to implement learning? I'm not sure 10^9 is far
> off, because much more can be required for domain-independent
> association/correlation catching between (subsymbolic) concepts implemented
> by groups of synapses(*). Gap of 10^6 is probably about right for this
> purpose, I can't see how it would be possible with, say, gap of only 10^2.
>
> New concepts/correlations/associations can be established between events
> (spikes) that are not initially aligned in any way, including different
> delays in time (through axonal delays and spiking sequences), so to catch
> regularities when and where they happen to appear, big enough amount of
> synapse groups should be there 'on watch'.
>
> -----
> (*) By groups of synapses I mean sets of synapses that can excite a common
> neuron, but single neuron can host multiple groups of synapses responsible
> for multiple subsymbolic concepts. It's not neurologically grounded, just a
> wild theoretic estimate.
>
>
>
> --
> Vladimir Nesov                            mailto:[EMAIL PROTECTED]
> ------------------------------
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>
>



-- 
Vladimir Nesov                            mailto:[EMAIL PROTECTED]

