In response to Vladimir Nesov's Fri 10/19/2007 5:28 AM post.

Nesov>> Edward,


Nesov>> Does your estimate consider only the amount of information
required for *representation*, or does it also include the additional
processing elements required in a neural setting to implement learning?



EWP>> The large numbers I was discussing were for learned representation,
including learned behaviors (such as many mental behaviors), but not
pre-programmed or OS "code" in the traditional sense.



Nesov>> I'm not sure 10^9 is far off, because much more can be required
for domain-independent association/correlation catching between
(subsymbolic) concepts implemented by groups of synapses(*).  A gap of
10^6 is probably about right for this purpose; I can't see how it would be
possible with, say, a gap of only 10^2.



EWP>> I don't know what you are referring to here.  It is not clear
whether you are supporting or attacking the notion that 10^9 bits is
enough to store what the brain represents.  So, if possible, please
restate your point in language a little more accessible to those not
intimate with what you are referring to.



EWP>> Assuming you are supporting the notion that 10^9 bits is enough, as
your first clause suggests, then the only thing I can think of that maps
onto what you are saying is the concept of cell assemblies.  Each neuron
takes part in many cell assemblies of, say, 10K neurons, and thus each of
its synaptic weights is effectively averaged over its roles in multiple
different cell assemblies.  In modeling this on a computer, a given
variable in a model neuron would take part in the representation of many
concepts.  I have read implications that this could lead to significant
representational efficiencies.



EWP>> Many people seem to think such multiplexed cell assemblies are used
in our brain.  I have read arguments that a system with a given number of
neurons can represent many more patterns using one cell assembly per
pattern than it could using one neuron per pattern (resulting in the
representational efficiencies I referred to above).  I haven't seen any
mathematical explanation of this (if you know of one, I would be
interested in reading it).  But it seems to me that the more you multiplex
a neuron, the more cross-talk becomes an issue.  In particular, different
neurons in different areas of the cortex would tend to represent different
connections for each of the different concepts they represent, so the
cross-talk between representations would not be spread statistically
evenly across neurons and synapses; it could instead be concentrated at
certain synapses, increasing the likelihood of cross-talk becoming a
problem.  A rough sketch of both the capacity argument and the cross-talk
worry follows.
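EWP>> Here is a back-of-the-envelope sketch of both points, in Python,
purely as an illustration.  The N = 100,000 neuron pool and the 10K
assembly size are assumptions (the latter matching the "say 10K neurons"
figure above), not measurements.  One neuron per pattern gives N
distinguishable patterns; one k-neuron assembly per pattern gives on the
order of C(N, k) patterns; but any two randomly chosen assemblies share,
in expectation, k^2/N neurons, which is one crude measure of cross-talk:

from math import comb, log2

N = 100_000   # assumed neuron pool (illustrative only)
k = 10_000    # assumed assembly size ("say 10K neurons" above)

# One neuron per pattern: N distinguishable patterns.
# One assembly per pattern: in principle any k-of-N subset could be
# a distinct pattern, i.e. C(N, k) patterns -- astronomically more.
assembly_capacity_bits = log2(comb(N, k))

# Cross-talk proxy: expected neurons shared by two independently
# chosen random k-subsets of N (hypergeometric mean) is k * (k / N).
expected_overlap = k * k / N

print(f"one neuron per pattern:   {N:,} patterns")
print(f"one assembly per pattern: ~2^{assembly_capacity_bits:,.0f} patterns")
print(f"expected overlap of two random assemblies: {expected_overlap:,.0f} neurons")

EWP>> On these assumed numbers the capacity gain is enormous (roughly
2^47,000 possible assemblies), but so is the overlap -- about 1,000 shared
neurons per pair of random assemblies -- which is consistent with my worry
that the more heavily you multiplex, the higher the cross-talk stakes.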



EWP>> Since many brain scientists say the brain uses this technique, and
assuming it multiplies the representational capacity of each synapse, the
issue is why the cortex would have at least 3x10^12 active synapses.  (I
have heard from multiple sources that only about 1% of the average 10K
synapses per neuron are built up enough to be really active; the other 99%
are potential connections waiting to happen.)  Each synapse stores
multiple variables, equal to, say, at least 2 to 8 bytes, and each,
through its location in a complex connection space, represents at least a
3 to 7 byte address.  So, if a synapse represents, say, 10 bytes, the
brain has 3x10^13 bytes (2.4x10^14 bits) of representational capacity.
And if there is some magic increase in representational capacity due to
the use of cell assemblies, that would only further increase the number of
bits the brain is arguably capable of representing.
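EWP>> To restate that arithmetic explicitly (every input below is one of
the assumptions from the paragraph above, not a measured figure):

# Back-of-the-envelope restatement of the estimate above.
active_synapses = 3e12     # "at least 3x10^12 active synapses"
bytes_per_synapse = 10     # ~2-8 bytes of stored variables plus an
                           # implicit 3-7 byte address; call it 10 total

capacity_bytes = active_synapses * bytes_per_synapse   # 3.0e13 bytes
capacity_bits = capacity_bytes * 8                     # 2.4e14 bits

print(f"capacity: {capacity_bytes:.1e} bytes = {capacity_bits:.1e} bits")
print(f"gap vs the 10^9-bit estimate: ~{capacity_bits / 1e9:.0e}x")

EWP>> Even on these rough numbers, the gap to the 10^9-bit cognitive
estimate is more than five orders of magnitude, which is exactly the
discrepancy Matt's post highlighted.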



EWP>> The energy demands of the large human brain are a major evolutionary
liability.  It consumes more energy than any other organ.  For example,
"The average newborn's brain consumes an amazing 75 per cent of an
infant's daily energy needs." [1]  This large liability can only be
justified, in evolutionary terms, by the survival benefits the brain's
intelligence provides.  So those roughly 3x10^13 bytes or more of
representational capacity in the human cortex have had to earn their
evolutionary keep.  And that's not even mentioning the larger number of
neurons in other parts of the brain.



EWP>> Yes, the brain may be an inefficient design, but we have no strong
reason to believe it is more than one or two orders of magnitude less
efficient than the representational scheme envisioned by the cognitive
scientist who said the brain only stores 10^9 bits.



EWP>> So, I still don’t see how anyone can defend the notion that the
human brain represents only 10^9 bits.


Nesov>> New concepts/correlations/associations can be established between
events (spikes) that are not initially aligned in any way, including
different delays in time (through axonal delays and spiking sequences), so
to catch regularities when and where they happen to appear, a big enough
number of synapse groups should be there 'on watch'.



EWP>> Again, I don't know exactly what you are referring to here.  I
understand that timing is important to neuronal patterns, but it seems
that such added temporal complexity would only increase the number of bits
required for a computer to model the information the brain holds, as the
rough count below suggests.
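EWP>> To make that concrete with assumed numbers: if each active synapse
must also encode which of D distinguishable delay slots it responds to, a
computer model needs an extra log2(D) bits per synapse just for timing.
The D = 16 below is a purely illustrative assumption:

from math import log2

active_synapses = 3e12   # same assumed count as above
delay_slots = 16         # hypothetical number of distinguishable delays

extra_bits = active_synapses * log2(delay_slots)   # 4 bits per synapse

print(f"extra storage for timing alone: {extra_bits:.1e} bits")
# -> 1.2e+13 bits on top of the weight and address bytes already counted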



EWP>> [1] http://www.eurekalert.org/pub_releases/2006-02/nsae-tsf021706.php


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
Sent: Friday, October 19, 2007 5:28 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Poll


Edward,

Does your estimate consider only the amount of information required for
*representation*, or does it also include the additional processing
elements required in a neural setting to implement learning? I'm not sure
10^9 is far off, because much more can be required for domain-independent
association/correlation catching between (subsymbolic) concepts
implemented by groups of synapses(*). A gap of 10^6 is probably about
right for this purpose; I can't see how it would be possible with, say, a
gap of only 10^2.

New concepts/correlations/associations can be established between events
(spikes) that are not initially aligned in any way, including different
delays in time (through axonal delays and spiking sequences), so to catch
regularities when and where they happen to appear, a big enough number of
synapse groups should be there 'on watch'.

-----
(*) By groups of synapses I mean sets of synapses that can excite a common
neuron, but a single neuron can host multiple groups of synapses
responsible for multiple subsymbolic concepts. It's not neurologically
grounded, just a wild theoretical estimate.



On 10/19/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:

Matt Mahoney's Thu 10/18/2007 9:15 PM post states:

MAHONEY>> There is possibly a 6 order of magnitude gap between the size of
a cognitive model of human memory (10^9 bits) and the number of synapses
in the brain (10^15), and precious little research to resolve this
discrepancy.  In fact, these numbers are so poorly known that we aren't
even sure there is a gap.

EWP>> This gap, which Matt was so correct to highlight, is an important
one, and it points out one of the many crippling legacies of the small
hardware mindset.

EWP>> I have always been a big believer in memory-based reasoning, and for
the last 37 years I have assumed a human-level representation of world
knowledge would require something like 10^12 to 10^14 bytes, which is
10^13 to 10^15 bits (i.e., "within several orders of magnitude of the
human brain," a phrase I have used so many times before on this list).  My
recollection is that after reading Minsky's reading list in 1970 and
taking K-line theory to heart, the number I guessed at that time for world
knowledge was either 10^15 bits or bytes, I forget which.  But, of course,
my notions then were so primitive compared to what they are today.


EWP>> Should we allow ourselves to think in terms of such big numbers?
Yes.  Let's take 10^13 bytes, for example.

EWP>> 10^13 bytes, with 2/3 of it in non-volatile memory, 10 million
simple RAM-op processors capable of performing about 20 trillion random
RAM accesses/sec, and a network with a cross-sectional bandwidth of
roughly 45 TBytes/sec (if you ran it hot), should be manufacturable in 7
years at a marginal cost of about $40,000, and could be profitably sold,
with amortization of development costs, for several hundred thousand
dollars if there were a market for several thousand of them -- which there
almost certainly would be because of their extreme power.

EWP>> Why so much more than the 10^9 bits mentioned above?

EWP>> Because 10^9 bits only stores roughly 1 million atoms (nodes or
links) with proper indexing and various state values -- roughly 1,000 bits
per atom.  Anybody who thinks that is enough to represent human-level
world knowledge in all its visual, audio, linguistic, tactile,
kinesthetic, emotional, behavioral, and social complexity hasn't thought
about it in sufficient depth.

EWP>> For example, my foggy recollection is that Serre's representation of
the hierarchical memory associated with the portion of the visual cortex
from V1 up to the lower levels of the prefrontal cortex (from the paper I
have cited so many times on this list) has several million pattern nodes
(and, as Josh has pointed out, this is just for the mainly feedforward
aspect of visual modeling).  It includes nothing for the vast majority of
what V1 and above do, and nothing for audio, language, visual motion,
association cortex, prefrontal cortex, etc.

EWP>> Matt, I am not in any way criticizing you for mentioning 10^9 bits,
because I have read similar numbers myself, and your post pointed with
very appropriate questioning to the gap between that and what the brain
appears capable of representing.  This very low number is just another
manifestation of the small hardware mindset that has dominated the
conventional wisdom in AI since its beginning.  If the only models one
could make had to fit in the very small memories of most past machines, it
is only natural that one's mind would be biased toward grossly simplified
representations.

EWP>> So forget the notion that 10^9 bits can represent human-level world
knowledge.  Correct me if I am wrong, but I think the memory required to
store the representations in most current best-selling video games is 10
to 40 times larger.

Ed Porter

P.S.  Please give me feedback on whether this technique of distinguishing
original from responsive text is better than my use of all-caps, which
received criticism.







--
Vladimir Nesov                            mailto:[EMAIL PROTECTED]
