--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> 
> > And some of the Blue Brain research suggests it is even worse.  A mouse
> > cortical column of 10^5 neurons is about 10% connected,
> 
> What does "10% connected" mean?
> How many connections does the average mouse neuron have?
> 10000?

According to the Blue Brain project, 8000 synapses per neuron.  The simulation
used 6300.  The 1-hour video at
http://video.google.com/videoplay?docid=-2874207418572601262 is a talk on the
project.  According to the presentation, every axon in a cortical column
passes within about one synapse width of dendrites from every other neuron,
implying that a connection could potentially form between any pair.  Only
about 10% of these potential connections were actually made with synapses.
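The Blue Brain numbers above are roughly self-consistent, as a quick check
shows (the variable names below are just for illustration):

```python
# Rough check of the "10% connected" figure: a column of 10^5 neurons
# where every axon passes within one synapse width of dendrites from
# every other neuron gives ~10^5 potential partners per neuron.
neurons_per_column = 10**5
synapses_per_neuron = 8000      # Blue Brain figure cited above

potential_partners = neurons_per_column - 1
fraction_connected = synapses_per_neuron / potential_partners
print(f"{fraction_connected:.1%}")
```

This prints about 8%, consistent with the "about 10%" figure from the talk.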

> > but the neurons are arranged such that connections can be formed
> > between any pair of neurons.  Extending this idea to the human brain, with
> 10^6 columns of 10^5 neurons
> > each, each column should be modeled as a 10^5 by 10^5 sparse matrix,
> 
> Only poor design would require "10^5 by 10^5 matrix" if every neuron
> has to connect only to 10000 other neurons.
> 
> One pointer into a 2^17 (131072) address space requires 17 bits.
> 10000 connections require 170000 bits.
> If we put a 4-bit weight on every connection, that adds 40000 bits,
> for 210000 bits = 26250 bytes per neuron.
> 26250 bytes * 10^5 neurons = 2.6 * 10^9 bytes = 2.6 GB per column
> (hard disks of that size were available on PCs ~10 years ago).

Using pointers saves memory but sacrifices speed.  Random memory access is
slow due to cache misses.  By using a matrix, you can perform vector
operations very fast in parallel using SSE2 instructions on modern processors,
or a GPU.  By your own calculations, an array only takes twice as much space
as a graph.
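The tradeoff can be sketched in a few lines.  This is a minimal illustration
with NumPy's vectorized dot product standing in for SSE2/GPU vector
operations, and the sizes scaled down from the 10^5 x 10^5 column:

```python
import numpy as np

n = 1000
rng = np.random.default_rng(0)

# Dense matrix form: ~10% of entries nonzero, the rest stored as zeros.
weights = rng.random((n, n)) * (rng.random((n, n)) < 0.1)
activity = rng.random(n)

# One vectorized operation computes every neuron's input at once,
# with sequential memory access that the hardware can stream.
dense_input = weights @ activity

# Pointer (adjacency-list) form: half the memory here, but each row
# requires scattered lookups into the activity vector.
graph = [np.flatnonzero(weights[i]) for i in range(n)]
sparse_input = np.array(
    [weights[i, graph[i]] @ activity[graph[i]] for i in range(n)]
)

assert np.allclose(dense_input, sparse_input)
```

Both forms compute the same result; the difference is memory layout and
access pattern, which is what the cache-miss argument is about.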

> Do you imply that intelligent algorithm must be universal across
> "language, speech, vision, robotics, etc"?
> In humans it's just not the case.
> Different algorithms are responsible for vision, speech, language,
> body control etc.

Neural networks are useful for all of these problems.  Few other algorithms
have that property.  But that really shouldn't be surprising, considering we
are simulating something already done by neurons.  Experiments with neural
networks (mostly in the 1980's) confirm the basic architecture, in particular
Hebb's rule, postulated in 1949 but still not fully confirmed in animals even
today.
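Hebb's rule itself is simple to state: the weight change between two neurons
is proportional to the product of their activities ("neurons that fire
together wire together").  A minimal sketch, with an assumed learning rate:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """Hebb's rule: delta w[i,j] = lr * post[i] * pre[j]."""
    return w + lr * np.outer(post, pre)

w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
post = np.array([1.0, 0.0])       # postsynaptic activity
w = hebbian_update(w, pre, post)
# Only the weights joining co-active neurons grow:
# w[0] = [0.01, 0.0, 0.01], w[1] = [0.0, 0.0, 0.0]
```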


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=70910879-ed86c9
