--- On Thu, 6/12/08, Ed Porter <[EMAIL PROTECTED]> wrote:

> I think processor to memory, and inter processor
> communications are currently far short

Your concern is the added cost of implementing a sparsely connected 
network, which slows memory access and increases memory use (e.g. pointers in 
addition to a weight matrix). Much of this problem can be alleviated by 
exploiting connection locality.

The brain has about 10^11 neurons with 10^4 synapses per neuron. If we divide 
this work among 10^6 processors, each representing 1 mm^3 of brain tissue, then 
each processor must implement 10^5 neurons and 10^9 synapses. By my earlier 
argument, there can be at most about 10^6 external connections per cube, 
assuming 1-2 micron nerve fiber diameter, so nearly all connections must be 
local. Locality holds at any scale: when you double the side of a cube, you 
multiply the number of neurons by 8 but the number of possible external 
connections (which scale with surface area) by only 4. Thus, for any size cube, 
half of the external connections are to neighboring cubes and half are to more 
distant cubes.
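The scaling arithmetic above can be checked in a few lines. This is only an illustrative sketch; the constants are the round figures from the post, not measured values.

```python
# Round figures from the post (orders of magnitude only).
NEURONS = 10**11            # neurons in the brain
SYNAPSES_PER_NEURON = 10**4
PROCESSORS = 10**6          # one processor per mm^3 of tissue

neurons_per_proc = NEURONS // PROCESSORS                      # 10^5 neurons
synapses_per_proc = neurons_per_proc * SYNAPSES_PER_NEURON    # 10^9 synapses

# Doubling the side of a cube multiplies volume (hence neurons) by 2^3 = 8,
# but surface area (hence possible external connections) by only 2^2 = 4,
# so the external fraction halves with each doubling of scale.
scale = 2
volume_factor = scale**3    # 8
surface_factor = scale**2   # 4

print(neurons_per_proc, synapses_per_proc, volume_factor, surface_factor)
```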

A 1 mm^3 cube can be implemented as a fully connected 10^5 by 10^5 matrix of 
10^10 potential connections. This fits in a 1.25 GB array of bits, with roughly 
10% of the bits set to 1 to represent the 10^9 actual connections. The internal 
computation bottleneck is the matrix-vector product, which could be implemented 
using 128 bit AND instructions in SSE2 at full serial memory bandwidth. 
External communication is at most one bit per connected neuron per cycle 
(20-100 ms), because the connectivity graph itself does not change rapidly. A 
randomly connected sparse network could be described compactly using hash 
functions.
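A toy sketch of both ideas, in Python rather than SSE2 (Python integers stand in for 128-bit registers, and bit counting stands in for popcount). The threshold update rule and the 10%-density hash are my assumptions for illustration; the post does not specify a neuron model.

```python
import random

random.seed(1)
N = 64            # toy network; the post's figure is 10^5 neurons per cube
DENSITY = 0.1     # ~10% of the bit matrix set, as in the 10^9/10^10 estimate

# Row i of the bit matrix is a bitmask of neuron i's inputs.
rows = [sum(1 << j for j in range(N) if random.random() < DENSITY)
        for _ in range(N)]

def step(state, threshold=2):
    """One update: AND each row with the activity vector, count the
    surviving bits (the popcount step), and fire above a threshold."""
    out = 0
    for i, row in enumerate(rows):
        if bin(row & state).count("1") >= threshold:
            out |= 1 << i
    return out

def connected(i, j, seed=0):
    """Sparse random connectivity described by a hash function instead
    of a stored matrix: connection (i, j) exists iff the hash says so."""
    return hash((i, j, seed)) % 10 == 0    # ~10% density, no storage

state = (1 << N) - 1    # start with every neuron active
state = step(state)
```

Real SSE2 code would process 128 connection bits per AND instruction, streaming each matrix row from memory, which is why the matrix-vector product runs at serial memory bandwidth.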

Also, there are probably more efficient implementations of AGI than modeling 
the brain, because we are not constrained to use slow neurons. For example, low 
level visual feature detection could be implemented serially by sliding a 
coefficient window over a 2-D image, rather than by maintaining identical sets 
of weights for each region of the image as the brain does. I don't think we 
really need 10^15 bits to implement the 10^9 bits of long term memory that 
Landauer says we have.
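The sliding-window idea is ordinary serial correlation: one shared kernel visited over every image position, instead of one copy of the weights per position. A minimal sketch, with a made-up vertical-edge kernel as the example feature:

```python
def detect(image, kernel):
    """Valid-mode 2-D correlation: slide one coefficient window over
    the image, reusing the same weights at every position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            s = sum(kernel[dy][dx] * image[y + dy][x + dx]
                    for dy in range(kh) for dx in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector applied to a tiny image:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
result = detect(image, kernel)    # strongest response at the 0->1 edge
```

One set of kernel coefficients serves the whole image, whereas a brain-like implementation would replicate those weights once per receptive field.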

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
