--- On Fri, 6/13/08, Ed Porter <[EMAIL PROTECTED]> wrote:
> [Ed Porter] -- Why couldn't each of the 10^6 fibers
> have multiple connections along its length within the cm^3 (although it
> could be represented as one row in the matrix, with individual
> connections represented as elements in such a row)

I think you mean 10^6 fibers in 1 cubic millimeter. They would have multiple 
connections, but I am only counting interprocessor communication, which is 1 
bit to transmit the state of the neuron (on or off) or a few bits to transmit 
its activation level to neighboring processors.
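To make this concrete, here is a minimal sketch (Python with NumPy, my choice of
language; the neuron count and firing rate are illustrative) of packing binary
firing states so that only one bit per neuron crosses a processor boundary:

  import numpy as np

  n = 10**6                                # neurons whose states cross the boundary
  states = np.random.rand(n) < 0.1         # illustrative boolean firing states

  packed = np.packbits(states)             # 8 states per byte -> 125 KB per update
  assert packed.nbytes == n // 8

  unpacked = np.unpackbits(packed)[:n].astype(bool)   # receiving processor's view
  assert np.array_equal(states, unpacked)

Sending an activation level instead costs a few bits per neuron rather than one,
but the principle is the same.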

With regard to representing different types of synapses (various time delays, 
strength bounds, learning rates, etc.), this information can be recorded as
characteristics of the input and output neurons and derived as needed to save 
space.
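As a sketch of the idea (Python; the field names and combining rules are
hypothetical, chosen only to show the storage trade-off):

  from dataclasses import dataclass

  @dataclass
  class Neuron:
      delay: float        # time delay contributed by this neuron
      learn_rate: float   # learning-rate factor
      w_max: float        # bound on the magnitude of connected weights

  def synapse_params(pre: Neuron, post: Neuron) -> dict:
      """Recompute a synapse's characteristics on demand from its two
      endpoint neurons, storing O(neurons) values instead of O(synapses)."""
      return {
          "delay": pre.delay + post.delay,                    # hypothetical rule
          "learning_rate": pre.learn_rate * post.learn_rate,  # hypothetical rule
          "weight_bound": min(pre.w_max, post.w_max),         # hypothetical rule
      }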

Minimizing interprocessor communication is a harder problem. This can be done
by mapping the neural network into a hierarchical organization so that groups 
of co-located neurons are forced to communicate with other groups through 
narrow channels using a small number of neurons. We know that many problems can
be solved this way. For example, a semantic language model made of a 20K by 20K
word association matrix can be represented, using singular value decomposition,
as a 3-layer neural network with about 100 to 200 hidden neurons [1,2]. The two
weight matrices could then be implemented on separate processors which 
communicate through the hidden layer neurons. More generally, we know from 
chaos theory that complex systems must limit the number of interconnections to 
be stable [3], which suggests that many AI problems can be decomposed this way.
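Here is a scaled-down sketch of that decomposition (Python/NumPy; 2000 words and
rank 150 instead of 20K and 100-200, purely so the demo runs quickly, and a
random matrix stands in for real association counts):

  import numpy as np

  n, r = 2000, 150                 # vocabulary size, hidden units (scaled down)
  A = np.random.rand(n, n)         # stand-in for a word association matrix

  # Truncated SVD: A is approximated by W2 @ W1, a 3-layer linear network.
  U, s, Vt = np.linalg.svd(A, full_matrices=False)
  W1 = Vt[:r, :]                   # input -> hidden weights (r x n), processor 1
  W2 = U[:, :r] * s[:r]            # hidden -> output weights (n x r), processor 2

  x = np.zeros(n); x[42] = 1.0     # one-hot input word
  hidden = W1 @ x                  # only these r values cross processors
  y = W2 @ hidden                  # approximates column 42 of A

Storage drops from n^2 weights to 2nr, and the interprocessor traffic per query
is just the r hidden activations.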

Remember that we need not model the human brain in precise detail, since our goal is
to solve AGI by any means. We are allowed to use more efficient algorithms if 
we discover them.

I ran some benchmarks on my PC (2.2 GHz Athlon 64 3500+). It copies large
arrays at 1 GB per second using MMX or SSE2, which is not quite fast enough for 
a 10^5 by 10^5 neural network simulation.
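The arithmetic behind "not quite fast enough", assuming an optimistic 1 byte per
weight (my assumption; float32 weights would be 4 times worse):

  synapses = 10**5 * 10**5       # 10^10 weights in a fully connected network
  bytes_per_weight = 1           # optimistic assumption; 4 for float32
  bandwidth = 1e9                # measured copy rate, bytes per second
  print(synapses * bytes_per_weight / bandwidth)   # -> 10.0 seconds per update

So even at memory-copy speed, one full pass over the weights takes about 10
seconds per simulated time step.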

1. Bellegarda, Jerome R., John W. Butzberger, Yen-Lu Chow, Noah B. Coccaro, 
Devang Naik (1996), “A novel word clustering algorithm based on latent semantic 
analysis”, Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, 
vol. 1, 172-175.

2. Gorrell, Genevieve (2006), “Generalized Hebbian Algorithm for Incremental 
Singular Value Decomposition in Natural Language Processing”, Proceedings of 
EACL 2006, Trento, Italy.
http://www.aclweb.org/anthology-new/E/E06/E06-1013.pdf

3. Kauffman, Stuart A. (1991), “Antichaos and Adaptation”, Scientific American, 
Aug. 1991, p. 64.


-- Matt Mahoney, [EMAIL PROTECTED]


