> From: Matt Mahoney [mailto:[EMAIL PROTECTED]
> 
> True, we can't explain why the human brain needs 10^15 synapses to
> store 10^9 bits of long term memory (Landauer's estimate). Typical
> neural networks store 0.15 to 0.25 bits per synapse.
> 

This study - 
http://www.cogsci.rpi.edu/CSJarchive/1986v10/i04/p0477p0493/MAIN.PDF

is just throwing a dart at the wall. You'd need measurements from tasks
closer to real life than word and picture recall to arrive at a number
anywhere near the actual figure.
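
For what it's worth, the arithmetic behind the quoted figures is easy to
check. A minimal back-of-envelope sketch in Python (the numbers are the ones
from the quoted message, not independent estimates):

landauer_bits = 1e9   # Landauer's estimate of human long-term memory
synapses = 1e15       # rough synapse count in the human brain

brain_rate = landauer_bits / synapses
print("Brain (Landauer): %.0e bits/synapse" % brain_rate)   # ~1e-06

# Typical artificial neural networks, per the quoted message:
nn_low, nn_high = 0.15, 0.25
print("Gap vs. typical NN: %.1e to %.1e x" % (nn_low / brain_rate,
                                              nn_high / brain_rate))
# ~1.5e+05 to 2.5e+05 times more synapses per stored bit in the brain

So the puzzle in the quoted message is a gap of roughly 10^5 between what the
brain apparently uses per stored bit and what artificial networks need.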

> I estimate a language model with 10^9 bits of complexity could be
> implemented using 10^9 to 10^10 synapses. However, time complexity is
> hard to estimate. A naive implementation would need around 10^18 to
> 10^19 operations to train on 1 GB of text. However this could be sped
> up significantly if only a small fraction of neurons are active at any
> time.
> 
> Just looking at the speed/memory/accuracy tradeoffs of various models
> at http://cs.fit.edu/~mmahoney/compression/text.html (the 2 graphs
> below the main table), it seems that memory is more of a limitation
> than CPU speed. A "real time" language model would be allowed 10-20
> years.
> 
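
Before getting to the graphs, a quick check on the quoted training-cost
figure: it is essentially parameters times training symbols. A rough sketch
in Python; the 1% activity fraction below is an assumed value for
illustration, not something from the quoted message:

params = 1e10      # upper end of the quoted 10^9 to 10^10 synapses
text_bytes = 1e9   # 1 GB of training text, roughly one symbol per byte

# Naive dense training touches every synapse once per input symbol.
dense_ops = params * text_bytes
print("Dense training ops: ~%.0e" % dense_ops)   # ~1e+19

# If only a small fraction of neurons fire at a time, most synapses can be
# skipped on each step, so the cost scales down by roughly that fraction.
active_fraction = 0.01   # assumed for illustration
print("With %.0f%% activity: ~%.0e ops" % (active_fraction * 100,
                                           dense_ops * active_fraction))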

I'm sorry, what are those 2 graphs indicating? That to get a smaller
compressed size, more running memory is needed? Is that y-axis a compressor
runtime memory limit specified by a command-line switch, or just what the
compressor actually consumes for the data being compressed?

John


