Matt Mahoney's Thu 10/18/2007 9:15 PM post states:

MAHONEY>> There is possibly a 6 order of magnitude gap between the size of a cognitive model of human memory (10^9 bits) and the number of synapses in the brain (10^15), and precious little research to resolve this discrepancy. In fact, these numbers are so poorly known that we aren't even sure there is a gap.
EWP>> This gap, which Matt was so correct to highlight, is an important one, and it points out one of the many crippling legacies of the small-hardware mindset.

EWP>> I have always been a big believer in memory-based reasoning, and for the last 37 years I have assumed a human-level representation of world knowledge would require something like 10^12 to 10^14 bytes, which is 10^13 to 10^15 bits (i.e., within several orders of magnitude of the human brain, a phrase I have used so many times before on this list). My recollection is that after reading Minsky's reading list in 1970 and taking K-line theory to heart, the number I guessed at that time for world knowledge was either 10^15 bits or bytes, I forget which. But, of course, my notions then were so primitive compared to what they are today.

EWP>> Should we allow ourselves to think in terms of such big numbers? Yes. Let's take 10^13 bytes, for example.

EWP>> 10^13 bytes, with two-thirds of it in non-volatile memory, 10 million simple RAM-op processors capable of performing about 20 trillion random RAM accesses/sec, and a network with a cross-sectional bandwidth of roughly 45 TBytes/sec (if you ran it hot), should be manufacturable in 7 years at a marginal cost of about $40,000, and could be profitably sold, with amortization of development costs, for several hundred thousand dollars if there were a market for several thousand of them -- which there almost certainly would be because of their extreme power.

EWP>> Why so much more than the 10^9 bits mentioned above?

EWP>> Because 10^9 bits stores only roughly 1 million atoms (nodes or links) with proper indexing and various state values. Anybody who thinks that is enough to represent human-level world knowledge in all its visual, audio, linguistic, tactile, kinesthetic, emotional, behavioral, and social complexity hasn't thought about it in sufficient depth.
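EWP>> A quick back-of-envelope check of these numbers (all figures are taken from the estimates above; the script only does the division, so treat it as an illustration of the argument, not new data):

```python
# Back-of-envelope arithmetic for the figures discussed above.

# 1. Storage budget per "atom" (node or link) in a 10^9-bit model
#    holding ~1 million atoms.
total_bits = 1e9
atoms = 1e6
bits_per_atom = total_bits / atoms
print(f"{bits_per_atom:.0f} bits (~{bits_per_atom / 8:.0f} bytes) per atom")
# Roughly 1,000 bits (~125 bytes) per atom: enough for indexing and a
# few state values, but not rich multi-modal world knowledge.

# 2. At the proposed 10^13 bytes, the same per-atom budget would
#    allow about 80 billion atoms.
proposed_bytes = 1e13
atoms_at_scale = (proposed_bytes * 8) / bits_per_atom
print(f"{atoms_at_scale:.1e} atoms at 10^13 bytes")

# 3. Per-processor load in the sketched hardware: 20 trillion random
#    RAM accesses/sec spread across 10 million simple processors.
accesses_per_sec = 20e12
processors = 10e6
print(f"{accesses_per_sec / processors:.1e} accesses/sec per processor")
# About 2 million random accesses per second per processor -- a modest
# rate for even a very simple processing element.
```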
EWP>> For example, my foggy recollection is that Serre's representation of the hierarchical memory associated with the portion of the visual cortex from V1 up to the lower level of the pre-frontal cortex (from the paper I have cited so many times on this list) has several million pattern nodes (and, as Josh has pointed out, this is just for the mainly feedforward aspect of visual modeling). This includes nothing for the vast majority of V1 and above, and nothing for audio, language, visual motion, association cortex, prefrontal cortex, etc.

EWP>> Matt, I am not in any way criticizing you for mentioning 10^9 bits, because I have read similar numbers myself, and your post pointed with very appropriate questioning to the gap between that and what the brain would appear to have the capability to represent. This very low number is just another manifestation of the small-hardware mindset that has dominated the conventional wisdom in AI since its beginning. If the only models one could make had to fit in the very small memories of most past machines, it is only natural that one's mind would be biased toward grossly simplified representations.

EWP>> So forget the notion that 10^9 bits can represent human-level world knowledge. Correct me if I am wrong, but I think the memory required to store the representations in most current best-selling video games is 10 to 40 times larger.

Ed Porter

P.S. Please give me feedback on whether this technique of distinguishing original from responsive text is better than my use of all-caps, which received criticism.

-----Original Message-----
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 18, 2007 9:15 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Poll

--- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote:
> I'd be interested in everyone's take on the following:
>
> 1. What is the single biggest technical gap between current AI and
> AGI?

In hindsight we can say that we did not have enough hardware.
However there has been no point in time since the 1950s when we knew that at the time. We are in that position today. There is possibly a 6 order of magnitude gap between the size of a cognitive model of human memory (10^9 bits) and the number of synapses in the brain (10^15), and precious little research to resolve this discrepancy. In fact, these numbers are so poorly known that we aren't even sure there is a gap.

> 2. Do you have an idea as to what should be done about (1) that
> would significantly accelerate progress if it were generally adopted?

Resolving the cost estimate would only let us avoid expensive mistakes like Blocks World or Cyc or 5th Generation or the 1959 Russian-English translation project, all of which began with great enthusiasm and no idea of the difficulty involved. What mistakes are we making now?

> 3. If (2), how long would it take the field to attain (a) a baby mind,
> (b) a mature human-equivalent AI, if your idea(s) were adopted and AGI
> seriously pursued?

The question is meaningless. IQ is not a point on a line. On some scales, computers surpassed humans in the 1940s. The goal of AGI is not to build human minds, but to do our work.

> 4. How long to (a) and (b) if AI research continues more or less as it
> is doing now?

It would make not a bit of difference. There is already a US $66 trillion/year incentive to develop AGI (the value of all human labor). Nobody on this list has the One Big Breakthrough.

-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&