Re: [agi] Moore's law data - defining HEC
In a previous post Eliezer referenced a good critique of Moore's Law:
http://firstmonday.org/issues/issue7_11/tuomi/index.html

Assuming the facts presented in that paper, I agree with its conclusion that Moore's Law was never a valid law in any scientific sense. But in researching Moore's Law references on the Web, I found good material at Intel's web site indicating that Moore's Law may be valid as a planning and funding guideline for semiconductor manufacturers. Intel deliberately schedules lithography process improvements on a two-year cadence to achieve progress at the rate Moore's Law predicts. For example, Extreme Ultraviolet Lithography (EUV) is being researched now for use in manufacturing after 2007. I believe Intel could schedule these improvement steps either faster or slower, but finds an economic optimum at the Moore's Law interval.

A valid critique of Moore's Law is that it confuses progress in the number of transistors per CPU with progress in CPU performance. I believe that Intel (and other CPU manufacturers) will have enough transistors per chip (perhaps one billion by 2009) that performance will continue its exponential progress via multiple arithmetic/logic cores per CPU. That is, clock speed in MHz alone will not deliver the exponential performance increase Moore's Law predicts, but multiple ALUs per CPU will. I read recently that the PlayStation 3 chip will be based on multiple IBM PowerPC cores to achieve high vector-graphics performance, which is evidence of the utility of this approach to using all those transistors.

My comments in this message are restricted to CPUs alone. I accept that Human Equivalent Computing (HEC) will require similar advances on other system bottlenecks, but for Cyc the CPU integer performance is key, and today's systems appear optimized for Cyc's usage pattern.

If you share my enthusiasm for technology predictors/indicators leading to the Singularity, take a look at the following Intel presentation on their lithography plans out to 2009. By then I estimate that commodity computers will be only 1000 times less powerful (-30 dB HEC) than human-equivalent brain power, including eyesight and the other senses (a worked example of this decibel arithmetic follows the message). The Intel paper shows a transistor gate designed for terahertz operation, under development now for production on the schedule that Moore's Law predicts/guides.
http://www.intel.com/idf/us/fall2002/presentations/IRD191PS.pdf

-Steve
--
===
Stephen L. Reed                    phone: 512.342.4036
Cycorp, Suite 100                  fax:   512.342.4040
3721 Executive Center Drive        email: [EMAIL PROTECTED]
Austin, TX 78731                   web:   http://www.cyc.com
download OpenCyc at http://www.opencyc.org
===
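To make the "-30 dB HEC" figure concrete: decibels here express a power ratio, dB = 10 * log10(ratio), so 1000 times less powerful is -30 dB. A minimal sketch of the arithmetic in Python follows; the 10^16 ops/s brain estimate and 10^13 ops/s commodity-machine figure are illustrative assumptions chosen only to reproduce the 1000x gap Steve cites, not numbers from his post.

    import math

    # Illustrative assumptions (not from the original post): a
    # human-brain estimate of ~1e16 ops/s and a 2009 commodity
    # machine at ~1e13 ops/s, i.e. the 1000x gap Steve cites.
    brain_ops = 1e16      # assumed human-equivalent ops/sec
    machine_ops = 1e13    # assumed commodity-machine ops/sec

    ratio = machine_ops / brain_ops      # 1/1000
    db_hec = 10 * math.log10(ratio)      # -30.0 dB HEC

    # At a Moore's-Law doubling every two years, the number of
    # doublings needed to close a 1000x gap:
    doublings = math.log2(1 / ratio)     # ~9.97 doublings
    years = 2 * doublings                # ~20 years

    print(f"{db_hec:.1f} dB HEC, ~{years:.0f} years to close the gap")

On these assumptions, a -30 dB gap closes in roughly ten doublings, or about twenty years at the two-year cadence Steve describes Intel planning.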
RE: [agi] Moore's law data - defining HEC
Stephen, I'll be interested to see how that compares to Ray Kurzweil's forecasts in http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html. Please send me the spreadsheet and graph.

Thanks,
Amara D. Angelica
Editor, KurzweilAI.net
Re: [agi] Moore's law data - defining HEC
Ilkka Tuomi questions the existence, speed, and regularity of Moore's Law:
http://firstmonday.org/issues/issue7_11/tuomi/index.html

SL4 discussion of memory bandwidth (not speed) as the limiting factor in human-equivalent computing:
http://sl4.org/archive/0104/1063.html
http://www.google.com/search?q=+site:sl4.org+crossover+bandwidth

--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
Re: [agi] Moore's law data - defining HEC
I recently put together a human-brain-equivalent model that takes several aspects of system performance into consideration, to figure out what kind of system configuration we would need to generate a human-equivalent structure (which I expect would actually be much smarter than a human in practice). For real-world projections, taking MIPS and GB in the abstract is nigh useless, because there are a slew of caveats as to how those components actually perform in real systems.

First, we balanced and normalized system memory requirements (size) against instructions per second and memory bandwidth/latency. For our architecture/code, we got the following "normal core": a 1 BIPS 32-bit integer core attached to 10^9 bytes of RAM, assuming common memory architectures. This is an optimum balance of transistor allocation for us. This normal core turns out to be 10^-6 human-equivalent in our model (the scaling arithmetic is sketched after this message).

If you compare our normal core to real systems, you find that CPU performance is substantially outstripping the memory performance we require. That said, such a system could be built in a few years simply by tweaking existing generic cores commonly used for custom systems (like ARM or MIPS) and connecting scads of them with a low-latency multi-dimensional interconnect. Since you could put a dozen of these cores on a real chip, the trick would be the memory system and interconnect for each core.

In short, the CPU is almost where we need it to be now, but the memory is still way behind the curve. By the time memory catches up enough to support human-equivalent intelligence, we'll have enough extra CPU that we'll have human-level intelligence running much faster than a normal human. Which is to say, if balanced for existing architectures, the result will land more on the fast-but-stupid side of the curve than on the slow-but-smart side.

Cheers,

-James Rogers
[EMAIL PROTECTED]
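The normal-core figures imply a straightforward scaling calculation. Here is a minimal sketch in Python that takes the post's numbers at face value (1 BIPS and 10^9 bytes per core, 10^-6 human-equivalent per core, a dozen cores per chip); everything below is just arithmetic on those stated figures, not additional data from the model.

    # Figures stated in the post:
    core_bips = 1.0          # 1 BIPS 32-bit integer core
    core_ram_bytes = 1e9     # 10^9 bytes RAM per core
    core_he_fraction = 1e-6  # one normal core = 10^-6 human-equivalent
    cores_per_chip = 12      # "a dozen of these cores on a real chip"

    # Scaling up to one human equivalent:
    cores_needed = 1.0 / core_he_fraction         # 1,000,000 cores
    total_bips = cores_needed * core_bips         # 10^6 BIPS = 10^15 int ops/s
    total_ram = cores_needed * core_ram_bytes     # 10^15 bytes = 1 petabyte
    chips_needed = cores_needed / cores_per_chip  # ~83,333 chips

    print(f"cores: {cores_needed:,.0f}")
    print(f"aggregate integer throughput: {total_bips:,.0f} BIPS")
    print(f"aggregate RAM: {total_ram:.0e} bytes")
    print(f"chips at a dozen cores each: {chips_needed:,.0f}")

The memory side of the output is the point of James's argument: a balanced human-equivalent system needs a petabyte of RAM with matching bandwidth and latency, and that is the component "way behind the curve" relative to aggregate integer throughput.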