Joshua Fox wrote:
Greetings, I am new to the list. I hope that the following question adds something of value.

Estimates for the total processing speed of intelligence in the human brain are often used as crude guides to understanding the timeline towards human-equivalent intelligence.

Would someone venture to guesstimate -- even to within a couple of orders of magnitude -- the total processing speed of higher-order cognitive functions, in contrast to lower-order functions like sensing and actuation? (Use any definition of "higher" and "lower" order which seems reasonable to you.)

I appreciate the problems with estimating human-equivalent intelligence based on raw speed, and I recognize that tightly integrated lower-order functionality may be essential to full general intelligence. Nonetheless, it would be fascinating to learn, e.g., that the "core" of human intelligence uses only 1% of the total power estimated for the brain. That would suggest that /if/ lower-order functions can be "outsourced" to the many projects now working on them, and offloaded at runtime to remote systems, then human-order raw power may be closer than we thought.

Joshua


Joshua,

I recently addressed a similar issue on the SL4 list, so here is an expanded version of my calculation for what I think is involved in higher order processing. My thoughts were geared towards estimating when the hardware would be available. (Answer: yesterday.)

1) Quick Introduction

The basis for these calculations is the idea that the human cognitive system does all of its real work by keeping a set of elements simultaneously active and allowing them to constrain one another. Simple enough idea. Basis of neural nets, actors, etc.

Then, starting with this idea, I use the fact that the brain is organized into cortical columns, and I (cautiously) hypothesize that these columns could be implementing a grid of cells on which the elements live while they are active. This lets us start talking about possible numbers for the simultaneously active elements and their operating timescale.

Finally, notice that a good chunk of the cortical column real estate is probably devoted to visual processing. Now, some of this would not just be doing data-driven processing (which would come under the heading of "peripheral" work, which we want to keep out of the calculation) but interactive processing that includes top-down constraints. It is difficult to say how much of this visual processing really counts as higher-order thought, but my guess would be that some fraction of it does not.

2) The Calculation Itself

Approximate number of cortical columns: 1,000,000. If each of these hosts a single concept, but the columns also provide a facility for moving a concept from one column to the next in real time, to allow concepts to make transient connections to near neighbors, then most of them may be available purely for liquidity purposes (imagine a Chinese sliding-block puzzle on a large scale... more empty blocks means more potential for the blocks to move around, and hence greater liquidity). So the number of simultaneously active processes will be much less than 1,000,000.

My use of the cortical column idea is really just meant as an upper bound: I am not committed to this interpretation of what the columns are doing.

Second datum to use: the sensorium (the sum total of what is actively involved in our current representation of the state of the world and the content of our abstract thoughts) is likely to contain much less than 1,000,000 simultaneously active concepts. Why? Mostly because the contents of a good sized encyclopaedia would involve less than a million concepts, and we barely have enough words in our language for that many distinct, nameable concepts. It is hard to believe that we keep anything like that many concepts active at once.

Using the above two factors, we could hazard a guess at perhaps as few as 10,000 simultaneously active high-level concepts, not a million. My gut feeling is that this is a conservative estimate (i.e. too high).
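
To make the arithmetic explicit, here is that guess as a trivial back-of-envelope calculation (in Python just for concreteness; the occupancy fraction is simply my 10,000 figure restated as an assumption, not an independent datum):

    # Back-of-envelope count of simultaneously active high-level concepts.
    # Every number here is a hypothesized figure from the text, not
    # established neuroscience.

    cortical_columns = 1_000_000   # rough count of columns in human cortex

    # Most columns are assumed to be empty "liquidity" slots that let
    # active concepts migrate between near neighbors, so only a small
    # fraction host an active concept at any one moment.
    active_fraction = 0.01         # assumed: ~1% of columns busy at once

    active_concepts = int(cortical_columns * active_fraction)
    print(active_concepts)         # -> 10000, the figure used below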

Further suppose that the function of concepts, when active, is to engage in relatively simple interactions with their neighbors, so as to carry out a simultaneous relaxation along several dimensions. When the concepts are not active they have to go through different sorts of calculations (debriefing after an episode of being used), and when they are being activated they have to (effectively) travel from their home column to where they are needed. Considering these "other" computations together, we notice that the cortical column may implement multiple functions that do not need to be simultaneously active.
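
As a concrete illustration of what "simultaneous relaxation" could mean computationally, here is a toy sketch; the averaging rule, the neighbor count, and the step size are all illustrative assumptions, not a claim about the actual cortical update:

    import random

    # Toy "simultaneous relaxation": each element holds a state in [0, 1]
    # and repeatedly nudges itself toward the mean of its neighbors, so
    # the whole set settles into a mutually consistent configuration.

    N_ELEMENTS = 10_000    # the active-concept estimate from above
    N_NEIGHBORS = 8        # assumed: each element constrains a few others
    RATE = 0.5             # assumed relaxation step size

    state = [random.random() for _ in range(N_ELEMENTS)]
    neighbors = [random.sample(range(N_ELEMENTS), N_NEIGHBORS)
                 for _ in range(N_ELEMENTS)]

    def relax_step(state):
        # One synchronous update: every element moves toward the mean of
        # its neighbors' states (the "constrain one another" step).
        new_state = []
        for i, s in enumerate(state):
            target = sum(state[j] for j in neighbors[i]) / N_NEIGHBORS
            new_state.append(s + RATE * (target - s))
        return new_state

    for _ in range(50):    # iterate until the network settles
        state = relax_step(state)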

Now, all of the above functions are consistent with the complexity and layout of the columns. Notice that what is actually being computed is relatively simple, but because of the nature of the column wiring it takes a good deal of wiring to implement ... so the columns look computationally demanding, but when implemented in silicon the functionality is not nearly as difficult.

Finally, when implementing these 10,000 processes in silicon, take account of the relative clock speeds and you can probably simulate 100 to 1000 of the processes simultaneously on a single board, if you use FPGA hardware (like one of the Celoxica boards that Hugo de Garis is making such good use of).

The exact amount of hardware required depends on the complexity of the function computed by each element, and on the bandwidth requirements.
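
To show where a figure like "100 to 1000 per board" could come from, here is the time-multiplexing arithmetic under some stated assumptions (the clock rate, update timescale, and cycle count are all illustrative guesses on my part):

    # Rough time-multiplexing arithmetic behind the "100 to 1000 elements
    # per board" figure. Every number is an assumption for illustration.

    fpga_clock_hz = 100e6       # assumed FPGA clock: ~100 MHz
    concept_update_hz = 100     # assumed cognitive timescale: ~10 ms/update
    cycles_per_update = 10_000  # assumed cost of one element's update

    updates_per_second = fpga_clock_hz / cycles_per_update       # 10,000
    elements_per_board = updates_per_second / concept_update_hz  # 100

    print(elements_per_board)   # -> 100.0; a cheaper update function
                                # pushes this toward the 1000 end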

But assuming that one FPGA board can reliably maintain only 100 to 1000 elements, that implies a computational requirement of between 10 and 100 desktop machines with one $6,000 FPGA card in each one. (Obviously that is just the cognitive core: you'd need peripherals as well).

Assuming 100 machines rather than 10 (i.e. erring on the conservative side again), and another fifty equivalent machines for peripherals, that would put the AGI hardware cost at roughly $1 million.
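
Spelled out (the card price is the figure quoted above; the host-machine price is my assumption):

    # The cost estimate, spelled out.

    core_machines = 100          # conservative end of the 10-to-100 range
    peripheral_machines = 50
    fpga_card_cost = 6_000       # per-card figure quoted above
    host_machine_cost = 1_000    # assumed price of a commodity desktop

    total = (core_machines + peripheral_machines) * (fpga_card_cost
                                                     + host_machine_cost)
    print(total)                 # -> 1050000, i.e. roughly $1 million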

3) Timeline

When will this become available? If you have a million dollars to spare, it already is.

How long has this been available? If money were no object, it was available a couple of decades ago.

4) Postscript

I really think this is too conservative. I don't believe there are as many as 10,000 concepts involved, but only a few thousand. A few hundred is too small, but between one and a few thousand seems about right.

What this implies is that the big obstacle is not how much hardware you have got, but how you use it. My personal opinion is that AGI could have been achieved a couple of decades ago, and that what stopped it from happening was a lack of understanding of the nature of the software problem. That lack of understanding persists.

Or, as Bananarama might have said: It Ain't What You Got, It's The Way That You Do It.


Richard Loosemore.