Josh,

On 6/2/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
>
> One good way to think of the complexity of a single neuron is to think of it
> as taking about 1 MIPS to do its work at that level of organization. (It has
> to take an average 10k inputs and process them at roughly 100 Hz.)


While a CNS spiking neuron may indeed have this sort of bandwidth (though
perhaps only ~200 of its inputs are active at any one time), the glial cells
that make up ~90% of the brain's cells are MUCH slower.

There appear to be various approaches for trimming the computation needed to
emulate a neuron, though there remains so much uncertainty about what neurons
are actually doing that at best you can only estimate the exponent. I suspect
that the computation could be trimmed by an easy order of magnitude with
clever programming, as sketched below.
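
To put rough numbers on that: the sketch below assumes binary spikes and a
plain weighted sum (both simplifications, and the ~200 active inputs is my
guess from above), just to show where the easy order of magnitude comes from.

```python
# Back-of-envelope sketch: a dense update touches every synapse every tick,
# while an event-driven update touches only the synapses that actually
# received a spike. All numbers are the estimates from the discussion above.
import numpy as np

N_INPUTS = 10_000      # inputs per neuron (Josh's figure)
RATE_HZ = 100          # update rate (Josh's figure)
ACTIVE = 200           # inputs active on a given tick (my guess above)

rng = np.random.default_rng(0)
weights = rng.normal(size=N_INPUTS)

def dense_update(spikes):
    """Touch all 10k synapses: ~1e6 multiply-adds/s at 100 Hz (~1 MIPS)."""
    return weights @ spikes

def event_driven_update(active_idx):
    """Touch only the synapses that fired: ~2e4 adds/s at 100 Hz."""
    return weights[active_idx].sum()

# One tick: the same membrane input computed both ways.
active_idx = rng.choice(N_INPUTS, size=ACTIVE, replace=False)
spikes = np.zeros(N_INPUTS)
spikes[active_idx] = 1.0
assert np.isclose(dense_update(spikes), event_driven_update(active_idx))

print("dense ops/s:       ", N_INPUTS * RATE_HZ)   # 1,000,000
print("event-driven ops/s:", ACTIVE * RATE_HZ)     # 20,000 (~50x fewer)
```

Even if per-event bookkeeping eats part of that ~50x, an order of magnitude
looks safe.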

> This is essentially the entire processing power of the DEC KA10, i.e. the
> computer that all the classic AI programs (up to, say, SHRDLU) ran on. One
> real-time neuron equivalent. (Back in 1970 it was a 6-figure machine --
> nowadays, same power in a 50-cent PIC microcontroller.)


For an underwater sound classification system, I once showed how the task
could be performed by a single real-world-capability neuron. The good news
is that if you really get it right, they each do a LOT.

> A neuron does NOT simply perform a dot product and feed it into a sigmoid.
> One good way to think of what it can do is to imagine a 100x100 raster
> lasting 10 ms. It can act as an associative memory for a fairly large number
> of such clips, firing in an arbitrary stored pattern when it sees one of
> them (or anything "close enough").
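
To make the quoted picture concrete, here is a minimal sketch. The specifics
are all my assumptions, not Josh's: binary rasters, Hamming distance as the
"close enough" test, and an arbitrary clip count, radius, and output width.

```python
# Minimal associative memory over 100x100 binary "clips": fire the stored
# output pattern of the nearest stored clip, if any is close enough.
import numpy as np

CLIP_BITS = 100 * 100          # one 10 ms raster, flattened
THRESHOLD = 500                # assumed "close enough" radius, in bits

rng = np.random.default_rng(1)
stored_clips = rng.integers(0, 2, size=(50, CLIP_BITS), dtype=np.uint8)
stored_outputs = rng.integers(0, 2, size=(50, 64), dtype=np.uint8)

def recall(clip):
    """Return the stored pattern of the nearest clip, or None if too far."""
    dists = (stored_clips ^ clip).sum(axis=1)   # Hamming distances
    best = int(dists.argmin())
    return stored_outputs[best] if dists[best] <= THRESHOLD else None

# A noisy copy of clip 7 (200 flipped bits) still recalls clip 7's output:
# random clips sit ~5000 bits apart, so the match is unambiguous.
probe = stored_clips[7].copy()
flip = rng.choice(CLIP_BITS, size=200, replace=False)
probe[flip] ^= 1
assert np.array_equal(recall(probe), stored_outputs[7])
```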


Further, its inputs often incorporate differentiation or integration, and its
inhibitory synapses usually apply complex non-linear, often discontinuous
functions.
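
A minimal sketch of that kind of per-input processing, with assumed
functional forms (a differencing synapse, a leaky integrator, and an
inhibitory cutoff that shunts the output discontinuously). This is an
illustration of why "dot product into a sigmoid" undersells the hardware,
not a validated neuron model.

```python
DT = 0.01  # 10 ms tick, matching the ~100 Hz figure above

class DiffSynapse:
    """Responds to the rate of change of its input (differentiation)."""
    def __init__(self): self.prev = 0.0
    def step(self, x):
        out, self.prev = (x - self.prev) / DT, x
        return out

class IntegSynapse:
    """Leaky running sum of its input (integration)."""
    def __init__(self, leak=0.9): self.acc, self.leak = 0.0, leak
    def step(self, x):
        self.acc = self.leak * self.acc + x * DT
        return self.acc

def shunting_inhibition(drive, inhib, cutoff=1.0):
    """Discontinuous inhibitory effect: above the cutoff, the output is
    clamped to zero no matter how strong the excitatory drive is."""
    return 0.0 if inhib >= cutoff else drive / (1.0 + inhib)

diff, integ = DiffSynapse(), IntegSynapse()
for t, (x1, x2, inh) in enumerate([(0.0, 1.0, 0.0),
                                   (0.5, 1.0, 0.0),
                                   (0.5, 1.0, 2.0)]):  # inhibition kicks in
    drive = diff.step(x1) + integ.step(x2)
    print(t, shunting_inhibition(drive, inh))
```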

Compared to that, the ability to modify its behavior based on a handful of
> global scalar variables (the concentrations of neurotransmitters etc) is
> trivial.


The REAL problem with functionality is that neuroscientists are loath to talk
about what they have seen but cannot prove exists. This puts a ~40-year gap
between early observations and their reaching AGIers through the popular
press.

> Not simple -- how many ways could you program a KA10? But limited
> nonetheless. It still takes 30 billion of them to make a brain.


I suspect that the job could be done with "only" a billion or so of them,
though I have no idea how to interconnect or power them.

Note that modern processors are ~3 orders of magnitude faster than a KA10,
and my 10K architecture would provide another 4 orders of magnitude, for a
net improvement over the KA10 of ~7 orders of magnitude. Perhaps another
order of magnitude would flow from optimizing the architecture to the
application rather than emulating Pentiums or KA10s. That leaves us just one
order of magnitude short, and we can easily make that up by using just 10 of
the 10K architecture processors. In short, we could emulate human-scale
systems in a year or two with adequate funding. By that time, process
improvements would probably allow us to make such systems on single wafers,
at a manufacturing cost of just a few thousand dollars.
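
The same arithmetic, spelled out. Every number below simply restates the
estimates above; none of it is measured data.

```python
ka10_neuron_equiv = 1        # one KA10 ~ one real-time neuron (Josh's figure)
modern_cpu_vs_ka10 = 10**3   # modern processor: ~3 orders faster than a KA10
arch_10k_gain = 10**4        # my 10K architecture: ~4 more orders
app_specific_gain = 10**1    # optimizing for the application, not emulation

per_chip = (ka10_neuron_equiv * modern_cpu_vs_ka10
            * arch_10k_gain * app_specific_gain)
print(per_chip)              # 1e8 neuron-equivalents per processor

brain_estimate = 10**9       # my "only a billion or so" neurons
print(brain_estimate // per_chip)  # 10 processors cover the last order
```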

Steve Richfield


