Matt,

On 4/17/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:

Before giving my detailed comments, I'd like to comment that people who have
spent decades in wet laboratories know a **LOT** more than they are willing
to write down. Why? THEIR culture is to not write things down until they can
PROVE them with CAPTURED laboratory EVIDENCE. Hence, a researcher may notice
that he must look at ~200 synapses to find one that actually has an efficacy
> 0, but since there is no practical way of capturing and proving those 200
observations (and who is ever sure exactly WHAT their sub-micron electrode is
connected to in living tissue?), they don't dare publish these numbers.
However, their
culture doesn't seem to prohibit them from TALKING about these things, and
THAT is how I have come up with the numbers that I use - and have absolutely
NO written evidence to support. However, if you are REALLY interested in some
of them, I could probably put you in contact with someone who would be
willing to TALK about them from first-hand experience.


> The Blue Brain project estimates 8000 synapses per neuron in mouse cortex.


I haven't read the report, but I presume that is a PHYSICAL number developed
from microscopy. Typically, <1% have efficacies > 0.
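
To make the arithmetic explicit (a rough sketch in Python; the 8000 figure is
the Blue Brain number quoted above, and the 1-in-200 active fraction is the
anecdotal lab figure mentioned earlier, not a published value):

# Physical vs. functionally active synapses per neuron.
physical_synapses_per_neuron = 8000         # Blue Brain count quoted above
active_fraction = 1.0 / 200                 # anecdotal "1 in ~200", i.e. "<1%"
print(physical_synapses_per_neuron * active_fraction)   # ~40 active synapses per neuron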

> I haven't seen a more accurate estimate for humans, so your numbers are
> probably as good as mine.


Most came from William Calvin. The contact info on his web site actually
gets to him, so that would be a good place to start for refinement.

> I estimate 10^11 neurons,


90% of which are glial cells and not (technically) neurons at all, though
all we care about is whether or not they compute.


> 10^15 synapses (1 bit each)


They appear to be performing rather precise analog computation. While there
is a lot of noise in voltage, there is much less noise in current/ions.
Further, either you derive the interconnections by fractal means, which means
that you need many more synapses, or you must store the topology explicitly,
so you'll need a LOT more than 1 bit each. Either way, you'll have to allow
for much more than one bit for the interconnection, plus a lot more for
characteristics that may involve time-dependent behavior like differentiation
(e.g. for temporal adjustments, as in antique RTL logic) and integration
(e.g. for averaging to detect low-level phenomena).
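
As a rough sketch of what "storing the topology" costs (in Python; the 8-bit
weight and 8-bit timing-state resolutions are pure assumptions for
illustration, not measured values):

import math

# Storage per synapse if connectivity must be stored explicitly, using the
# 10^11 neurons / 10^15 synapses figures from this thread.
neurons  = 1e11
synapses = 1e15

address_bits = math.log2(neurons)   # ~37 bits just to name the target neuron
weight_bits  = 8                    # assumed analog efficacy resolution
timing_bits  = 8                    # assumed differentiation/integration state

bits_per_synapse = address_bits + weight_bits + timing_bits
print("%.0f bits/synapse, ~%.0e bits total" % (bits_per_synapse,
                                               bits_per_synapse * synapses))
# ~53 bits/synapse and ~5e16 bits total, versus 10^15 bits at "1 bit each"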

> and a
> response time of 100 ms, or 10^16 OPS to replicate the processing of a
> human
> brain.


Glial cells constitute 90% of the brain and are MUCH slower than this.
However, there is a double-pulse mechanism in spiking neurons that provides
millisecond notice of significant events. Hence, more analysis is probably
needed here, but this number could be off by an order of magnitude either
way.
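
A quick sensitivity check on the 10^16 OPS figure (a Python sketch; the 1 ms
and 1 s cases are assumptions, used only to show how the estimate moves with
the assumed timescale):

# The quoted estimate is essentially synapse count divided by response time.
synapses = 1e15

for label, response_time_s in [("100 ms (quoted)",           0.1),
                               ("1 ms (double-pulse timing)", 0.001),
                               ("1 s (slow glial processes)", 1.0)]:
    print("%-28s ~%.0e synapse-ops/sec" % (label, synapses / response_time_s))
# 1e16, 1e18, and 1e15 respectively -- the estimate scales directly with
# whatever timescale you assume actually matters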


> The memory requirement is considerably higher than the information content
> of
> long term memory estimated by Landauer [1], about 10^9 bits.


Apples and oranges. You are comparing "completed" memory with the
work-in-progress needed to figure out just what to "remember". Computers, of
course, also show large ratios between the RAM needed to develop data and the
space needed to store the final result.
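
Stated as a ratio (a trivial Python sketch using the two figures quoted in
this thread):

# Raw synaptic storage vs. Landauer's long-term memory estimate, i.e. the
# "work-in-progress" capacity vs. the "completed" memory it eventually yields.
synaptic_bits  = 1e15   # 10^15 synapses at 1 bit each, as quoted above
long_term_bits = 1e9    # Landauer's estimate of retained long-term memory
print("~%.0e : 1" % (synaptic_bits / long_term_bits))   # ~1e6 : 1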

> This may be due
> to the constraints of slow neurons, parallelism, and the pulsed binary
> nature
> of nerve transmission.  For example, the lower levels of visual processing
> in
> the brain involve massive replication of nearly identical spot filters
> which
> could be simulated in a machine by scanning a small filter coefficient
> array
> across the retina.  It also takes large numbers of nerves to represent a
> continuous signal with any accuracy, e.g. fine motor control or
> distinguishing
> nearly identical perceptions.


William Calvin and I had a long-standing "argument" about this. Finally, we
sat on his pea-gravel-covered roof and pitched pea gravel at a target while
having our arms blocked at various points by the other person. This was to
separate that theory from mine, that motions are a sort of successive
approximation, where groups of neurons watch what we are doing and send
corrective signals. If the massive-replication theory were correct, even a
small interruption of movement would have caused a huge error in accuracy,
whereas if the successive-approximation theory were correct, we would only
lose some of the very last corrections, for a small loss in accuracy. You
might try this experiment yourself, but it was pretty clear to us that we
lost amazingly little accuracy by having our throws physically interrupted.

> However my work with text compression suggests that the cost of modeling 1
> GB
> of text (about one human lifetime's worth) is considerably more than a few
> GB
> of memory.  My guess is at least 10^12 bits just for ungrounded language
> modeling.  If the model is represented as a set of (sparse) graphs,
> matrices,
> or neural networks, that's about 10^13 OPS.
>
> Remember that the goal of AGI is not to duplicate the human brain, but to
> do
> the work that humans are now paid to do.  It still requires solving hard
> problems like language, vision, and robotics, which consume a significant
> fraction of the brain's computing power.


This all sounds SO much like the 1960s mantra from Carnegie Mellon. At
minimum, it would seem necessary to distill just what it was that they got
wrong that present AGI folk now have right. If I were an investor, this would
be the FIRST thing that I would want to hear and understand.

> But what matters is that the cost of
> AGI be less than human labor, currently US $10K per year worldwide and
> growing
> at 3-4% (5% GDP growth - 1.5% population growth).  If my guess is right
> and
> Moore's law continues (halving costs every 1.5 to 2 years),


Moore's law presumed a relatively unchanging architecture and rapidly
advancing fabrication. This has broken down, now that transistors can easily
be made SO small that the electrons jump right over the gates. Sure, there
will be further developments, e.g. multi-layer, but the easy stuff that
Moore's law was built on is now GONE.

However, architecture is still in a state of arrested development, with a
~10,000:1 improvement just sitting there and waiting to be taken. Think
"active memory".

> then AGI is at
> least 10-15 years away.
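
For reference, the quoted 10-15 year figure appears to follow from simple
doubling arithmetic like this (a Python sketch; the $10M starting cost for
brain-scale hardware is a pure placeholder assumption, not Matt's number):

import math

# Years until brain-scale hardware undercuts a year of human labor, assuming
# costs halve every 1.5-2 years. The $10M starting cost is a placeholder.
labor_cost_per_year  = 10e3    # US$, the quoted worldwide average
assumed_current_cost = 10e6    # hypothetical cost of ~10^16 OPS hardware today

halvings = math.log2(assumed_current_cost / labor_cost_per_year)   # ~10
for halving_time_years in (1.5, 2.0):
    print("%.1f-year halving: ~%.0f years" % (halving_time_years,
                                              halvings * halving_time_years))
# ~15 and ~20 years under these assumptions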


The proposed architecture that Josh and I have been discussing could bring
this to the market for about the same cost as a PC in a couple of years -
with adequate funding.

> If it actually turns out there are no shortcuts to
> simulating the brain, then it is 30 years away.


IMHO, one of two things will happen:
1.  The Christians will prevail and this will NEVER EVER be allowed to
happen, or,
2.  Some rich benefactor will step forward and make this happen over the
loud objections of millions of the devoutly religious.

An argument that I have used with a number of Christians, that you might
want to "keep in your pocket" for use as needed:

1.  If AI, live-forever machines, etc., really work, then just THINK what
that will tell us about God and religion!
2.  If these FAIL to work and we UNDERSTAND exactly why they cannot be made
to work, then the point of failure will be THE point where God must be doing
whatever it is that God does.

Hence, for religious people even more than atheists, this is VERY important
because it may finally give religion the good-science support that it has
lacked for SO long. View these efforts as a sort of atom-smasher for
understanding reality, rather than relying on unsupported words in old dusty
books.

Steve Richfield
