Steve Richfield wrote:
Richard,
On 6/5/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:
There are two completely different types of project that seem to get
conflated in these discussions:
1) Copying the brain at the neural level, which is usually assumed
to be a 'blind' copy - in other words, we will not know how it
works, but will just do a complete copy and fire it up.
I suspect that we will have to learn a LOT more to be able to make
something like this work, in part because we will need new theory in
order to compute parameters that we cannot directly measure.
1.5) Combining scanned information with mathematical constraints to
produce diagrams of "perfect" neurons, even though the precise
parameters of the real-world neurons are not fully scannable.
What scanned information? What mathematical constraints? What
'perfect' neurons?
The problem is that all of this requires you to do work to understand
how the system is functioning, because you cannot do something like
build a 'perfect' neuron unless you know what its functional role is,
and to do that you need to go right up into the high-level description
of the system .... and that means, in the end, that you have to do the
entire 'cognitive level' description of the brain *first*, then use it
to understand how neurons are being used (what functional role they are
playing).
For example: does the precise morphology of the dendritic tree matter
to the functioning of the neuron? Do you need to scan this information
in complete detail? I don't think you are going to be able to answer
this question until after you have understood how the signals exchanged
by neurons are being used (high level stuff).
Let me try to explain with an analogy. You are duplicating a space
shuttle without understanding how it works. You want to know if you can
use chewing gum for O-ring seals. Chewing gum is great, although it
does become hard and very brittle in cold weather... but
since you do not know what functional role these O-ring seals are
playing in the design of the whole system, you decide that maybe it is
okay to use chewing gum.
So, I don't disagree that there could be a 1.5 approach, but I see no
way that it is significantly different from the 2 approach.
2) Copying the design of the human brain at the cognitive level.
This may involve a certain amount of neuroscience, but mostly it
will be at the cognitive system level, and could be done without
much reference to neurons at all.
The last 40 years of fruitless AI show this to be pretty much a dead
end. There are simply too many questions that we don't even know enough
to ask.
This is not true. The last 40 years of AI have been almost completely
unrelated to this 'cognitive' approach. Over the years, the vast
majority of AI researchers have subscribed to the following credo: "We
intend to build an intelligent system, but although we might take some
ideas or inspiration from how the human mind works, we feel no
obligation to copy the human design because we believe that intelligence
does not have to be done that way."
I was specifically drawing a distinction between two different ways to
build an intelligence in a way that stays close to the human design.
The regular AI approach is neither of these two.
2.5) First understand how we think with neurons, then program computers
to perform the same or better directly, without reference to neurons or
their equivalents.
This misses the point. Cognitive level approaches do not have to reduce
anything to neurons (at least, not in a significant way), so starting
with understanding "how we think with neurons" doesn't make much sense.
If you leave out the specific reference to neurons, what you have is
the cognitive level again.
Both of these ideas are very different from standard AI, but they
are also very different from one another. The criticisms that can
be leveled against the neural-copy approach do not apply to the
cognitive approach, for example.
My more "real" 1.5 and 2.5 proposals require nearly the same levels of
understanding, and ultimately lead to very similar results as
"simulation" gives way via optimization to the same sort of code as
direct AGI programming would utilize. In short, I suspect that both
paths will ultimately lead to approximately the same final result. Sure,
we can argue about which path is best, but "easiest wins" usually rules.
You are not addressing the distinction that I made, though.
It is frustrating to see commentaries that drift back and forth
between these two.
My own position is that a cognitive-level copy is not just feasible
but well under way, whereas the idea of duplicating the neural level
is just a pie-in-the-sky fantasy at this point in time (it is not
possible with current or on-the-horizon technology, and will
probably not be possible until after we invent an AGI by some other
means and get it to design, build and control a nanotech brain
scanning machine).
There is nothing in the above sentence that I can agree with, from which
to state objections to the remainder! Some of it may turn out to be
correct, but too little is known, and no one is even building the lab
equipment needed to determine just WHAT the situation actually is.
However, I believe that the whole "thinking" thing involves processes
that no one here will EVER guess without learning more about biological
brains - if nothing more than the mathematics of their operation. That
said, your next paragraph asks some of the right questions, showing that
sometimes it is possible to get to the correct place even though the
path there is severely flawed.
You say "I believe that the whole "thinking" thing involves processes
that no one here will EVER guess without learning more about biological
brains" .... but this is an argument from ignorance. Are you aware of
cognitive psychology? Do you know how much is known already? I could
write down an entire two-volume textbook packed with information about
what we have learned already about the "thinking thing", along with
descriptions of how to implement it in the form of an AGI system. And
yet with one wave of the hand you say that none of that information even
exists!
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/