I just want to make one observation on this whole thread, since I have
no time for anything else tonight.
People are riding roughshod over the things that I have actually said.
In some cases this involves making extrapolations to ideas that people
THINK that I was saying, but which I have never said at all, and they
then proceed to disagree, dispute or prove wrong those things.
Okay, just one quick example.
I have never said that the entire structure of an AGI is so
complexity-bound that we "cannot understand it" ..... where 'it' is
supposed to be something like the whole shebang, or most of it, or a
very significant chunk of it.
Bad bad bad bad bad. I only seek to establish that there is *some*
complexity. There might only be one tricky little aspect of an
intelligent system that is 'complex'.
The kick, however, is in the *way* that this one little thing can
manifest itself.
I have occasionally used examples to illustrate this, so I will bring up
one of those now. Suppose that nature, in her extensive random
explorations of the design space, discovered that there was only one
type of 'symbol' design that would actually work. That design has a set
of arbitrary parameters inside of it, and these arbitrary parameters are
manipulated by a bunch of non-obvious mechanisms that, when they interact
with one another in a complex/tangled way, cause symbols to develop in a
fully grounded way, cooperate with one another during thinking
processes, do analogy, etc etc etc.
We will suppose, for the sake of argument, that nature also discovered
that this needs to happen in the context of a relaxation-type system,
rather like the thing that I have referred to as a generalized connectionist
system.
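(For anyone unfamiliar with what I mean by "relaxation-type": the general idea is that units repeatedly adjust their states to satisfy soft constraints imposed by their neighbors, until the whole network settles into a stable configuration. Here is a toy sketch of that dynamic — the particular network, weights, and update rule are invented purely for illustration, and are emphatically not the symbol design I am talking about:)

```python
# Toy relaxation network: binary units flip their states to satisfy
# pairwise constraints (weights) until nothing wants to change.
# Topology and weights are arbitrary, chosen only for illustration.

def relax(weights, state, max_steps=100):
    """Asynchronously update units until the network settles."""
    n = len(state)
    for _ in range(max_steps):
        changed = False
        for i in range(n):
            # Net input: weighted sum of the other units' states.
            net = sum(weights[i][j] * state[j] for j in range(n) if j != i)
            new = 1 if net >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:  # settled: every unit's constraint is satisfied
            break
    return state

# Units 0 and 1 "want" to agree; unit 2 "wants" to disagree with both.
w = [[ 0,  1, -1],
     [ 1,  0, -1],
     [-1, -1,  0]]
print(relax(w, [1, -1, 1]))  # settles at [-1, -1, 1]
```

The point of the sketch is only that the final state is a *global* outcome of many local interactions, not something you can read off from any single unit's rule.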
Now, in all other respects it may be possible to emulate this natural
system with some other architecture, but it may well be that the
*particular* global characteristics that we lump together under the
general heading of 'intelligence' are purely and only a result of that
one design. In other words, you might be able to build something that
partially resembled such a system, but it just turns out that WHEN YOU
TRY TO PUT ALL THE PIECES TOGETHER (sorry for the shouting, but this is
important) the pieces simply will not cooperate to yield a fully
coherent, autonomous intelligence UNLESS you use that one little design
trick that nature put in there.
So, you might build an AGI in which everything looks as though it should
work (and you might even choose a design that was mostly like the human
design), and most of that design will be completely non-complex and nice
and transparent - so most people could look at it and say "Yeah, that
ought to work ... that Loosemore guy was obviously wrong, because now we
seem to have got almost all of intelligence wrapped up inside some
mechanisms that are all pretty much understandable, with only a little
bit of complexity".
But when you switched the thing on, it would be "almost but not quite".
The symbols would grow for a while, but then it would just not be able
to build them past a certain level of abstraction in a reliable way.
Or, it would go through quite rational thought processes at first, but
have a slight drift that made it become more and more irrational as time
went on. Who knows how it would manifest itself?
The point is that one tiny little piece of the puzzle, which happened to
be crucial, was structured in such a way that its design did not LOOK as
though it would work, for the simple reason that it works by complex
interactions.
So do not put words into my mouth and try to make it seem that I am
saying that the entire structure of an AGI has got to be "mysterian" for
it to work.
So that is all I am claiming, in the end. It will take only one small
factor, like that hypothetical example above, to make your entire AGI
quest (the last fifty years, and the next fifty years if you carry on
being stubborn about it) into one great big Sisyphean crapshoot.
Hey, more when I get time.
I haven't had time to read Mark's long posts: sorry Mark, I'll get to
that tomorrow, hopefully.
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/