Ben Goertzel wrote:
Richard,

I've been too busy to participate in this thread, but now I'll chip
in a single comment anyway, regarding the intersection between your
thoughts and Novamente's current work...

You cited the following four criteria:

"- Memory.  Does the mechanism use stored information about what it was
doing fifteen minutes ago, when it is making a decision about what to do
now?  An hour ago?  A million years ago?  Whatever:  if it remembers, then
it has memory.
"- Development.  Does the mechanism change its character in some way over
time?  Does it adapt?
"- Identity.  Do individuals of a certain type have their own unique
identities, so that the result of an interaction depends on more than the
type of the object, but also the particular individuals involved?
"- Nonlinearity.  Are the functions describing the behavior deeply
nonlinear?
"These four characteristics are enough. Go take a look at a natural system
in physics, or an engineering system, and find one in which the components
of the system interact with memory, development, identity and nonlinearity.
You will not find any that are understood."

Someone else replied:

I am quite sure there have been many AI systems that have had all four
of these features, that have worked pretty much as planned, and whose
behavior is reasonably well understood.

Actually, the Novamente Pet Brain system that we're now experimenting
with for controlling virtual dogs and other animals in virtual worlds
does include nontrivial

-- memory
-- adaptation/development
-- identity
-- nonlinearity

Each pet has its own memory (procedural, episodic and declarative) and
develops new behaviors, skills and biases over time; each pet has its
own personality and identity; and there is plenty of nonlinearity in
multiple aspects and levels.
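
To make that concrete, here is a toy sketch of the idea (made-up names
and numbers, just an illustration, not our actual code), in which a
single pet combines all four features in one decision loop:

    import random

    class ToyPet:
        # Toy illustration only -- made-up names, not our actual code.
        def __init__(self, name, seed):
            self.name = name                         # identity: a distinct individual
            self.rng = random.Random(seed)           # identity: its own idiosyncratic biases
            self.episodic_memory = []                # memory: this pet's own history
            self.skill = {"fetch": 0.1, "sit": 0.1}  # development: grows with practice

        def act(self, command, reward):
            # memory: the record of past episodes accumulates over this pet's lifetime
            self.episodic_memory.append((command, reward))
            # development: practice shifts the skill level over time
            self.skill[command] = min(1.0, self.skill[command] + 0.05 * reward)
            # nonlinearity: obedience is a steep, sigmoidal function of skill,
            # so small parameter changes can flip the qualitative behavior
            s = self.skill[command]
            p_obey = s**3 / (s**3 + 0.2**3)
            return self.rng.random() < p_obey

Two pets fed the identical command stream will still diverge, because
each one's seed and accumulated memory differ -- which is just the sort
of individuality we see in practice.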

Yet, this is really a pretty simplistic AI system (though built in an
architecture with grander ambitions and potential), and we certainly
DO understand the system's behavior to a reasonable level -- though we
can't predict exactly what any one pet will do in any given situation;
we just have to run the system and see.

I agree that the above four features, combined, do lead to a lot of
complexity in the "complex systems" sense.  However, I don't agree
that this complexity is so severe as to render implausible an
intuitive understanding, from first principles, of the system's
qualitative large-scale behavior based on the details of its
construction.  It's true we haven't done the math to predict the
system's qualitative large-scale behavior rigorously; but as system
designers and parameter tuners, we can tell how to tweak the system to
get it to generally act in certain ways.

And it really seems to me that the same sort of situation will hold
when we go beyond virtual pets to more generally intelligent virtual
agents based on the same architecture.

How does this relate to the original context in which I cited this list
of four characteristics?  It looks like your comments fall completely
outside that original context, so they don't add anything of relevance.

Let me bring you up to speed:

1) The mere presence of these four characteristics *somewhere* in a
system has nothing whatever to do with the argument I presented (this
was a distortion introduced by Ed Porter in one of his many fits of
misunderstanding).  Any fool could put together a non-complex system
with, for example, four distinct modules that each possessed one of
those four characteristics.  So what?  I was not talking about such
trivial systems, I was talking about systems in which the elements of
the system each interacted with the other elements in a way that
included these four characteristics.
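
To make the distinction concrete, here is a purely illustrative sketch
(hypothetical names, nobody's actual system) of the kind of element I
mean, where every interaction between elements passes through all four
characteristics at once:

    import math, random

    class Symbol:
        # Purely illustrative -- hypothetical names, nobody's actual system.
        def __init__(self, uid):
            self.uid = uid                 # identity
            self.history = {}              # memory: per-partner record of past interactions
            self.weight = random.random()  # development: changes with every interaction

        def interact(self, other):
            # memory AND identity: the outcome depends on the history with
            # this *particular* partner, not merely on the partner's type
            past = self.history.get(other.uid, 0.0)
            # nonlinearity: a saturating function of the coupled state
            signal = math.tanh(self.weight * other.weight + past)
            # development: both the element and the pairwise memory adapt
            self.weight += 0.01 * signal
            self.history[other.uid] = past + signal
            return signal

Pull any one of the four out of that interact() call and the problem
evaporates; leave them all in, tightly coupled, and you have the kind
of system I am talking about.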

So when you point to the fact that "somewhere" in Novamente (in a single
'pet' brain) you can find all of these, it has no bearing on the
argument I presented.  I was principally referring to these
characteristics appearing at the symbol level (and symbol-manipulation
level), not the 'pet brain' level.  You can find as much memory,
identity, and so on as you like in other sundry parts of Novamente, but
it will make no difference at the level I was pointing to.

2)  Even if you do come back to me and say that the symbols inside
Novamente all contain all four characteristics, I can only say "so what"
a second time ;-).  The question I was asking when I laid down those
four characteristics was "How many physical systems do you know of in
which the system elements are governed by a mechanism that has all four
of these, AND where the system as a whole has a large-scale behavior
that has been mathematically proven to arise from the behaviors of the
elements of the system?"

The answer to that question (I'll save you the trouble) is 'zero'.

The inference to be made from that fact is that anyone who does put
together such a system - anyone like, e.g., the fearless Mr. B. Goertzel -
is taking quite a bizarre and extraordinary position if he says that he
alone, of all people, is quite confident that his particular system,
unlike all the others, is quite understandable.

This is the point where we have been many times before:  when faced with
the possibility that your tangled(*) system might be just as vulnerable
to the problem as those thousands upon thousands of examples of complex
systems that are *not* understandable, you express great confidence with
words like:

... I don't agree
that this complexity is so severe as to render implausible an
intuitive understanding, from first principles, of the system's
qualitative large-scale behavior based on the details of its
construction.  It's true we haven't done the math to predict the
system's qualitative large-scale behavior rigorously; but as system
designers and parameter tuners, we can tell how to tweak the system to
get it to generally act in certain ways.

To the best of my knowledge, nobody has *ever* used "intuitive
understanding" to second-guess the stability of an artificial complex
system in which those four factors were all present in the elements in a
tightly coupled way.

So that is all we have as a reply to the complex systems problem:
engineers saying that they think they can just use "intuitive
understanding" to get around it.

Rots of ruck, as Rastro would say.



Richard Loosemore



(*) 'Tangled' just means that the elements of the system have those four
characteristics of memory, adaptation/development, identity,
nonlinearity, in tightly-coupled form.
