Richard,

I've been too busy to participate in this thread, but now I'll chip in
a single comment anyway, regarding the intersection between your
thoughts and Novamente's current work...

You cited the following four criteria:

> > "- Memory.  Does the mechanism use stored information about what it was
> doing fifteen minutes ago, when it is making a decision about what to do
> now?  An hour ago?  A million years ago?  Whatever:  if it remembers, then
> it has memory.
> >
> > "- Development.  Does the mechanism change its character in some way over
> time?  Does it adapt?
> >
> > "- Identity.  Do individuals of a certain type have their own unique
> identities, so that the result of an interaction depends on more than the
> type of the object, but also the particular individuals involved?
> >
> > "- Nonlinearity.  Are the functions describing the behavior deeply
> nonlinear?
> >
> > These four characteristics are enough. Go take a look at a natural system
> in physics, or an engineering system, and find one in which the components
> of the system interact with memory, development, identity and nonlinearity.
> You will not find any that are understood.

Someone else replied:

> > I am quite sure there have been many AI systems that have had all four of
> these features and that have worked pretty much as planned and whose
> behavior is reasonably well understood

Actually, the Novamente Pet Brain system that we're now experimenting
with for controlling virtual dogs and other animals in virtual worlds
does include nontrivial

-- memory
-- adaptation/development
-- identity
-- nonlinearity

Each pet has its own memory (procedural, episodic and declarative) and
develops new behaviors, skills and biases over time; each pet has its
own personality and identity; and there is plenty of nonlinearity in
multiple aspects and levels.
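To make the four features concrete, here is a purely illustrative toy sketch (my own invention, not Novamente code; all names and parameters are made up) of an agent that has per-individual identity, episodic memory, reward-driven development, and a nonlinear decision function:

```python
import math
import random
from collections import deque

class ToyPet:
    """Toy agent exhibiting memory, development, identity, nonlinearity.
    Purely illustrative; not the Novamente Pet Brain architecture."""

    def __init__(self, name, seed):
        self.name = name  # identity: each pet is a distinct individual
        rng = random.Random(seed)
        # identity: fixed per-pet personality biases drawn from the pet's own seed
        self.personality = {"playfulness": rng.random(),
                            "timidity": rng.random()}
        # memory: bounded episodic history of (stimulus, action) pairs
        self.memory = deque(maxlen=100)
        # development: these weights adapt with reward over the pet's lifetime
        self.weights = {"play": 0.5, "hide": 0.5}

    def decide(self, stimulus_intensity):
        # nonlinearity: tanh squashing of a personality-weighted drive
        drive_play = math.tanh(self.weights["play"]
                               * self.personality["playfulness"]
                               * stimulus_intensity)
        drive_hide = math.tanh(self.weights["hide"]
                               * self.personality["timidity"]
                               * stimulus_intensity)
        action = "play" if drive_play >= drive_hide else "hide"
        self.memory.append((stimulus_intensity, action))  # memory: record episode
        return action

    def reinforce(self, action, reward, rate=0.1):
        # development: simple reward-driven weight adaptation
        self.weights[action] += rate * reward
```

Two pets built from different seeds can respond differently to the same stimulus, and each one's behavior drifts with its own reward history, so the outcome of an interaction depends on the individuals involved, not just their type.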

Yet, this is really a pretty simplistic AI system (though built in an
architecture with grander ambitions and potential), and we certainly
DO understand the system's behavior to a reasonable level -- though we
can't predict exactly what any one pet will do in any given situation;
we just have to run the system and see.

I agree that the above four features, combined, do lead to a lot of
complexity in the "complex systems" sense.  However, I don't agree
that this complexity is so severe as to render implausible an
intuitive understanding, from first principles, of the system's
qualitative large-scale behavior based on the details of its
construction.  It's true we haven't done the math to predict the
system's qualitative large-scale behavior rigorously; but as system
designers and parameter tuners, we can tell how to tweak the system to
get it to generally act in certain ways.

And it really seems to me that the same sort of situation will hold
when we go beyond virtual pets to more generally intelligent virtual
agents based on the same architecture.

-- Ben G

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
