Hi,

> Without common interfaces, Novamente processes must have a common
> internal design, and I would contend that this is a large disadvantage.

But, it is not the case that Novamente processes must have a common
internal design....

> Can I convince you that it is sufficient for a process to be able to *ask*
> another process to change its parameters?

Well, the way the Novamente design is intended to work is: There is a
HomeostaticParameterAdaptation MindAgent, which tweaks the parameters
of the various cognitive processes, based on learned rules regarding
which parameter values best achieve system goals in the current
context.

In fact this is not yet implemented (though we had something similar
implemented in Webmind, so we have some experience with this sort of
thing), so it is not relevant to our current experimentation with NM.

So, the automated parameter tuning is centralized and the rules for
cross-mental-process parameter tuning are learned via the same
learning mechanisms utilized for other sorts of pattern recognition.
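The centralized tuning loop described above could be sketched roughly as follows. This is a toy illustration, not actual Novamente code: the class names, the `set_param` interface, and the rule format are all invented for the example. The point it demonstrates is that one central agent can adjust other processes' parameters purely by *asking* through a narrow interface, without knowing anything about their internals.

```python
# Hypothetical sketch of centralized homeostatic parameter tuning.
# All names here are invented for illustration.

class CognitiveProcess:
    """Any process exposing tunable parameters by name; internals stay private."""
    def __init__(self, name, params):
        self.name = name
        self.params = dict(params)

    def set_param(self, key, value):
        # The process itself validates the request.
        if key not in self.params:
            raise KeyError(f"{self.name} has no parameter {key!r}")
        self.params[key] = value


class HomeostaticParameterAdapter:
    """Central agent applying learned (context -> parameter-change) rules."""
    def __init__(self, rules):
        # rules: list of (predicate over system state, process name, key, value)
        self.rules = rules

    def step(self, system_state, processes):
        for predicate, proc_name, key, value in self.rules:
            if predicate(system_state):
                processes[proc_name].set_param(key, value)


# Usage: when memory pressure is high, shrink a (hypothetical)
# inference process's search width.
pln = CognitiveProcess("PLN", {"search_width": 100})
adapter = HomeostaticParameterAdapter(
    [(lambda s: s["memory_pressure"] > 0.8, "PLN", "search_width", 20)]
)
adapter.step({"memory_pressure": 0.9}, {"PLN": pln})
```

In a real system the rules would of course be learned rather than hand-coded, but the interface shape is the same either way.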

> I am *telling you* based upon experience
> that good modularity and encapsulation CAN be done without blocking any of
> the requirements that you've stated thus far.  I have also seen many cases
> where complex design and complex tasks are made much easier (and, in
> particular, where intractable problems were made tractable), through the use
> of encapsulation and modularization.  Are you, as a high-level AGI
> researcher, really telling me that modularization and encapsulation have no
> place in AGI design?

I think that these buzzwords "modularity" and "encapsulation" can be
interpreted in a lot of different ways....

In cognitive science, the hypothesis that the human mind is modular is
very often formulated in a way that I think is incorrect, and does not
do justice to the complex interadaptations between different aspects
of the human mind.

In AI, similarly, modularity is often discussed in the context of
"Society of Mind" type architectures which consist of multiple agents
that I feel are too loosely connected to each other.  I don't think
this is the right approach.

The way I understand a modular architecture for AI is as follows.  You
divide cognitive processing up into various functionalities, then for
each of these functionalities you make a "Requirements List"
specifying what kinds of problems the cognitive module carrying out
the functionality needs to be able to solve.  Then you define a
language for different modules to use to communicate with each other.
Then, you fill in *something* in each of the modules.  The idea is,
you can fill in anything in any of the boxes (modules), so long as it
can communicate according to the right protocol and solve the problems
associated with its box.
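The "fill in any box" scheme just described might be sketched like this. It is a toy illustration with invented names (not any particular system): each module is replaceable so long as it speaks the shared message protocol, regardless of its internals.

```python
# Toy sketch of "modules behind a common protocol" -- names invented.
from abc import ABC, abstractmethod


class Module(ABC):
    @abstractmethod
    def handle(self, message: dict) -> dict:
        """All inter-module traffic uses one shared message format."""


class NaiveMemory(Module):
    """One possible filler for the 'memory' box; any other
    implementation with the same protocol could replace it."""
    def __init__(self):
        self.store = {}

    def handle(self, message):
        if message["op"] == "put":
            self.store[message["key"]] = message["value"]
            return {"ok": True}
        # "get"
        return {"ok": True, "value": self.store.get(message["key"])}


memory: Module = NaiveMemory()
memory.handle({"op": "put", "key": "sky", "value": "blue"})
reply = memory.handle({"op": "get", "key": "sky"})
```

The appeal of this style is exactly that `NaiveMemory` could be swapped for something radically different without touching any other box.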

This all makes very pretty flow diagrams -- but, my contention is that
it's the wrong way to think about cognition.  Yes, given sufficient
computational resources, this could work....  But making AGI work
given limited computational resources requires one to pay massive
amounts of attention to inter-process dependencies.  Complex problem
solving usually has to do with subtle dependencies between what goes on
inside one of the boxes and what goes on inside another of the boxes,
and relies on the fact that the different boxes are communicating with
each other in real time -- and ultimately not acting as separate boxes
in any meaningful sense.

But this is all an argument against human intelligence (or human-level
AI based on contemporary computing resources) being modular in the
traditional sense.

On the other hand, in software design, of course modularity is
generally a good thing -- and it has many different manifestations.

> Perhaps you have a workable AGI design in accordance with these
> principles -- if so I would love to see it.

> Sure.  :-)  It's called Novamente.  Some of the middle-level implementation
> details are SERIOUSLY sub-optimal for the design, and it's really hurting the
> effort, but the high-level design and many of the low-level implementations
> (particularly all the BOA stuff) are kick-ass.

Ah.....  If we're talking about the Novamente implementation details,
that's a different story.  This list is not the place to argue about
that topic, since only a handful of people on this list have been
given access to those.

In fact, we (mainly Moshe, Ari and Cassio) have undertaken a
significant effort to create a nicer, cleaner set of interfaces for
MindAgents and Tasks to use to communicate with the Novamente core.
But we have deferred implementing these interfaces until we get a bit
more staff on board, or else at least until mid-fall, due to having
other priorities.

I don't know if these new interfaces for the core will make you
happier with the software design or not, but this is something I'd be
happy to have you involved in -- but not on a public email list.

But anyway, this change in the core architecture does NOT impact the
automated adjustment of MindAgents' parameters, as the latter is to be
carried out by the HomeostaticParameterAdaptation MindAgent...

> Breaking a system up into chunks but then allowing the chunks to violate
> boundary crossings is not a modular design.

The language we are both using in this discussion is not precise
enough for us to communicate very usefully, I'm afraid.   I guess that
to be useful this needs to become a much more technical and specific
Novamente-architecture discussion, off-list.

I do not think there is any useful sense in which the Novamente system
is "divided up into chunks that are then allowed to violate boundary
crossings."

MindAgents are encapsulated objects, but they act on a common
knowledge store (the AtomTable).
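As a toy illustration of this arrangement (hypothetical names and methods, not the real AtomTable API): the agents below are encapsulated, and their only shared state is the common knowledge store they read from and write to, in the style of a blackboard architecture.

```python
# Toy blackboard sketch: encapsulated agents over a shared store.
# All names are invented for illustration.

class AtomTable:
    """Common knowledge store shared by all agents."""
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)

    def find(self, predicate):
        return [a for a in self.atoms if predicate(a)]


class MindAgent:
    """Internals are private; agents interact only via the shared table."""
    def run(self, table: AtomTable):
        raise NotImplementedError


class PerceptionAgent(MindAgent):
    def run(self, table):
        table.add({"type": "percept", "value": "red ball"})


class InferenceAgent(MindAgent):
    def run(self, table):
        # Reads what other agents wrote, without knowing who wrote it.
        for percept in table.find(lambda a: a["type"] == "percept"):
            table.add({"type": "belief", "about": percept["value"]})


table = AtomTable()
for agent in (PerceptionAgent(), InferenceAgent()):
    agent.run(table)
beliefs = table.find(lambda a: a["type"] == "belief")
```

Note that neither agent ever touches the other's internals; the coupling is entirely through the shared store.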

MindAgents have parameters, which are to be tuned by the
HomeostaticParameterAdaptation MindAgent based on its analysis of the
overall system, its parameters and goals.   Is this what you mean by
boundary-crossing?  I don't really grasp this part of your
objection...

> Not having a reasonably
> standard interface (or five) but, instead, relying upon reasonably standard
> internals to serve instead is not a modular design (and prevents you from
> implementing things with radically different internals within the same
> design -- a huge downside).

But Novamente does NOT in any way rely on standard internals within
its cognitive processes.  I don't know why you think it does. The
internals of MOSES/BOA have nothing to do with the internals of PLN,
for example....   I am perplexed....

> Richard's "development environment" is a modular design framework.  If his
> current core ideas don't work, he could probably take the functioning chunks
> of Novamente, re-implement them fairly quickly within his framework (since
> all he'll be doing is rewriting the interfaces, not the internals), and be
> way far ahead of you using *your* own design.

I don't understand this at all.  How would Richard benefit by writing
interfaces corresponding to Novamente MindAgents, without rewriting
the AI code inside the MindAgents themselves?  It is the internals
that make the system smart....

I am afraid we are talking past each other, and would rather continue
this discussion F2F sometime...

> I am also *telling you* that you have tremendously raised your opportunity
> costs by not implementing your design in a modular and encapsulated fashion
> that not only has a tremendous base of experience shown to be wiser, but
> which will also make it vastly easier for Novamente to operate on itself
> in the future.

This conversation is difficult because you are accusing NM of not
correctly adhering to certain buzzwords, yet in a sufficiently loose
and fuzzy way that I can't really tell what aspect or level of the
design you're really complaining about.

If you can't clarify effectively in the medium of this list, then
perhaps we should continue this F2F or in private emails or a phone
call....   OTOH if you can clarify sufficiently in this forum, I'm
happy to continue the dialogue on this list...

I would like to better understand the specific nature of your
complaints about the architecture, especially if they pertain to the
implementation rather than the conceptual design, because we are
planning an overhaul of the interfaces to the core system soon anyway,
and your suggestions may be able to contribute to that.

But this dialogue does not seem to be very rapidly moving toward a
situation where I understand the real nature of your complaints...

-- Ben

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
