J. Storrs Hall, PhD. wrote:
My best ideas at the moment don't have one big space where everything sits,
but something more like a Society of Mind where each agent has its own space.
New agents are being tried all the time by some heuristic search process, and
will come with new dimensions if that does them any good. Equally important
is to collapse dimensions in higher-level agents, forming abstractions.
Let's say I'm playing tennis. I want to hit a backhand. I have an agent for
each joint in my arm that reads proprioceptive info and sends motor signals.
Each one of these knows a lot about the amount of effort necessary to produce
a given angle or acceleration at that joint, as a function of the existing
position, tiredness of the muscle, and so on. This info is essentially an
interpolation of memories.
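To make "interpolation of memories" concrete, here is a minimal sketch (all
names and numbers are illustrative assumptions, not from the post): a joint
agent stores remembered situations as points in its own space and answers
queries by distance-weighted averaging over nearby memories.

```python
import math

# Hypothetical memory store for one joint agent. Each entry maps a point in
# the agent's space -- (current_angle, fatigue, target_angle) -- to the motor
# effort that worked in that remembered situation.
memories = [
    ((0.0, 0.1, 0.5), 2.0),
    ((0.2, 0.1, 0.5), 1.4),
    ((0.0, 0.8, 0.5), 3.1),
]

def interpolate_effort(query, memories, eps=1e-9):
    """Inverse-distance-weighted average of remembered efforts near the query."""
    num, den = 0.0, 0.0
    for point, effort in memories:
        d = math.dist(query, point)
        if d < eps:          # exact match: reuse the memory directly
            return effort
        w = 1.0 / d**2       # closer memories count for more
        num += w * effort
        den += w
    return num / den
```

Note that this kind of interpolation only makes sense because the space has a
distance measure -- which is exactly the point about metric properties raised
in the reply below.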
I have a higher-level agent that knows how to do a backhand drive using these
lower-level ones, and it has a much more abbreviated notion of what's going
on at each joint, but it does know a lot about sequencing, timing, and how
far the ball will go -- also based on memory.
I also have a forehand agent using the same lower-level ones, and so forth. It
probably has a space very similar to the backhand one, but the warp and woof
of the remembered trajectories in the space will be all different.
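The sharing structure described here -- two stroke-level agents driving the
same joint-level agents, each with its own remembered sequencing -- can be
sketched as follows. Everything below (class names, angles, the sequence
format) is an illustrative assumption:

```python
class JointAgent:
    """Lower-level agent: one per joint, shared by all stroke agents."""
    def __init__(self, name):
        self.name = name

    def actuate(self, target_angle):
        # In the post this would consult interpolated memories; here it
        # just reports the command it would issue.
        return f"{self.name} -> {target_angle:.2f} rad"

class StrokeAgent:
    """Higher-level agent: an ordered, timed sequence over shared joints."""
    def __init__(self, name, joints, sequence):
        self.name = name
        self.joints = joints          # the SAME list for every stroke agent
        self.sequence = sequence      # remembered (joint_index, angle) steps

    def swing(self):
        return [self.joints[i].actuate(a) for i, a in self.sequence]

joints = [JointAgent(n) for n in ("shoulder", "elbow", "wrist")]
backhand = StrokeAgent("backhand", joints, [(0, 1.2), (1, 0.6), (2, 0.3)])
forehand = StrokeAgent("forehand", joints, [(0, -0.9), (1, 0.8), (2, -0.2)])
```

The two stroke agents live over structurally similar spaces, but the
trajectories stored in each -- the "warp and woof" -- differ.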
At higher levels I have within-the-point strategy agents that decide which
strokes to use and where to hit to in the opposite court. The spaces for
these agents may have subspaces that map recognizably to a 2-d tennis court,
perhaps.
Higher up I have an agent that knows how the game scoring works, in which most
of the dimensions are binary -- I win the point or my opponent does. Such a
space boils down to a finite state machine. Chances are that in real life I've
been in a tennis game at every possible score, but I didn't have to be -- I
didn't build the state space for that agent purely from memory, which
indicates a more sophisticated form of interpolation.
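The scoring agent really is a small finite state machine over a binary input.
As a sketch (the state encoding is my own assumption; the transitions follow
the standard rules of a tennis game):

```python
def next_state(state, winner):
    """One transition of the game-scoring FSM.

    state:  (my_points, opp_points) with points in 0..3, or one of
            'deuce', 'ad-me', 'ad-opp', 'win-me', 'win-opp'.
    winner: 'me' or 'opp' -- the single binary input dimension.
    """
    if state in ("win-me", "win-opp"):
        return state                                  # absorbing: game over
    if state == "deuce":
        return "ad-me" if winner == "me" else "ad-opp"
    if state == "ad-me":
        return "win-me" if winner == "me" else "deuce"
    if state == "ad-opp":
        return "win-opp" if winner == "opp" else "deuce"
    me, opp = state
    me, opp = (me + 1, opp) if winner == "me" else (me, opp + 1)
    if me == 4:
        return "win-me"
    if opp == 4:
        return "win-opp"
    if me == 3 and opp == 3:
        return "deuce"
    return (me, opp)
```

The point of the example: every reachable state can be written down in advance
from the rules, without ever having visited it -- no memory of actual games is
required to build this agent's space.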
So the basic idea is like Minsky's or Brooks' or Albus' modular architectures
but with interpolating n-space trajectory memories as each agent or module.
I don't understand Hugo's architecture of Hopfield nets well enough to say
whether it's equivalent or not; it could certainly match the performance but
I couldn't say whether it could match the learning.
The problem with this, as I see it, is that the reason a physicist cares
about vector spaces is for their metrical properties: there is a
distance measure, and since that is the way the real world is, it buys
the physicist a *lot* of traction. But if you want to use spaces for
this reason, I have to ask: why? What does the metricality of a space
buy you? And what makes you think that, in practice, you will actually
get what you wanted to get from it when you sit down and implement the
model?
If, on the other hand, you don't really care about the metric properties
(if they don't correspond to anything in your model) then your
description reduces to a framework that only has hierarchicality in it,
and nothing more (hierarchies of agents). Now, there are a million such
frameworks (mine looks identical to yours, in that respect), so you
would not have made much progress.
I know what Minsky meant by "physics envy". Hull, the psychologist, had
the same affliction (in spades: IIRC his stuff was a monumental vector
space version of the behaviorist paradigm). I can sympathize, being an
emigre physicist myself, but I must say that I think it buys
nothing, because at the end of the day you have no reason to suppose
that such a framework heads in the direction of a system that is
intelligent. You could build an entire system using the framework, and
then do some experiments, and then I'd be convinced. But short of that
I don't see any reason to be optimistic.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303