On 5/12/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> On Friday 11 May 2007 08:26:03 pm Pei Wang wrote:
>> *. Meaning comes from experience, and is grounded in experience.
> I agree with this in practice, but I don't think it's necessarily,
> definitionally true. In practice, experience is the only good way we know of
> to build the models that provide us the ability to predict the world. AI
> tried it by hand-building models throughout the 80s (the "expert system" era)
> and mostly failed.
> However, if I have a new robot, I can copy the bits from an old one, and its
> mind will have just as much meaning as the old one's. Thus, in theory, any other
> way I could have come up with the same string of bits will also give me
> meaning.
That is also my plan. "Experience" is not restricted to direct,
personal experience: when I'm reading, I'm getting other people's
experience. The key difference is whether the meaning of a concept is
determined by its experienced relations with other concepts, or by its
"denotation" in the world.
> Model-theoretic semantics in logic has a meaning more or less opposite that of
> the use of "model" in AI -- in the former case the world is a "model" for the
> logical system, in the latter the logical system is a model of the world.
In that sense, yes, but even in AI, "meaning" is still traditionally
treated as denotation, that is, the outside object/event referred to
by a symbol. If you want your robot to build a "world model" that
describes the world "as it is", it will run into the same trouble as
model-theoretic semantics. My understanding is that this is not what
you mean. Instead, your "world model" is, in essence, a collection of "if I
do this, I'll observe that" statements, which is a summary of experience
(interactions between the system and its environment) rather than of the
environment "by itself".
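A minimal sketch of this reading, in Python: the "model" stores only experienced interactions ("I did this, I observed that") and predicts by summarizing them. All names here are illustrative assumptions, not anyone's actual design.

```python
# Hypothetical sketch: a "world model" as a summary of experience,
# i.e. recorded (action, observation) interactions, rather than a
# description of the environment "as it is".

from collections import defaultdict, Counter

class ExperienceModel:
    def __init__(self):
        # Maps each action to a Counter of observations that followed it.
        self.history = defaultdict(Counter)

    def record(self, action, observation):
        """Store one interaction: 'I did this, I observed that'."""
        self.history[action][observation] += 1

    def predict(self, action):
        """Return the most frequently experienced outcome of an action,
        or None if the action has never been tried."""
        outcomes = self.history.get(action)
        if not outcomes:
            return None
        return outcomes.most_common(1)[0][0]

model = ExperienceModel()
model.record("push_button", "light_on")
model.record("push_button", "light_on")
model.record("push_button", "nothing")
print(model.predict("push_button"))  # light_on
```

The point of the sketch is that `predict` never consults a description of the world itself; it only consults the system's own interaction history.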
>> I fully agree with your focus. I guess your "concepts" are patterns or
>> structures formed from certain "semantic primitives" by a fixed set of
>> operators or connectors. I'm very interested in your choice.
> My major hobby-horse in this area is that a concept has to be an active
> machine, capable of recognition, generation, inference, and prediction. Of
> course we know that any machine can be represented by a program and thus
> given a "declarative" representation, but for practical purposes, I'm fairly
> far over toward the "procedural embedding of knowledge" end of the spectrum.
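For concreteness, such an "active" concept might be sketched as an object that implements those operations directly as methods. This is a hypothetical illustration, not Hall's actual design: only recognition and generation are shown, on a toy "Square" concept.

```python
# Hypothetical sketch: a concept as an "active machine" exposing its
# capabilities as executable methods, rather than as a passive
# declarative structure. Names and the toy example are illustrative.

from abc import ABC, abstractmethod

class Concept(ABC):
    @abstractmethod
    def recognize(self, instance) -> bool:
        """Decide whether an instance falls under this concept."""

    @abstractmethod
    def generate(self):
        """Produce an example instance of this concept."""

class Square(Concept):
    def recognize(self, instance):
        # An instance is a (width, height) pair.
        w, h = instance
        return w == h

    def generate(self):
        return (1, 1)  # a canonical instance

sq = Square()
print(sq.recognize((3, 3)))  # True
print(sq.recognize((3, 4)))  # False
```

Note that the concept's own generated instances are, by construction, recognized by it, which is one way the procedural reading keeps the operations mutually consistent.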
I see --- it is fine to stress the procedural aspect of a concept, given
your context. However, to make your design flexible and general, even
in that case you will still need some "language" in which to specify your
concepts, rather than specifying them in a pinball-specific manner, right?
Pei
-----
This list is sponsored by AGIRI: http://www.agiri.org/email