On Friday 11 May 2007 08:26:03 pm Pei Wang wrote:

> *. Meaning comes from experience, and is grounded in experience.

I agree with this in practice, but I don't think it's necessarily, 
definitionally true. In practice, experience is the only good way we know of 
to build the models that give us the ability to predict the world. AI tried 
hand-building such models throughout the '80s (the "expert system" era) and 
mostly failed.

However, if I have a new robot, I can copy the bits from an old one, and its 
mind will have just as much meaning as the old one's. Thus, in theory, any 
other way I could have arrived at the same string of bits would also give me 
meaning.

> A more detailed discussion and a proposed solution can be found in
> http://nars.wang.googlepages.com/wang.semantics.pdf

Model-theoretic semantics in logic uses "model" in a sense more or less 
opposite to its use in AI -- in the former case the world is a "model" for the 
logical system; in the latter, the logical system is a model of the world.

To avoid any confusion, let me point out that I always use the word in the AI 
sense.

> As you can see from my comment and paper, I agree with your idea in
> its basic spirit. However, I think your above presentation is too
> vague, and far from enough for semantic analysis.

True enough -- my ideas tend to form like planets, a la the nebular 
hypothesis :-)  But at this point, having gnawed on them for a few years, I 
think they're firm enough to start doing experiments.


> >   2. The hard part is learning: the AI has to build its own world
> >      model. My instinct and experience to date tell me that this is
> >      computationally expensive, involving search and the solution of
> >      tough optimization problems.
> 
> Agree, though I've been avoiding the phrase "world model", because of
> the intuitive picture it suggests: there is an "objective world" out
> there, and an AI is building an "internal model" of it, where concepts
> represent objects and beliefs represent factual relations among
> objects --- this is a picture you don't subscribe to, I guess.

"World model" has a very well established meaning in AI (50 years old by now) 
and I find the basic idea sound. I DON'T think that one should assume at the 
outset there are objects and relations -- I'm using a representation where 
objects can be represented if experience indicates it's a useful category, 
but other ways of representing the world are equally accessible.
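
To make that concrete, here is a toy sketch in Python -- nothing from my 
actual code; the names and the threshold are invented -- of a world model that 
is just a pool of learned predictive categories, where an object-like category 
gets added only if it earns its keep:

from dataclasses import dataclass, field
from typing import Any, Callable, List

Percept = Any  # hypothetical: whatever the sensors deliver

@dataclass
class Category:
    name: str                              # e.g. "rolling blob" -- not necessarily an "object"
    applies: Callable[[Percept], bool]     # does this category match the percept?
    predict: Callable[[Percept], Percept]  # what does it expect to see next?

@dataclass
class WorldModel:
    categories: List[Category] = field(default_factory=list)

    def consider(self, candidate: Category, history: List[Percept]) -> bool:
        """Admit a candidate category only if experience says it's useful."""
        hits = sum(1 for p in history if candidate.applies(p))
        useful = history and hits / len(history) > 0.2   # invented threshold
        if useful:
            self.categories.append(candidate)
        return bool(useful)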

> A good idea. As I said above: input/output is necessary for AGI, but
> any concrete form of them is not, in principle. An AGI doesn't have to
> be able to move itself around in the physical world (though it must
> somehow change its environment), and doesn't have to have a certain
> human sensor (though it must somehow sense its environment).

Agreed.

> I'd suggest adding the "muscle" as soon as possible to get a
> complete sensor-motor cycle.

Help from anyone on this list who has experience with the GNU toolchain on 
ARM-based microcontrollers will be gratefully accepted :-)

> I fully agree with your focus. I guess your "concepts" are patterns or
> structures formed from certain "semantic primitives" by a fixed set of
> operators or connectors. I'm very interested in your choice.

My major hobby-horse in this area is that a concept has to be an active 
machine, capable of recognition, generation, inference, and prediction. Of 
course we know that any machine can be represented by a program and thus 
given a "declarative" representation, but for practical purposes, I'm fairly 
far over toward the "procedural embedding of knowledge" end of the spectrum.
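
For illustration only -- this is the shape of the interface I have in mind, 
not an implementation, and the method names are just my gloss on the four 
capabilities above:

from abc import ABC, abstractmethod
from typing import Any, Sequence

class Concept(ABC):
    """A concept as an active machine rather than a passive data record."""

    @abstractmethod
    def recognize(self, percept: Any) -> float:
        """How strongly does this percept instantiate the concept (0..1)?"""

    @abstractmethod
    def generate(self) -> Any:
        """Produce a typical instance, e.g. for imagination or simulation."""

    @abstractmethod
    def infer(self, facts: Sequence[Any]) -> Sequence[Any]:
        """Derive further facts that the concept licenses."""

    @abstractmethod
    def predict(self, state: Any) -> Any:
        """Say what should come next if the concept applies to this state."""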

> > I claim that most current AI experiments that try to mine meaning out
> > of experience are making an odd mistake: looking at sources that are
> > too rich, such as natural language text found on the Internet. The
> > reason is that text is already a highly compressed form of data; it
> > takes a very sophisticated system to produce or interpret it. Watching a
> > ball roll around a blank tabletop and realizing that it always moves
> > in parabolas is the opposite: the input channel is very low-entropy
> > (in actual information compared to nominal bits), and thus there is
> > lots of elbow room for even poor, early, suboptimal interpretations to
> > get some traction.
> 
> I don't think you have convinced me that this kind of experiment is
> better than the others (such as those in NLP), but you have a good
> idea and it is worth a try.
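
To give a feel for the scale of experiment I mean, here is a toy version of 
the tabletop case -- the numbers, the noise level, and the least-squares fit 
are purely illustrative, not a claim about any running system:

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 40)                   # seconds
true_y = 0.3 + 1.5 * t - 4.9 * t**2             # "true" trajectory of the ball
obs = true_y + rng.normal(0.0, 0.05, t.size)    # what the camera reports

def fit_error(order: int) -> float:
    """Mean squared error of a degree-`order` polynomial fit to the observations."""
    coeffs = np.polyfit(t, obs, order)
    return float(np.mean((np.polyval(coeffs, t) - obs) ** 2))

print("linear fit error:   ", fit_error(1))
print("quadratic fit error:", fit_error(2))     # far smaller: the parabola wins at once

The point is that even a crude, early hypothesis gets traction immediately, 
because there is so little else in the channel competing with it.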

“Two roads diverged in a yellow wood and I, 
I took the path less travelled by, 
and that has made all the difference.”

Josh
