From: "Pei Wang" <[EMAIL PROTECTED]>

>*. If you think your theory "is compatible with AGIs developed by the
>various groups on this list", what is unique in your approach that is
>missing in other approaches?

The theory suggests some design features such as
1) separation of behavior and knowledge,
2) the importance of rule insertion,
all of this under the constraint of MDL (minimum
description length).

I assume it would not be difficult to adapt existing
AGI projects to this design, unless there are
reasons for choosing a different design at this level?

>*. In your "2-part architecture", it seems that you want the "mentor" to
>have all necessary knowledge (obtained via MDL), and the "agent" to
>selectively use part of the former. Does the agent need to maintain its own
>knowledge? If yes, why not to develop the agent as an AI system by its own,
>and use the world/environment as its mentor? If the agent has no knowledge,
>how can it decide what to ask to the mentor? If it just passes the user
>commands to the mentor, then how does it differ from a simple user
>interface?

It separates the AGI problem into two subproblems;
the main reason is that the knowledge part is very
well-defined and neat. The 'Agent' also requires
and contains knowledge, but that type of knowledge
is quite different from what you get with MDL /
statistical learning. For example, if I ask the AI
to give me oranges, bananas, and mangoes, it would
not be desirable for the AI to generalize my list
to, say, all fruits.
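A toy contrast (my own illustration, not from the proposal) of why the agent's knowledge differs from MDL-style generalization: a compressive learner might summarize the three examples as "fruit" and over-deliver, while the agent must honor the literal request:

```python
# Hypothetical example; names are illustrative.
pantry = {"orange", "banana", "mango", "apple", "grape"}
request = ["orange", "banana", "mango"]

def agent_fetch(items, pantry):
    # The agent treats the user's list literally.
    return [i for i in items if i in pantry]

def generalizing_fetch(items, pantry):
    # A learner biased towards short descriptions might infer
    # "all fruit" from the three examples and return everything.
    return sorted(pantry)
```

Here `generalizing_fetch` would hand back an apple the user never asked for, which is exactly the failure mode the agent's knowledge must avoid.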

>*. You said "interpreting means generating well-defined responses to formal
>queries". This is a very different usage of the word "interpreting". I'm
>afraid that it will cause confusion. Why didn't you use something like
>"problem solving", "query processing", or "question answering"?

I generally suck at naming things, but for this term
I think it's OK =). The algorithm *interprets* the
representation, which would otherwise be a
meaningless set of compressed data.

"Problem solving" would belong to the behavioral part.

>*. I do think what we have in mind is a summarized "representation of our
>experience", not an (minimum or not) "representation of the external world",
>though I understand that your opinion is shared by many other people. I
>won't argue on this issue here, but want you know that to me, it is a major
>problem in your proposal.

It seems that your point is more of a philosophical
one... but I'm perfectly willing to accept your
version.

>*. Can your 4 requirements for mentor to be satisfied, either in theory or
>in practice?

I think they are satisfiable if we don't take MDL
to be exact. Exact MDL is known to be incomputable;
we're only trying to *bias* the system towards
shorter descriptions.
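One common way to cash out "bias towards shorter descriptions" is the two-part code heuristic: score each candidate hypothesis by its own length plus the cost of the data it fails to cover, and prefer the cheapest. A minimal sketch (my own illustration; the cost function here is a crude stand-in, not the proposal's actual measure):

```python
# Two-part-code sketch of approximate MDL scoring.
# Exact MDL (like Kolmogorov complexity) is incomputable,
# so we only rank hypotheses by a computable proxy.

def description_length(hypothesis, data):
    """Cost of the hypothesis itself, plus the cost of spelling
    out each datum the hypothesis does not cover."""
    model_cost = len(hypothesis)
    residual_cost = sum(len(d) for d in data if hypothesis not in d)
    return model_cost + residual_cost

data = ["red apple", "red cherry", "red tomato", "green lime"]
candidates = ["red", "green", "apple"]

# Pick the hypothesis with the shortest total description.
best = min(candidates, key=lambda h: description_length(h, data))
# "red" wins: it is short and covers three of the four items.
```

The same ranking idea works with any computable coding scheme; the incomputability only bites if you insist on the true minimum over all programs.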

>*. For implementation, you seem to suggest a hybrid system with ANN,
>probabilistic/statistical methods, and standard/fuzzy logics. Since these
>theories are based on completely different assumptions, how can they be
>consistently used together?

I have not analyzed any implementation aspects yet;
one thing for sure is that there are *many* ways to
implement AGI. What I hope to achieve is to point
out, at an abstract level, some important features
that AGIs should have.

I think the most interesting part would be to
compare various AGI implementations in terms of the
features I proposed... This way we may be able to
understand what the real differences between AGIs
are, and from there we may consolidate our efforts
or explore different design spaces.

I'm constantly updating the page, so your comments
are very helpful. Thanks a lot,
YKY 
