On Fri, Mar 28, 2008 at 9:29 PM, Robert Wensman
<[EMAIL PROTECTED]> wrote:
> A few things come to my mind:
>
> 1. To what extent are learning and reasoning subtopics of cognitive
> architectures? Are learning and reasoning a plugin to a cognitive
> architecture, or is the whole cognitive architecture in fact about learning
> and reasoning?
>
If the "cognitive architectures department of AGI research" is to be
usefully delineated, then these are not its subtopics. But neither are
they plug-ins. It is more "in this chapter I introduce you to the
overall structure of the system; from other chapters you know that..."

> 2. I would like a special topic on AGI goal representation. More
> specifically, a topic that discusses how a goal specified by a human
> designer can be related to the world model and actions that an AGI system
> creates. For example, how can a human-specified goal be related to a
> knowledge representation that is constantly developed by the AGI system?
>
Yes, more work is needed on lifelong goal structures, Pollock's master
plans, and integration with the motivational system (which in its
primitive form is "spreading activation").

> 3. Why do AI/AGI researchers always talk about "knowledge representation"?
> It gives such a strong bias towards static or useless knowledge bases. Why
> not talk more about "world modelling"? Because of the more active meaning of
> the word "modelling" as opposed to "representation", it implies that things
> such as inference etc. need to be considered. Since the word "modelling" is
> also used to denote the process of creating a model, it also implies that we
> need mechanisms for learning. I really think we should consider whether
> "knowledge representation" is a concept borrowed directly from dumb-narrow
> AI, or whether it really is a key concept for AGI. Sure enough, there will
> always be knowledge representation, but the question is whether it is an
> important/relevant/sufficient/misleading concept for AGI.
>
Agreed. I think the "knowledge representation" label should not be
abandoned, but should be grown towards "how the system accommodates the
sophisticated semantics of natural language and/or its formative
domain", where "formative domain" can be "social environment",
"programming environment", etc.

> 4. In fact, I would suggest that AGI researchers start to distinguish
> themselves from narrow AI by replacing the overly ambiguous concepts from
> AI, one by one. For example:
>
I neither agree nor disagree with your suggestion; I just thank you for
clarifying your ideas here considerably :-)

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/