PM,

Allow me to ask a question...

Should there be such a component?

This is my own view, not necessarily strongly supported by the field,
but... the way I see it, the literature on classical planning is a
graveyard of algorithms based on goal-subgoal hierarchies. Researchers try
very hard to get this sort of approach to work (and succeed), because
that is the way we intuitively think we think. However, that doesn't mean
it is the best approach. A more bottom-up & global approach may work better.

Hierarchical planners were soundly beaten in the late 1990s by Graphplan,
an algorithm which treated planning as a constraint-satisfaction-style
search over a compact graph structure, rather than attempting to solve it
in a pseudo-human way. (Its successors went further, handing the problem
to off-the-shelf SAT solvers.) Hierarchical planners came back, using
Graphplan as a sub-component, but I see that as more a result of the
stick-to-itiveness of the hierarchical planning community than a
fundamental result.
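
To make the contrast concrete, here's a toy sketch (nothing like real
Graphplan, and with an invented blocks-ish domain) of planning as search
over constrained action assignments: one action variable per time step,
each constrained by its preconditions:

```python
from itertools import product

# Toy STRIPS-style domain (invented for illustration):
# each action = (name, preconditions, add effects, delete effects).
ACTIONS = [
    ("pick_up", {"hand_empty", "on_table"}, {"holding"}, {"hand_empty", "on_table"}),
    ("put_down", {"holding"}, {"hand_empty", "on_table"}, {"holding"}),
    ("stack", {"holding"}, {"on_block", "hand_empty"}, {"holding"}),
]

def applicable(state, action):
    _, pre, _, _ = action
    return pre <= state          # all preconditions hold

def apply(state, action):
    _, _, add, delete = action
    return (state - delete) | add

def plan(initial, goal, horizon):
    """Planning as constrained search: try every assignment of one
    action per time step, keeping only sequences whose precondition
    constraints are satisfied, until the goal constraint holds."""
    for length in range(horizon + 1):
        for seq in product(ACTIONS, repeat=length):
            state = frozenset(initial)
            ok = True
            for a in seq:
                if not applicable(state, a):
                    ok = False
                    break
                state = apply(state, a)
            if ok and goal <= state:
                return [a[0] for a in seq]
    return None

print(plan({"hand_empty", "on_table"}, {"on_block"}, horizon=3))
# → ['pick_up', 'stack']
```

Real planners encode this far more cleverly (mutex reasoning, SAT
encodings), but the point stands: no goal hierarchy appears anywhere.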

That's in the domain of classical planning. Outside that domain, things
like RL and Monte Carlo planning are used; subgoal hierarchies do not
exist...
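
To illustrate just how "flat" these methods can be, here is a minimal
sketch of Monte Carlo planning over an invented toy environment (every
detail here is made up for illustration):

```python
import random

def step(state, action):
    """Toy one-dimensional environment (invented): the agent moves on
    an integer line and is rewarded for reaching +5."""
    state += 1 if action == "right" else -1
    reward = 1.0 if state == 5 else 0.0
    return state, reward

def rollout(state, first_action, depth, rng):
    """Return from taking first_action, then acting randomly."""
    total = 0.0
    action = first_action
    for _ in range(depth):
        state, reward = step(state, action)
        total += reward
        if reward > 0:           # treat reaching the target as terminal
            break
        action = rng.choice(["left", "right"])
    return total

def monte_carlo_choice(state, depth=10, n_rollouts=500, seed=0):
    """Flat Monte Carlo planning: estimate each action's value by
    averaging random rollouts; no goal/subgoal structure anywhere."""
    rng = random.Random(seed)
    best_action, best_value = None, float("-inf")
    for action in ["left", "right"]:
        value = sum(rollout(state, action, depth, rng)
                    for _ in range(n_rollouts)) / n_rollouts
        if value > best_value:
            best_action, best_value = action, value
    return best_action

print(monte_carlo_choice(state=3))
```

MCTS refines this with a search tree and smarter action selection, but
again: no subgoals, just sampled futures.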

This could well be for lack of sophistication. Maybe the current 'flat'
techniques will be replaced by hierarchical planning in time, even in very
messy domains. (I know that hierarchical RL has been implemented more than
once, with different approaches...) Certainly I think an advanced AGI
system will need the capability to reason about plans at multiple levels of
abstraction. However, we don't know "where" in the mental architecture this
occurs. What I'm saying (or, what I think I'm saying) is that it doesn't
seem right to put it at the lowest level. Subgoal prioritization may be a
rather advanced form of reasoning which emerges from a lot of lower-level
machinery.

Of course, *something* to direct reasoning at the low level is needed!
However, it might be a very "different" algorithm from what you expect...
Really, I'm just trying to encourage some creative thinking here. :)
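
As one example of such a "different" algorithm (purely hypothetical, and
every feature name below is invented), the low-level direction could be as
flat as a scoring function over the open goal set, recomputed each cycle:

```python
def reprioritize(open_goals, weights):
    """Flat goal reprioritization: score each goal by a weighted sum of
    its features and sort. No hierarchy -- just a global ranking that is
    recomputed whenever new information arrives."""
    def score(goal):
        return sum(weights.get(name, 0.0) * value
                   for name, value in goal["features"].items())
    return sorted(open_goals, key=score, reverse=True)

# Hypothetical open goal set; feature values might be derived from the
# current context, alerts, expectation mismatches, past episodes, etc.
goals = [
    {"name": "recharge", "features": {"urgency": 0.9, "mismatch": 0.1}},
    {"name": "explore",  "features": {"urgency": 0.2, "novelty": 0.8}},
    {"name": "replan",   "features": {"urgency": 0.4, "mismatch": 0.9}},
]
weights = {"urgency": 1.0, "mismatch": 0.5, "novelty": 0.3}

print([g["name"] for g in reprioritize(goals, weights)])
# → ['recharge', 'replan', 'explore']
```

The interesting question then becomes where the weights come from --
learned from feedback over time, perhaps, rather than hand-coded.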

--Abram

On Mon, Jun 11, 2012 at 5:31 PM, Piaget Modeler
<[email protected]> wrote:

>  Abram,  you've characterized it properly. In my vernacular subgoals =
> goals.
>
>
> I would say that the job of this particular attention module is to
> reprioritize the open goal set,
> given all available information.
>
> So the question for me is what should all available information consist
> of?
>
> Some candidates are:   (1) The current context, for sure, (2) alerts, (3)
> expectation failures and mismatches,
> (4) past prioritizations, (5) past episodes.
>
> Anything else?
>
> Your thoughts?
>
>
> ------------------------------
> Date: Mon, 11 Jun 2012 11:11:58 -0700
> Subject: Re: [agi] Attention
> From: [email protected]
> To: [email protected]
>
>
> PM,
>
> OK. So, in this case, the goal selector is clearly selecting subgoals to
> prioritize.
>
> It's a difficult question which needs a quickly computable answer, so the
> system needs to somehow gather information over time about which
> subgoals have been most useful in the past, and in what situations. This
> process can use a wide variety of information; essentially anything.
> However, to make an efficient choice, the information considered at any
> particular time needs to be narrowed down somehow. The space of possible
> subgoals is also potentially vast, and needs to be narrowed down
> heuristically...
>
> Perhaps the best that I can say at the moment is, this seems like the sort
> of problem which requires empirical testing to see what works and what
> doesn't!
>
> --Abram
>
> On Fri, Jun 8, 2012 at 5:49 PM, Piaget Modeler 
> <[email protected]>wrote:
>
>
> Ben
>
>      Yours is a sufficient response.  Thank you.
>
> Abram
>
>      Suppose we decompose a cognitive system into a few components:
>
>      1. A planner (which is fed a goal, a current state, and a set of
> possible actions (e.g., operators, methods, cases, etc.)),
>      2. An action selector (which is fed the current state, a prioritized
> set of goals, and a set of methods to choose from),
>      3. A goal selector / attention module whose job is to prioritize or
> select goals for the cognitive system.
>
>     My question was what would you feed the goal selector to ensure it did
> its job (prioritizing goals) properly?
>
>     In a paper I read recently, "A Case Study of Goal-Driven Autonomy in
> Domination Games" by Hector Munoz-Avila and David W. Aha,
>     the authors decompose the cognitive system of their CB-gda system
> into two case-based components: (a) a planning component,
>     and (b) a mismatch goal [selection] component. The purpose of the
> latter component is to correct for errors encountered by the
>     planner. The input to the mismatch goal selection component is a
> mismatch (the difference between the expected state and the
>     goal state).
>
>     Q: What else would be relevant input for a goal selector / Attention
> component?
>
>
>
> ------------------------------
> Date: Fri, 8 Jun 2012 17:49:15 -0400
> Subject: Re: [agi] Attention
> From: [email protected]
> To: [email protected]
>
>
>
> In the OpenCog framework, we supply some hard-coded "top level goals", and
> then the system learns how to achieve these, which may include learning
> subgoals...
>
> The top-level goals are generally of the form "keep such-and-such a
> parameter within range [L,R]"
>
> Experience of novelty and discovery of new things are good general
> top-level goals.  For a character in a virtual 3D environment, we add in
> stuff like getting energy (e.g. from batteries or food), staying safe, and
> partaking in social interaction...
>
> In reference to this sort of framework, I'm unsure if you're talking about
> top-level goals or learned subgoals...
>
> -- Ben G
>
>
>
>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/7190161-766c6f07> |
> Modify <https://www.listbox.com/member/?&;> Your Subscription
> <http://www.listbox.com>
>
>
>
>
> --
> Abram Demski
> http://lo-tho.blogspot.com/
>
>



-- 
Abram Demski
http://lo-tho.blogspot.com/


