PM,

OK. So, in this case, the goal selector is clearly selecting subgoals to
prioritize.

It's a difficult question that needs a quickly computable answer, so the
system has to somehow gather information over time about which subgoals
have been most useful in the past, and in what situations. This process
can draw on a wide variety of information, essentially anything; but to
make an efficient choice, the information considered at any particular
time needs to be narrowed down somehow. The space of possible subgoals is
also potentially difficult to search, and needs to be narrowed down
heuristically...
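For concreteness, here is a minimal sketch of the kind of bookkeeping I have in mind, framed as a per-situation multi-armed bandit with a UCB1-style exploration bonus. All names (GoalSelector, the "usefulness" signal, the situation labels) are hypothetical illustrations, not part of any existing system; this is just one way such statistics could be gathered and used.

```python
# Hypothetical sketch: track how useful each subgoal has been in each
# situation, and select goals by estimated usefulness plus an
# exploration bonus (UCB1-style). Purely illustrative.
import math
from collections import defaultdict

class GoalSelector:
    def __init__(self, exploration=1.0):
        self.exploration = exploration
        self.counts = defaultdict(int)    # (situation, goal) -> times tried
        self.value = defaultdict(float)   # (situation, goal) -> mean usefulness
        self.total = defaultdict(int)     # situation -> total selections made

    def select(self, situation, candidate_goals):
        """Pick the candidate with the best usefulness estimate,
        plus a bonus for goals rarely tried in this situation."""
        self.total[situation] += 1
        t = self.total[situation]

        def score(goal):
            n = self.counts[(situation, goal)]
            if n == 0:
                return float("inf")  # always try an untested goal once
            bonus = self.exploration * math.sqrt(math.log(t) / n)
            return self.value[(situation, goal)] + bonus

        return max(candidate_goals, key=score)

    def record(self, situation, goal, usefulness):
        """Update the running mean usefulness of a goal in a situation."""
        key = (situation, goal)
        self.counts[key] += 1
        self.value[key] += (usefulness - self.value[key]) / self.counts[key]
```

The narrowing-down problem shows up here too: the caller must already supply a short list of candidate goals and a coarse "situation" label, which is exactly the part that would need heuristics.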

Perhaps the best that I can say at the moment is, this seems like the sort
of problem which requires empirical testing to see what works and what
doesn't!

--Abram

On Fri, Jun 8, 2012 at 5:49 PM, Piaget Modeler <[email protected]> wrote:

>
> Ben
>
>      Yours is a sufficient response.  Thank you.
>
> Abram
>
>      Suppose we decompose a cognitive system down into a few components:
>
>      1. A planner (which is fed a goal, a current state, and a set of
> possible actions (i.e., operators, methods, cases, etc.)),
>      2. An action selector (which is fed the current state, a prioritized
> set of goals, and a set of methods to choose from),
>      3. A goal selector / Attention module whose job is to prioritize or
> select goals for the cognitive system.
>
>     My question was what would you feed the goal selector to ensure it did
> its job (prioritizing goals) properly?
>
>     In a paper I read recently "A Case Study of Goal-Driven Autonomy in
> Domination Games" by Hector Munoz-Avila and David W. Aha
> the authors, in their CB-gda system, decompose the cognitive system
> into two case-based components: (a) a planning component,
>     and (b) a mismatch goal [selection] component.  The purpose of the
> latter component was to correct for errors encountered by the
>     planner.  The input for the mismatch goal selection component is a
> mismatch (the difference between the expected state and the
>     goal state).
>
>     Q: What else would be relevant input for a goal selector / Attention
> component?
>
>
>
> ------------------------------
> Date: Fri, 8 Jun 2012 17:49:15 -0400
> Subject: Re: [agi] Attention
> From: [email protected]
> To: [email protected]
>
>
>
> In the OpenCog framework, we supply some hard-coded "top level goals", and
> then the system learns how to achieve these, which may include learning
> subgoals...
>
> The top level goals are generally of the form "keep so-and-such parameter
> within range [L,R]"
>
> Experience of novelty and discovery of new things are good general
> top-level goals.  For a character in a virtual 3D environment, we add in
> stuff like getting energy (e.g. from batteries or food), staying safe, and
> partaking in social interaction....
>
> In reference to this sort of framework, I'm unsure if you're talking about
> top-level goals or learned subgoals...
>
> -- Ben G
>
>
>



-- 
Abram Demski
http://lo-tho.blogspot.com/



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
