Hi Mike,

Rewards, risks and costs could probably be used to hold everything. It could probably even be made one-dimensional - all in terms of goodness.
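For concreteness, a minimal sketch of that one-dimensional collapse - the function name and weights are purely illustrative assumptions, not anything tuned:

```python
# Collapse rewards, risks and costs into a single "goodness" scalar.
# The weights are illustrative assumptions; tuning them would be the real work.
def goodness(reward: float, risk: float, cost: float,
             risk_weight: float = 1.0, cost_weight: float = 1.0) -> float:
    """Higher is better; risk and cost both count against the reward."""
    return reward - risk_weight * risk - cost_weight * cost

print(goodness(reward=10.0, risk=2.0, cost=3.0))  # prints 5.0
```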

My choice of terms is to allow me to organize my thinking. If I use the word recipe, it is because I know recipes involve ingredients and procedural steps. Both of these "aspects" of a recipe can present problems for me as a "cook." If I think about each aspect individually, I have less to focus on at one moment. It is this "breakdown" of the task that allows me to manage the overall job.

If I start at the most abstract level, I have a huge landscape to take in. If I attempt to assess risk in the broad sense, it could mean thinking about legal risk, risk of bad components, risk of liability, risk of breakdown, risk of shortage...

What I want to do is have a way of looking at things that will help me keep perspective - keep the balance, properly weigh the various elements. Richard Loosemore once said (paraphrasing) that creating the system would only be a small part of it; tuning would be the bulk of the problem. (He may have been talking about neural networks...)

Anyway Mike, I'm trying to go beyond the abstract and get down to what could be implemented. From PM's questions, I'm guessing that he is geared toward implementation.

Stan

On 06/12/2012 11:23 AM, Mike Tintner wrote:
Stan,

Why not just the standard *rewards, risks and costs* as choice
dimensions? I think they cover everything, but always willing to hear a
counterargument...

--------------------------------------------------
From: "Stan Nilsen" <[email protected]>
Sent: Tuesday, June 12, 2012 6:10 PM
To: "AGI" <[email protected]>
Cc: "Piaget Modeler" <[email protected]>
Subject: Re: [agi] Attention

PM

Costs and benefits are only one part of making a choice. There are
also less tangible aspects like risk and cleanup, to name two simple
ones. Risk isn't the same as cost or benefit. Risk goes into a variety
of "what ifs" and attempts to determine what consequences may occur,
be they "costly" in the dollar sense or costly in time, reputation,
missed opportunity, etc. "Cleanup" is a term I use to talk about what
remains when and if failure occurs. Is there something to be salvaged,
or a side benefit?
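One way to keep those aspects distinct, rather than folding them into a single cost figure, is as separate fields on a candidate choice. A sketch with hypothetical field names - failure probability standing in for risk, salvage for cleanup:

```python
from dataclasses import dataclass

@dataclass
class Choice:
    """One candidate action; risk and cleanup are kept separate from cost/benefit."""
    name: str
    benefit: float       # expected upside on success
    cost: float          # dollars, time, reputation, missed opportunity...
    failure_prob: float  # risk: chance the "what if" goes wrong (0..1)
    salvage: float       # cleanup: what remains if failure occurs

    def expected_value(self) -> float:
        # Success yields the benefit; failure yields only what can be salvaged.
        win = (1.0 - self.failure_prob) * self.benefit
        lose = self.failure_prob * self.salvage
        return win + lose - self.cost

best = max([Choice("a", 10, 4, 0.2, 1), Choice("b", 8, 2, 0.1, 0)],
           key=Choice.expected_value)
print(best.name)  # prints "b" (5.2 beats 4.2)
```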

I wouldn't say the notion of opportunity is like serendipity - I don't
know much about serendipity. Opportunity is what I would consider a
record in a big database of life. An opportunity has "recipe" as one of
its fields. The recipe is what you have to do to experience the benefit
of the opportunity (assuming that benefit is the "reason to be" of an
opportunity).

Imagine you have this great big database of opportunities; what remains
is for the "intelligence" of the unit to pick the best of the
opportunities that are "ready." The method, or "recipe," one uses to
pick from the database will have much to do with the "intelligence
level" assessed for the unit. One could say that intelligence will be
assessed based on how much opportunity one acquires, the quality of
that opportunity, and the recipe one uses to pick and choose it.
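As a sketch of that record-and-picker idea - the field names ("benefit," "recipe," "ready") and the pick-by-benefit rule are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Opportunity:
    """One record in the 'big database of life'."""
    name: str
    benefit: float       # the "reason to be" of the opportunity
    recipe: List[str]    # the steps required to realize the benefit
    ready: bool = False  # whether the prerequisite circumstances hold

def pick_best(db: List[Opportunity]) -> Optional[Opportunity]:
    """Select the 'ready' opportunity with the greatest benefit."""
    candidates = [o for o in db if o.ready]
    return max(candidates, key=lambda o: o.benefit, default=None)

db = [
    Opportunity("learn to cook", 5.0, ["buy ingredients", "follow the steps"], ready=True),
    Opportunity("open a restaurant", 50.0, ["raise capital", "find a site"], ready=False),
]
print(pick_best(db).name)  # prints "learn to cook" - the bigger benefit isn't ready yet
```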

Readiness is another "field" of the opportunity record. Along with the
opportunity are fields of "prerequisite" circumstances that allow the
determination of the readiness of the opportunity. There's plenty of
opportunity for multitasking in this system.

This isn't a full description of an opportunity-based system. There
are lots of components that would need to be developed for a working
system - like an "updater" that tracks the environment and recalculates
the merit of each opportunity based on findings...
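A minimal sketch of the readiness determination and that "updater," with all names hypothetical - merit here is just the benefit gated by readiness:

```python
from typing import Dict, List, Set

def is_ready(prerequisites: List[str], environment: Set[str]) -> bool:
    """An opportunity is 'ready' when every prerequisite holds in the environment."""
    return all(p in environment for p in prerequisites)

def update_merit(opportunities: List[Dict], environment: Set[str]) -> None:
    """The 'updater': track the environment and recalculate each record's merit."""
    for opp in opportunities:
        # A not-yet-ready opportunity keeps its benefit but scores zero merit for now.
        ready = is_ready(opp["prerequisites"], environment)
        opp["merit"] = opp["benefit"] if ready else 0.0

db = [
    {"name": "bake bread", "benefit": 3.0, "prerequisites": ["have flour", "have oven"]},
    {"name": "sell bread", "benefit": 9.0, "prerequisites": ["have bread"]},
]
update_merit(db, environment={"have flour", "have oven"})
print([o["merit"] for o in db])  # prints [3.0, 0.0]
```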

The point of all this rambling is that "goal" causes one to think
"games," while opportunity is better for thinking about intelligence.

my .02

Stan





On 06/12/2012 08:33 AM, Piaget Modeler wrote:
Would it only be costs and benefits, or would we include a full-fledged
opportunity "object" with whatever traits that may entail? And does this
notion differ from Serendipity? Erik T. Mueller had a serendipity
component/routine in his Daydreamer system, which recognized when goals
were serendipitously attained.

~PM

> Date: Tue, 12 Jun 2012 07:56:50 -0600
> From: [email protected]
> To: [email protected]
> CC: [email protected]
> Subject: Re: [agi] Attention
>
> PM
>
> A few months ago I mentioned to Jim Bromer that I would build the
> intelligence around the idea of opportunity. This fits directly with
> the idea of goals, in a somewhat obscure way.
>
> If you look at your candidates list below, you see part of what
> makes up
> an opportunity. It leaves out important factors that go into choosing
> which opportunity is "greatest" for the moment.
>
> These additional factors get complicated. For example -
> - how reliable are the claims of this opportunity? If we pursue this
> opportunity, can we be somewhat certain we will complete the steps
> needed to consummate and yield the benefit?
>
> - what is the opportunity and how does it compare to other
> opportunities? In other words, there needs to be a vocabulary of
> benefit and cost that goes with consideration of any opportunity. Here
> is where the real world is complex. We venture and discover many
> benefits that we didn't expect. And we encounter costs that exceed our
> expectations. Will there be shortages we didn't anticipate...
>
> - where did this opportunity come from? If we want to evaluate
> opportunity, we also need to consider the source. Is this simply an
> idea that we cooked up according to simple logic? Is this an idea that
> comes highly recommended from a reliable source - a master chef?
>
> I understand that the ideas listed above seem off-base from looking at
> goals, but real-world "intelligence" doesn't work by figuring
> everything out. It works from suggestion and observation of what works
> and how it worked out. The intelligence that we want to embody will one
> day need to be able to choose between opportunities rather than simply
> follow "sub-goals" that make up a recipe.
>
> To get this intelligence going, build the database of opportunity.
>
> Stan
>
> (sorry if I missed a relevant prior post, I'm way behind in list
> reading - busy time of year... yada yada.)
>
>
>
>
> On 06/11/2012 06:31 PM, Piaget Modeler wrote:
> > Abram, you've characterized it properly. In my vernacular subgoals
> > = goals.
> >
> >
> > I would say that the job of this particular attention module is to
> > reprioritize the open goal set,
> > given all available information.
> >
> > So the question for me is what should all available information
> > consist of?
> >
> > Some candidates are: (1) The current context, for sure, (2)
> > alerts, (3)
> > expectation failures and mismatches,
> > (4) past prioritizations, (5) past episodes.
> >
> > Anything else?
> >
> > Your thoughts?
> >
> >
> > Date: Mon, 11 Jun 2012 11:11:58 -0700
> > Subject: Re: [agi] Attention
> > From: [email protected]
> > To: [email protected]
> >
> > PM,
> >
> > OK. So, in this case, the goal selector is clearly selecting
> > subgoals to
> > prioritize.
> >
> > It's a difficult question which needs a quickly computable
> > answer, so
> > the system needs to somehow gather information over time which
> > tells it
> > what subgoals have been most useful in the past, in what situations.
> > This process can use a wide variety of information; essentially
> > anything. However, to make an efficient choice, the information
> > considered at any particular time needs to be narrowed down
> > somehow. The
> > space of possible sub-goals is also potentially difficult, and
> > needs to
> > be narrowed down heuristically...
> >
> > Perhaps the best that I can say at the moment is, this seems like
> > the
> > sort of problem which requires empirical testing to see what
> > works and
> > what doesn't!
> >
> > --Abram
> >
> > On Fri, Jun 8, 2012 at 5:49 PM, Piaget Modeler
> > <[email protected] <mailto:[email protected]>>
> > wrote:
> >
> >
> > Ben
> >
> > Yours is a sufficient response. Thank you.
> >
> > Abram
> >
> > Suppose we decompose a cognitive system down into a few components:
> >
> > 1. A planner (which is fed a goal, a current state and a set of
> > possible actions (i.e., operators, methods, cases, etc.)),
> > 2. An action selector (which is fed the current state, a prioritized
> > set of goals, and a set of methods to choose from),
> > 3. A goal selector / Attention module whose job is to prioritize or
> > select goals for the cognitive system.
> >
> > My question was what would you feed the goal selector to ensure it
> > did its job (prioritizing goals) properly?
> >
> > In a paper I read recently "A Case Study of Goal-Driven Autonomy in
> > Domination Games" by Hector Munoz-Avila and David W. Aha
> > the authors, in their CB-gda system, decompose the cognitive system
> > into two case-based components (a) a planning component,
> > and (b) a mismatch goal [selection] component. The purpose of the
> > latter component was to correct for errors encountered by the
> > planner. The input for the mismatch goal selection component is a
> > mismatch (the difference between the expected state and the
> > goal state).
> >
> > Q: What else would be relevant input for a goal selector / Attention
> > component?
> >
> >
> >
> > Date: Fri, 8 Jun 2012 17:49:15 -0400
> > Subject: Re: [agi] Attention
> > From: [email protected] <mailto:[email protected]>
> > To: [email protected] <mailto:[email protected]>
> >
> >
> >
> > In the OpenCog framework, we supply some hard-coded "top level
> > goals", and then the system learns how to achieve these, which may
> > include learning subgoals...
> >
> > The top level goals are generally of the form "keep so-and-such
> > parameter within range [L,R]"
> >
> > Experience of novelty and discovery of new things are good general
> > top-level goals. For a character in a virtual 3D environment, we
> > add in stuff like getting energy (e.g. from batteries or food),
> > staying safe, and partaking in social interaction....
> >
> > In reference to this sort of framework, I'm unsure if you're talking
> > about top-level goals or learned subgoals...
> >
> > -- Ben G
> >
> >
> >
> > *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
> > <https://www.listbox.com/member/archive/rss/303/7190161-766c6f07> |
> > Modify <https://www.listbox.com/member/?&;> Your Subscription
> > [Powered by Listbox] <http://www.listbox.com>
> >
> >
> >
> >
> > --
> > Abram Demski
> > http://lo-tho.blogspot.com/
> >
> >
>
>
>












