Hey Jim, 
Your last statement is very informative.  I like it, actually.  It represents a 
clear position. 
> So the prioritization of goals should tend to develop as generalizations 
> and specializations used to characterize and evaluate the goals develop, 
> and as the situations that the goals are employed in develop. Are the 
> goals met? Are some goals (or sub-goals) interfering? Are the 
> characterizations of the goals accurate? Are the effects of achieving the 
> goal ineffective in achieving a higher goal? And so on.

Let me address one point that may be holding us back.  You said

> I believe that as we learn using generalizations
Taxonomic generalization is only one inference strategy.  When you encounter a 
particular event or episode and decide to make it an instance of a category, 
we can say that the instance activates the category, or that the category 
includes that instance. 
There are many other learning strategies besides taxonomic generalization, 
though. 
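To make the instance/category relationship concrete, here is a toy sketch in Python.  All the names are my own invention, purely illustrative, not a claim about any existing system:

```python
# Hypothetical sketch: an episode becomes an instance of a category
# when it exhibits the category's characteristic features; the episode
# then "activates" the category, and the category "includes" it.

class Category:
    def __init__(self, name, features):
        self.name = name
        self.features = set(features)
        self.instances = []   # episodes this category includes

    def matches(self, episode):
        # The episode activates the category if it exhibits all of
        # the category's characteristic features.
        return self.features <= set(episode)

def classify(episode, categories):
    """Attach the episode to every category it activates."""
    activated = [c for c in categories if c.matches(episode)]
    for c in activated:
        c.instances.append(episode)
    return activated
```

In the sense of your earlier examples, a { quick brown fox jumped } episode would activate a { quick brown fox } category.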
In fact, there are a lot of mechanisms going on in your last paragraph.  If 
you re-read my initial question, all the way below, I think we have a good 
basis for understanding one another.  
Someone earlier gave an excellent answer to my initial question which I have 
transformed slightly:
Given an initial set of goals, with urgency and priority = 0, we can prioritize 
goals based on the order of the steps involved in achieving each goal and the 
level of subgoals needed to complete each step.  This gives us a tentative goal 
prioritization, which I like. 
This was a good answer to the question, and well worth my posing it, I believe.  
This answer was given without the need to consider any of the additional 
factors in the last paragraph of your prior message (second paragraph of this 
message). 
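To make that tentative prioritization concrete, here is a toy sketch.  The names and the particular scoring rule are my own illustrative assumptions, not part of the original answer:

```python
# Toy sketch of the tentative prioritization described above: rank goals
# by the order of the steps involved and by the depth of subgoals needed.
# All names and the scoring rule are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    subgoals: List["Goal"] = field(default_factory=list)
    urgency: int = 0    # both start at 0, per the initial question
    priority: int = 0

def subgoal_depth(goal: Goal) -> int:
    """Levels of subgoals beneath this goal (0 for a leaf)."""
    if not goal.subgoals:
        return 0
    return 1 + max(subgoal_depth(g) for g in goal.subgoals)

def prioritize(steps: List[Goal]) -> List[Goal]:
    """Earlier steps get higher priority, and deeper subgoal trees add
    to it, so prerequisites with more structure rise to the top."""
    for order, goal in enumerate(steps):
        goal.priority = (len(steps) - order) + subgoal_depth(goal)
    return sorted(steps, key=lambda g: g.priority, reverse=True)
```

For example, a 'recharge' goal that is step one and needs a 'find an outlet' subgoal would outrank a step-two 'send status report' goal with no subgoals.  As noted, this ordering is only tentative; it ignores urgency and all of the contextual factors you raise.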
More questions to follow, no doubt.
~PM
------------------------------- 

> Date: Mon, 6 Oct 2014 16:49:45 -0400
> Subject: Re: [agi] Are all goals created equal?
> From: [email protected]
> To: [email protected]
> 
> How can you think of prioritizing goals without considering the
> contexts or situations that the various goals are used in? I don't
> mean that you have to use explicit examples, but that you really have
> to at least define an 'abstract' context or situation - as you did in
> your last exposition. I don't understand why this is not essential to
> the initial question.
> 
> When I reread your reply I realized that I did not quite understand
> what you were getting at and when I reread my previous response I
> realized that I did not quite understand some of what I had written.
> 
> I believe that as we learn we use generalizations: 'The next time
> something similar happens I will...' But how do we determine that
> some event in situation A is similar to some event in situation B? My
> guess is that it is easier to develop generalizations about different
> situations if you are paranoid. 'The people in this group are just
> interested in bashing my ideas even though they don't understand
> them...'  The advantage of that kind of simplification is that you can
> develop generalizations in a more structured way. The disadvantage is
> that you are probably missing a lot of basic insights that might
> be useful to you. Having a special skill might also cause you to see
> many events in paradigmatic ways. For example, habitually examining
> the effects of events in terms of their potential economic impact
> would tend to cause you to develop many generalizations that could be
> interrelated using the same measurable objective of economic value.
> 
> So then some goals supersede other goals as they tend to create new
> markers for their sub-goals. But even this is relative. A sub-goal can
> supersede a super-goal either by directly shaping it or because the
> relevant relative relations between the goals are only a subset of all
> of the relations between the goals.
> 
> So the prioritization of goals should tend to develop as
> generalizations and specializations used to characterize and evaluate
> the goals develop and as the situations that the goals are employed in
> develop. Are the goals met? Are some goals (or sub-goals) interfering?
> Are the characterizations of the goals accurate? Are the effects of
> achieving the goal ineffective in achieving a higher goal? And so on.
> 
> Jim Bromer
> 
> 
> On Tue, Sep 30, 2014 at 9:23 PM, Piaget Modeler via AGI <[email protected]> 
> wrote:
> >
> > Your exploration is going away from my initial question, why do goals get 
> > differing priorities.
> >
> > But I'll bite:  We can consider a situation to be a list of elements as 
> > follows:
> >
> > (prototype Situation
> >   :Items { }
> > )
> >
> > Your question: How will the AGI program differentiate the 
> > overgeneralizations that it will tend
> > to develop as it responds to various situations?
> >
> > Let's develop a framework with which to answer it.
> >
> > Let's assume an action has preconditions and postconditions.
> >
> > (prototype Action
> >   :Preconditions { }
> >   :Steps { }
> >   :Postconditions { }
> > )
> >
> >
> > For now, we'll ignore the steps and postconditions and focus on the 
> > preconditions.
> >
> > So now we can say that a precondition represents a situation just in case 
> > the precondition items
> > are the same as those in a situation.
> >
> >   (given [Situation ^ ?S :Items ?items]
> >          [Action ^ ?A (same @:Preconditions ?items)]
> >     do
> >          (print 'the preconditions of action ' ?A ' match the situation ' ?S))
> >
> > The same goes for the postconditions of an action.
> >
> > Suppose we have a set of situations.  And a set of actions.
> >
> > We can say that the items in the precondition of an action are more 
> > concrete than the items in a
> > situation if the precondition contains more items than the situation.
> >
> > [Situation ^ situation_1 :Items { quick brown fox } ]
> > [Action ^ Action_1 :Preconditions { quick brown fox jumped } ... ]
> >
> > We can say that the items in the precondition of an action are more 
> > abstract than the items in a
> > situation if the precondition contains fewer items than the situation.
> >
> > [Situation ^ situation_1 :Items { quick brown fox } ]
> > [Action ^ Action_1 :Preconditions { quick brown } ... ]
> >
> > We can say that the items in the precondition of an action are more 
> > specific than the items in a
> > situation if the precondition contains items that are lower in a 
> > taxonomy than those in the situation.
> >
> > [Situation ^ situation_1 :Items { quick brown fox } ]
> > [Action ^ Action_1 :Preconditions { quick dark-brown fox } ... ]
> > [Type :Subtype dark-brown :Supertype brown ]
> >
> >
> > We can say that the items in the precondition of an action are more general 
> > than the items in a
> > situation if the precondition contains items that are higher in a 
> > taxonomy than those in the situation.
> >
> > [Situation ^ situation_1 :Items { quick brown fox } ]
> > [Action ^ Action_1 :Preconditions { quick brown animal } ... ]
> > [Type :Subtype fox :Supertype animal ]
> >
> >
> > So can you please refine your question at this point, so that it may be 
> > answered.
> >
> > How will the AGI program differentiate the overgeneralizations that it 
> > will tend
> > to develop as it responds to various situations?
> >
> > Cheers,
> >
> > ~PM
> >
> > ________________________________
> > Date: Tue, 30 Sep 2014 20:53:01 -0400
> > Subject: Re: [agi] Are all goals created equal?
> > From: [email protected]
> > To: [email protected]
> >
> >
> > On Sun, Sep 28, 2014 at 12:17 AM, Piaget Modeler via AGI <[email protected]> 
> > wrote:
> >
> > You create an AGI and endow it with an initial set of actions and needs.
> >
> > You prioritize the needs so that, for example, power is more important
> > than signal strength. Beyond that, you plug it in and let it run.
> >
> > The AGI begins to create goals and subgoals in order to satisfy its needs.
> > But how does it prioritize these goals and subgoals? We can distinguish
> > urgent from important goals by creating separate attributes for urgency
> > and priority.
> > But why would one goal get a higher priority than another, aside from
> > inheriting its priority from a basic need?
> >
> >
> > I think the terms that you are using to characterize the problem are 
> > over-generalized.  I guess that when I complain about the necessity of 
> > examining questions like this with more detailed characterizations, 
> > people who understand what I am trying to say simply dismiss it as obvious. 
> > The AGI program would be designed to use (and to learn to use) actions 
> > under appropriate conditions, and every programmer 'gets' what conditional 
> > actions are. But perhaps the problem that I see is that we need to write 
> > these programs so that they will use broad generalizations (just as, 
> > according to my point of view, you wrote your message using broad 
> > generalizations).  So then the question of how the AGI program will 
> > differentiate the over-generalizations that it will tend to develop as it 
> > responds to various situations becomes the more essential question. So, for 
> > example, trying to get a better understanding of a particular situation and 
> > using that to further shape the characterization of that 'kind' of 
> > situation may become a prioritized goal just because it may be (or seem 
> > like) a necessary step that the program needs to take to better understand 
> > the situation.
> >
> > To add a little to this view of how an AGI might work, notice that a 
> > situation may be characterized as being like different -kinds- of 
> > situations. This basic insight is something that I never read in other 
> > people's comments - is that because it is so obvious?  A situation is 
> > comprised of the 'components' of the situation and each of these may lead 
> > to the development of different kinds of situations which the particular 
> > situation may be likened to. And the situation may belong to other 
> > categories that are based on the projection of some other kind of idea onto 
> > it.
> > Jim Bromer
> >
> 
> 
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
                                          

