In the OpenCog framework, we supply some hard-coded "top level goals", and then the system learns how to achieve these, which may include learning subgoals...
The top-level goals are generally of the form "keep such-and-such parameter within range [L,R]". Experience of novelty and discovery of new things are good general top-level goals. For a character in a virtual 3D environment, we add in things like getting energy (e.g. from batteries or food), staying safe, and partaking in social interaction.... In reference to this sort of framework, I'm unsure whether you're talking about top-level goals or learned subgoals...

-- Ben G

On Fri, Jun 8, 2012 at 5:36 PM, Abram Demski <[email protected]> wrote:

> PM,
>
> I'm just trying to get a better idea of what you mean by 'goal selection'
> and what kind of algorithm might be running to achieve that.
>
> In my thinking, the question of inputs should be solved 'automatically' if
> you have a good idea of what you want to be doing in terms of functionality.
>
> In most cases, curiosity will be a sufficient motivator for a system to
> demonstrate good/interesting behavior. We can apply RL-style value
> propagation to cause a system to do those things for which it has the least
> confidence about the result. What is your reaction to that kind of answer?
> Is it getting at the kind of thing you are trying to get at, or not?
>
> --Abram
>
> On Fri, Jun 8, 2012 at 1:57 PM, Piaget Modeler <[email protected]> wrote:
>
>> I'm trying to identify what inputs are important to goal selection for a
>> cognitive system/cognitive architecture.
>>
>> If we isolate the problem of goal selection, what factors and inputs are
>> necessary?
>>
>> For #2 - salient features such as urges, gaps, goals, impediments,
>> prediction accuracy, etc.
>>
>> For #3 - some candidates would be preferences or prior goal-selection
>> episodes.
>>
>> Your thoughts?
>>
>> ------------------------------
>> Date: Fri, 8 Jun 2012 11:57:45 -0700
>> Subject: Re: [agi] Attention
>> From: [email protected]
>> To: [email protected]
>>
>> PM,
>>
>> What are the salient objectives here?
>> What kinds of mechanisms are you considering?
>>
>> --Abram
>>
>> On Fri, Jun 8, 2012 at 11:04 AM, Piaget Modeler <[email protected]> wrote:
>>
>> Assuming an Attention module selects goals for an AGI, what are the
>> possible inputs to such a component?
>>
>> 1. The current state.
>> 2. Salient features in the current state, appropriately filtered.
>> 3. What else?
>>
>> Kindly advise.
>>
>> --
>> Abram Demski
>> http://lo-tho.blogspot.com/
>
> --
> Abram Demski
> http://lo-tho.blogspot.com/

--
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche

-------------------------------------------
AGI | Archives: https://www.listbox.com/member/archive/303/=now
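Ben's "keep such-and-such parameter within range [L,R]" style of top-level goal can be made concrete with a minimal sketch. The class, field names, and decay rule below are hypothetical illustrations, not OpenCog's actual API: goal satisfaction is taken to be 1.0 inside the range and to fall off with distance outside it.

```python
# Hypothetical sketch of a homeostatic "keep parameter in [L, R]" top-level
# goal. RangeGoal and its decay rule are illustrative, not OpenCog code.
from dataclasses import dataclass

@dataclass
class RangeGoal:
    name: str
    low: float          # L, lower bound of the acceptable range
    high: float         # R, upper bound of the acceptable range
    scale: float = 1.0  # how quickly satisfaction decays outside [L, R]

    def satisfaction(self, value: float) -> float:
        """1.0 when value lies within [low, high], decaying toward 0 outside."""
        if self.low <= value <= self.high:
            return 1.0
        dist = self.low - value if value < self.low else value - self.high
        return 1.0 / (1.0 + dist / self.scale)

# Example top-level goal for a virtual character, as in the message above.
energy = RangeGoal("energy", low=0.4, high=1.0)
print(energy.satisfaction(0.7))   # in range -> 1.0
print(energy.satisfaction(0.2))   # 0.2 below range -> 1/1.2, about 0.833
```

A goal-selection mechanism could then rank such goals by how far their monitored parameter has drifted out of range, which is one simple way the hard-coded top-level goals could feed an attention module.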
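Abram's suggestion, curiosity as a drive toward the actions whose outcomes the system is least confident about, can be sketched as follows. The agent class, the visit-count notion of confidence, and the tie-breaking are all assumptions made for illustration, not a specific published algorithm.

```python
# Hypothetical sketch of curiosity-driven selection: prefer the action whose
# outcome we are least confident about. Confidence here is just a function of
# how often a (state, action) pair has been tried; all names are illustrative.
from collections import defaultdict

class CuriousAgent:
    def __init__(self, actions):
        self.actions = list(actions)
        self.visits = defaultdict(int)   # (state, action) -> times tried

    def confidence(self, state, action) -> float:
        n = self.visits[(state, action)]
        return n / (n + 1.0)             # rises toward 1.0 with experience

    def choose(self, state):
        """Pick the least-confident action; ties go to the earliest-listed one."""
        return min(self.actions, key=lambda a: self.confidence(state, a))

    def observe(self, state, action):
        self.visits[(state, action)] += 1

agent = CuriousAgent(["look", "move", "grab"])
agent.observe("room", "look")
agent.observe("room", "look")
print(agent.choose("room"))   # "look" is now familiar, so prints "move"
```

A fuller version of the idea would propagate an uncertainty bonus through value estimates RL-style, so the system is drawn not just to untried actions but to regions of the state space whose consequences it predicts poorly.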
