I was thinking that if it was discovered that a particular action usually produced a particular response, then that knowledge would be turned into an implicit goal (if the expected response usually occurred after the action, then a matched expectation would be reinforcing). This would be the "prediction" of AI lore. The problem, though, is that most actions do not usually produce exactly the same response or reaction that had been seen before. So how could an AGI program determine that an action produced a kind of reaction (one that could be used as an intrinsic goal)? My answer to that question is that structural keys would help in identifying more complex situations that could be used to determine whether the action produced a kind of response. For example, your remarks are a response that is directly related to the subject matter of my original comment. So now I have a higher expectation that you will be able to better understand my understanding of awareness, which is a kind of meta-knowledge about a subject matter - and about how it can be used in an activation by a program.
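To make the idea concrete, here is a rough sketch of how an expectation learner might turn "this action usually produces this kind of response" into an implicit goal. All the names here (structural_key, ExpectationLearner, the toy topic vocabulary) are my own illustration, not any existing system; the structural key is deliberately simplified to a shared-vocabulary test.

```python
from collections import defaultdict

def structural_key(response):
    # Reduce a concrete response to a coarse structural category, so that
    # non-identical responses can still count as the same "kind" of reaction.
    # Toy version: the subset of a small topic vocabulary that the response shares.
    topic_words = {"awareness", "activation", "goal", "insight"}
    return frozenset(w for w in response.lower().split() if w in topic_words)

class ExpectationLearner:
    def __init__(self, threshold=0.6):
        self.counts = defaultdict(lambda: defaultdict(int))  # action -> key -> count
        self.totals = defaultdict(int)                       # action -> trials
        self.threshold = threshold

    def observe(self, action, response):
        # Record which kind of response followed the action.
        self.counts[action][structural_key(response)] += 1
        self.totals[action] += 1

    def implicit_goal(self, action):
        # If one kind of response usually follows the action, that expected
        # kind becomes an implicit goal; a later match would be reinforcing.
        if self.totals[action] == 0:
            return None
        key, n = max(self.counts[action].items(), key=lambda kv: kv[1])
        return key if n / self.totals[action] >= self.threshold else None

learner = ExpectationLearner()
learner.observe("comment", "remarks mentioning awareness")
learner.observe("comment", "a reply about awareness")
learner.observe("comment", "unrelated chatter")
goal = learner.implicit_goal("comment")  # the usual "kind" of response
```

After two of three trials land in the same category, the learner adopts that category as the implicit goal for the action; a fourth response matching it would count as intrinsic reinforcement.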
On Sat, Dec 7, 2013 at 4:51 PM, Piaget Modeler <[email protected]> wrote:

> One approach to this conundrum is to assume that "awareness" = activation.
>
> When a concept in the memory is activated, the AGI is aware of it. The question
> then becomes how to activate these concepts. How to bring about the activation.
> Then the AGI must set a goal to achieve the desired concept's activation and
> formulate or retrieve a solution which achieves the goal.
>
> When actions are taken towards a goal, then reflective processes must evaluate
> the success or failure of the attempt. In PAM-P2, attempts have a time frame.
> When a particular solution is attempted, if the goal is not achieved within the
> time frame, the solution has failed; otherwise it has succeeded. Regulatory
> processes will reinforce and recombine successful solutions, and correct failed
> solutions. Moreover, compensatory processes will act upon the solution to
> achieve a desired state.
>
> All this is using intrinsic reinforcement, rather than extrinsic reinforcement.
>
> ~PM
>
> -----------
>
> On 12/07/2013 03:16 AM, Jim Bromer wrote:
> >> One of the problems is how do you get an AGI program to be 'aware'
> >> that it has found an appropriate solution to 'understanding' a
> >> situation without some kludgy method of external reinforcement? I
> >> believe that key structural insights may play an important role in
> >> this process. I assume that most learning takes place through an
> >> incremental process of accumulating small pieces of insight or
> >> know-how.
-- Jim Bromer
