>
> > So my guess is that focusing on the practical level for building an AGI
> > system is sufficient, and it's easier than focusing on very abstract
> > levels. When you have a system that can e.g. play soccer, tie shoe laces,
> > build fences, throw objects to hit other objects, walk through a terrain
> > to a spot, cooperate with other systems in achieving these practical
> > goals
>
* The problem is that a certain level of abstraction must be reached to carry out all of these tasks in a useful way.
If we teach and train a robot to open a door, and then present it with another type of door that opens differently, it will not be able to handle it unless it can reason at a higher level, using abstract knowledge of doors, movement and handles.  This is very important for a general intelligence.  Simple visual object detection has the same problem, and it seems to appear in all lines of planning, acting and reasoning.

arnoud <[EMAIL PROTECTED]> wrote:
On Friday 16 June 2006 15:37, Eric Baum wrote:
> Ben:
> >> As for the "prediction" paradigm, it is true that any aspect of
> >> mental activity can be modeled as a prediction problem, but it
> >> doesn't follow that this is always the most useful perspective.
>
> arnoud> I think it is, because all that needs to be done is achieve
> arnoud> goals in the future. And all you need to know is what
> arnoud> actions/plans will reach those goals. So all you need is
> arnoud> (correct) prediction.
>
> It is demonstrably untrue that the ability to predict the effects of
> any action suffices to decide what actions one should take to
> reach one's goals.

But in most practical everyday situations there are not that many action options to choose from. I don't really care if that is not the case in the context of Turing machines. My focus is on everyday practical situations.

Still it is true that besides a prediction system, an action proposal
system is necessary. That action system must learn to propose the
most plausible actions given a situation; the prediction system can
then calculate the results for each action and determine which is
closest to the goal that has been set.

This is essential. If a long-term plan were formulated only in terms of (very concrete) micro-level concepts, there would be a near-infinity of possible plans, and plan descriptions would be enormously long and would contain a lot of counterfactuals, because many details are not known yet (causing another combinatorial explosion). If you wanted to go to Holland and made a plan like "move leg up, put hand on phone, turn left, etc.", planning would be infeasible. Instead you make a more abstract plan, like: order ticket, go to airport, take plane, go to hotel. You formulate it at the right level of abstraction.

And during execution, the high-level plan (go to Holland) would give rise to more concrete plans (go to airport), which would in turn give rise to still more concrete plans (drive the car), and so on until the level of physical body movement is reached (step on the brake). Each level of abstraction is tied to a certain time scale. A plan, and a prediction, have a (natural) lifetime on the time scale of their level of abstraction.
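
A rough sketch of that recursive refinement (the refinement table and step names below are made up purely for illustration):

    # Hypothetical refinement table: each abstract step expands into more
    # concrete sub-steps; steps not in the table count as primitive movements.
    REFINEMENTS = {
        "go to Holland": ["order ticket", "go to airport", "take plane", "go to hotel"],
        "go to airport": ["drive car to airport"],
        "drive car to airport": ["step on brake"],   # ...and so on, truncated here
    }

    def refine(step):
        """Recursively expand an abstract step down to primitive movements."""
        if step not in REFINEMENTS:
            return [step]                            # primitive level reached
        plan = []
        for sub_step in REFINEMENTS[step]:
            plan.extend(refine(sub_step))
        return plan

In practice the expansion would be done lazily during execution, one level at a time, so the micro-level details are only filled in when their (much shorter) time scale is actually reached.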


One thing I have been working on in this regard is the use of a 'script system'.
It seems very impractical to have the AGI try to recreate these plans every single time; instead we can use scripts to abstract and reason about tasks, and to create new scripts.
  We as humans live most of our lives doing very repetitive tasks: I drive to work every day, eat, work, and drive home.  I do these things automatically, and most of the time I don't put a lot of thought into them; I just follow the script.
  In the case of planning a trip like that, we may not know the exact details, but we do know the overview of what to do, so we could take a travel-planning script, copy it, and use it as a base template for acting.
  This does not remove the combinatorial-explosion problem of having an infinite number of choices for each action, but it does give us a fall-back plan if we are pressed for time or cannot currently find another solution.
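
Roughly, a script here could be something like the following sketch (the class, the slot-filling scheme, and the step names are illustrative placeholders only, not the actual system):

    import copy

    # Illustrative script structure: a named sequence of steps with {slots}
    # that get filled in when the script is copied and used as a template.
    class Script:
        def __init__(self, name, steps):
            self.name = name
            self.steps = steps        # e.g. ["order ticket to {place}", ...]

        def instantiate(self, **slots):
            """Copy the script and fill in whatever details are already known."""
            new = copy.deepcopy(self)
            new.steps = [step.format(**slots) for step in new.steps]
            return new

    travel = Script("plan a trip",
                    ["order ticket to {place}", "go to airport",
                     "take plane to {place}", "go to hotel"])
    holland = travel.instantiate(place="Holland")   # ready-made fall-back plan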

  I am working in a small virtual world right now, implementing a simple set of tasks in a house environment.
Another thought I am working on is some kind of semi-supervised learning for the agents, and an interactive method for defining actions and scripts.  It doesn't appear fruitful to create an agent, define a huge set of actions, give it a goal, and expect it to successfully achieve the goal; the search space just gets too large, and the agent becomes preoccupied with an infinite variety of useless repetitive choices.

After gathering a number of scripts, an agent can then choose among the scripts, or fall back to the lower-level set of basic actions it can perform.
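
That choice step might look something like this sketch (reusing the illustrative Script objects from above; plan_from_primitives stands in for whatever search over basic actions the agent would otherwise do):

    # Illustrative selection step: prefer a stored script that matches the goal,
    # otherwise fall back to a (slower) search over the agent's basic actions.
    def select_plan(goal, scripts, plan_from_primitives):
        for script in scripts:
            if script.name == goal:           # crude matching, for illustration only
                return script.steps
        return plan_from_primitives(goal)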

James Ratcliff


Thank You
James Ratcliff
http://falazar.com

