[agi] Singularity Flash Report! [2006 July 11]

2006-07-11 Thread A. T. Murray
http://www.whiteboxrobotics.com -- White Box Robotics (WBR) -- 
is bringing PC Bots to market: robots that operate under the 
control of a standard personal computer (PC) and are therefore 
ideal platforms for PC-based artificial intelligence.

http://www.914pcbots.com is a forum for discussion of the 
WBR PC Bots with an A.I. Zone for artificial intelligence.

http://groups.yahoo.com/group/win32forth/message/11332 is 
a sample message from the Win32Forth discussion forum, 
pertinent here because the message helps to document how 
discussion of Mind.Forth AI has shifted from the Win32Forth 
forum to the A.I. Zone of the White Box Robotics forum.

http://home.earthlink.net/~fjrussonc/AIMind/AI-Mind.html is 
the link which Frank J. Russo posted in the A.I. Zone forum
with an announcement that he has made his own version of the 
http://mind.sourceforge.net/mind4th.html -- Mind.Forth AI.

Upshot? Since the Mentifex AI breakthrough of 7 June 2006 --
http://www.mail-archive.com/agi@v2.listbox.com/msg03034.html 
-- we may be witnessing a Darwinian proliferation of AI Minds 
based on Mind.Forth but departing from it in higher code 
quality and added AI functionality.

http://digg.com/programming/Brain-Mind_Know_Thyself! sent 
eight thousand hits on 6 July 2006 to the 
http://mind.sourceforge.net/theory5.html webpage.

Respectfully submitted,

Arthur T. Murray/Mentifex
--
http://www.blogcharm.com/Singularity/25603/Timetable.html 



Re: [agi] How the Brain Represents Abstract Knowledge

2006-07-11 Thread James Ratcliff
  So my guess is that focusing on the practical level for building an agi
  system is sufficient, and it's easier than focusing on very abstract
  levels. When you have a system that can e.g. play soccer, tie shoe
  laces, build fences, throw objects to hit other objects, walk through a
  terrain to a spot, cooperate with other systems in achieving these
  practical goals

* The problem is that a certain level of abstractness must be achieved to
successfully carry through with all these tasks in a useful way. If we
teach and train a robot to open a door, and then present it with another
type of door that opens differently, it will not be able to handle it,
unless it can reason at a higher level, using abstract knowledge of
doors, movement and handles. This is very important to making a general
intelligence. Simple visual object detection has the same problem. It
seems to appear in all lines of planning, acting and reasoning processes.

arnoud [EMAIL PROTECTED] wrote:

  On Friday 16 June 2006 15:37, Eric Baum wrote:

   Ben: As for the "prediction" paradigm, it is true that any aspect of
   mental activity can be modeled as a prediction problem, but it
   doesn't follow that this is always the most useful perspective.

   arnoud: I think it is, because all that needs to be done is achieve
   goals in the future. And all you need to know is what actions/plans
   will reach those goals. So all you need is (correct) prediction.

   It is demonstrably untrue that the ability to predict the effects of
   any action suffices to decide what actions one should take to reach
   one's goals.

  But in most practical everyday situations there are not that many
  action options to choose from. I don't really care if that is not the
  case in the context of Turing machines. My focus is on everyday
  practical situations. Still it is true that besides a prediction
  system, an action proposal system is necessary. That action system
  must learn to propose the most plausible actions given a situation;
  the prediction system can then calculate the results for each action
  and determine which is closest to the goal that has been set.

  This is essential. If a long-term plan were formulated only in terms
  of (very concrete) microlevel concepts, there would be a near-infinity
  of possible plans, and plan descriptions would be enormously long and
  would contain a lot of counterfactuals, because a lot of details are
  not known yet (causing another combinatorial explosion). If you wanted
  to go to Holland and made a plan like: move leg up, put hand on phone,
  turn left, etc., planning would be unfeasible. Instead you make a more
  abstract plan, like: order ticket, go to airport, take plane, go to
  hotel. You formulate it on the right level of abstraction. And during
  the execution of the high-level plan (go to Holland) it would cause
  more concrete plans (go to airport), which would cause more concrete
  plans (drive in car), and so on until the level of physical body
  movement is reached (step on brake). Each level of abstraction is tied
  to a certain time scale. A plan and a prediction have a certain
  (natural) lifetime that is on the time scale of their level of
  abstraction.
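[Editor's note: the proposal-plus-prediction loop arnoud describes above
can be summarized in a few lines of Python. This is a minimal sketch;
propose, predict, and distance are hypothetical stand-ins for learned
components, not anything from Mind.Forth, Novamente, or Soar.]

    # Sketch of arnoud's two-part scheme: an action proposal system
    # suggests plausible actions, a prediction system simulates each one,
    # and the action whose predicted outcome lies closest to the goal wins.
    def propose(situation):
        # A learned component in a real system; a fixed menu here.
        return ["open door", "knock", "wait"]

    def predict(situation, action):
        # Stand-in world model: the outcome is just a labeled successor state.
        return situation + ", then " + action

    def distance(outcome, goal):
        # Toy metric: 0 if the goal description appears in the outcome.
        return 0 if goal in outcome else 1

    def choose_action(situation, goal):
        return min(propose(situation),
                   key=lambda a: distance(predict(situation, a), goal))

    print(choose_action("standing at door", "open door"))  # -> open door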
One thing I have been working on in these regards is the use of a
'script system'. It seems very impractical to have the AGI try and
recreate these plans every single time, and we can use the scripts to
abstract and reason about tasks and to create new scripts. We as humans
live most of our lives doing very repetitive tasks: I drive to work
every day, eat, work and drive home. I do these things automatically,
and most of the time don't put a lot of thought into them, I just
follow the script. In the case of planning a trip like that, we may not
know the exact details, but we know the overview of what to do, so we
could take a script of travel planning, copy it, and use it as a base
template for acting. This does not remove the combinatorial explosion
search-planning problem of having an infinite amount of choices for
each action, but it does give us a fall-back plan if we are pressed for
time or cannot find another solution currently.

I am working in a small virtual world right now, implementing a simple
set of tasks in a house environment. Another thought I am working on is
some kind of semi-supervised learning for the agents, and an
interactive method for defining actions and scripts. It doesn't appear
fruitful to create an agent, define a huge set of actions, give it a
goal, and expect it to successfully achieve the goal; the search
pattern just gets too large, and the agent becomes concerned with an
infinite variety of useless repetitive choices.

After gathering a number of scripts an agent can then choose among the
scripts, or revert down to a higher-level set of actions it can
perform.

James Ratcliff
http://falazar.com
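[Editor's note: the script idea above lends itself to a simple template
data structure. A minimal sketch, assuming a script is just an ordered
list of abstract steps with slots to fill in; the class and field names
are illustrative, not taken from James's actual system.]

    # Scripts as reusable plan templates: copy one and fill in the
    # situation-specific details instead of planning from scratch.
    import copy

    class Script:
        def __init__(self, name, steps):
            self.name = name
            self.steps = steps  # ordered abstract steps with {slots}

        def instantiate(self, **bindings):
            # Copy the template and bind its slots to this situation.
            new = copy.deepcopy(self)
            new.steps = [s.format(**bindings) for s in new.steps]
            return new

    travel = Script("plan trip", ["order ticket to {place}", "go to airport",
                                  "take plane", "go to hotel in {place}"])
    print(travel.instantiate(place="Holland").steps)
    # -> ['order ticket to Holland', 'go to airport', 'take plane',
    #     'go to hotel in Holland']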


Re: [agi] How the Brain Represents Abstract Knowledge

2006-07-11 Thread Yan King Yin

On 7/12/06, James Ratcliff [EMAIL PROTECTED] wrote: 
 This is essential. If a long-term plan were formulated only in terms of (very concrete) microlevel concepts, there would be a near-infinity of possible plans, and plan descriptions would be enormously long and would contain a lot of counterfactuals, because a lot of details are not known yet (causing another combinatorial explosion). If you wanted to go to Holland and made a plan like: move leg up, put hand on phone, turn left, etc., planning would be unfeasible. Instead you make a more abstract plan, like: order ticket, go to airport, take plane, go to hotel. You formulate it on the right level of abstraction.
  And during the execution of the high-level plan (go to Holland) it would cause more concrete plans (go to airport), which would cause more concrete plans (drive in car), and so on until the level of physical body movement is reached (step on brake). Each level of abstraction is tied to a certain time scale. A plan and a prediction have a certain (natural) lifetime that is on the time scale of their level of abstraction.
  One thing I have been working on in these regards is the use of a 'script system'
 []
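[Editor's note: the level-by-level refinement in the passage quoted
above is easy to state as a recursion. A minimal sketch, assuming a
hypothetical REFINEMENTS table mapping an abstract step to more
concrete substeps; a real system would have to learn or search for
these decompositions rather than look them up.]

    # Recursive plan refinement: expand each abstract step into more
    # concrete substeps until the level of physical body movement.
    REFINEMENTS = {
        "go to Holland": ["order ticket", "go to airport",
                          "take plane", "go to hotel"],
        "go to airport": ["walk to car", "drive in car"],
        "drive in car":  ["step on gas", "step on brake"],
    }

    def expand(step, depth=0):
        print("  " * depth + step)
        for substep in REFINEMENTS.get(step, []):  # leaf if no refinement
            expand(substep, depth + 1)

    expand("go to Holland")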

Hi James, have you looked at Soar? They seem to have focused on the issue of complex planning right from the beginning.

Ben: If you have the time, I wish you could explain the key differences between Novamente and Soar. I'd be glad to work with Novamente if it has some nice features that Soar is unlikely to have in the (near or medium) future.


YKY



Re: [agi] How the Brain Represents Abstract Knowledge

2006-07-11 Thread arnoud
On Tuesday 11 July 2006 18:49, James Ratcliff wrote:
   So my guess is that focusing on the practical level for building an agi
   system is sufficient, and it's easier than focusing on very abstract
   levels. When you have a system that can e.g. play soccer, tie shoe
   laces, build fences, throw objects to hit other objects, walk through a
   terrain to a spot, cooperate with other systems in achieving these
   practical goals

  * The problem is that a certain level of abstractness must be achieved to
 successfully carry through with all these tasks in a useful way.

That is the big problem, I agree, but not exactly the problem I wrote about.

 If we 
 teach and train a robot to open a door, and then present it with another
 type of door that opens differently, it will not be able to handle it,
 unless it can reason at a higher level, using abstract knowledge of doors,
 movement and handles.  This is very important to making a general
 intelligence.  Simple visual object detection has the same problem. It  
 seems to appear in all lines of planning, acting and reasoning processes.

Agreed.

--


 One thing I have been working on in these regards is the use of a 'script
 system' It seems very impractical to have the AGI try and recreate these
 plans every single time, and we can use the scripts to abstract and reason
 about tasks and to create new scripts. We as humans live most of our lives
 doing very repetitive tasks, I drive to work every day, eat, work and drive
 home.  I do these things automatically, and most of the time don't put a lot
 of thought into them, I just follow the script. In the case of planning a
 trip like that, we may not know the exact details, but we know the overview
 of what to do, so we could take a script of travel planning, copy it, and
 use it as a base template for acting. 

This doesn't sound bad, but you ignore the problem of representation. In what 
representational system do you express those scripts? How do you make sure 
that a system can effectively and efficiently express effective and efficient 
plans, procedures and actions in it (avoiding the autistic representational 
systems of expert systems)? And how can a system automatically generate such 
a representational system (recursively, so that it can stepwise abstract away 
from the sensory level)? And how does it know which representational system 
is relevant in a situation?

Concept formation, how does it happen?

 This does not remove the 
 combinatorial explosion search-planning problem of having an infinite
 amount of choices for each action, but does give us a fall-back plan, if we
 are pressed for time, or cannot find another solution currently.

   I am working in a small virtual world right now, and implementing a
 simple set of tasks in a house environment. Another thought I am working on
 is some kind of semi-supervised learning for the agents, and an interactive
 method for defining actions and scripts.  

Interactive Method? Why should this be called AI?

 It doesnt appear fruitful to 
 create an agent, define a huge set of actions, give it a goal, and expect
 it to successfully achieve the goal, the search pattern just gets too large,
 and it becomes concerned with an infinite variety of useless repetitive
 choices.

So, in other words, looking for an agi system is not very fruitful?


 After gathering a number of scripts an agent can then choose among the
 scripts, or revert down to a higher-level set of actions it can perform.

It doesn't seem to be very interesting, in the context of the agi mailing 
list.

Arnoud



[agi] Soar vs Novamente

2006-07-11 Thread James Ratcliff
(From a former Soar researcher) I don't have the time to get involved
in a big discussion board, but just in case nobody else replies I
thought I'd send you a couple of sentences.

Soar at its core is a pretty simple beast. It's a very high-performance
production rule system with built-in support for goal hierarchies,
operators and learning. This is placed within a strong theory of how to
build and organize large complex AI systems. It represents all
knowledge symbolically, which seems like a big difference from
Novamente, which appears to build in probabilistic reasoning at a more
primitive level.

One of Soar's main strengths is its longevity--something of an
existence proof for its value. It has been around for 20+ years now and
still has a very active research community associated with it. It's
been used in a vast range of different projects and has some very
notable successes, such as systems used to control tactical fighter
aircraft in large-scale military simulations. There's also a company
(http://www.soartech.com/) that is largely based around building AI
systems using Soar.

In evaluating it, I'd say Soar's specialty is problems that require
integrating large amounts of complex knowledge from multiple sources.
If you're just trying to solve one specific problem (e.g. finding a
best plan to get from A to B) then a general architecture isn't the
best choice. You're better off with a tool that does just the one thing
you want--like a pure planner in that case. But if you're interested in
integrating lots of knowledge together, Soar is a good choice.

I've not used Novamente so I can't say how well it stacks up. From a
quick reading it seems like Novamente has perhaps more of a "bottom-up"
approach to knowledge and reasoning, as they talk about patterns
emerging from the environmental data. That's a lot closer to the neural
network/connectionist/GA school of thought than Soar, which is more of
a classic, top-down reasoning system with high-level goals decomposed
into steadily smaller pieces.

Generally, the bottom-up pattern-based systems do better at noisy
pattern recognition problems (perception problems like recognizing
letters in scanned OCR text, or building complex perception-action
graphs where the decisions are largely probabilistic, like playing
backgammon or assigning labels to chemical molecules). Top-down
reasoning systems like Soar generally do better at higher-level
reasoning problems: selecting the correct formation and movements for a
squad of troops when clearing a building, or receiving English
instructions from a human operator to guide a robot through a burning
building.

I don't know if any of that helps, and I may have misplaced Novamente
in the scheme of things -- I've just scanned that work briefly.

Doug (Former Soar researcher)
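[Editor's note: for readers unfamiliar with production systems, the
match-fire cycle Doug refers to can be sketched generically. This is an
illustrative toy, not Soar's actual architecture, rule syntax, or
decision cycle.]

    # Generic production-rule loop: repeatedly match rule conditions
    # against working memory and fire any rule that adds new facts.
    rules = [
        # (name, condition on working memory, facts the rule asserts)
        ("open-door",  lambda wm: "at door" in wm,   {"door open"}),
        ("enter-room", lambda wm: "door open" in wm, {"inside"}),
    ]

    def run(working_memory):
        changed = True
        while changed:                       # cycle until quiescence
            changed = False
            for name, condition, additions in rules:
                if condition(working_memory) and not additions <= working_memory:
                    working_memory |= additions   # fire: assert new facts
                    print("fired:", name)
                    changed = True
        return working_memory

    print(run({"at door"}))
    # fired: open-door, fired: enter-room
    # -> {'at door', 'door open', 'inside'}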
James Ratcliff [EMAIL PROTECTED] wrote:

 Yan, I had heard of it, but had yet to read up on it. After briefly
 reading a bit here -- the main pages and the first tutorial -- I am
 duly impressed with its abilities, though leery of having to download
 and work with the large complex package it appears to be. Have you or
 anyone else downloaded and played with the application suite, or have
 any more insights into its workings, so that we may compare and
 contrast it with the Novamente project?

 Ref Site: http://sitemaker.umich.edu/soar

 I have also invited a person from Soar to join the discussion.

 One goal of mine is to produce a very simplistic web interface,
 similar to the uses of Open Mind Common Sense, that is easy to get
 into, edit, and possibly use the agent, add to the knowledge bases,
 and possibly open it up to a large section of the internet for
 supervised learning input.

 James Ratcliff

 Yan King Yin [EMAIL PROTECTED] wrote:

  On 7/12/06, James Ratcliff [EMAIL PROTECTED] wrote:
  []
  Hi James, have you looked at Soar?