[agi] Soar vs Novamente

2006-07-11 Thread James Ratcliff
(From a former Soar researcher) I don't have the time to get involved in a big discussion board, but just in case nobody else replies I thought I'd send you a couple of sentences.

Soar at its core is a pretty simple beast. It's a very high-performance production rule system with built-in support for goal hierarchies, operators and learning. This is placed within a strong theory of how to build and organize large, complex AI systems. It represents all knowledge symbolically, which seems like a big difference from Novamente, which appears to build in probabilistic reasoning at a more primitive level.

One of Soar's main strengths is its longevity, something of an existence proof for its value. It has been around for 20+ years now and still has a very active research community associated with it. It's been used in a vast range of different projects and has some very notable successes, such as systems used to control tactical fighter aircraft in large-scale military simulations. There's also a company (http://www.soartech.com/) that is largely based around building AI systems using Soar.

In evaluating it, I'd say Soar's specialty is problems that require integrating large amounts of complex knowledge from multiple sources. If you're just trying to solve one specific problem (e.g. finding the best plan to get from A to B), then a general architecture isn't the best choice; you're better off with a tool that does just the one thing you want, like a pure planner in that case. But if you're interested in integrating lots of knowledge together, Soar is a good choice.

I've not used Novamente so I can't say how well it stacks up. From a quick reading it seems like Novamente takes perhaps more of a "bottom-up" approach to knowledge and reasoning, as they talk about patterns emerging from environmental data. That's a lot closer to the neural network/connectionist/GA school of thought than Soar, which is more of a classic, top-down reasoning system with high-level goals decomposed into steadily smaller pieces.

Generally, the bottom-up, pattern-based systems do better at noisy pattern recognition problems: perception problems like recognizing letters in scanned OCR text, or building complex perception-action graphs where the decisions are largely probabilistic, like playing backgammon or assigning labels to chemical molecules. Top-down reasoning systems like Soar generally do better at higher-level reasoning problems: selecting the correct formation and movements for a squad of troops when clearing a building, or receiving English instructions from a human operator to guide a robot through a burning building.

I don't know if any of that helps, and I may have misplaced Novamente in the scheme of things; I've just scanned that work briefly.

Doug (former Soar researcher)

James Ratcliff <[EMAIL PROTECTED]> wrote:
Yan,

I had heard of it, but had yet to read up on it. After briefly reading a bit here (the main pages and the first tutorial) I am duly impressed with its abilities, though leery of having to download and work with the large, complex package it appears to be. Have you or anyone else downloaded and played with the application suite, or have any more insights into its workings, so that we may compare and contrast it with the Novamente project?

Ref site: http://sitemaker.umich.edu/soar

I have also invited a person from Soar to join the discussion. One goal of mine is to produce a very simple web interface, similar to the uses of Open Mind Common Sense, that is easy to get into, edit, and possibly use to run the agent and add to the knowledge bases, possibly opening it up to a large section of the internet for supervised learning input.

James Ratcliff

Yan King Yin <[EMAIL PROTECTED]> wrote:
On 7/12/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
> This is essential. If a long-term plan were formulated only in terms of (very concrete) micro-level concepts, there would be a near-infinity of possible plans, and plan descriptions would be enormously long and would contain a lot of counterfactuals, because a lot of details are not known yet (causing another combinatorial explosion). If you wanted to go to Holland and made a plan like "move leg up, put hand on phone, turn left", etc., planning would be infeasible. Instead you make a more abstract plan, like "order ticket, go to airport, take plane, go to hotel." You formulate it at the right level of abstraction.
>
> And during the execution of the high-level plan (go to Holland) it would cause more concrete plans (go to airport), which would cause more concrete plans (drive in car), and so on until the level of physical body movement is reached (step on brake). Each level of abstraction is tied to a certain time scale. A plan and a prediction have a certain (natural) lifetime that is on the time scale of their level of abstraction.
>
> One thing I have been working on in these regards is the use of a 'script system'
> []
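For readers who have not worked with a production-rule architecture, the decision cycle Doug describes (rules match the current working memory, propose operators, and a selected operator is applied in pursuit of the goal) can be caricatured in a few lines of Python. This is only a toy sketch using the Holland example from the quoted text; all names are invented, and none of this reflects Soar's actual code, API, or knowledge representation.

# Toy sketch of a production-rule decision cycle (invented names; not Soar).
# Rules test working memory and propose operators; the loop applies one
# operator per cycle until the goal is reached or nothing matches.

working_memory = {"at": "home", "goal": "Holland"}

rules = [
    (lambda wm: wm["goal"] == "Holland" and wm["at"] == "home",
     {"name": "drive-to-airport", "effect": {"at": "airport"}}),
    (lambda wm: wm["goal"] == "Holland" and wm["at"] == "airport",
     {"name": "board-plane", "effect": {"at": "Holland"}}),
]

def decision_cycle(wm):
    """One propose-select-apply cycle."""
    if wm["at"] == wm["goal"]:
        return False                 # goal achieved
    proposed = [op for cond, op in rules if cond(wm)]
    if not proposed:
        return False                 # impasse; Soar would subgoal here
    chosen = proposed[0]             # Soar uses preferences to choose
    wm.update(chosen["effect"])
    print("applied", chosen["name"], "->", wm)
    return True

while decision_cycle(working_memory):
    pass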

Re: [agi] Request for Book Review

2006-07-11 Thread James Ratcliff
Mark,

I'm flipping through it online, thanks to Google, right now. Here is the link:
http://books.google.com/books?vid=ISBN1591404835&id=M9uk__SlVgUC&pg=PA76&lpg=PA74&vq=planning&dq=Visions+Of+Mind:+Architectures+For+Cognition+And+Affect&sig=2sK_r2LNlpb0qQ51dnabJi59TjM

On pages 75-76 it discusses modularity in AI, which has been a topic recently in the emails. If you have any other areas you want to discuss, you can post the page up here so we can all reference it.

James

Mark Waser <[EMAIL PROTECTED]> wrote:
Has anyone read Visions Of Mind: Architectures For Cognition And Affect by Darryl Davis and is willing to comment on it?

Thank You
James Ratcliff
http://falazar.com
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] How the Brain Represents Abstract Knowledge

2006-07-11 Thread James Ratcliff
Yan,

I had heard of it, but had yet to read up on it. After briefly reading a bit here (the main pages and the first tutorial) I am duly impressed with its abilities, though leery of having to download and work with the large, complex package it appears to be. Have you or anyone else downloaded and played with the application suite, or have any more insights into its workings, so that we may compare and contrast it with the Novamente project?

Ref site: http://sitemaker.umich.edu/soar

I have also invited a person from Soar to join the discussion. One goal of mine is to produce a very simple web interface, similar to the uses of Open Mind Common Sense, that is easy to get into, edit, and possibly use to run the agent and add to the knowledge bases, possibly opening it up to a large section of the internet for supervised learning input.

James Ratcliff

Yan King Yin <[EMAIL PROTECTED]> wrote:
On 7/12/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
> This is essential. If a long-term plan were formulated only in terms of (very concrete) micro-level concepts, there would be a near-infinity of possible plans, and plan descriptions would be enormously long and would contain a lot of counterfactuals, because a lot of details are not known yet (causing another combinatorial explosion). If you wanted to go to Holland and made a plan like "move leg up, put hand on phone, turn left", etc., planning would be infeasible. Instead you make a more abstract plan, like "order ticket, go to airport, take plane, go to hotel." You formulate it at the right level of abstraction.
>
> And during the execution of the high-level plan (go to Holland) it would cause more concrete plans (go to airport), which would cause more concrete plans (drive in car), and so on until the level of physical body movement is reached (step on brake). Each level of abstraction is tied to a certain time scale. A plan and a prediction have a certain (natural) lifetime that is on the time scale of their level of abstraction.
>
> One thing I have been working on in these regards is the use of a 'script system'
> []

Hi James, have you looked at Soar? They seem to have focused on the issue of complex planning right from the beginning.

Ben: If you have the time, I wish you could explain the key differences between Novamente and Soar. I'd be glad to work with Novamente if it has some nice features that Soar is unlikely to have in the (near or medium) future.

YKY

Thank You
James Ratcliff
http://falazar.com
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] How the Brain Represents Abstract Knowledge

2006-07-11 Thread arnoud
On Tuesday 11 July 2006 18:49, James Ratcliff wrote:
> > > So my guess is that focusing on the practical level for building an agi
> > > system is sufficient, and it's easier than focusing on very abstract
> > > levels. When you have a system that can e.g. play soccer, tie shoe
> > > laces, build fences, throw objects to hit other objects, walk through a
> > > terrain to a spot, cooperate with other systems in achieving these
> > > practical goals
>
>  * The problem is a certain level of abstractness must be achieved to
> successfully carry through with all these tasks in a useful way.

That is the big problem, I agree, but not exactly the problem I wrote about.

> If we 
> teach and train a robot to open a door, and then present it with another
> type of door that opens differently, it will not be able to handle it,
> unless it can reason at a higher level, using abstract knowledge of doors,
> movement and handles.  This is very important to making a general
> intelligence.  Simple visual object detection has the same problem. It  
> seems to appear in all lines of planning, acting and reasoning processes.

Agreed.

--

>
> One thing I have been working on in these regards is the use of a 'script
> system' It seems very impractical to have the AGI try and recreate these
> plans every single time, and we can use the scripts to abstract and reason
> about tasks and to create new scripts. We as humans live most of our lives
> doing very repetitive tasks, I drive to work every day, eat, work and drive
> home.  I do these things automatically, and most of the time don't put a lot
> of thought into them, I just follow the script. In the case of planning a
> trip like that, we may not know the exact details, but we know the overview
> of what to do, so we could take a script of travel planning, copy it, and
> use it as a base template for acting. 

This doesn't sound bad, but you ignore the problem of representation. In what 
representational system do you express those scripts? How do you make sure 
that a system can effectively and efficiently express effective and efficient 
plans, procedures and actions in it (avoiding the autistic representational 
systems of expert systems)? And how can a system automatically generate such 
a representational system (recursively, so that it can stepwise abstract away 
from the sensory level)? And how does it know which representational system 
is relevant in a situation?

Concept formation, how does it happen?

> This does not remove the 
> combinatorial explosion search-planning problem of having an infinite
> amount of choices for each action, but does give us a fall-back plan, if we
> are pressed for time, or cannot find another solution currently.
>
>   I am working in a small virtual world right now, and implementing a
> simple set of tasks in a house environment. Another thought I am working on
> is some kind of semi-supervised learning for the agents, and an interactive
> method for defining actions and scripts.  

Interactive Method? Why should this be called AI?

> It doesn't appear fruitful to 
> create an agent, define a huge set of actions, give it a goal, and expect
> it to successfully achieve the goal, the search pattern just gets too large,
> and it becomes concerned with an infinite variety of useless repetitive
> choices.

So, in other words, looking for an agi system is not very fruitful?

>
> After gathering a number of scripts an agent can then choose among the
> scripts, or revert down to a higher-level set of actions it can perform.

It doesn't seem to be very interesting, in the context of the agi mailing 
list.

Arnoud

---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] How the Brain Represents Abstract Knowledge

2006-07-11 Thread Yan King Yin

On 7/12/06, James Ratcliff <[EMAIL PROTECTED]> wrote: 
> This is essential. If a long-term plan were formulated only in terms of (very concrete) micro-level concepts, there would be a near-infinity of possible plans, and plan descriptions would be enormously long and would contain a lot of counterfactuals, because a lot of details are not known yet (causing another combinatorial explosion). If you wanted to go to Holland and made a plan like "move leg up, put hand on phone, turn left", etc., planning would be infeasible. Instead you make a more abstract plan, like "order ticket, go to airport, take plane, go to hotel." You formulate it at the right level of abstraction.
>
> And during the execution of the high-level plan (go to Holland) it would cause more concrete plans (go to airport), which would cause more concrete plans (drive in car), and so on until the level of physical body movement is reached (step on brake). Each level of abstraction is tied to a certain time scale. A plan and a prediction have a certain (natural) lifetime that is on the time scale of their level of abstraction.
>
> One thing I have been working on in these regards is the use of a 'script system'
> []
 
Hi James,  have you looked at Soar?  They seem to have focused on the issue of complex planning right from the beginning.
 
Ben:  If you have the time, I wish you could explain the key differences between Novamente and Soar.  I'd be glad to work with Novamente if it has some nice features that Soar is unlikely to have in the (near or medium) future.

 
YKY
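As an illustrative aside, the quoted passage's idea of stating a plan at the right level of abstraction and refining it into more concrete steps only as it is executed can be sketched in a few lines. This is a toy illustration with invented names; it is not taken from Novamente, Soar, or the 'script system' mentioned in the thread.

# Toy sketch of lazy top-down plan refinement (all names invented).
# An abstract step is expanded into sub-steps only when it is about to be
# executed; steps with no refinement entry are treated as primitive actions.

refinements = {
    "go to Holland": ["order ticket", "go to airport", "take plane", "go to hotel"],
    "go to airport": ["pack bag", "drive in car"],
    "drive in car":  ["start engine", "steer", "step on brake"],
}

def execute(step, depth=0):
    print("  " * depth + step)
    for sub in refinements.get(step, []):
        execute(sub, depth + 1)

execute("go to Holland")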

To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Singularity Flash Report! [2006 July 11]

2006-07-11 Thread J. Andrew Rogers


On Jul 11, 2006, at 7:21 AM, A. T. Murray wrote:
[...elided...]


*plonk*


J. Andrew Rogers

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] How the Brain Represents Abstract Knowledge

2006-07-11 Thread James Ratcliff
>> > So my guess is that focusing on the practical level for building an agi
>> > system is sufficient, and it's easier than focusing on very abstract
>> > levels. When you have a system that can e.g. play soccer, tie shoe laces,
>> > build fences, throw objects to hit other objects, walk through a terrain
>> > to a spot, cooperate with other systems in achieving these practical
>> > goals

* The problem is a certain level of abstractness must be achieved to successfully carry through with all these tasks in a useful way. If we teach and train a robot to open a door, and then present it with another type of door that opens differently, it will not be able to handle it, unless it can reason at a higher level, using abstract knowledge of doors, movement and handles. This is very important to making a general intelligence. Simple visual object detection has the same problem. It seems to appear in all lines of planning, acting and reasoning processes.

arnoud <[EMAIL PROTECTED]> wrote:
> On Friday 16 June 2006 15:37, Eric Baum wrote:
> > Ben:
> > >> As for the "prediction" paradigm, it is true that any aspect of mental
> > >> activity can be modeled as a prediction problem, but it doesn't follow
> > >> that this is always the most useful perspective.
> >
> > arnoud> I think it is, because all that needs to be done is achieve goals
> > arnoud> in the future. And all you need to know is what actions/plans will
> > arnoud> reach those goals. So all you need is (correct) prediction.
> >
> > It is demonstrably untrue that the ability to predict the effects of any
> > action suffices to decide what actions one should take to reach one's goals.
>
> But in most practical everyday situations there are not that many action
> options to choose from. I don't really care if that is not the case in the
> context of Turing machines. My focus is on everyday practical situations.
> Still, it is true that besides a prediction system, an action proposal
> system is necessary. That action system must learn to propose the most
> plausible actions given a situation; the prediction system can then
> calculate the results for each action and determine which is closest to
> the goal that has been set.

This is essential. If a long-term plan were formulated only in terms of (very concrete) micro-level concepts, there would be a near-infinity of possible plans, and plan descriptions would be enormously long and would contain a lot of counterfactuals, because a lot of details are not known yet (causing another combinatorial explosion). If you wanted to go to Holland and made a plan like "move leg up, put hand on phone, turn left", etc., planning would be infeasible. Instead you make a more abstract plan, like "order ticket, go to airport, take plane, go to hotel." You formulate it at the right level of abstraction.

And during the execution of the high-level plan (go to Holland) it would cause more concrete plans (go to airport), which would cause more concrete plans (drive in car), and so on until the level of physical body movement is reached (step on brake). Each level of abstraction is tied to a certain time scale. A plan and a prediction have a certain (natural) lifetime that is on the time scale of their level of abstraction.

One thing I have been working on in these regards is the use of a 'script system'. It seems very impractical to have the AGI try and recreate these plans every single time, and we can use the scripts to abstract and reason about tasks and to create new scripts. We as humans live most of our lives doing very repetitive tasks: I drive to work every day, eat, work and drive home. I do these things automatically, and most of the time don't put a lot of thought into them, I just follow the script. In the case of planning a trip like that, we may not know the exact details, but we know the overview of what to do, so we could take a script of travel planning, copy it, and use it as a base template for acting. This does not remove the combinatorial explosion search-planning problem of having an infinite amount of choices for each action, but does give us a fall-back plan if we are pressed for time, or cannot find another solution currently.

I am working in a small virtual world right now, and implementing a simple set of tasks in a house environment. Another thought I am working on is some kind of semi-supervised learning for the agents, and an interactive method for defining actions and scripts. It doesn't appear fruitful to create an agent, define a huge set of actions, give it a goal, and expect it to successfully achieve the goal; the search pattern just gets too large, and it becomes concerned with an infinite variety of useless repetitive choices.

After gathering a number of scripts an agent can then choose among the scripts, or revert down to a higher-level set of actions it can perform.

James Ratcliff

Thank You
James Ratcliff
http://falazar.com
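The 'script system' described above (reuse a stored action sequence for a familiar task, and fall back to full search-based planning only when no script applies) is easy to prototype. Below is a hypothetical toy sketch; every name in it is invented, and the placeholder planner stands in for whatever search the virtual-world agent would actually run.

# Hypothetical sketch of a script system (invented names, illustration only).
# Stored scripts are reused when they match the task; expensive planning is
# only the fall-back.

scripts = {
    "commute to work": ["leave house", "drive to office", "park", "walk in"],
    "plan a trip":     ["pick destination", "order ticket", "book hotel"],
}

def plan_from_scratch(task):
    # Placeholder for a search-based planner.
    return ["<search for a plan to: %s>" % task]

def act(task):
    """Prefer a stored script; plan from scratch when none applies."""
    steps = scripts.get(task) or plan_from_scratch(task)
    for step in steps:
        print(task, "->", step)

act("commute to work")   # reuses the stored script
act("repair the fence")  # no script, so the planner is invoked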


[agi] strong and weakly self improving processes

2006-07-11 Thread Eric Baum

Eliezer,

I enjoyed "Levels of Organization in General Intelligence". I very
much agree that there must be depth and complexity in the
computation. There is one point, however, I wish to clarify.

You state "The accelerating development of the hominid family and
the exponential increase in human culture are both instances of
*weakly self-improving processes*, characterized by an externally
constant process (evolution, modern human brains) acting on a
complexity pool (hominid genes, cultural knowledge) whose elements
interact synergetically. If we divide the process into an improver and
a content base, then weakly self-improving processes are characterized
by an external improving process with roughly constant characteristic
intelligence, and a content base within which positive feedback takes
place under the dynamics imposed by the external process." (477)...
"A seed AI is a *strongly self improving process*,
characterized by improvements to the content base that exert direct
positive feedback on the intelligence of the underlying improving
process." (478) [italics in original]
and go on to suggest the possibility that a seed AI may thus accelerate
its progress in ways beyond what has happened to human intelligence.

I would like to respectfully suggest the possibility that this overlooks
a ramification of the layered and complex nature of intelligence.
It seems that the very top level of an intelligent system (including a 
human) may be (or indeed to some extent may intrinsically have to be) 
a module or system that actually knows very little. 
An example would be the auctioneer in a Hayek system (which only knows 
to compare bids and choose the highest) or some other kind of test
module that simply tries out alternative lower modules and receives a
simple measure of what works and keeps what works, such as various
proposals of universal algorithms etc. Such a top layer doesn't 
know anything about what it is comparing or how it is computed.
It's a chunk of fixed code.
One reason why it makes sense to assert there can't be some very smart
top level is basically the same reason why Friedrich Hayek asserted you couldn't
run a control economy. But even if there would be some way to keep modifying
the top level to make it better, one could presumably achieve just as 
powerful an ultimate intelligence by keeping it fixed and adding more 
powerful lower levels (or maybe better yet, middle levels) or more or better
chunks and modules within a middle or lower level.
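To make that concrete, the kind of fixed, nearly knowledge-free top level described here (an auctioneer that only compares bids from lower modules and runs the winner) fits in a few lines. The following is a bare-bones illustration with hand-coded bids and invented names; the actual Hayek machine's economy of payments, property rights and learning agents is not modeled.

# Bare-bones illustration of a fixed "auctioneer" top level (invented names).
# Lower modules offer bids; the top level only picks the highest bidder and
# runs it, knowing nothing about what the modules compute.

def module_a(state):
    return state + ["action A"]

def module_b(state):
    return state + ["action B"]

def bid_a(state):
    return 0.4                 # in Hayek, bids would reflect expected payoff

def bid_b(state):
    return 0.7

modules = [(bid_a, module_a), (bid_b, module_b)]

def auctioneer(state):
    """Fixed top-level code: run the module with the highest bid."""
    _, winner = max(modules, key=lambda pair: pair[0](state))
    return winner(state)

print(auctioneer([]))          # module_b wins with the higher bid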

Along these lines, I tend to think that creatures evolved intelligence
and "consciousness" in this fashion: a decision making unit that didn't
know much but picked the best alternative ("best" according to simple
pain/reward signals passed to it) evolved first (already in bacteria),
followed by evolution in the sophistication of the information calculated
below the top level decision unit. No doubt there was some evolution in 
"the top level" to better interface with the better information being 
passed up,  but this was not necessarily the crux of the matter.
So in some sense, "wanting" and "will" may have come first 
evolutionarily, and consciousness simply became more sophisticated
and nuanced as evolution progressed. This also seems different
than your picture.

I further think that a sufficient explanation (which is also the
simplest explanation, and is in accord with various data including all
known to me, and is thus my working assumption) for the
divergence between human and ape intelligence is that the discovery of
language allowed greatly increased "culture", ie allowed thought-programs
to be passed down from one human to another and thus to be discovered
and improved by a cumulative process, involving the efforts of
numerous humans. I think the hard problem about achieving intelligence
is crafting the software, which problem is "hard" in a technical sense of
being NP-hard and requiring major computational effort, so the ability
to make sequential small improvements, and bring to bear the
computation of millions or billions of (sophisticated, powerful)
brains, led to major improvements. I suggest these improvements are
not merely "external", but fundamentally affect thought itself. For
example, one of the distinctions between human and ape cognition is
said to be that we have "theory of mind" whereas they don't (or do
much more weakly). But I suggest that "theory of mind" must already be
a fairly complex program, built out of many sub-units, and that we
have built additional components and capabilities on what came
evolutionarily before by virtue of thinking about the problem and
passing on partial progress, for example in the mode of bed-time
stories and fiction. Both for language itself and things like theory 
of mind, one can imagine some evolutionary improvements in ability to use 
it through the Baldwin effect, but the main point here seems to be the 
use of external storage in "culture" in developing the algorithms and 
passing them on

[agi] Singularity Flash Report! [2006 July 11]

2006-07-11 Thread A. T. Murray
http://www.whiteboxrobotics.com -- White Box Robotics (WBR) -- 
is bringing PC Bots to market, or robots that operate under 
the control of a standard personal computer (PC) and therefore
are ideal platforms for PC-based artificial intelligence.

http://www.914pcbots.com is a forum for discussion of the 
WBR PC Bots with an A.I. Zone for artificial intelligence.

http://groups.yahoo.com/group/win32forth/message/11332 is 
a sample message from the Win32Forth discussion forum, 
pertinent here because the message helps to document how 
discussion of Mind.Forth AI has shifted from the Win32Forth 
forum to the A.I. Zone of the White Box Robotics forum.

http://home.earthlink.net/~fjrussonc/AIMind/AI-Mind.html is 
the link which Frank J. Russo posted in the A.I. Zone forum
with an announcement that he has made his own version of the 
http://mind.sourceforge.net/mind4th.html -- Mind.Forth AI.

Upshot? Since the Mentifex AI breakthrough of 7 June 2006 --
http://www.mail-archive.com/agi@v2.listbox.com/msg03034.html 
-- we may be witnessing a Darwinian proliferation of AI Minds 
based on Mind.Forth but departing from Mind.Forth in terms 
of higher code quality and in terms of added AI functionality.

http://digg.com/programming/Brain-Mind_Know_Thyself! caused 
eight thousand hits to arrive on 6 July 2006 at the
http://mind.sourceforge.net/theory5.html webpage.

Respectfully submitted,

Arthur T. Murray/Mentifex
--
http://www.blogcharm.com/Singularity/25603/Timetable.html 

---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]