Re: [agi] NARS and Oscar [was: Commercial AGI ventures]

2002-11-05 Thread Pei Wang
I studied OSCAR years ago, but haven't followed it closely. Simply put,
both OSCAR and NARS are "logic-based" approaches; their major difference
is that OSCAR stays much closer to traditional mathematical logic (in terms
of formal language, semantics, rules, control mechanism, and so on).  The
logic of OSCAR is similar to nonmonotonic logic --- defeasible, indeed ---
but it doesn't handle learning and revision in general very well.
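[A minimal illustration of what "nonmonotonic/defeasible" means here, for readers unfamiliar with the term: adding a premise can retract a conclusion, unlike in deductive logic. The bird/penguin example is the standard textbook one; this is only a sketch of the idea, not Pollock's OSCAR machinery or NARS.]

```python
# Defeasible (nonmonotonic) inference in miniature: a default rule licenses
# a conclusion unless a defeater is present, so learning more can *retract*
# a conclusion.  Deduction, by contrast, is monotonic: adding premises never
# removes conclusions.  Illustrative only -- not Pollock's actual algorithm.

def concludes_flies(facts):
    """Default rule: birds fly, defeated by evidence of being a penguin."""
    return 'bird' in facts and 'penguin' not in facts

print(concludes_flies({'bird'}))             # True: the default applies
print(concludes_flies({'bird', 'penguin'}))  # False: the defeater wins
```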

Pei

- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, November 05, 2002 9:46 AM
Subject: [agi] NARS and Oscar [was: Commercial AGI ventures]


>
> Pei,
>
> I've been reading thru some of the papers on the OSCAR site,
>
> >
> > http://oscarhome.soc-sci.arizona.edu/ftp/OSCAR-web-page/OSCAR.htm
> >
>
> The general cognitive architecture Pollock proposes there seems reasonable,
> although I feel it's very incomplete, focusing exclusively on logical
> inference and then inferential procedure learning.  To me, his logic-based
> learning methods are too "incremental" and "localized" in nature, and I
> doubt the system will be capable of creative insights or major leaps of
> learning unless it's significantly modified & augmented.  For example, for
> procedure learning, he basically relies on probabilistic enhancement of
> standard goal-regression planning algorithms, and I don't think this kind of
> approach is anywhere near adequate.
>
> However, there is definitely some unique content here, which lies in the
> details of his theory of "defeasible reasoning" [as opposed to deductive
> reasoning].
>
> I am curious whether you have any thoughts on his defeasible reasoning
> approach, and its relation to your own NARS reasoning approach...
>
> -- Ben
>


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/



Re: [agi] Inventory of AGI projects

2002-11-05 Thread shane legg

I think the key fact is that most of these projects are currently
relatively inactive --- plenty of passion out there, just not a
lot of resources.  

The last I heard, both the HAL project and the CAM-Brain project
were pretty much at a standstill due to lack of funding?

Perhaps a good piece of information to add to a list of AGI projects
would be an indication of the level of resources that the project has.

(I'm currently between places and only on the internet via cafes...
So I won't be very active on this list for a few weeks at least)

I suppose I should give a short who-am-I for those who don't know:
I'm a New Zealand mathematician/AI kind of a guy; I worked for Ben
for a few years on Webmind and spent most of this year working for
Peter Voss on the A2I2 project.  I'm into complexity and intelligence
and am starting a PhD with Marcus Hutter at IDSIA in a few months,
working on a mathematical definition of intelligence that he's come
up with.

Cheers
Shane



--- Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Hi,
> 
> Inspired by a recent post, here is my attempt at a list of "serious AGI
> projects" underway on the planet at this time.
> 
> If anyone knows of anything that should be added to this list, please let me
> know.
> 
> 
> 
> · Novamente ...
> 
> · Pei Wang’s NARS system
> 
> · Peter Voss’s A2I2 project
> 
> · Jason Hutchens’ intelligent chat bots, an ongoing project that for a while
> was carried out at www.a-i.com
> 
> · Doug Lenat’s Cyc project
> 
> · The most serious “traditional AI” systems: SOAR and ACT-R
> 
> · Hugo de Garis’s “artificial brain”
> 
> · James Rogers’ information theory based AGI effort
> 
> · Eliezer Yudkowsky’s DGI project
> 
> · Sam Adams’ experiential learning project at IBM
> 
> · The algorithmic information theory approach to AGI theory, carried out by
> Juergen Schmidhuber and Marcus Hutter at IDSIA
> 
> · The Cog project at MIT
> 
> 
> 
> -- Ben
> 
> 




[agi] NARS and Oscar [was: Commercial AGI ventures]

2002-11-05 Thread Ben Goertzel

Pei,

I've been reading thru some of the papers on the OSCAR site,

>
> http://oscarhome.soc-sci.arizona.edu/ftp/OSCAR-web-page/OSCAR.htm
>

The general cognitive architecture Pollock proposes there seems reasonable,
although I feel it's very incomplete, focusing exclusively on logical
inference and then inferential procedure learning.  To me, his logic-based
learning methods are too "incremental" and "localized" in nature, and I
doubt the system will be capable of creative insights or major leaps of
learning unless it's significantly modified & augmented.  For example, for
procedure learning, he basically relies on probabilistic enhancement of
standard goal-regression planning algorithms, and I don't think this kind of
approach is anywhere near adequate.
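[To unpack the reference for readers: goal-regression planning works backward from a goal, replacing it with the preconditions of an action whose effect achieves it. A minimal sketch with a made-up action set -- nothing below is from Pollock's actual system, which adds probabilistic expectations on top of this basic scheme.]

```python
# Minimal goal-regression planner: to achieve a goal, find an action whose
# effect matches it, then recursively plan for that action's preconditions.
# Illustrative only -- real planners handle variable bindings, conflicting
# subgoals, and (in Pollock's case) probabilistic enhancements.

ACTIONS = {
    # name: (preconditions, effect) -- a hypothetical domain for illustration
    'brew':  ({'have_beans', 'have_water'}, 'coffee'),
    'buy':   (set(), 'have_beans'),
    'fetch': (set(), 'have_water'),
}

def regress(goal, state):
    """Return a list of actions achieving `goal` from `state`, or None."""
    if goal in state:
        return []
    for name, (preconditions, effect) in ACTIONS.items():
        if effect == goal:
            steps = []
            for subgoal in preconditions:
                sub_plan = regress(subgoal, state)
                if sub_plan is None:
                    break        # this action's preconditions are unreachable
                steps += sub_plan
            else:
                return steps + [name]
    return None                  # no action achieves the goal

print(regress('coffee', set()))  # a plan ending in 'brew', e.g. ['buy', 'fetch', 'brew']
```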

However, there is definitely some unique content here, which lies in the
details of his theory of "defeasible reasoning" [as opposed to deductive
reasoning].

I am curious whether you have any thoughts on his defeasible reasoning
approach, and its relation to your own NARS reasoning approach...

-- Ben




RE: [agi] RE: Ethical drift

2002-11-05 Thread Ben Goertzel


David Noziglia wrote:
> It is a common belief that game theory has shown that it is
> advantageous to
> be selfish and nasty.  I assume that the members of this group
> know that is
> wrong, that game theory has in fact shown that in a situation of repeated
> interaction, it is more advantageous from a strictly self-interested
> viewpoint to make nice and cooperate.  This is a simplistic description of
> the Nash Equilibrium.
>
> Of course, Smith's Evolutionarily Stable Sets then show that there are
> situations when betrayal then becomes of greater advantage to an
> individual,
> so we can't count on a Nash calculation to lead any and all AGI's to make
> nice and keep their human companions comfortable.

These are all interesting and important results.

However, I think you'll agree that the situation of a group of agents, some
of which are improving their intelligence and modifying their nature
dramatically at a rapid pace [the likely situation with future AGI's], is a
bit different from the assumptions underlying the simulations you mention... !!

ben




Re: [agi] RE: Ethical drift

2002-11-05 Thread C. David Noziglia

C. David Noziglia
Object Sciences Corporation
6359 Walker Lane, Alexandria, VA
(703) 253-1095

"What is true and what is not? Only God knows. And, maybe, America."
  Dr. Khaled M. Batarfi, Special to Arab
News

"Just because something is obvious doesn't mean it's true."
 ---  Esmirelda Weatherwax, witch of Lancre
- Original Message -
From: "Philip Sutton" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, November 04, 2002 8:02 PM
Subject: [agi] RE: Ethical drift


> Ben Goertzel wrote:
> > What if iterative self-revision causes the system's goal G to "drift"
> > over time...
>
> I think this is inevitable - it's just evolution keeping on going as it always
> will.  The key issue then is what processes can be set in train to operate
> throughout time to keep evolution re-inventing/re-committing AGIs (and
> humans too) to ethical behaviour.  Maybe communities of AGIs can
> create this dynamic.
>
> Can isolated, non-socialised AGIs be ethical in relation to the whole?
>
> A book that I found fascinating on the ethics issue in earlier evolutionary
> stages is:
>
> Good Natured: The Origins of Right and Wrong in Humans and Other
> Animals
> by Frans de Waal (Paperback - October 1997)
> Harvard Univ Pr; ISBN: 0674356616; Reprint edition (October 1997)
>
> It's well worth a read.
>
> Cheers, Philip
>
I would dare to add a note of perhaps naive optimism here.

It is a common belief that game theory has shown that it is advantageous to
be selfish and nasty.  I assume that the members of this group know that is
wrong, that game theory has in fact shown that in a situation of repeated
interaction, it is more advantageous from a strictly self-interested
viewpoint to make nice and cooperate.  This is a simplistic description of
the Nash Equilibrium.

Of course, Smith's Evolutionarily Stable Sets then show that there are
situations when betrayal then becomes of greater advantage to an individual,
so we can't count on a Nash calculation to lead any and all AGI's to make
nice and keep their human companions comfortable.  I think that Jack
Williamson's The Humanoids
(http://www.amazon.com/exec/obidos/ASIN/0312852533/ref%3Dnosim/music2u/104-7816878-7538303)
is still the best and most thoughtful cautionary tale in this line.

David





RE: [agi] Commercial AGI ventures

2002-11-05 Thread Ben Goertzel

Pei,

Your criteria are more stringent than mine, but your list was made with a
similar idea in mind.  I'm not making restrictions about the number of
years a project has been in existence, for example.

It appears the only thing on your list that I missed is the OSCAR project:

***
http://oscarhome.soc-sci.arizona.edu/ftp/OSCAR-web-page/OSCAR.htm

OSCAR is an architecture for rational agents based upon an evolving
philosophical theory of rational cognition. OSCAR is based on a schematic
view of rational cognition according to which agents have beliefs
representing their environment and an evaluative mechanism that evaluates
the world as represented by their beliefs. They then engage in activity
designed to make the world more to their liking. The whole point of an agent
is to do something, to interact with the world, and such interaction is
driven by practical cognition.
***
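[The belief/evaluation/action loop in that description can be sketched schematically. Every name below is my own illustrative choice -- this is not OSCAR's implementation, just the shape of the architecture the quote describes.]

```python
# Schematic version of the quoted description: beliefs represent the world,
# an evaluative mechanism scores the believed state, and practical cognition
# picks the action whose expected result the agent likes best.
# Toy illustration only -- not OSCAR's actual code.

def agent_step(beliefs, percept, evaluate, actions):
    """One cycle: update beliefs from input, then act to improve the world."""
    beliefs = {**beliefs, **percept}                         # epistemic cognition
    best = max(actions, key=lambda act: evaluate(act(beliefs)))  # practical cognition
    return best(beliefs)

# Hypothetical world: the agent prefers a believed temperature near 20 degrees.
def cool(b):
    return {**b, 'temp': b['temp'] - 10}

def noop(b):
    return dict(b)

evaluate = lambda b: -abs(b['temp'] - 20)

print(agent_step({}, {'temp': 40}, evaluate, [cool, noop]))  # {'temp': 30}
```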

Also, it would seem that Arthur Murray's Mentifex project fulfills your
criteria.

And it may be that Juergen Schmidhuber's OOPS system fulfills your criteria
as well.  There is a prototype, and a mathematical theory of general
intelligence to go with it.

Ben




> -Original Message-
> From: [EMAIL PROTECTED] [mailto:owner-agi@v2.listbox.com] On
> Behalf Of Pei Wang
> Sent: Tuesday, November 05, 2002 6:52 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] Commercial AGI ventures
>
>
> I have a list at
> http://www.cis.temple.edu/~pwang/203-AI/Lecture/203-1126.htm, including
> projects satisfying the following three standards:
>   a. Each of them has the plan to eventually grow into a "thinking machine"
> or "artificial general intelligence" (so it is not merely about part of AI);
>   b. Each of them has been carried out for more than 5 years (so it is more
> than a PhD project);
>   c. Each of them has prototypes or early versions finished (so it is not
> merely a theory), and there are some publications explaining how it works
> (so it is not merely a claim).
> Pei




[agi] Inventory of AGI projects

2002-11-05 Thread Ben Goertzel


Hi,

Inspired by a recent post, here is my attempt at a list of "serious AGI
projects" underway on the planet at this time.

If anyone knows of anything that should be added to this list, please let me
know.



·   Novamente ...

·   Pei Wang’s NARS system

·   Peter Voss’s A2I2 project

·   Jason Hutchens’ intelligent chat bots, an ongoing project that for a while
was carried out at www.a-i.com

·   Doug Lenat’s Cyc project

·   The most serious “traditional AI” systems: SOAR and ACT-R

·   Hugo de Garis’s “artificial brain”

·   James Rogers’ information theory based AGI effort

·   Eliezer Yudkowsky’s DGI project

·   Sam Adams’ experiential learning project at IBM

·   The algorithmic information theory approach to AGI theory, carried out by
Juergen Schmidhuber and Marcus Hutter at IDSIA

·   The Cog project at MIT



-- Ben





Re: [agi] Commercial AGI ventures

2002-11-05 Thread Pei Wang
I have a list at
http://www.cis.temple.edu/~pwang/203-AI/Lecture/203-1126.htm, including
projects satisfying the following three standards:
  a. Each of them has the plan to eventually grow into a "thinking machine"
or "artificial general intelligence" (so it is not merely about part of AI);
  b. Each of them has been carried out for more than 5 years (so it is more
than a PhD project);
  c. Each of them has prototypes or early versions finished (so it is not
merely a theory), and there are some publications explaining how it works
(so it is not merely a claim).
Pei

- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, November 04, 2002 9:07 PM
Subject: RE: [agi] Commercial AGI ventures


>
> One trouble with this endeavor is that "AGI" is a fuzzy set...
>
> However, I'd be quite interested to see this list, even so.
>
> In fact, I think it'd be more valuable to simply see a generic list of all
> AGI projects, be they commercial or non.
>
> If anyone wants to create such a list, I'll be happy to post it on the
> revised www.realai.net site, due in a month or so.
>
> -- Ben
>
>
>
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:owner-agi@v2.listbox.com] On
> > Behalf Of Simon McClenahan
> > Sent: Monday, November 04, 2002 6:47 PM
> > To: [EMAIL PROTECTED]
> > Subject: [agi] Commercial AGI ventures
> >
> >
> > Is there a list of (potential) AGI vendors somewhere? Other than Novamente
> > and A2I2, has someone compiled a list of commercial institutions that are
> > pushing their wares towards AGI?
> >
> > cheers,
> > Simon
>

