Re: [agi] The role of incertainty

2007-05-04 Thread James Ratcliff
This is similar to the thread on Goals I was working on recently, which 
didn't get quite as far as I would have liked either. 

1. Whether for use as testing metrics or as our own goals for what an AGI should 
achieve, these goals or classes of problems should be defined as well as we 
possibly can. (A toy sketch of such a registry of problem classes follows below.) 

A couple of obvious upper-level classes are: 
  Navigation - in a real or virtual world.
  Natural Language - speaking, reading, conversing.
  Basic Problem Solving - given a simple problem, find a solution.

2. More specifically for your AGI:
  What do you see the virtual pets doing?  Specifically, what are the end-user 
functions for the consumer, the selling points you would give them, and how 
would the AGI help with these functions?  
  Is the situation in general going to be rich enough to display more than 
just a blocks-world type of pet ("go fetch me the blue ball over there")?
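
To make the testing-metric idea concrete, here is a minimal sketch of what a 
registry of such problem classes could look like. It is purely illustrative: the 
class names, the [0, 1] scoring scheme, and the empty task lists are assumptions 
of this sketch, not anything any particular AGI project has specified.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ProblemClass:
    # One upper-level class of problems, usable as a testing metric.
    name: str
    description: str
    # Each task scores an agent in [0, 1]; real tasks would go here.
    tasks: List[Callable[[object], float]] = field(default_factory=list)

    def evaluate(self, agent: object) -> float:
        # Average the agent's score over the tasks in this class.
        if not self.tasks:
            return 0.0
        return sum(task(agent) for task in self.tasks) / len(self.tasks)

REGISTRY: Dict[str, ProblemClass] = {
    "navigation": ProblemClass("navigation", "reach goals in a real or virtual world"),
    "natural_language": ProblemClass("natural_language", "speaking, reading, conversing"),
    "problem_solving": ProblemClass("problem_solving", "given a simple problem, find a solution"),
}

def report(agent: object) -> Dict[str, float]:
    # Score an agent on every registered class of problems.
    return {name: pc.evaluate(agent) for name, pc in REGISTRY.items()}

The point is only that once the classes are written down like this, any agent 
can be scored against the same table.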

James Ratcliff

Benjamin Goertzel [EMAIL PROTECTED] wrote: 

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: No, I keep saying - 
I'm not asking for the odd narrowly-defined task - but rather defining CLASSES 
of specific problems that your/an AGI will be able to tackle. 



Well, we have thought a lot about

-- virtual agent control in simulation worlds (both pets and humanlike avatars)
-- natural language question answering 
-- recognition of patterns in large bodies of scientific data

 

 Part of the definition task should be to explain how if you can solve one 
kind of problem, then you will be able to solve other distinct kinds.
  




We can certainly explain that re Novamente, but IMO it is not the best way to 
get across how the system works to others with a technical interest in AGI.  It 
may well be a useful mode of description for marketing purposes, however. 

ben g
 


___
James Ratcliff - http://falazar.com
Looking for something...
 


Re: [agi] The role of incertainty

2007-05-04 Thread Benjamin Goertzel




2. More specific for your AGI,
  What do you see the virtual pets doing?  Specifically as end user
functions for the consumer, the selling points you would give them, and how
the AGI would help these functions.
  Is it going to be a rich enough situation in general to display more
than just a blocks world type of pet, go fetch me the blue ball over there

James Ratcliff




Well, it's a commercial project so I can't really talk about what the
capabilities of the version 1.0 virtual pets will be.

But the idea will be to start simple, and then incrementally roll out
smarter and smarter versions.  And, the idea is to make the pets flexible
and adaptive learning systems, rather than just following fixed behavior
patterns.

One practical limitation is that we need to host a lot of pets on each
server...

However, we can do some "borg mind" stuff to work around this problem -- so
that each pet retains its own personality, yet benefits from collective
learning based on the totality of all the pets' memories...

-- Ben
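
A minimal sketch of one way the "borg mind" arrangement could be organized, 
assuming a shared reward memory keyed by (situation, action). The class names 
and the simple averaging rule below are illustrative assumptions of this sketch, 
not Novamente's actual design.

import random
from collections import defaultdict
from typing import Dict, List, Tuple

class PetHive:
    # Shared experience pool: every pet's memories feed one collective
    # value estimate per (situation, action) pair.
    def __init__(self) -> None:
        self.experience: Dict[Tuple[str, str], List[float]] = defaultdict(list)

    def record(self, situation: str, action: str, reward: float) -> None:
        self.experience[(situation, action)].append(reward)

    def collective_value(self, situation: str, action: str) -> float:
        rewards = self.experience[(situation, action)]
        return sum(rewards) / len(rewards) if rewards else 0.0

class Pet:
    # Each pet keeps its own personality (here just per-action biases)
    # but draws on the hive's pooled learning when it chooses an action.
    def __init__(self, name: str, hive: PetHive, personality: Dict[str, float]) -> None:
        self.name = name
        self.hive = hive
        self.personality = personality

    def choose(self, situation: str, actions: List[str]) -> str:
        # Collective estimate plus this pet's personal bias for each action.
        return max(actions, key=lambda a: self.hive.collective_value(situation, a)
                   + self.personality.get(a, 0.0))

    def act(self, situation: str, actions: List[str]) -> str:
        action = self.choose(situation, actions)
        reward = random.random()                     # stand-in for feedback from the owner/world
        self.hive.record(situation, action, reward)  # memory shared with every pet
        return action

A timid pet and a bold pet can then pick different actions in the same 
situation, while both profit from what the whole population of pets has 
already learned.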


Re: [agi] The role of incertainty

2007-05-04 Thread Derek Zahn

Ben Goertzel writes:

Well, it's a commercial project so I can't really talk about what the 
capabilities of the version 1.0 virtual pets will be.


I did spend a few evenings looking around Second Life.  From
that experience, I think that virtual prostitutes would be
a more profitable product :)




Re: [agi] The role of incertainty

2007-05-04 Thread Benjamin Goertzel

Second Life also has a teen grid, by the way, which is not very
active right now, but which virtual pets could enhance significantly.

Virtual prostitutes are not in the plans anytime soon ;-)





Re: [agi] The role of incertainty

2007-05-04 Thread Mike Tintner
Is there any existing competition in this area - virtual adaptive pets - 
that we can look at?


Re: [agi] The role of incertainty

2007-05-04 Thread Derek Zahn

On a less joking note, I think your ideas about applying your
cognitive engine to NPCs in RPG type games (online or otherwise)
could work out really well.  The AI behind the game entities
that are supposedly people is depressingly stupid, and games
are a bazillion-dollar business.

I hope your business direction works out well for you!




Re: [agi] The role of incertainty

2007-05-04 Thread Mark Waser
 However, we can do some "borg mind" stuff to work around this problem -- so 
 that each pet retains its own personality, yet benefits from collective 
 learning based on the totality of all the pets' memories... 

Nice!



Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang

You can take NARS (http://nars.wang.googlepages.com/) as an example,
starting at http://nars.wang.googlepages.com/wang.logic_intelligence.pdf

Pei

On 5/1/07, rooftop8000 [EMAIL PROTECTED] wrote:

It seems a lot of posts on this list are about the properties an AGI
should have: PLURALISTIC, OPEN-ENDED AGI, adaptive, sometimes irrational...
It can be useful to talk about them, but I'd rather hear about how
this translates into real projects.

How do we make a program that can deal with uncertainty
and is adaptive and can think irrationally at times? Seems like
an awful lot of things... How should we organize all this? How do we
take existing solutions for some of these problems and make sure new ones
can get added?


--- Mike Tintner [EMAIL PROTECTED] wrote:

 Yes, you are very right. And my point is that there are absolutely major
 philosophical issues here - both the general philosophy of mind and
 epistemology, and the more specific philosophy of AI.  In fact, I think my
 characterisation of the issue as one of monism [general - behavioural as
 well as of substance] vs pluralism [again general - not just cultural] is
 probably the best one.

 So do post further thoughts, esp. re AI/AGI - this is well worth pursuing
 and elaborating.

 - Original Message -
 From: Richard Loosemore [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Monday, April 30, 2007 3:31 PM
 Subject: [agi] The role of incertainty


  The discussion of uncertainty reminds me of a story about Piaget that
  struck a chord with me.
 
  Apparently, when Piaget was but a pup, he had the job of scoring tests
  given to kids.  His job was to count the correct answers, but he started
  getting interested in the wrong answers.  When he mentioned to his bosses
  that the wrong answers looked really interesting in their wrongness, they
  got mad at him and pointed out that wrong was just wrong, and all they
  were interested in was how to make the kids get more right answers.
 
  At that point, P had a revelation:  looking at right answers told him
  nothing about the children, whereas all the information about what they
  were really thinking was buried in the wrong answers.  So he dumped his
  dead-end job and became Jean Piaget, Famous Psychologist instead.
 
  When I read the story I had a similar feeling of Aha!  Thinking isn't
  about a lot of Right Thinking sprinkled with the occasional annoying
  Mistake.  Thinking is actually a seething cauldron of Mistakes, some of
  which get less egregious over time and become Not-Quite-So-Bad Mistakes,
  which we call rational thinking.
 
  I think this attitude to how the mind works, though it is painted in
  bright colors, is more healthy than the attitude that thinking is about
  reasoning modulated by uncertainty.
 
  (Perhaps this is what irritates me so much about the people who call
  themselves Bayesians:  people so desperate to believe that they are
  perfect that they have made a religion out of telling each other that they
  think perfectly, when in fact they are just as irrational as any other
  religious fanatic). ;-)
 
 
 
  Richard Loosemore.
 
 
 
 
 
 
 











Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner

Pei,

Glad to see your input. I noticed NARS quite by accident many years ago & 
remembered it as possibly very important.


You certainly are implementing the principles we have just been discussing - 
which is exciting.


However, reading your papers & Ben's, it's becoming clear that there may 
well be an industry-wide bad practice going on here. You guys all focus on 
how your systems WORK...  The first thing anyone trying to understand your 
or any other system must know is: what does it DO?  What are the problems it 
addresses, and the kinds of solutions it provides?


It should be commonly accepted that it is EXTREMELY BAD PRACTICE not to 
first define what problems your system is set up to solve.


Imagine if I spent 100 pages writing up the intricate mechanisms of this 
new machine, with all these wonderful new wireless and heat and electroservo 
this-and-that principles involved... and then only at the very end do I tell 
you that it's an apple-peeler.  You'd find it a bit of a strain to read all 
that.


The only difference between the above write-up and yours and Ben's is that 
we the readers never even get to find out that what you've got is an 
apple-peeler! I still don't know what your systems do.


It may be good for grants to cover up what you do, but it's actually not 
good for you or your thinking or the progress of AI.


I'd very much like to know what your NARS system DOES - is that possible?

P.S. Minsky is much the same.


Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:

Pei,

Glad to see your input. I noticed NARS quite by accident many years ago 
remembered it as pos. v. important.

You certainly are implementing the principles we have just been discussing -
which is exciting.

However, reading your papers & Ben's, it's becoming clear that there may
well be an industry-wide bad practice going on here. You guys all focus on
how your systems WORK...   The first thing anyone trying to understand your
or any other system must know is what does it DO?  What are the problems it
addresses, and the kinds of solutions it provides?


Well, that is exactly the problem addressed in the paper I mentioned:
my working definition of intelligence, and why I think it is a
better understanding than the others.


It should be commonly accepted that it is EXTREMELY BAD PRACTICE not to
first define what problems your system is set up to solve.


Agree.


Imagine if I spent 100 pages writing up these intricate mechanisms of this
new machine, with all these wonderful new wireless and heat and electroservo
this and that principles involved,.. and then only at the v. end do I tell
you that it's an apple-peeler.  You'd find it a bit of a strain to read all
that.


Agree.


The only difference between the above write-up and yours and Ben's is that
we the readers never even get to find out that what you've got is  an
apple-peeler! I still don't know what your systems do.


I wonder if you really read the paper I mentioned --- you can
criticize it for all kinds of reasons, but you cannot say I didn't
define the problem I'm working on, because that is what that paper is
all about! If it is still not clear from that paper, you may also want
to read http://nars.wang.googlepages.com/wang.AI_Definitions.pdf and
http://nars.wang.googlepages.com/wang.WhatAIShouldBe.pdf


It may be good for grants to cover up what you do, but it's actually not
good for you or your thinking or the progress of AI.

I'd very much like to know what your NARS system DOES - is that possible?


I guess I don't understand what you mean by DOES. If you mean the
goal of the project, then the above papers should be sufficient; if
you mean how the system works, you need to try my demo at
http://nars.wang.googlepages.com/nars%3Ademonstration ; if you mean
what domain problems it can solve by design, then the answer is
none, since it is not an expert system. Can you be more specific?

Pei



Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel



However, reading your papers & Ben's, it's becoming clear that there may
well be an industry-wide bad practice going on here. You guys all focus on
how your systems WORK...   The first thing anyone trying to understand
your
or any other system must know is what does it DO?  What are the problems
it
addresses, and the kinds of solutions it provides?





Hey Mike: What are the problems you address, and the kinds of solutions you
provide?

I could ask the same question about my 10 year old daughter ... or a newborn
baby...

My point is: an AGI system is by definition not restricted to a highly
particular problem domain ... so the set of problems and solutions
potentially addressable by any AGI system will be extremely broad

Thinking in terms of incremental pathways to AGI, one may posit particular
problem domains as targets for partial versions of one's AGI.  But then
there is a danger that people will see those interim problem domains and
overgeneralize and believe that is what one's AGI system is all about.

For instance, the first commercial manifestation of the Novamente AI Engine
was the use of some of its learning routines inside the Biomind ArrayGenius
product for gene expression microarray data analysis. So what?

The next commercial manifestation may well be for controlling virtual pets
operative within Second Life and/or other virtual worlds.  Again, so what?
This doesn't mean that Novamente is basically a virtual pet controller any
more than it's basically a bioinformatics analysis tool.

Once the end goal of AGI is reached, AGI systems will be able to do anything
humans can do plus way more.  And what AGI systems happen to be used to do
on the incremental pathway there, doesn't really tell you that much about
the ultimate nature of the AGI systems.   (Similarly, e.g., the early
applications that the Internet was used for in the 1970's don't really tell
you much about the ultimate nature of the Internet.  They tell you something
of course, but they leave a lot out, as anyone can see now)

-- Ben G


Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:

Define the type of problems it addresses which might be [for all I know]

*understanding and precis-ing a set of newspaper stories about politics or
sewage
*solving a crime of murder - starting with limited evidence
*designing new types of buildings - starting from a limited knowledge of
other types of buildings
*navigate an agent through a domestic/ office environment with furniture
strewn everywhere, animals and people milling around -  limited rules for
navigation...
*learn to identify every plant and tree in a complex jungle scene - starting
with limited knowledge and access to public databases

Ideally I'd like to see in there, how having learned to solve one class of
problems, it is going to solve new related classes of problems - to get an
EDUCATION in various spheres ... (as distinct say from just learning in one
sphere) -


If one of the above problems is solved by an AGI system, it should be
the result of learning by the system, rather than an innate capability
built into the system. Furthermore, I don't think any of them is a
necessary capability of an AGI system. If we are talking about
possibility, I believe NARS has the potential for each of them, though
the system isn't at that stage yet. I don't see why it is impossible,
though I cannot show you that it has been done.


so the navigational agent would have to be able, having mastered a domestic
environment, to learn to navigate a jungle or forest environment.

I want to see some PROBLEM (& education) EXAMPLES. That's all, really.
(You say that your system is attending to all these different problems more
or less simultaneously, or interweavingly, but you don't say what the
problems are).

Me, the general public, (& AGI people I suspect), have very little idea of
what AGI can actually do or is even trying to do right now.


I'm sorry if I disappoint you --- I don't think any AGI has
reached the stage of practical application. I don't want the general
public to think that the AGI problem has been solved. Instead, I want
them to think that AGI is an important and interesting line of research
that should be supported, or at least tolerated.


P.S. There are two sides to talking about knowledge/ intelligence/ problems
etc.   There is the side of the subject - the thinker, the brain
manipulating ideas, the user of different techniques, logical, mathematical
etc. bits, bytes etc  And there is the side of the object(s) of knowledge -
the crimes being solved, the buildings being constructed, the genes and
society being learned about.. - what all that knowledge and those problems
are about.

There is the subjective side of the mirror reflecting and there is the
objective side of the scene being reflected.

You guys tend to be massively leaning over in describing everything in terms
of the subjective side.  But it's only when you describe intelligence & 
problem-solving in terms of what you're trying to know / solve problems about
that things start to make sense.


To me, this is a major reason why we don't have a thinking
machine yet --- people (both the general public and the mainstream AI
researchers) believe that general intelligence can be reached by
solving domain problems one after another. I won't try to convince
you, but I will point out that this belief is probably not as self-evident
as many people assume. Before a good theory of AI is established,
there won't be any good applications.

Pei




Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:


Well, that really frustrates me. You just can't produce a machine that's
going to work, unless you start with its goal/function.




I think you are making an error of projecting the methodologies that are
appropriate for narrow-purpose-specific machines, onto the quite
different problem of designing AGIs...

My colleagues at Novamente LLC have built plenty of purpose-specific
software systems for customers, so it's not as though we're unable to
work in the manner you're suggesting.  We just find it inappropriate for the
AGI task.



The obvious and most basic type of adaptive problem it seems to me that
agents/ robots should start with is navigational.



Navigation IMO is a relatively narrow problem that can likely be solved by
narrow-AI methods pretty effectively, without need for a really broad and
robust AGI.

So I don't view it as a great incremental problem for AGI.
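
For illustration only, a toy version of the navigation problem and the kind of 
narrow method that handles it -- plain breadth-first search over a grid. The 
grid encoding and everything else here are assumptions of this sketch and have 
nothing to do with any system discussed in this thread.

from collections import deque
from typing import List, Optional, Tuple

Cell = Tuple[int, int]

def navigate(grid: List[str], start: Cell, goal: Cell) -> Optional[List[Cell]]:
    # Breadth-first search for a shortest path; '#' marks an obstacle.
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# Example: a small room with an obstacle wall.
room = ["....",
        ".##.",
        "....",
        "...."]
print(navigate(room, (0, 0), (3, 3)))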

On the other hand, learning the rules of new games via communication
with humans, and then being able to play these games effectively, does seem
to me like an appropriate incremental problem to orient one's work toward,
on the gradual path toward AGI.
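
As a rough illustration of why that target points toward generality, here is a 
minimal sketch of a player whose game is supplied as data at run time rather 
than hard-coded. All the names below are assumptions of the sketch, and the 
genuinely hard part -- learning the rules through natural-language communication 
with a human -- is left out entirely.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class GameSpec:
    # The rules are data, so the same player can take on a new game
    # without being reprogrammed.
    initial_state: object
    legal_moves: Callable[[object], List[object]]
    result: Callable[[object, object], object]     # (state, move) -> next state
    utility: Callable[[object], Optional[float]]   # None while the game is unfinished

def best_move(spec: GameSpec, state: object, depth: int) -> Tuple[Optional[object], float]:
    # Depth-limited search that maximizes the supplied utility function.
    u = spec.utility(state)
    if u is not None:
        return None, u
    if depth == 0:
        return None, 0.0
    choices = [(m, best_move(spec, spec.result(state, m), depth - 1)[1])
               for m in spec.legal_moves(state)]
    return max(choices, key=lambda mv: mv[1]) if choices else (None, 0.0)

# A trivial "reach exactly 10" game, described purely as data.
counting_game = GameSpec(
    initial_state=0,
    legal_moves=lambda s: [1, 2],                  # add 1 or add 2
    result=lambda s, m: s + m,
    utility=lambda s: 1.0 if s == 10 else (-1.0 if s > 10 else None),
)
print(best_move(counting_game, counting_game.initial_state, depth=10))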

However, I note that we will likely be approaching the navigation problem
with Novamente during the next year, due to our intended business course of
applying our proto-AGI system to control virtual agents in simulation worlds.

-- Ben G


Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
In the final analysis, Ben, you're giving me excuses rather than solutions.

Your pet control program is a start - at least I have a vague, still very vague, 
idea of what you might be doing.

You could (I'm guessing) say: this AGI is designed to control a pet which will 
have to solve adaptive problems like a) hiding in surprising places within a 
complex environment and b) negotiating a complex environment strewn with 
obstacles and finding new ways to destinations.

That more particular problem, or type of problem, can then be generalised into 
vast classes of problems about agents finding new ways around complex 
environments - from searching buildings, to playing soccer or field games, to 
shopping in supermarkets or malls, etc.  (In the end, you could probably 
generalise that class to include ALL problems, period - including searching 
through information environments on the Net and in conversations.)

(Actually your mission statement should start the other way round - with the 
general class of problems/activities you envisage - and then the particular 
examples that your AGI is going to concentrate on first... and then indicate how 
you think it might progress.)

P.S. This is a truly weird conversation. It's like you're saying: Hell, it's a 
box, why should I have to tell you what my box does? Only insiders care what's 
inside the box. The rest of the world wants to know what it does - and that's 
the only way they'll buy it and pay attention to it - and the only reason they 
should. Life's short.



Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel



P.S. This is a truly weird conversation. It's like you're saying..Hell
it's a box, why should I have to tell you what my box does? Only insiders
care what's inside the box. The rest of the world wants to know what it does
- and that's the only way they'll buy it and pay attention to it - and the
only reason they should. Life's short.




Well, I am not trying to sell the Novamente Cognition Engine to the average
Joe as ANYTHING, because it is not finished.

When it is finished, I will still not try to sell it to the average Joe (or
Mike ;-) as a purpose-specific product, because it is not one.

What I will try to sell to people are purpose-specific products, such as
virtual pets that they can train, or software systems they can use (if
they're biologists) to find patterns in their data, etc.   I understand that
what people want to pay for, are purpose-specific products.  However, what
will enable the construction of a wide variety of purpose-specific products,
is a general-purpose AGI engine...

To use a rough analogy, suppose it was a long time ago and I was developing
the world's first internal combustion engine.  Then we could argue...

Mike: What are you working on, Ben?

Ben: I'm building an internal combustion engine

Mike: What does it do?

Ben: Well, it's a device in which rapid oxidation of gas and air occurs in
a confined space called a combustion chamber. This exothermic reaction of a
fuel with an oxidizer creates gases of high temperature and pressure, which
are permitted to expand. The defining feature of an internal combustion
engine is that useful work is performed by the expanding hot gases acting
directly to cause pressure, further causing movement of the piston inside
the cylinder.

Mike: What?

Ben: Well, you burn stuff in a closed chamber and it makes pistons move up
and down

Mike: Oh.  Well who the hell would want to buy something that does that? No
one wants to watch pistons move up and down, at least not in my neck of the
woods.

Ben: Well you can use it for all sorts of different things

Mike: Like what?

Ben: Well, to power a car, or a locomotive, or an electrical generator ...
or even a backpack helicopter.  Maybe a robot.  A lawnmower.

Mike: Ok, so if you want to get your engine built, you need to set a
specific goal.  For instance, your goal could be to build a lawnmower.

Ben: Well, that could be a good incremental goal -- to make a small version
of my engine to power a lawnmower.  But no particular goal is going to
encapsulate all the applications of the engine.  The main point is that I'm
building an engine that lets you burn fuel and thus create mechanical work
-- and this can be used for all sorts of different things.

Mike: But, if you want people to buy it, you have to tell them what it will
do for them.  No one wants to buy a machine that sits in their livingroom
and makes pistons bob up and down.

Ben: Ok, look.  This conversation is getting frustrating.  I'm going to
close the email window and get back to work.

Mike: Darn, this conversation is getting frustrating.  I don't want to buy a
bunch of exothermic reactions, I want to buy something that does something
specific for me.

***

You could ask them for the specific purpose of the generator, and they would
say: Well, it can be used to power light bulbs, or computers, or cars, or
refrigerators, etc. etc.  But none of these particular applications
summarizes what it does.  What it does is generate electricity, which can
then be used in a lot of applications.

-- Ben


Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
Nah, the analogy doesn't quite work - though it could be useful.

An engine is used to move things... many different things - wheels, levers, 
etc. So if you've got an engine that is twenty times more powerful, sure, you 
don't need to tell me what particular things it is going to move. It's 
generally accepted that it can move millions of things...

The difficulty here is that the problems to be solved by an AI or AGI machine 
are NOT accepted or well-defined. We cannot just take Pei's NARS, say, or 
Novamente, and say, well, obviously it will apply to all these different kinds 
of problems. No doubt it will apply to many. But you have to explain. You have 
to classify the problems.

Indeed, you will at some point be able to (or can already) describe different 
AI architectures almost as engines - but it's bringing all those problems 
together - which is a mixture of a psychological and philosophical problem. 

Background here: the fact that psychologists are still arguing about whether g 
- general intelligence - exists is a reflection of the difficulties here - the 
unsolved problems of defining problems. However, those difficulties are not that 
great or insuperable.

Not much point in arguing further here - all I can say now is TRY it - try 
focussing your work the other way round - I'm confident you'll find it makes 
life vastly easier and more productive.  Defining what it does is just as 
essential for the designer as for the consumer.



Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel



Not much point in arguing further here - all I can say now is TRY it - try
focussing your work the other way round - I'm confident you'll find it makes
life vastly easier and more productive.  Defining what it does is just as
essential for the designer as for the consumer.




Focusing on making systems that can achieve narrowly-defined tasks is
EXACTLY what the AI field has been doing for the last couple of decades.

Unsurprisingly, it has had some modest success at making systems that can
achieve narrowly-defined tasks, and no success at moving toward artificial
general intelligence.

-- Ben G


Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:


The difficulty here is that the problems to be solved by an AI or AGI
machine are NOT accepted, well-defined. We cannot just take Pei's NARS, say,
or Novamente, and say well obviously it will apply to all these different
kinds of problems. No doubt it will apply to many. But you have to explain.
You have to classify the problems.


We indeed have done that. What you suggested is exactly what I called
Capability-AI in
http://nars.wang.googlepages.com/wang.AI_Definitions.pdf . I agree
that it is closer to many people's intuitive understanding of
intelligence --- after all, we judge other people's intelligence by
what practical problems they can solve. However, this understanding
has serious limitations, as analyzed in the paper, as well as shown by
the history of AI, since your idea is quite close to mainstream AI.

Again, I'm not really trying to convince you, but to show you that if
some AGI researchers don't do what you consider obvious, they may
have considerations which cannot be simply rejected as obviously
wrong.

Pei



Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
No, I keep saying - I'm not asking for the odd narrowly-defined task - but 
rather defining CLASSES of specific problems that your/an AGI will be able to 
tackle. Part of the definition task should be to explain how if you can solve 
one kind of problem, then you will be able to solve other distinct kinds.

It's interesting - I'm not being in any way critical - that this isn't 
getting through.

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:


 No, I keep saying - I'm not asking for the odd narrowly-defined task -
but rather defining CLASSES of specific problems that your/an AGI will be
able to tackle.




Well, we have thought a lot about

-- virtual agent control in simulation worlds (both pets and humanlike
avatars)
-- natural language question answering
-- recognition of patterns in large bodies of scientific data




Part of the definition task should be to explain how if you can solve one
kind of problem, then you will be able to solve other distinct kinds.



We can certainly explain that re Novamente, but IMO it is not the best way
to get across how the system works to others with a technical interest in
AGI.  It may well be a useful mode of description for marketing purposes,
however.

ben g


Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel

On 5/1/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:




On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:

  No, I keep saying - I'm not asking for the odd narrowly-defined task -
 but rather defining CLASSES of specific problems that your/an AGI will be
 able to tackle.



Well, we have thought a lot about

-- virtual agent control in simulation worlds (both pets and humanlike
avatars)
-- natural language question answering
-- recognition of patterns in large bodies of scientific data




and, math theorem proving...


Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
I think if you look at the history of most industries, you'll find that it 
often takes a long time for them to move from being producer-centric to 
consumer-centric. [There are some established terms for this, which I've 
forgotten.]


When making things, people are often first preoccupied with the tools and the 
machinery, rather than the ultimate function.


And producers are often extremely resistant to looking at things from the 
other POV.


As I said to Ben, the crucial cultural background here is that intelligence 
and creativity have not been properly defined in any sphere. There is no 
consensus about types of problems, about the difference between AI and AGI, 
or, more crucially, between divergent and convergent intelligence, etc. etc. 
So I don't agree that you can assume that a given AI architecture or system 
will be able to solve a whole set of problems.


And a large part of my point is that the question "what does it do?" SHOULD 
be obvious, but isn't, because there's clearly a whole producer-centric 
culture within AI/AGI of ignoring it.


P.S. I think I see that one problem I'm having communicating with both you & 
Ben is that you're both working within a fading dichotomy of AI - specific, 
well-defined problems - vs AGI - general problem-solving which supposedly 
doesn't have to be defined (and you keep pushing me into the AI camp).


I'm saying you do have to define what your AGI will do - but define it as a 
tree - 1)  a general class of problems - supported by 2) examples of 
specific types of problem within that class. I'm calling for something 
different to the traditional alternatives here.
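
As a rough illustration only - the general classes below are simply the ones 
Ben has mentioned in this thread, and the specific example problems are 
invented placeholders - such a two-level tree might be sketched in code as:

# A purely illustrative sketch of a "tree" of problem classes: each
# general class (level 1) points to example problems within it (level 2).
# Class names are taken from this thread; the examples are placeholders.
problem_classes = {
    "virtual agent control in simulation worlds": [
        "train a virtual pet to fetch a named object",
        "navigate a humanlike avatar to a goal location",
    ],
    "natural language question answering": [
        "answer factual questions posed over a text corpus",
    ],
    "recognition of patterns in scientific data": [
        "find recurring motifs in a gene-expression dataset",
    ],
    "math theorem proving": [
        "prove simple propositional tautologies",
    ],
}

for general_class, examples in problem_classes.items():
    print(general_class)
    for example in examples:
        print("  -", example)

Each general class could then be expanded with further examples without 
changing the structure.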


I doubt that anyone is doing much thinking about general CLASSES of 
problems. I've been trying to do it in my posts.



- Original Message - 
From: Pei Wang [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, May 01, 2007 7:08 PM
Subject: Re: [agi] The role of incertainty



On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:


The difficulty here is that the problems to be solved by an AI or AGI
machine are NOT accepted, well-defined. We cannot just take Pei's NARS,
say, or Novamente, and say well obviously it will apply to all these different
kinds of problems. No doubt it will apply to many. But you have to explain.
You have to classify the problems.


We indeed have done that. What you suggested is exactly what I called
Capability-AI in
http://nars.wang.googlepages.com/wang.AI_Definitions.pdf . I agree
that it is closer to many people's intuitive understanding of
intelligence --- after all, we judge other people's intelligence by
what practical problems they can solve. However, this understanding
has serious limitations, as analyzed in the paper, as well as shown by
the history of AI, since your idea is quite close to mainstream AI.

Again, I'm not really trying to convince you, but to show you that if
some AGI researchers don't do what you consider obvious, they may
have considerations which cannot simply be rejected as obviously
wrong.

Pei


Indeed, you will at some point be able to (or can already) describe
different AI architectures almost as engines - but it's bringing all those
problems together - which is a mixture of a psychological and philosophical
problem.

Background here: the fact that psychologists are still arguing about whether
g exists - general intelligence - is a reflection of the difficulties here -
the unsolved problems of defining problems. However, those difficulties are
not that great or insuperable.

Not much point in arguing further here - all I can say now is TRY it - try
focussing your work the other way round - I'm confident you'll find it makes
life vastly easier and more productive. Defining what it does is just as
essential for the designer as for the consumer.



- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Tuesday, May 01, 2007 5:57 PM
Subject: Re: [agi] The role of incertainty







P.S. This is a truly weird conversation. It's like you're saying: Hell,
it's a box, why should I have to tell you what my box does? Only insiders
care what's inside the box. The rest of the world wants to know what it does
- and that's the only way they'll buy it and pay attention to it - and the
only reason they should. Life's short.


Well, I am not trying to sell the Novamente Cognition Engine to the average
Joe as ANYTHING, because it is not finished.

When it is finished, I will still not try to sell it to the average Joe (or
Mike ;-) as a purpose-specific product, because it is not one.

What I will try to sell to people are purpose-specific products, such as
virtual pets that they can train, or software systems they can use (if
they're biologists) to find patterns in their data, etc. I understand that
what people want to pay for are purpose-specific products. However, what
will enable the construction of a wide variety of purpose-specific products
is a general-purpose AGI engine...

To use

Re: [agi] The role of incertainty

2007-05-01 Thread Josh Treadwell

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:


 No, I keep saying - I'm not asking for the odd narrowly-defined task -
but rather defining CLASSES of specific problems that your/an AGI will be
able to tackle. Part of the definition task should be to explain how if you
can solve one kind of problem, then you will be able to solve other distinct
kinds.



Did nature have a specific task in mind when our brains evolved?  Much like
an AGI, we as humans are capable of doing MANY things.  To sum it up, AGI
could be described as a machine that is capable of using pattern
recognition, classification, and analysis to produce better pattern
recognition, classification, and analysis systems for itself.  The results of
this apply to every problem it could ever be asked to solve.
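
To make that loop concrete, here is a minimal, purely illustrative sketch (a 
toy threshold classifier, not a description of any actual AGI system) of a 
program that scores candidate versions of its own recognizer and keeps 
whichever performs better:

import random

# Toy labelled data: numbers in [0, 1], labelled 1 when they exceed 0.6.
data = [(x, int(x > 0.6)) for x in (random.random() for _ in range(200))]

def make_recognizer(threshold):
    """A trivial 'pattern recognizer': predicts 1 when the input exceeds threshold."""
    return lambda x: int(x > threshold)

def accuracy(recognizer):
    """Fraction of the toy data the recognizer classifies correctly."""
    return sum(recognizer(x) == y for x, y in data) / len(data)

# The loop: the system scores candidate versions of its own recognizer
# and keeps whichever performs better -- a crude stand-in for "using
# analysis to produce better analysis systems for itself".
best_threshold = 0.5
best_accuracy = accuracy(make_recognizer(best_threshold))
for _ in range(100):
    candidate = random.uniform(0.0, 1.0)
    score = accuracy(make_recognizer(candidate))
    if score > best_accuracy:
        best_threshold, best_accuracy = candidate, score

print(f"threshold={best_threshold:.2f}  accuracy={best_accuracy:.2f}")

Obviously a real system would improve much more than a single threshold, but 
the shape of the loop is the point.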

The traditional approach to AI is to do exactly what you're asking: solve
individual problems and build them up until we have something that, on every
observable level, is equivalent to a thinking person.  For the last 50
years, this hasn't produced any promising results in terms of cognition.

It's interesting - I'm not being in any way critical - that this isn't
getting through.





--
Josh Treadwell
  [EMAIL PROTECTED]
  480-206-3776

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:


As I said to Ben, the crucial cultural background here is that intelligence
and creativity have not been properly defined in any sphere. There is no
consensus about types of problems, about the difference between AI and AGI,
or, more crucially, between divergent and convergent intelligence, etc. etc.
So I don't agree that you can assume that a given AI architecture or system
will be able to solve a whole set of problems.


I don't assume that, and that is exactly why I listed five different
working definitions in the paper I mentioned, and argued that no one of
them can completely replace the others.


And a large part of my point is that the question what does it do?  SHOULD
be obvious, but isn't because there's clearly a whole producer-centric
culture within AI/AGI of ignoring it.


As this debate shows, what is considered obvious by different
people is obviously different. ;-)


P.S. I think I see that one problem I'm having communicating with both you and
Ben is that you're both working within a fading dichotomy of AI - specific,
well-defined problems - vs AGI - general problem-solving which supposedly
doesn't have to be defined (and you keep pushing me into the AI camp).

I'm saying you do have to define what your AGI will do - but define it as a
tree - 1)  a general class of problems - supported by 2) examples of
specific types of problem within that class. I'm calling for something
different to the traditional alternatives here.

I doubt that anyone is doing much thinking about general CLASSES of
problems. I've been trying to do it in my posts.


I have made it very clear in my papers why I don't want to define
intelligence by the set of practical problems it can solve. It is fine
for you to disagree, but at least you should try to see why I take
this position before claiming it to be wrong for obvious reasons.

Pei




- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 01, 2007 7:08 PM
Subject: Re: [agi] The role of incertainty


 On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:

 The difficulty here is that the problems to be solved by an AI or AGI
 machine are NOT accepted, well-defined. We cannot just take Pei's NARS,
 say,
 or Novamente, and say well obviously it will apply to all these different
 kinds of problems. No doubt it will apply to many. But you have to
 explain.
 You have to classify the problems.

 We indeed have done that. What you suggested is exactly what I called
 Capability-AI in
 http://nars.wang.googlepages.com/wang.AI_Definitions.pdf . I agree
 that it is closer to many people's intuitive understanding of
 intelligence --- after all, we judge other people's intelligence by
 what practical problems they can solve. However, this understanding
 has serious limitations, as analyzed in the paper, as well as shown by
 the history of AI, since your idea is quite close to mainstream AI.

 Again, I'm not really trying to convince you, but to show you that if
 some AGI researchers don't do what you consider as obvious, they may
 have some consideration which cannot be simply rejected as obviously
 wrong.

 Pei

 Indeed, you will at some point be able to (or can already) describe
 different AI architectures almost as engines - but it's bringing all those
 problems together - which is a mixture of a psychological and philosophical
 problem.

 Background here: the fact that psychologists are still arguing about whether
 g exists - general intelligence - is a reflection of the difficulties here -
 the unsolved problems of defining problems. However, those difficulties are
 not that great or insuperable.

 Not much point in arguing further here - all I can say now is TRY it - try
 focussing your work the other way round - I'm confident you'll find it makes
 life vastly easier and more productive. Defining what it does is just as
 essential for the designer as for the consumer.



 - Original Message -
 From: Benjamin Goertzel
 To: agi@v2.listbox.com
 Sent: Tuesday, May 01, 2007 5:57 PM
 Subject: Re: [agi] The role of incertainty



 
 
 
 
  P.S. This is a truly weird conversation. It's like you're saying: Hell,
 it's a box, why should I have to tell you what my box does? Only insiders
 care what's inside the box. The rest of the world wants to know what it does
 - and that's the only way they'll buy it and pay attention to it - and the
 only reason they should. Life's short.


 Well, I am not trying to sell the Novamente Cognition Engine to the average
 Joe as ANYTHING, because it is not finished.

 When it is finished, I will still not try to sell it to the average Joe (or
 Mike ;-) as a purpose-specific product, because it is not one.

 What I will try to sell to people are purpose-specific products, such as
 virtual pets that they can train, or software systems they can use (if
 they're biologists) to find patterns in their data, etc. I understand that
 what people want

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel



I'm saying you do have to define what your AGI will do - but define it as a
tree - 1) a general class of problems - supported by 2) examples of
specific types of problem within that class. I'm calling for something
different to the traditional alternatives here.

I doubt that anyone is doing much thinking about general CLASSES of
problems. I've been trying to do it in my posts.



I understand the approach you're advocating, and I certainly **could** take
it in regard to Novamente; I just don't really see any great value in taking
such an approach.  It wouldn't cause us to do our work any differently.

Maybe it would be useful for better communicating our work to certain
people, such as you, though ;-)

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
Well, you see I think only the virtual agent problems are truly generalisable. 
The others, it strikes me, haven't got a hope of producing AGI, and are actually 
narrow.

But as I said, the first can probably be generalised in terms of agents seeking 
goals within problematic environments - and you can see how, in principle, at 
any rate, an AGI that mastered some such problems might go on to master 
related but nevertheless very different problems re very different environments.
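
Purely as an illustration of that framing - the LineEnvironment and 
choose_action names below are invented for the example, not taken from 
Novamente or any other system under discussion - "an agent seeking a goal 
within an environment" can be written as a simple interaction loop:

class LineEnvironment:
    """A toy 'problematic environment': an agent on a line must reach a goal cell."""
    def __init__(self, size=10, goal=7):
        self.size, self.goal, self.position = size, goal, 0

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.position = max(0, min(self.size - 1, self.position + action))
        return self.position, self.position == self.goal

def choose_action(position, goal):
    """A trivial goal-seeking policy: always move toward the goal."""
    return 1 if position < goal else -1

env = LineEnvironment()
for step_count in range(100):
    observation, done = env.step(choose_action(env.position, env.goal))
    if done:
        print(f"goal reached after {step_count + 1} steps")
        break

The point of the sketch is only that the same loop shape carries over when 
the environment or the goal changes.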

So, I repeat - thinking in terms of classes of problems will help you, the 
producer, and not just the consumer - it will help you, I would argue, focus 
your efforts where they are most likely to be rewarding. It also involves a 
different kind of thinking, in my impression, than you have actually been 
employing - and if that's true, play with it a lot before rejecting it.

But no need to take this further - although if you do want to explore actual 
classes of problems as such, I'm still open to that.

Been good talking to you.
  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, May 01, 2007 7:32 PM
  Subject: Re: [agi] The role of incertainty





  On 5/1/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:



On 5/1/07, Mike Tintner  [EMAIL PROTECTED] wrote:
  No, I keep saying - I'm not asking for the odd narrowly-defined task - 
but rather defining CLASSES of specific problems that your/an AGI will be able 
to tackle. 


Well, we have thought a lot about

-- virtual agent control in simulation worlds (both pets and humanlike 
avatars)
-- natural language question answering 
-- recognition of patterns in large bodies of scientific data


  and, math theorem proving...
   



--
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:


 Well, you see I think only the virtual agent problems are truly
generalisable. The others, it strikes me, haven't got a hope of producing
AGI, and are actually narrow.




I think they are all generalizable in principle, but the virtual agents one
is the easiest one to do in a generalizable way...


But as I said, the first can probably be generalised in terms of agents
seeking goals within problematic environments - and you can see how, in
principle, at any rate, an AGI that mastered some such problems might go on
to master related but nevertheless very different problems re very different
environments.




Agreed

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] The role of incertainty

2007-04-30 Thread Mike Tintner
Yes, you are very right. And my point is that there are absolutely major 
philosophical issues here - both the general philosophy of mind and 
epistemology, and the more specific philosophy of AI.  In fact, I think my 
characterisation of the issue as one of monism [general - behavioural as 
well as of substance] vs pluralism [again general - not just cultural] is 
probably the best one.


So do post further thoughts, esp. re AI/AGI - this is well worth pursuing 
and elaborating.


- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, April 30, 2007 3:31 PM
Subject: [agi] The role of incertainty


The discussion of uncertainty reminds me of a story about Piaget that 
struck a chord with me.


Apparently, when Piaget was but a pup, he had the job of scoring tests 
given to kids.  His job was to count the correct answers, but he started 
getting interested in the wrong answers.  When he mentioned to his bosses 
that the wrong answers looked really interesting in their wrongness, they 
got mad at him and pointed out that wrong was just wrong, and all they 
were interested in was how to make the kids get more right answers.


At that point, P had a revelation:  looking at right answers told him 
nothing about the children, whereas all the information about what they 
were really thinking was buried in the wrong answers.  So he dumped his 
dead-end job and became Jean Piaget, Famous Psychologist instead.


When I read the story I had a similar feeling of Aha!  Thinking isn't 
about a lot of Right Thinking sprinkled with the occasional annoying 
Mistake.  Thinking is actually a seething cauldron of Mistakes, some of 
which get less egregious over time and become Not-Quite-So-Bad Mistakes, 
which we call rational thinking.


I think this attitude to how the mind works, though it is painted in 
bright colors, is more healthy than the attitude that thinking is about 
reasoning modulated by uncertainty.


(Perhaps this is what irritates me so much about the people who call 
themselves Bayesians:  people so desperate to believe that they are 
perfect that they have made a religion out of telling each other that they 
think perfectly, when in fact they are just as irrational as any other 
religious fanatic). ;-)




Richard Loosemore.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;









-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936