[agi] AGI Taxonomy

2007-05-09 Thread John G. Rose
Is there a standard taxonomy of AGI that is referred to when talking about
different AGIs or near-AGIs?  Saying that a piece of software is or is not an
AGI is not descriptive enough.  There are probably very few AGIs, but many
near-AGIs, and then many, many AIs.  Software programs are like the plant and
animal kingdoms, since they breed, multiply, and evolve...

John



Re: [agi] AGI Taxonomy

2007-05-09 Thread Benjamin Goertzel

There's certainly no standard... at

http://www.agiri.org/wiki/index.php?title=AGI_Projects

I used 3 crude categories:

-- Neural net based
-- Logic based
-- Integrative


;-)



Re: [agi] AGI Taxonomy

2007-05-09 Thread J. Storrs Hall, PhD.
In Beyond AI I have a taxonomy (and Kurzweil picked that chapter, among
others, to post on his site). In brief (a small code sketch follows the list):

Hypohuman AI -- below human ability and under human control
Diahuman AI -- somewhere in the human range (which is large!)
Epihuman AI -- smarter/more capable than human, but equivalent to a 
moderate-sized company (of very smart people)
Hyperhuman AI -- equivalent to or better than all humans working in a given 
subject area

and two involving a design stance rather than a capability level

Parahuman AI -- designed to work alongside humans and relate
Allohuman AI -- optimized for other things in ways that humans have a harder 
time relating to
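
For concreteness, here is a minimal sketch of these categories as Python
data; the class names and one-line glosses are my own shorthand for
illustration, not terms from the book:

    from enum import Enum

    class CapabilityLevel(Enum):
        # the four capability levels, in increasing order
        HYPOHUMAN = "below human ability, under human control"
        DIAHUMAN = "somewhere in the (large) human range"
        EPIHUMAN = "beyond one human, up to a moderate-sized company of very smart people"
        HYPERHUMAN = "equal to or better than all humans in a given subject area"

    class DesignStance(Enum):
        # the two design-stance categories, orthogonal to capability level
        PARAHUMAN = "designed to work alongside humans and relate to them"
        ALLOHUMAN = "optimized for other things humans have a harder time relating to"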

Josh




Re: [agi] AGI Taxonomy

2007-05-09 Thread Pei Wang

My feeling is that it is better to classify AGI projects along
multiple dimensions, rather than a single one.

1. Their exact goal (or their working definition of intelligence). On
this aspect, I've tried to put them into 5 groups:
   * structure (e.g., to build brain model)
   * behavior (e.g., to simulate human mind)
   * capability (e.g., to solve hard problems)
   * function (e.g., to have cognitive facilities)
   * principle (e.g., to establish general theory)
Examples are at http://www.cis.temple.edu/~pwang/203-AI/Lecture/AGI.htm

2. Their technical strategy. So far I see 3 schools:
   * to integrate existing AI techniques (some people in mainstream
AI are moving in this direction)
   * to establish an overall architecture, with modules that are
based on different techniques (some mainstream AI people do this under
the name of cognitive architecture; integrative AGI projects are
also in this school)
   * to develop a unified core technique, then to extend it in
various directions (some AGI projects mainly depend on a single
technology)

3. Their major technology. This list is never complete, though the
most common ones are (see the sketch after this list):
   * logic
   * probability theory
   * knowledge base
   * production system
   * natural language processing
   (the above are often collectively called "symbolic")
   * neural network
   * evolutionary computation
   * robotics
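
Putting the three dimensions together, a minimal sketch in Python (all
names below are illustrative, not labels of any actual project):

    from dataclasses import dataclass

    GOALS = {"structure", "behavior", "capability", "function", "principle"}
    STRATEGIES = {"integration", "architecture", "unified core"}

    @dataclass(frozen=True)
    class AGIProject:
        name: str
        goal: str                # dimension 1: working definition of intelligence
        strategy: str            # dimension 2: technical strategy
        technologies: frozenset  # dimension 3: open-ended set of techniques

        def __post_init__(self):
            assert self.goal in GOALS and self.strategy in STRATEGIES

    # e.g., a hypothetical project built on a single logic-based core:
    example = AGIProject("ExampleProj", "principle", "unified core",
                         frozenset({"logic"}))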

Pei




Re: [agi] AGI Taxonomy

2007-05-09 Thread Panu Horsmalahti

I don't think intelligence can be measured that easily on a one-dimensional
axis, with a dot marking the intelligence of humans. If you look at all
possible intelligences, not just the organic ones we know of, measuring
intelligence becomes extremely difficult. Measuring the intelligence of
humans has been difficult in the past, and so far we pretty much only have
IQ tests (which measure performance on logical tasks). Many have opposed
this, saying that intelligence (in humans) is really something more than
simple logic: emotional intelligence, etc. If measuring the intelligence of
VERY SIMILAR systems (compared to the vast 'mind design space') is
difficult, then measuring all intelligences must be near impossible (unless
someone can pull out that magic definition of intelligence). It also seems
*very* human-centric to compare everything to humans.

Maybe measuring intelligence is like measuring how good a tool is. It
depends on what you need it for.


Re: [agi] Determinism

2007-05-09 Thread David Clark
I work very hard to produce the exact same answer to the same question.  If
some humans don't actually do that, then they are just exhibiting the flaws
that exist in our design.  This is not to be confused with answering better
over time, based on more and better information.  The exact same information
should always produce the exact same result, in human or AGI.

Irrational thought could be simulated by an AGI so that it could better model
some humans, but the fewer intentional defects built into the AGI, the better.

 A computer with finite memory can only model (predict) a computer with less
 memory.  No computer can simulate itself.  When we introspect on our own
 brains, we must simplify the model to a probabilistic one, whether or not it
 is actually deterministic.

This is NOT true.  How many answers can be produced by the formula for a
single straight line?  Infinitely many.  A computer CAN model/simulate
anything, including itself (whatever that means), given enough time.  If the
model has understanding (formulas or algorithms), then any amount of
simulated detail can be realized.

David Clark

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 08, 2007 12:47 PM
Subject: Re: [agi] Determinism


 I really hate to get into this endless discussion.  I think everyone agrees
 that some randomness in AGI decision making is good (e.g. learning through
 exploration).  Also it does not matter if the source of randomness is a true
 random source, such as thermal noise in neurons, or a deterministic pseudo
 random number generator, such as iterating a cryptographic hash function
 with a secret seed.

 I think what is confusing Mike (and I am sure he will correct me) is that
 the inability of humans to predict their own thoughts (what will I later
 decide to have for dinner?) is something that needs to be programmed into
 an AGI.  There is actually no other way to program it.  A computer with
 finite memory can only model (predict) a computer with less memory.  No
 computer can simulate itself.  When we introspect on our own brains, we
 must simplify the model to a probabilistic one, whether or not it is
 actually deterministic.


 -- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] AGI Taxonomy

2007-05-09 Thread J. Storrs Hall, PhD.
Notice that I didn't use the word "intelligence" -- the key issue here is when
we can expect the existence of AGI to make a significant difference in the 
world. Computers have had a big impact because they have abilities well 
beyond those of humans in certain limited areas. Of course, so did steam 
shovels. 

The key issue is ability, and it assumes a context that specifies what kind of 
ability we're talking about. For AGIs it would be those abilities that 
currently remain exclusive to humans. We live in a world whose parameters are 
largely couched in those terms -- for better or worse. That won't be true 50 
years from now, for the first time in history. At that point it would make 
sense to come up with a new scale.

Josh



Re: [agi] Determinism

2007-05-09 Thread Matt Mahoney

--- David Clark [EMAIL PROTECTED] wrote:
   A computer with finite memory can only model (predict) a computer with
   less memory.  No computer can simulate itself.  When we introspect on our
   own brains, we must simplify the model to a probabilistic one, whether or
   not it is actually deterministic.

 This is NOT true.  How many answers can be produced by the formula for a
 single straight line?  Infinitely many.  A computer CAN model/simulate
 anything, including itself (whatever that means), given enough time.  If the
 model has understanding (formulas or algorithms), then any amount of
 simulated detail can be realized.

By simulate, I mean in the formal sense, as a universal Turing machine can
simulate any other Turing machine.  For example, you can write a program in C
that runs programs written in Pascal (e.g. a compiler or interpreter).  Thus,
you can predict what the Pascal program will do.

Languages like Pascal and C define Turing machines.  They have unlimited
memory.  Real machines have finite memory, so to do the simulation properly
you need to also define the hardware limits of the target machine.  So if the
real program reports an out-of-memory error, the simulation should too, at
precisely the same point.  Now if the target machine (running Pascal) has 2 MB
of memory, and your machine (running C) has 1 MB, then you can't do it.  Your
simulator will run out of memory first.

Likewise, you can't simulate your own machine, because you need additional
memory to run the simulator.

When we lack the memory for an exact simulation, we can use an approximation,
one that usually but not always gives the right answer.  For example, we
forecast the weather using an approximation of the state of the Earth's
atmosphere and get an approximate answer.  We can do the same with programs.
For example, if a program outputs a string of bits according to some
algorithm, then you can often predict most of the bits by looking up the last
few bits of context in a table and predicting whatever bit was last output in
that context.  The cache and branch prediction logic in your CPU do something
like this (a small sketch of such a predictor follows below).  This is an
example of your computer simulating itself using a simplified, probabilistic
model.  A more accurate model would analyze the entire program and make exact
predictions, but this is not only impractical but also impossible.  So we
must have some cache misses and branch mispredictions.
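
To make that concrete, here is a minimal sketch of such a context-table
predictor in Python (all names are mine, for illustration only):

    def predict_stream(bits, context_len=4):
        # Guess each bit from a table mapping the last few bits (the
        # context) to the bit most recently seen after that context,
        # roughly what cache and branch predictors do.
        table = {}
        context = ()
        hits = 0
        for b in bits:
            hits += (table.get(context, 0) == b)    # score the guess
            table[context] = b                      # remember the outcome
            context = (context + (b,))[-context_len:]
        return hits / len(bits)

    # A patterned stream is mostly, but never perfectly, predictable:
    print(predict_stream([1, 1, 0] * 100))          # close to 1.0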

In the same way, the brain cannot predict itself.  The brain has finite
memory.  Even if the brain were deterministic (no neuron noise), this would
still be the case.  If a powerful enough computer knew the exact state of your
brain, it could predict what you would think next, but you could not predict
what that computer would output.  I know in theory you could follow the
computer's algorithm on pencil and paper, but even then you would still not
know the result of that manual computation until you did it.  No matter what
you do, you cannot predict your own thoughts with 100% accuracy.  Your mental
model must be probabilistic, whether the hardware is deterministic or not.




-- Matt Mahoney, [EMAIL PROTECTED]



[agi] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-09 Thread Stefan Pernar

Hello,

As you may know, the SIAI has started a matching challenge of 400,000 USD.
Please help get the word out by digging the story, thereby putting it on
Digg's front page:

http://digg.com/general_sciences/SIAI_seeks_funding_for_AI_research

Thank you,

Stefan
--
Stefan Pernar
App. 1-6-I, Piao Home
No. 19 Jiang Tai Xi Lu
100016 Beijing
China
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar
