Is there a standard taxonomy of AGI that is referred to when talking about
different AGIs or near-AGIs? Saying that a piece of software is or is not an
AGI is not descriptive enough. There are probably very few AGIs, but many
near-AGIs, and then many, many AIs. Software programs are like the plant ...
There's certainly no standard... At
http://www.agiri.org/wiki/index.php?title=AGI_Projects
I used 3 crude categories:
-- Neural net based
-- Logic based
-- Integrative
;-)
On 5/9/07, John G. Rose [EMAIL PROTECTED] wrote:
Is there a standard taxonomy of AGI that is referred to when talking ...
In Beyond AI I have a taxonomy (and Kurzweil picked that chapter, among
others, to post on his site). In brief (a toy sketch follows the list):
Hypohuman AI -- below human ability and under human control
Diahuman AI -- somewhere in the human range (which is large!)
Epihuman AI -- smarter/more capable than human, but equivalent ...
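A minimal sketch of that scale as an ordered enumeration. Only the three
level names come from the post above; the ordering and comments are my own
reading of it, and nothing here is taken from the book itself.

    # Sketch only: the three levels named above, ordered by capability
    # relative to the human range.
    from enum import IntEnum

    class CapabilityLevel(IntEnum):
        HYPOHUMAN = 0   # below human ability, under human control
        DIAHUMAN = 1    # somewhere in the (large!) human range
        EPIHUMAN = 2    # beyond the human range

    # The levels form a single ordered axis:
    assert CapabilityLevel.HYPOHUMAN < CapabilityLevel.DIAHUMAN < CapabilityLevel.EPIHUMAN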
My feeling is that it is better to classify the AGI projects along
multiple dimensions, rather than a single one (a rough code sketch
follows the list below).
1. Their exact goal (or their working definition of intelligence). On
this aspect, I've tried to put them into 5 groups:
* structure (e.g., to build a brain model)
* behavior
...
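A rough sketch of what a multi-dimensional classification might look like,
pulling its dimensions from this thread: the three crude approach categories
from the wiki post above, the capability levels from the Beyond AI post, and
the first two "goal" groups listed here (the rest are cut off in this
digest). The record type, field names, and example project are placeholders,
not anyone's official taxonomy.

    # Sketch of classifying a project along several dimensions at once.
    from dataclasses import dataclass

    APPROACHES = ("neural net based", "logic based", "integrative")
    GOALS = ("structure", "behavior")          # remaining groups elided above
    LEVELS = ("hypohuman", "diahuman", "epihuman")

    @dataclass
    class AGIProject:
        name: str
        approach: str   # which crude category it falls into
        goal: str       # its working definition of intelligence
        level: str      # where it sits on the capability axis today

    example = AGIProject("HypotheticalProject", "integrative",
                         "behavior", "hypohuman")
    print(example)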
I don't think intelligence can be measured that easily on a one-dimensional
axis, with a dot marking the intelligence of humans. If you look at all
the possible intelligences, not just the organic ones we know of, measuring
intelligence becomes extremely difficult. Measuring the intelligence of ...
I work very hard to produce the exact same answer to the same question. If
some humans don't actually do that, then they are just exhibiting the flaws
that exist in our design. This is not to be confused with answering better
over time, based on more and better information. The exact same ...
Notice that I didn't use the word "intelligence" -- the key issue here is
when we can expect the existence of AGI to make a significant difference in
the world. Computers have had a big impact because they have abilities well
beyond those of humans in certain limited areas. Of course, so did ...
--- David Clark [EMAIL PROTECTED] wrote:
A computer with finite memory can only model (predict) a computer with less
memory. No computer can simulate itself. When we introspect on our own
brains, we must simplify the model to a probabilistic one, whether or not it
is actually ...
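A rough counting sketch of the claim above (my illustration, not the
original poster's argument): if "memory" is modelled as n bits of state, a
simulator that stores the simulated machine's complete state, plus at least
one bit of its own bookkeeping, needs more than n bits, so it can only
exactly model machines with strictly less memory than it has itself.

    # Toy counting argument, assuming memory = n bits of state.
    def can_exactly_model(simulator_bits: int, target_bits: int,
                          bookkeeping_bits: int = 1) -> bool:
        """True if the simulator can hold the target's full state
        plus its own bookkeeping."""
        return simulator_bits >= target_bits + bookkeeping_bits

    print(can_exactly_model(64, 64))   # False: cannot fully simulate itself
    print(can_exactly_model(64, 32))   # True: room for a smaller machine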
Hello,
As you may know, the SIAI has started a matching challenge of 400,000
USD. Please help get the word out by digging the story and thereby
putting it on Digg's front page:
http://digg.com/general_sciences/SIAI_seeks_funding_for_AI_research
Thank you,
Stefan
--
Stefan Pernar