Benjamin Goertzel wrote:
Well, in my 1993 book "The Structure of Intelligence" I defined intelligence as "the ability to achieve complex goals in complex environments." I followed this up with a mathematical definition of complexity grounded in algorithmic information theory (roughly: the complexity of X is the amount of pattern immanent in X, or emergent between X and the other Y's in its environment). This was closely related to what Hutter and Legg did last year in a more rigorous paper, which gave an algorithmic-information-theoretic definition of intelligence.
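For reference, their measure (from memory; see the paper for the exact statement) is roughly

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is a class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward agent \pi earns in \mu. Simpler environments thus get exponentially more weight, and an agent's intelligence is its reward-weighted performance across all of them.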
Having put some time into this sort of definitional work, I then moved on to more interesting things, like figuring out how to actually make an intelligent software system given feasible computational resources.
The catch with the above definition is that a truly general intelligence is possible only with infinite computational resources (Hutter's AIXI, the agent that is optimal under this sort of measure, is not even computable). So different AGIs may be able to achieve different sorts of complex goals in different sorts of complex environments. And if an AGI is sufficiently different from us humans, we may not even be able to comprehend the complexity of the goals or environments that are most relevant to it.
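A toy illustration of that resource blow-up (the numbers are mine, not part of the original argument): an idealized Solomonoff/AIXI-style agent has to weigh every program consistent with its history, and the count of candidate binary programs alone grows exponentially with the length bound:

    # Count of binary strings of length <= n is 2^(n+1) - 1.
    # Any scheme that enumerates them all is hopeless long before
    # n reaches the size of an interesting environment model.
    for n in range(10, 101, 10):
        print(f"n = {n:3d}: {2 ** (n + 1) - 1:,} programs")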
So there is a general theory of what AGI is; it's just not very useful. To make it pragmatic one has to specify some particular classes of goals and environments. For example:

goal = getting good grades
environment = online universities

Then, to connect this kind of pragmatic definition with the mathematical definition, one would have to prove the complexity of the goal (getting good grades) and the environment (online universities) based on some relevant computational model. But the latter seems very tedious and boring work...
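To make that concrete, here is a minimal sketch of what "specifying a goal over an environment" might look like computationally. The names and the grading scheme are illustrative assumptions, and compressed source length is only a crude, computable stand-in for Kolmogorov complexity, which is itself incomputable:

    import inspect
    import zlib

    def grade_reward(transcript):
        """Illustrative 'goal': the fraction of grades at or
        above a B (3.0 on a 4.0 scale) in a student's transcript."""
        grades = transcript.get("grades", [])
        if not grades:
            return 0.0
        return sum(1 for g in grades if g >= 3.0) / len(grades)

    def complexity_proxy(fn):
        """Crude upper bound on a goal's descriptive complexity:
        the compressed length of its source code, in bytes."""
        source = inspect.getsource(fn).encode("utf-8")
        return len(zlib.compress(source))

    print(grade_reward({"grades": [4.0, 3.0, 2.0]}))  # 0.666...
    print(complexity_proxy(grade_reward))

Needless to say, nothing here proves anything about the actual complexity of online universities; it just shows the shape such a formalization might take.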
And IMO all this does not move us very far toward AGI, though it may help us avoid some conceptual pitfalls we might otherwise fall into...
Unfortunately, I do not think any of the existing definitions of intelligence (including yours above, and those offered by Hutter, Legg, etc.) are worth anything, for the following reason:
Take a look at the word "goal". The only way this term can be defined is subjectively: you have to use ANOTHER intelligence to interpret what counts as a goal and what does not. For example, in your example above you wrote "goal = getting good grades"... but it is impossible to come up with any kind of objective formalization of this. It would take an entire intelligence just to say what counts as the meaning of "getting good grades".
So you need to say: the definition of intelligence is [some definition using the term "goal"], and the definition of "goal" is "whatever an intelligent system would subjectively classify as a goal".

But if you cannot define intelligence without inserting a subjective term in the definition, why bother with the circumlocution? Why not cut to the chase and just define it this way:

"Intelligence is whatever an intelligent system would subjectively classify as intelligence."
In exactly the same way, if you look at the standard approach to AI (Russell and Norvig, e.g.) you will find it triumphantly declaring that we now treat AI in a more objective, scientific and rigorous way because we define the AI endeavor in terms of "agents", "goals", etc. But when you dissect the meanings of terms like "agent" and "goal" you find the same surreptitious dependence on subjective terms. Pure nonsense. Sham science.
Richard Loosemore.