In late April I was too busy to join the thread "Circular definitions
of intelligence". However, since some of you know that I proposed my
working definition of intelligence before, I have no choice but to
take Richard's challenge. ;-)

Before addressing Richard's (very good) points, let me summarize my
opinions presented in
http://www.cogsci.indiana.edu/pub/wang.intelligence.ps and
http://nars.wang.googlepages.com/wang.AI_Definitions.pdf , especially
for the people who don't want to read papers.

First, I know that many people think this discussion is a waste of
time. I agree that spending all the time arguing about definitions
won't get AGI anywhere, but the other extreme is equally bad. The
recent discussions in this mailing list make me think that it is still
necessary to spend some time on this issue, since the definition of
intelligence one accepts directly determines one's research goal and
one's criteria for evaluating other people's work. Nobody can do, or
even talk about, AI or AGI without some idea of what it means.

Though at the current time we cannot expect a perfect definition (we
don't know that much yet), that doesn't mean every definition or vague
notion is equally good. A good definition should be (1) clear, (2)
simple, (3) instructive, and (4) close to the common usage of the term
in everyday language. Since these requirements often conflict with
each other, our choice must be based on a balance among them, rather
than on a single factor.

Unlike in many other fields, where the definition of the field doesn't
matter too much, in AI it is the root of many other problems, since
the only widely accepted sample of intelligence, human intelligence,
can be specified and duplicated from several aspects or perspectives,
and each of them leads the research in a different direction. Though
all these directions are fruitful, they produce very different fruits,
and cannot encompass one another (though partial overlaps exist).

Based on the above general considerations, I define "intelligence" as
"the ability to adapt and work with insufficient knowledge and
resources", which requires the system to rely on finite computational
capacity, to be open to novel observations and tasks, to respond in
real time, and to learn from experience.

NARS is designed and implemented according to this working definition
of intelligence.
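
To make this concrete, here is a minimal sketch in Python. All names
are hypothetical, and it illustrates the working definition rather
than the actual control mechanism of NARS: the system has a hard
memory bound, accepts new tasks at any moment, answers questions by a
deadline with the best answer found so far, and revises its beliefs
with each piece of experience.

import heapq
import time

class ResourceBoundedSystem:
    """A toy illustration of 'adaptation under insufficient knowledge
    and resources' (a hypothetical design, not NARS itself)."""

    def __init__(self, capacity=100):
        self.capacity = capacity  # finite memory: a hard bound, not an idealization
        self.tasks = []           # priority queue of pending tasks
        self.beliefs = {}         # statement -> confidence in [0, 1], always revisable

    def accept(self, task, priority):
        # Openness: new observations/tasks may arrive at any time.
        heapq.heappush(self.tasks, (-priority, task))
        if len(self.tasks) > self.capacity:
            # Insufficient resources: the lowest-priority tasks are forgotten.
            self.tasks = heapq.nsmallest(self.capacity, self.tasks)

    def answer(self, question, deadline):
        # Real-time response: report the best answer found before the
        # deadline, not a guaranteed optimum (which may be unaffordable).
        best = self.beliefs.get(question)
        while time.time() < deadline and self.tasks:
            _, task = heapq.heappop(self.tasks)
            self.learn(task)  # each processing step may improve the answer
            best = self.beliefs.get(question, best)
        return best  # possibly wrong, but never "wait forever"

    def learn(self, evidence):
        # Learning from experience: revise confidence instead of assuming
        # that sufficient knowledge was given in advance.
        statement, observed = evidence
        old = self.beliefs.get(statement, 0.5)
        self.beliefs[statement] = old + 0.1 * (observed - old)

The contrast with an "optimal" solver is the point: answer() never
waits for an exhaustive search to finish, because under insufficient
knowledge and resources a timely, revisable answer is the most a
system can rationally produce.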

In the following I'll comment on Richard's opinions.

On 4/26/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> I spent a good deal of effort, yesterday, trying to get you to "define
> intelligence in an abstract way that is not closely coupled to human
> intelligence" and yet, in the end, the only thing you could produce was
> a definition that either:
>
> a) Contained a term that had to be interpreted by an intelligence - so
> this was not an objective definition, it was circular,

Though "circular definition" should be rejected in general, this
notion cannot be interpreted too widely. I'll say that defining
"intelligence" by "mind", "cognition", "thinking", or "consciences"
doesn't contribute much, but I don't mind people to use concepts like
"goal" in their definitions (though I don't do that for other
reasons), because "goal" is a much simpler and more clear concept than
"intelligence", though like all human concepts, it has its own
fuzziness and vagueness.

Richard is right when he says that intelligence is required to
recognize a goal, but in that sense, all human concepts are created by
human intelligence, rather than obtained from the objective world. By
that standard, all meaningful definitions of intelligence will be
judged as "circular". Even so, to define "intelligence" using "goal"
is much less circular than using "intelligence" itself.

Again, for our current question, no answer is perfect, but that
doesn't mean all answers are equally bad (or equally good).

> b) Was a definition of such broad scope that it did not even slightly
> coincide with the commonsense usage of the word "intelligent" ... for
> example, it allowed an algorithm that optimized ANYTHING WHATSOEVER to
> have the word 'intelligent' attached to it,

Agreed. If all computers are already intelligent, then we should just
continue with computer science, since the new label "AI" contributes
nothing.

According to my definition, a thermostat is not intelligent, nor is
an algorithm that provides "optimal" solutions by going through all
the possibilities and picking the best.

To me, whether a system is intelligent is not determined by what
practical problems it can solve at a given moment, but by how it
solves problems --- by design or via learning. Among learning systems,
what matters most to me is not how complex the results are, but how
realistic the working situation is. For example, to me, a system that
assumes sufficient resources is not intelligent, no matter how great
the result is.

I don't think intelligence should be measured by problem-solving
capability. For example, Windows XP is much more capable than
Windows 3.1, though I don't think it is more intelligent --- to me,
both of them have little intelligence. Yes, intelligence is a matter
of degree, but that doesn't mean every system has a non-zero degree
on this scale.

BTW, I think it is too early to talk about numerical measurement of
intelligence, though we can use the term qualitatively and
comparatively.

> c) Was couched in terms of a pure mathematical formalism (Hutter's),
> about which I cannot even *say* whether it coincides with the
> commonsense usage of the word "intelligent" because there is simply no
> basis for comparing this definition with anything in the real world --
> as meaningless as defining a unicorn in terms of measure theory!

I think two issues are mixed here.

To criticize the formality of Hutter's work is not fair, because he
makes its relation to computer systems quite clear. It is true that
his definition doesn't fully match the commonsense usage of the word,
but no clear definition will --- we need a definition exactly because
the commonsense usage of the word is too messy to guide our research.

To criticize his assumptions as "too far away from reality" is a
different matter, and it is why I don't agree with Hutter and Legg.
Formal systems can be built on different assumptions, some of which
are closer to reality than others. For example, it is possible to
build a formal model on the assumption of infinite resources, and
another one on the assumption of finite resources. We cannot say that
they are equally unrealistic just because they are both formal.

> In all other areas of science, a formal scientific definition often does
> extend the original (commonsense) meaning of a term - you cite the
> example of gravity, which originally only meant something that happened
> on the Earth.  But one thing that a formal scientific definition NEVER
> does is to make a mockery of the original commonsense definition.

Again, it is a balance. I believe my definition captures the essence
of intelligence at a deep level, though I acknowledge that it differs
on the surface from the CURRENT commonsense usage of the word --- the
commonsense usage of words does evolve with the progress of science.

> I am eagerly awaiting any definition from you that does not fall into
> one of these traps.  Instead, it seems to me, you give only assertions
> that such a definition exists, without actually showing it.
>
> *********
>
> Unless you or someone else comes up with a definition that does not fall
> into one of these traps, I am not going to waste any more time arguing
> the point.
>
> Consider that, folks, to be a challenge:  to those who think there is
> such a definition, I await your reply.
>
> Richard Loosemore

So I've tried. I won't challenge people to find imperfections in my
definition (I know there are many), but I do want to challenge people
to propose better ones. I believe this is how this field can move
forward --- not only by finding problems in existing ideas, but also
by suggesting better ones.

Pei
