On 5/16/07, Shane Legg <[EMAIL PROTECTED]> wrote:

> > No. To me that is not intelligence, though it works even better.
>
> This seems to me to be very divergent from the usual meaning
> of the word intelligence.  It opens up the possibility that a super
> computer that is able to win a Nobel prize by running a somewhat
> efficient AI algorithm could be less intelligent than a basic pocket
> calculator that is solving optimization problems in a very efficient way.

This just shows the complexity of "the usual meaning of the word
intelligence" --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution shows any intelligence.
When "intelligence" is applied to humans, there is no problem, since
few hard problems can be solved by the human mind through brute force.
However, when we extend the notion to cover computers, we cannot keep
both criteria, since their consistency is no longer guaranteed (as
shown by your example). At this point, you see "capability" as more
essential, while I see "adaptivity" as more essential.
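To make the distinction concrete, here is a toy example of my own (not
anything from your framework): an exhaustive-search TSP solver. It is
"capable" --- it always finds the optimal tour --- yet it adapts to
nothing and learns nothing, which is why I would not call it intelligent.

```python
# Toy illustration: a brute-force solver that is "capable" without
# being "adaptive". It finds the optimal tour of a tiny TSP instance
# by checking every permutation of the cities -- it solves the
# problem, but no learning or generalization is involved.
from itertools import permutations

def brute_force_tsp(dist):
    """Return (best_cost, best_tour) by exhaustive search over all tours."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):      # fix city 0 as the start
        tour = (0,) + perm
        # Cost of the closed tour: each leg plus the return to city 0.
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# A symmetric 4-city distance matrix (arbitrary example values).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(brute_force_tsp(dist))  # -> (18, (0, 1, 3, 2))
```

The search visits (n-1)! tours, so the very property that makes it
reliable also makes it hopeless as a general method.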

There is no objective or truer answer, though we should understand
what we have given up. On my side, I admit that an intelligent system
doesn't always provide a better solution than a non-intelligent one,
and I don't see that as a problem --- today, conventional computers
solve many problems better than the human mind, but I don't take that
as a reason to consider them more intelligent.

In this discussion, I have no intention of convincing people that my
definition of intelligence is better --- for most people, that will
happen only when my system is producing results that they consider
impressive, which will not happen soon. Instead, I want to argue that
at the current moment there are multiple notions of intelligence, all
justifiable to various degrees. However, they are not equivalent,
since they lead research in different directions.

To me, the current danger on this topic is not that we have different
understandings of what intelligence is (this is undesirable, though
inevitable at the current stage), but that there are still many people
who believe that "intelligence" has only one true/correct
understanding, which happens to be how they understand it.

> It seems to me that what you are defining would be better termed
> "intelligence efficiency" rather than "intelligence".

What if I suggest renaming your notion "universal problem solver"?  ;-)

> They don't need to have the test in mind, indeed, but how can you
> justify the authority and fairness of the testing results, if many
> systems are not built to achieve what you measure?

> I don't see that as a problem.  By construction universal intelligence
> measures how well a system is able to act as an extremely general
> purpose problem solver (roughly stated).  This is what I would like to
> have, and so universal intelligence is a good measure of what I am
> interested in achieving.  I happen to believe that this is also a decent
> formalisation of the meaning of "intelligence" for machines.  Some
> systems might be very good at what they have been designed to do,
> but what I want to know is how good are they as a general purpose
> problem solver?  If I can't give them a problem, by defining a goal
> for them, and have them come up with a very clever solution to my
> problem, they aren't what I'm interested in with my AI work.
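To make sure we are discussing the same thing --- I am assuming the
measure from your paper with Hutter on universal intelligence --- the
definition, in LaTeX notation, is:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where $E$ is a set of computable environments, $K(\mu)$ is the
Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the
expected cumulative reward of agent $\pi$ interacting with $\mu$. The
$2^{-K(\mu)}$ weighting favors simple environments, but performance in
every environment counts.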

This is exactly the problem I mentioned above. What if I don't think
"universal intelligence" means "general-purpose problem solver" as you
define it? Of course, if I'm the only one, you don't need to worry,
but I really don't see how you can fit the current AGI projects, which
are as diverse as one can imagine, into the framework you are
proposing. If you simply say that the ones that don't fit are
uninteresting to you, the others can say the same about your
framework, right? For example, I'm not interested in brute-force
solutions, even though I know that some day some of them may solve
Nobel-Prize-level problems. I agree that kind of solution has huge
practical potential, and I value other people's work in that
direction. I just don't think it is AI, and its success won't solve
the problem I'm interested in.

In summary, I highly appreciate your attempt to unify the field by
building a common evaluation framework, but I hope to show that your
understanding/specification of the problem is not the only possible
one at the current moment.

Pei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
