On Saturday 22 July 2006 19:48, Shane Legg wrote:
> On 7/13/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
>
> > On the topic of measuring intelligence, what do you think about the
> > actual structure of comparison of some of today's AI systems.  I would
> > like to see someone come up with and get support for a general, fairly
> > widespread set of tests for general AI other than the Turing test.
>
> After some months looking around for tests of intelligence for machines,
> what I found was... not very much.  Few people have proposed tests of
> intelligence for machines,

But quite a lot of people are, and have been, trying to make or train systems
to do tasks that require some kind of intelligence. That they don't always
explicitly label those tasks as intelligence tests doesn't mean that they
are not.

> and other than the Turing test, none of these tests have been developed or
> used much.

At present, every artificial system would fail a test that really requires
much general intelligence of any sort. There are only systems that can
succeed at fairly simple, domain-specific tasks.

By the way, the Turing test doesn't seem to me to be a very good test. The
real mark of intelligence is found in action, not in language.

> Naturally I'd like "universal intelligence", that Hutter and myself have
> formulated, to lead to a practical test that was widely used.  However
> making the test practical poses a number of problems, the most significant
> of which, I think, is the sensitivity that universal intelligence has to
> the choice of reference universal Turing machine.  Maybe, with more
> insights, this problem can be, if not solved, at least dealt with in a
> reasonably acceptable way?
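
(For other readers: as I understand the Legg/Hutter definition, the measure
Shane refers to is roughly

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is a set of computable environments, V^{\pi}_{\mu} is the expected
reward the agent \pi obtains in environment \mu, and K(\mu) is the Kolmogorov
complexity of \mu relative to the chosen reference universal Turing machine.
That 2^{-K(\mu)} weighting is exactly where the dependence on the reference
machine enters.)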

It's hard to assign a value to intelligence, something like: if system s
succeeds at task t, it has at least intelligence level x. Still, it is fairly
obvious that building an entire modern city requires more intelligence than
cleaning a floor; the first task is far more complex. How much more? That's
hard to say. But is that necessary? Is it always necessary to be able to
assign values? I don't think so. It can even be harmful: in the social
sciences, using mathematics can lead to pseudo-exactness and pseudo-certainty
(and to the loss of the ability to formulate non-trivial laws).
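
(To make the kind of relation I mean concrete, here is a toy sketch in
Python; the tasks and the level numbers are invented purely for
illustration.)

    # Toy illustration: a system has demonstrated at least the intelligence
    # level of the hardest task it has succeeded at.  The levels below are
    # invented for the example, not measured values.
    TASK_LEVEL = {
        "clean the floor": 1.0,
        "drive across town": 3.0,
        "build a modern city": 9.0,
    }

    def demonstrated_lower_bound(succeeded_tasks):
        """Highest level among the tasks the system succeeded at."""
        return max((TASK_LEVEL[t] for t in succeeded_tasks), default=0.0)

    print(demonstrated_lower_bound(["clean the floor"]))  # -> 1.0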

AI (and AGI) researchers just start with tasks that they judge to require
some intelligence (and that they think might be feasible for their systems to
reach). That doesn't seem like a bad practice to me. If you still want a
criterion, use this: how much agreement is there, and how much controversy,
among researchers (and other people) that a certain task requires
intelligence? For ordering tasks by the intelligence they require you can do
something similar.
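
(A toy sketch in Python of that ordering; the tasks and the survey fractions
are invented for the example.)

    # Rank tasks by how much agreement there is among surveyed researchers
    # that the task requires intelligence.  The fractions are invented; in
    # practice they would come from an actual survey.
    agreement = {
        "pass a written exam": 0.9,
        "play chess well": 0.7,
        "clean the floor": 0.3,
    }

    for task, frac in sorted(agreement.items(), key=lambda kv: kv[1],
                             reverse=True):
        print(f"{frac:.0%}  {task}")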

And if nobody is very sure, or there is little agreement, just run a lot of
tests, or all of them. Do a great variety of tasks of very different natures
that are expected to require x-like intelligence (where x can be insect,
human, dog, god, etc.). There will always be some vagueness. But is that
really so bad?
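
(Again only a sketch, with made-up task names, kinds and results, of what
such a varied battery might report: a profile per kind of intelligence
rather than a single number.)

    # Run a varied battery of tasks and report a profile per kind of
    # intelligence ("insect-like", "dog-like", ...).  All names and the toy
    # system below are placeholders for the example.
    battery = [
        ("follow a light gradient", "insect-like"),
        ("fetch a named object",    "dog-like"),
        ("summarise a news story",  "human-like"),
    ]

    def run_battery(system, battery):
        profile = {}
        for task, kind in battery:
            passed = system(task)              # True/False in this sketch
            done, total = profile.get(kind, (0, 0))
            profile[kind] = (done + int(passed), total + 1)
        return profile

    toy_system = lambda task: task == "follow a light gradient"
    for kind, (done, total) in run_battery(toy_system, battery).items():
        print(f"{kind}: {done}/{total} tasks passed")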

Further, if a formal method were found for feasibly assigning
required-intelligence values to all possible tasks, I suspect that an
implementation of that method would already be an AGI system, possibly one
of infinite intelligence (agreed? ;-).

Bye,
Arnoud
