On Feb 4, 2008 11:42 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> The test, I suggest, is essentially not the Turing Test or anything like
> that but "The General Test." If your system is an AGI, or has AGI potential,
> then it must first of all have a skill and be able to solve problems in a
> given domain. The "test" is then: can it a) independently learn a skill in an
> adjacent domain, and/or b) pass a problem-solving test in an adjacent domain
> (to be set by someone other than the system-builder!). If it can play soccer,
> can it learn how to play rugby and solve problems in rugby? If it can build
> Lego constructions, can it learn to build a machine? If it can search for
> hidden items, can it learn to play hide-and-seek? The General Test then is
> simply a test of whether a system can generalize its skill(s). If it knows
> how to put together a set of elements in certain kinds of ways, can it then
> learn to put those same elements together [and perhaps some new ones] in new
> kinds of ways?
Interesting test. However, as others have mentioned, it is a difficult
test to evaluate in practice. Here's my proposal:
I propose that the purpose of any 'intelligent' system / agent is to
pursue goals. These goals can be specified in any way: 'classical'
victory-condition-type goals, maintenance goals, whatever. An
intelligent system should be evaluated based on how well it can pursue
its goals.
In particular, the quality of an intelligent system should be
evaluated based on:
- Optimality. An intelligence should try to optimize its goal-seeking
behavior for maximum 'reward'.
- Adaptability. If the world changes and makes different strategies
optimal, the system should account for this in its behavior.
- Generality. A general intelligence should be able to 'solve' as wide
a range of goals or subgoals as possible.
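The three criteria above can be made concrete with a toy evaluation
harness. Here is a minimal sketch, assuming a hypothetical two-armed
bandit world and a simple reward-tracking agent (the names Bandit,
greedy_agent, and evaluate are invented for illustration, not a
standard API):

```python
import random

class Bandit:
    """Two-armed bandit; the better arm can flip mid-run (a world change)."""
    def __init__(self, p_best=0.9, p_other=0.1, flip_at=None):
        self.p = [p_best, p_other]
        self.flip_at = flip_at
        self.t = 0

    def pull(self, arm):
        self.t += 1
        if self.flip_at is not None and self.t == self.flip_at:
            self.p.reverse()  # the world changes mid-run
        return 1 if random.random() < self.p[arm] else 0

def greedy_agent(steps, bandit):
    """Track recency-weighted reward estimates; mostly exploit the best arm."""
    est = [0.5, 0.5]
    total = 0
    for _ in range(steps):
        arm = 0 if est[0] >= est[1] else 1
        if random.random() < 0.1:  # occasional exploration
            arm = 1 - arm
        r = bandit.pull(arm)
        est[arm] += 0.2 * (r - est[arm])  # recency bias aids adaptation
        total += r
    return total

def evaluate(agent, steps=1000, seed=0):
    """Score an agent on optimality, adaptability, generality (each in [0, 1])."""
    random.seed(seed)
    # Optimality: average reward in a fixed world.
    optimality = agent(steps, Bandit()) / steps
    # Adaptability: average reward when the better arm flips halfway through.
    adaptability = agent(steps, Bandit(flip_at=steps // 2)) / steps
    # Generality: average reward over several task variants.
    variants = [Bandit(0.8, 0.2), Bandit(0.9, 0.5), Bandit(0.6, 0.4)]
    generality = sum(agent(steps, b) for b in variants) / (len(variants) * steps)
    return optimality, adaptability, generality
```

The point is not the bandit itself but the shape of the metric: the
same agent is scored on reward in a fixed world, reward under a world
change, and reward averaged across task variants.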
Clearly humans count as intelligent under this metric. Dogs are
still intelligent, but less so: they have much less generality in
the goals they can solve, and they are also less adaptable
('creative') than people.
Is a washing machine intelligent? It certainly minimally fits the
'intelligence' criterion of being able to solve a goal. The goal of my
washing machine is to make it easy for me to wash my clothes. Does it
do this optimally? No. Is it adaptable? Not really. Can it solve any
other goals? No.
Perhaps to be flagged 'intelligent', some minimal benchmark in
optimality, adaptability, and generality is required. But this is not
the interesting end of the scale.
> That's what people should be doing here centrally - discussing and
> exchanging their ideas about how to solve the General Test. The fact that no
> one is discussing this (despite vast volumes of overall discussion) suggests
> very powerfully that no one *has* an idea.
I think solutions are easy; asking the right question is hard. Here
are my favorites:
"What kind of information does a general intelligence need to store
and manipulate?"
"What are the most fundamental elements of information you need to
store? What requirements are there on these pieces of information? How
can they be combined?"
"What is the simplest goal an intelligent system could possibly learn to solve?"
"What features must a good intelligent system have? If you were
writing a software engineering spec for an AI, what would it look
like?"
... anyone?
-J
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93782378-41d2ce