Bob: To create a good test environment for AGIs there needs to be no
single optimal or correct solution to the problem of existing in that
environment. Being intelligent involves continuous learning in a
dynamic environment where there are multiple possible trade-offs and
strategies (aka "complex goals in complex environments").
Good point. But not a small point. You're talking about an entirely new
CULTURE. Not a monistic culture of measurable correctness, where there are
right/wrong, true/false answers (incl. with probabilities). But a
pluralistic culture of evaluative *goodness/badness*, where there are only
more or less effective and profitable answers, which all depend on somewhat
arbitrary and changeable criteria of what's good and bad, and what are the
likely rewards, risks and costs.
For example, for a narrow AI factory robot there is a right and a wrong way
to pick up an object off a belt (basically because right and wrong have been
artificially defined by its programmer).
But for a real-world AGI robot, there is and can be no right way to grasp
an object.
What, for instance, is the right way to grip and swing a tennis racket or a
golf club? There are libraries on the subject and no agreement.
A real-world AGI will, like a human, often have to arrive at, and settle
for, routine ways of handling certain objects, but will have to know that
they can always be changed and improved.
--------------------------------------------------
From: "Bob Mottram" <[email protected]>
Sent: Saturday, June 09, 2012 11:47 AM
To: "AGI" <[email protected]>
Subject: Re: [agi] Early AGI Tests
On 09.06.2012 04:04, Jim Bromer wrote:
Because we would be testing primitive programs, they have to be
defined by the programmers. Other AGI programmers might then comment
sympathetically to see if the -test- could be made a little more
sophisticated for the programmer's program. The program would have to
learn. The learning could take place through direct instruction but
the program would have to be able to figure some things out for
itself. The learning method would have to go beyond basic filling of
variable types or of basic numerical computation.
Some common problems with this kind of thing:
i) The environment is too simple.
ii) The agent-environment system reaches an equilibrium state, and then
stays there. This is an "informational death" or "end of history"
situation.
iii) The design of the agent-environment system intentionally or
unintentionally precludes virtual machine stratification.
iv) The communications system between agents is not Turing complete, with
only simple signals or gestures being possible.
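Problem (ii) above can be made concrete with a toy sketch: run the agent-environment loop and check whether the state trajectory starts repeating, i.e. whether the system has reached an equilibrium where no new information is generated. All names and the environment dynamics below are invented for illustration; this is not taken from any real AGI test harness.

```python
def step(state, action):
    """Toy deterministic environment: the state drifts toward a fixed point."""
    return (state + action) // 2

def fixed_policy(state):
    """Toy agent that always emits the same action, regardless of state."""
    return 10

def reaches_equilibrium(policy, state, max_steps=100):
    """Run the agent-environment loop and report whether the state
    trajectory revisits a previous state -- a cycle means the system
    has hit "informational death" / "end of history"."""
    seen = set()
    for _ in range(max_steps):
        if state in seen:
            return True   # a repeating cycle: nothing new will happen
        seen.add(state)
        state = step(state, policy(state))
    return False
```

A trivial agent in a trivial environment fails this check almost immediately (`reaches_equilibrium(fixed_policy, 0)` is `True`); a good test environment is one where reasonable agents keep producing novel trajectories instead of settling into a fixed cycle.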
To create a good test environment for AGIs there needs to be no
single optimal or correct solution to the problem of existing in that
environment. Being intelligent involves continuous learning in a dynamic
environment where there are multiple possible trade-offs and strategies
(aka "complex goals in complex environments").
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/6952829-59a2eca5
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com