On 7/6/07, Pete de Lepper <[EMAIL PROTECTED]> wrote:

I guess what I'm thinking is: how would an AGI determine how much of its
time should be spent playing? If you impose a hard limit (say, 30% of its
time should be devoted to play), is the AGI actually intelligent?

It will most likely be the case, as for a typical human being, that (1)
whether an activity is "play" is often a matter of degree, (2) play
time is not a hard limit, but depends on context (whether there are
other urgent goals), and (3) nobody (neither the designer nor the
system itself) can actually enforce such a hard limit. Don't think of
the system as switching between a "working mode" and a "playing mode"
from time to time.

If it's an
open variable left to its own discretion, then what would stop it from
deciding that bouncing a ball is more "fun" than serving its intended
purpose, and spending 100% of its time doing that, leaving it
somewhat like a teenager you want off your lawn?

It may think bouncing a ball is more "fun" than weather modeling,
though it still has to spend most of its time on the latter, if the
pressure to do the latter is strong enough.

The key point here is that though the system's behaviors can be
strongly influenced by the goals/tasks imposed on it by its
designers/teachers/monitors, a truly intelligent system will
eventually develop "its own goals", according to its experience. These
goals can be either good or bad, from our point of view. You can call this
phenomenon "autonomy", "originality", "intentionality", "free will",
"independence", or "rebellion", though it is basically the same process.

Pei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&id_secret=13977682-fb7ea1