I wrote a blog post enlarging a little on the ideas I developed in my
response to the "playful AGI" thread...

See

http://multiverseaccordingtoben.blogspot.com/2008/08/logic-of-play.html

Some of the new content I put there:

"
Still, I have to come back to the tendency of play to give rise to goal
drift ... this is an interesting twist that apparently relates to the
wildness and spontaneity that exist in much play. Yes, most particular
forms of play do seem to arise via the syllogism I've given above. Yet,
because it involves activities that originate as simulacra of goals that go
BEYOND what the mind can currently do, play also seems to have an innate
capability to drive the mind BEYOND its accustomed limits ... in a way that
often transcends the goal G that the play-goal G1 was designed to
emulate....

This brings up the topic of meta-goals: goals that have to do explicitly
with goal-system maintenance and evolution. It seems that playing is in fact
a meta-goal, quite separate from the fact that each instance of playing
generally involves an imitation of some other specific real-life goal.
Playing is a meta-goal that should be valued by organisms that value growth
and spontaneity ... including growth of their goal systems in unpredictable,
adaptive ways....
"

-- Ben G

On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>
> About play... I would argue that it emerges in any sufficiently
> generally-intelligent system
> that is faced with goals that are difficult for it ... as a consequence of
> other general cognitive
> processes...
>
> If an intelligent system has a goal G which is time-consuming or difficult
> to achieve ...
>
> it may then synthesize another goal G1 which is easier to achieve
>
> We then have the uncertain syllogism
>
> Achieving G implies reward
> G1 is similar to G
> |-
> Achieving G1 implies reward
>
> As links between goal-achievement and reward are to some extent modified by
> uncertain inference (or an analogous process, implemented e.g. in neural
> nets), we thus
> have the
> emergence of "play" ... in cases where G1 is much easier to achieve than G
> ...
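The uncertain syllogism above can be sketched in a few lines of Python. This is only an illustration of the reward-transfer idea, not code from any actual system; the function name, the similarity value, and the confidence discount are all invented for the example:

```python
# Minimal sketch of the uncertain syllogism:
#   Achieving G implies reward
#   G1 is similar to G
#   |-
#   Achieving G1 implies reward
# Reward transfers to the surrogate goal G1 in proportion to its judged
# similarity to the real goal G, discounted by the confidence of the
# inference itself.

def transferred_reward(reward_g, similarity, confidence=1.0):
    """Expected reward of pursuing surrogate goal G1, given the reward of
    the real goal G, a similarity in [0, 1], and a confidence discount
    reflecting the uncertainty of the inference."""
    return reward_g * similarity * confidence

# A hard real-world goal G (say, hunting) and an easy play-goal G1 (say,
# chasing a ball): even a modest similarity makes G1 rewarding, so the
# system gains a reason to pursue the easier goal -- i.e. to play.
reward_g = 10.0
play_value = transferred_reward(reward_g, similarity=0.6, confidence=0.8)
print(round(play_value, 2))  # prints 4.8
```

Under this toy model, play emerges whenever some G1 with nonzero similarity to G is much cheaper to achieve than G itself.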
>
> Of course, if working toward G1 is actually good practice for working
> toward G, this may give the intelligent system (if it's smart and mature
> enough to strategize), or evolution, an impetus to create additional bias
> toward the pursuit of G1
>
> In this view, play is a quite general structural phenomenon ... and the
> play that human kids do with blocks and sticks and so forth is a special
> case, oriented toward ultimate goals G involving physical manipulation
>
> And the knack in gaining anything from play is in appropriate
> similarity-assessment ... i.e. in measuring similarity between G and G1 in
> such a way that achieving G1 actually teaches things useful for achieving G
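One crude way to make the similarity-assessment point concrete is to model each goal by the set of skills it exercises, so that a play-goal G1 is "usefully similar" to G when practicing it trains skills G also needs. This representation and all the example goals are invented for illustration:

```python
# Hedged sketch: judge play-value by skill overlap between goals -- a
# stand-in for the "appropriate similarity-assessment" that makes play
# actually teach things useful for the real goal G.

def skill_overlap(goal_a, goal_b):
    """Jaccard similarity between the skill sets of two goals."""
    a, b = set(goal_a), set(goal_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

hunting = {"tracking", "sprinting", "aiming", "patience"}
ball_chasing = {"tracking", "sprinting", "catching"}
stone_stacking = {"balance", "patience"}

# Ball-chasing shares more skills with hunting than stone-stacking does,
# so under this measure it is the more useful play-goal.
print(skill_overlap(hunting, ball_chasing) > skill_overlap(hunting, stone_stacking))  # prints True
```

Any real system would of course need a far richer similarity measure than set overlap, but the selection principle — prefer the G1 whose achievement transfers most to G — is the same.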
>
> So for any goal-achieving system that has long-term goals which it can't
> currently effectively work directly toward, play may be an effective
> strategy...
>
> In this view, we don't really need to design an AI system with play in
> mind.  Rather, if it can explicitly or implicitly carry out the above
> inference, concept-creation and subgoaling processes, play should emerge
> from its interaction w/ the world...
>
> ben g
>
>
>
> On Tue, Aug 26, 2008 at 8:20 AM, David Hart <[EMAIL PROTECTED]> wrote:
>
>> On 8/26/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
>>
>>> Is anyone trying to design a self-exploring robot or computer? Does this
>>> principle have a name?
>>
>>
>> Interestingly, some views on AI advocate specifically prohibiting
>> self-awareness and self-exploration as a precaution against the development
>> of unfriendly AI. In my opinion, these views erroneously transfer familiar
>> human motives onto 'alien' AGI cognitive architectures - there's a history
>> of discussion of this topic on SL4 and elsewhere.
>>
>> I believe, however, that most approaches to designing AGI (those that do
>> not specifically prohibit self-aware and self-explorative behaviors) take
>> for granted, and indeed intentionally promote, self-awareness and
>> self-exploration at most stages of AGI development. In other words,
>> efficient and effective recursive self-improvement (RSI) requires
>> self-awareness and self-exploration. If any term exists to describe a
>> 'self-exploring robot or computer', that term is RSI. Coining a lesser term
>> for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
>> suspect that 'RSI' is ultimately the more useful and meaningful term.
>>
>> -dave
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "Nothing will ever be attempted if all possible objections must be first
> overcome" - Dr Samuel Johnson
>
>
>

