Terren,

Your broad distinctions are fine, but I feel you are not emphasizing the area of most interest for AGI, which is *how* we adapt rather than why. Interestingly, your blog uses the example of a screwdriver; Kauffman uses the same example in Chapter 12 of Reinventing the Sacred as an illustration of human creativity/divergence, i.e. our capacity to find infinite uses for a screwdriver.

"Do we think we could write an algorithm, an effective procedure, to generate a possibly infinite list of all possible uses of screwdrivers in all possible circumstances, some of which do not yet exist? I don't think we could get started."

What "emerges" here, v. usefully, is that the capacity for play overlaps with classically-defined, and a shade more rigorous and targeted, divergent thinking, e.g. "find as many uses as you can for a screwdriver, rubber teat, needle etc".

...How would you design a divergent (as well as play) machine that can deal with the above open-ended problems? (Again, surely essential for an AGI.)

With full general intelligence, the problem more typically starts with the function-to-be-fulfilled - e.g. how do you open this paint can? - and only then do you search for a novel tool, like a screwdriver or another can lid.
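To make that contrast concrete, here is a minimal sketch of the convergent, function-first search (a toy illustration only - the tool names and affordance tags are my own inventions, not anyone's proposed design):

# Toy illustration of the convergent, function-first search
# (tool names and affordance tags are invented for this example).

TOOLS = {
    "screwdriver":  {"rigid", "thin_edge", "long_lever"},
    "can_lid":      {"rigid", "thin_edge", "flat"},
    "handkerchief": {"flexible", "absorbent"},
}

def find_tools(required):
    """Return every known tool whose affordances cover the requirement."""
    return [name for name, affs in TOOLS.items() if required <= affs]

# "How do you open this paint can?" becomes a fixed requirement:
print(find_tools({"rigid", "thin_edge"}))   # ['screwdriver', 'can_lid']

The divergent problem inverts this: given "screwdriver", enumerate its uses. Any such table of affordances is finite and pre-stated, which is exactly why Kauffman says we couldn't get started.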



Terren:> Actually, kittens play because it's fun. Evolution has equipped them with the rewarding sense of fun because it optimizes their fitness as hunters. But kittens are adaptation executors, evolution is the fitness optimizer. It's a subtle but important distinction.

See http://www.overcomingbias.com/2007/11/adaptation-exec.html

They're adaptation executors, not fitness optimizers.

Terren
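To put that distinction in toy-program form - a minimal sketch, with every name and number invented for illustration: the kitten's inner loop maximizes a hard-wired "fun" signal, while the outer loop (evolution) selects among reward wirings by a fitness score the kitten never sees.

import random

ACTIONS = ["chase", "sleep", "groom"]

def make_kitten(reward):
    """An adaptation executor: it only ever maximizes its built-in
    reward signal ('fun') and knows nothing about fitness."""
    return lambda: max(ACTIONS, key=reward)

def fitness(kitten):
    """Evolution's yardstick: hunting practice over a lifetime, scored
    from the outside. The kitten never sees this number."""
    return sum(1.0 for _ in range(100) if kitten() == "chase")

def random_reward():
    weights = {a: random.random() for a in ACTIONS}   # one fixed 'wiring'
    return lambda a: weights[a]

# Selection acts on reward functions across generations, not on goals:
candidates = [random_reward() for _ in range(20)]
best = max(candidates, key=lambda r: fitness(make_kitten(r)))
# 'best' tends to be a wiring that makes chasing feel like fun; the
# kitten it builds still only pursues fun, never fitness itself.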

--- On Mon, 8/25/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Kittens play with small moving objects because it teaches them to be better hunters. Play is not a goal in itself, but a subgoal that may or may not be a useful part of a successful AGI design.

 -- Matt Mahoney, [EMAIL PROTECTED]



----- Original Message ----
From: Mike Tintner <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 8:59:06 AM
Subject: Re: [agi] How Would You Design a Play Machine?

Brad,

That's sad.  The suggestion is for a mental exercise, not a full-scale project. And play is fundamental to the human mind-and-body - it characterises our more mental as well as more physical activities - drawing, designing, scripting, humming and singing scat in the bath, dreaming/daydreaming & much more. It is generally acknowledged by psychologists to be an essential dimension of creativity - which is the goal of AGI. It is also an essential dimension of animal behaviour and animal evolution.  Many of the smartest companies have their play areas.

But I'm not aware of any program or computer design for play - as distinct from elaborating systematically and methodically or "genetically" on themes - are you? In which case it would be good to think about one - it'll open your mind & give you new perspectives.

This should be a group where people are not too frightened to play around with ideas.
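To start the ball rolling, here is one possible skeleton - a toy sketch under my own assumptions, with the environment and the "interesting?" test invented for illustration. Scoring interestingness by prediction error is just one common proposal from curiosity-driven learning, not the only candidate:

import random

class BrickWorld:
    """Stand-in toy environment: a pile of bricks to fiddle with."""
    def actions(self):
        return ["stack", "knock", "mouth"]
    def do(self, action):
        if action == "stack":
            return random.choice(["taller_tower", "topple"])
        return "nothing_much"

class PlayMachine:
    """Skeleton player: act with no external goal, keep what surprises you."""
    def __init__(self, world):
        self.world = world
        self.memory = {}                    # action -> last observed outcome
    def predict(self, action):
        return self.memory.get(action)      # naive model: expect a repeat
    def step(self):
        action = random.choice(self.world.actions())    # undirected fiddling
        outcome = self.world.do(action)
        interesting = outcome != self.predict(action)   # crude novelty test
        self.memory[action] = outcome
        return action, outcome, interesting

machine = PlayMachine(BrickWorld())
for _ in range(10):
    print(machine.step())

A real design would need a learnable world-model in place of the one-step memory, and something like learning progress rather than raw surprise, or it will chase noise forever - but even this crude loop has no externally supplied goal, which seems to be the crux.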

Brad:> Mike Tintner wrote: "...how would you design a play machine - a machine that can play around as a child does?"
>
> I wouldn't.  IMHO that's just another waste of time and effort (unless it's being done purely for research purposes).  It's a diversion of intellectual and financial resources that those serious about building an AGI any time in this century cannot afford.  I firmly believe if we had not set ourselves the goal of developing human-style intelligence (embodied or not) fifty years ago, we would already have a working, non-embodied AGI.
>
> Turing was wrong (or at least he was wrongly interpreted).  Those who extended his imitation test to humanoid, embodied AI were even more wrong.  We *do not need embodiment* to be able to build a powerful AGI that can be of immense utility to humanity while also surpassing human intelligence in many ways.  To be sure, we want that AGI to be empathetic with human intelligence, but we do not need to make it equivalent (i.e., "just like us").
>
> I don't want to give the impression that a non-Turing intelligence will be easy to design and build.  It will probably require at least another twenty years of "two steps forward, one step back" effort.  So, if we are going to develop a non-human-like, non-embodied AGI within the first quarter of this century, we are going to have to "just say no" to Turing and start to use human intelligence as an inspiration, not a destination.
>
> Cheers,
>
> Brad
>
>
>
> Mike Tintner wrote:
>> Just a v. rough, first thought. An essential requirement of an AGI is surely that it must be able to play - so how would you design a play machine - a machine that can play around as a child does?
>>
>> You can rewrite the brief as you choose, but my first thoughts are - it should be able to play with
>> a) bricks
>> b) plasticine
>> c) handkerchiefs/shawls
>> d) toys [whose function it doesn't know]
>> and
>> e) draw.
>>
>> Something that should be soon obvious is that a robot will be vastly more flexible than a computer, but if you want to do it all on computer, fine.
>>
>> How will it play - manipulate things every which way?
>> What will be the criteria of learning - of having done something interesting?
>> How do infants, IOW, play?

