Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Brad Paulsen
Mike Tintner wrote: "...how would you design a play machine - a machine 
that can play around as a child does?"


I wouldn't.  IMHO that's just another waste of time and effort (unless it's 
being done purely for research purposes).  It's a diversion of intellectual 
and financial resources that those serious about building an AGI any time 
in this century cannot afford.  I firmly believe if we had not set 
ourselves the goal of developing human-style intelligence (embodied or not) 
fifty years ago, we would already have a working, non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more wrong. 
 We *do not need embodiment* to be able to build a powerful AGI that can 
be of immense utility to humanity while also surpassing human intelligence 
in many ways.  To be sure, we want that AGI to be empathetic with human 
intelligence, but we do not need to make it equivalent (i.e., just like 
us).


I don't want to give the impression that a non-Turing intelligence will be 
easy to design and build.  It will probably require at least another twenty 
years of "two steps forward, one step back" effort.  So, if we are going to 
develop a non-human-like, non-embodied AGI within the first quarter of this 
century, we are going to have to "just say no" to Turing and start to use 
human intelligence as an inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI is 
surely that it must be able to play - so how would you design a play 
machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - it 
should be able to play with

a) bricks
b) plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on computer, 
fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?





Re: [agi] Re: I Made a Mistake

2008-08-25 Thread Valentina Poletti
Chill down Jim, he took it back.

On 8/24/08, Jim Bromer [EMAIL PROTECTED] wrote:

 Intolerance of another person's ideas through intimidation or ridicule
 is intellectual repression.  You won't elevate a discussion by
 promoting a program of anti-intellectual repression.  Intolerance of a
 person for his religious beliefs is a form of intellectual
 intolerance





Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Valentina Poletti
In other words, Vladimir, you are suggesting that an AGI must be at some
level controlled by humans, and therefore not 'fully embodied', in order to
prevent a non-friendly AGI as the outcome.

Therefore humans must somehow be able to control its goals, correct?

Now, what if controlling those goals entailed not being able to create an
AGI at all - would you suggest we should not create one, in order to avoid
the disastrous consequences you mentioned?

Valentina





Re: [agi] rpi.edu

2008-08-25 Thread Brad Paulsen

Eric,

http://www.cogsci.rpi.edu/research/rair/asc_rca/

Sorry, I couldn't answer your question based on a quick read.

Cheers,
Brad


Eric Burton wrote:

Does anyone know if Rensselaer Polytechnic Institute is still on track to crack
the Turing Test by 2009? There was a Slashdot article or two about
their software, called 'RASCALS', earlier this year.




Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 1:07 PM, Valentina Poletti [EMAIL PROTECTED] wrote:
 In other words, Vladimir, you are suggesting that an AGI must be at some
 level controlled from humans, therefore not 'fully-embodied' in order to
 prevent non-friendly AGI as the outcome.

Controlled in the Friendliness sense of the word. (I still have no idea
what "embodied" refers to, now that you, Terren, and I have used it in
different senses, and I recall reading a paper about six different
meanings of this word in the academic literature, none of them very
useful.)


 Therefore humans must somehow be able to control its goals, correct?

 Now, what if controlling those goals would entail not being able to create
 an AGI, would you suggest we should not create one, in order to avoid the
 disastrous consequences you mentioned?


Why would anyone suggest creating a disaster, as you pose the question?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner

Brad,

That's sad.  The suggestion is for a mental exercise, not a full-scale 
project. And play is fundamental to the human mind-and-body - it 
characterises our more mental as well as more physical activities - 
drawing, designing, scripting, humming and singing scat in the bath, 
dreaming/daydreaming & much more. It is generally acknowledged by 
psychologists to be an essential dimension of creativity - which is the goal 
of AGI. It is also an essential dimension of animal behaviour and animal 
evolution.  Many of the smartest companies have their play areas.


But I'm not aware of any program or computer design for play - as distinct 
from elaborating "systematically and methodically" or "genetically" on 
themes - are you? In which case it would be good to think about one - it'll 
open your mind & give you new perspectives.


This should be a group where people are not too frightened to play around 
with ideas.


Brad: Mike Tintner wrote: ...how would you design a play machine - a 
machine

that can play around as a child does?

I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building an 
AGI any time in this century cannot afford.  I firmly believe if we had 
not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more wrong. 
We *do not need embodiment* to be able to build a powerful AGI that can be 
of immense utility to humanity while also surpassing human intelligence in 
many ways.  To be sure, we want that AGI to be empathetic with human 
intelligence, but we do not need to make it equivalent (i.e., just like 
us).


I don't want to give the impression that a non-Turing intelligence will be 
easy to design and build.  It will probably require at least another 
twenty years of two steps forward, one step back effort.  So, if we are 
going to develop a non-human-like, non-embodied AGI within the first 
quarter of this century, we are going to have to just say no to Turing 
and start to use human intelligence as an inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI is 
surely that it must be able to play - so how would you design a play 
machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - it 
should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly more 
flexible than a computer, but if you want to do it all on computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?





Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Matt Mahoney
Kittens play with small moving objects because it teaches them to be better 
hunters. Play is not a goal in itself, but a subgoal that may or may not be a 
useful part of a successful AGI design.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 8:59:06 AM
Subject: Re: [agi] How Would You Design a Play Machine?

Brad,

That's sad.  The suggestion is for a mental exercise, not a full-scale 
project. And play is fundamental to the human mind-and-body - it 
characterises our more mental as well as more physical activities - 
drawing, designing, scripting, humming and singing scat in the bath, 
dreaming/daydreaming  much more. It is generally acknowledged by 
psychologists to be an essential dimension of creativity - which is the goal 
of AGI. It is also an essential dimension of animal behaviour and animal 
evolution.  Many of the smartest companies have their play areas.

But I'm not aware of any program or computer design for play - as distinct 
from elaborating systematically and methodically or genetically on 
themes - are you? In which case it would be good to think about one - it'll 
open your mind  give you new perspectives.

This should be a group where people are not too frightened to play around 
with ideas.

Brad: Mike Tintner wrote: ...how would you design a play machine - a 
machine
 that can play around as a child does?

 I wouldn't.  IMHO that's just another waste of time and effort (unless 
 it's being done purely for research purposes).  It's a diversion of 
 intellectual and financial resources that those serious about building an 
 AGI any time in this century cannot afford.  I firmly believe if we had 
 not set ourselves the goal of developing human-style intelligence 
 (embodied or not) fifty years ago, we would already have a working, 
 non-embodied AGI.

 Turing was wrong (or at least he was wrongly interpreted).  Those who 
 extended his imitation test to humanoid, embodied AI were even more wrong. 
 We *do not need embodiment* to be able to build a powerful AGI that can be 
 of immense utility to humanity while also surpassing human intelligence in 
 many ways.  To be sure, we want that AGI to be empathetic with human 
 intelligence, but we do not need to make it equivalent (i.e., just like 
 us).

 I don't want to give the impression that a non-Turing intelligence will be 
 easy to design and build.  It will probably require at least another 
 twenty years of two steps forward, one step back effort.  So, if we are 
 going to develop a non-human-like, non-embodied AGI within the first 
 quarter of this century, we are going to have to just say no to Turing 
 and start to use human intelligence as an inspiration, not a destination.

 Cheers,

 Brad



 Mike Tintner wrote:
 Just a v. rough, first thought. An essential requirement of  an AGI is 
 surely that it must be able to play - so how would you design a play 
 machine - a machine that can play around as a child does?

 You can rewrite the brief as you choose, but my first thoughts are - it 
 should be able to play with
 a) bricks
 b)plasticine
 c) handkerchiefs/ shawls
 d) toys [whose function it doesn't know]
 and
 e) draw.

 Something that should be soon obvious is that a robot will be vastly more 
 flexible than a computer, but if you want to do it all on computer, fine.

 How will it play - manipulate things every which way?
 What will be the criteria of learning - of having done something 
 interesting?
 How do infants, IOW, play?





Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Valentina Poletti
On 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Mon, Aug 25, 2008 at 1:07 PM, Valentina Poletti [EMAIL PROTECTED]
 wrote:
  In other words, Vladimir, you are suggesting that an AGI must be at some
  level controlled from humans, therefore not 'fully-embodied' in order to
  prevent non-friendly AGI as the outcome.

 Controlled in Friendliness sense of the word. (I still have no idea
 what embodied refers to, now that you, me and Terren used it in
 different senses, and I recall reading a paper about 6 different
 meanings of this word in academic literature, none of them very
 useful).


Agree

 Therefore humans must somehow be able to control its goals, correct?
 
  Now, what if controlling those goals would entail not being able to
 create
  an AGI, would you suggest we should not create one, in order to avoid the
  disastrous consequences you mentioned?
 

 Why would anyone suggest creating a disaster, as you pose the question


Also agree. As far as you know, has anyone, including Eliezer, suggested any
method or approach (as theoretical or complicated as it may be) to solve
this problem? I'm asking because the Singularity community has confidence in
creating a self-improving AGI in the next few decades, and, assuming they
have no intention of creating the above-mentioned disaster, I figure someone
must have found some way to approach this problem.





Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-25 Thread Matt Mahoney
John, I have looked at your patent and various web pages. You list a lot of 
nice-sounding ethical terms (honor, love, hope, peace, etc.) but give no details 
on how to implement them. You have already admitted that you have no 
experimental results, haven't actually built anything, and have no other 
results such as refereed conference or journal papers describing your system. 
If I am wrong about this, please let me know.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: John LaMuth [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 24, 2008 11:21:30 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)

 
 
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 24, 2008 2:46 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

I have challenged this list as well as the singularity and SL4 lists to come up with an example of a mathematical, software, biological, or physical example of RSI, or at least a plausible argument that one could be created, and nobody has. To qualify, an agent has to modify itself or create a more intelligent copy of itself according to an intelligence test chosen by the original. The following are not examples of RSI:

 1. Evolution of life, including humans.
 2. Emergence of language, culture, writing, communication technology, and computers.

 -- Matt Mahoney, [EMAIL PROTECTED]

###

Matt

Where have you been for the last 2 months ??

I had been talking then about my 2 US Patents for ethical/friendly AI along lines of a recursive simulation targeting language (topic 2) above.

This language agent employs feedback loops and LTM to increase comprehension and accuracy (and BTW - resolves the ethical safeguard problems for AI) ...

No-one yet has proven me wrong ?? Howsabout YOU ???

More at
www.angelfire.com/rnb/fairhaven/specs.html

John LaMuth

www.ethicalvalues.com
 


 


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner


Matt: Kittens play with small moving objects because it teaches them to be 
better hunters. Play is not a goal in itself, but a subgoal that may or may 
not be a useful part of a successful AGI design.


Certainly, crude imitation of, and preparation for, adult activities is one 
aspect of play. But pure exploration - experimentation - and embroidery also 
are important. An infant dropping & throwing things & handling things every 
which way. Doodling - creating lines that go off and twist and turn in every 
direction. Babbling - playing around with sounds. Sputtering - playing 
around with silly noises - kids love that, no? (Even some of us adults too). 
Playing with stories and events - and alternative endings, beginnings and 
middles.  Make-believe. Playing around with the rules of invented games.


Human development allots a great deal of time for such play. 







Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 6:23 PM, Valentina Poletti [EMAIL PROTECTED] wrote:

 On 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Why would anyone suggest creating a disaster, as you pose the question?


 Also agree. As far as you know, has anyone, including Eliezer, suggested any
 method or approach (as theoretical or complicated as it may be) to solve
 this problem? I'm asking this because the Singularity has confidence in
 creating a self-improving AGI in the next few decades, and, assuming they
 have no intention to create the above mentioned disaster.. I figure someone
 must have figured some way to approach this problem.

I see no realistic alternative (as in, with high probability of
occurring in the actual future) to creating a Friendly AI. If we don't, we
are likely doomed one way or another, most thoroughly through
Unfriendly AI. As I mentioned, one way to see Friendly AI is as a
"second chance" substrate, which is the first thing to put in place to
ensure any kind of safety from fatal, or just vanilla bad, mistakes in the
future. Of course, establishing a dynamic that knows what counts as a
mistake and when to recover, prevent, or guide is the tricky part.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Actually, kittens play because it's fun. Evolution has equipped them with the 
rewarding sense of fun because it optimizes their fitness as hunters. But 
kittens are adaptation executors; evolution is the fitness optimizer. It's a 
subtle but important distinction.

See http://www.overcomingbias.com/2007/11/adaptation-exec.html

Terren

They're adaptation executors, not fitness optimizers. 

--- On Mon, 8/25/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 Kittens play with small moving objects because it teaches
 them to be better hunters. Play is not a goal in itself, but
 a subgoal that may or may not be a useful part of a
 successful AGI design.
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
 - Original Message 
 From: Mike Tintner [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Monday, August 25, 2008 8:59:06 AM
 Subject: Re: [agi] How Would You Design a Play Machine?
 
 Brad,
 
 That's sad.  The suggestion is for a mental exercise,
 not a full-scale 
 project. And play is fundamental to the human mind-and-body
 - it 
 characterises our more mental as well as more physical
 activities - 
 drawing, designing, scripting, humming and singing scat in
 the bath, 
 dreaming/daydreaming  much more. It is generally
 acknowledged by 
 psychologists to be an essential dimension of creativity -
 which is the goal 
 of AGI. It is also an essential dimension of animal
 behaviour and animal 
 evolution.  Many of the smartest companies have their play
 areas.
 
 But I'm not aware of any program or computer design for
 play - as distinct 
 from elaborating systematically and methodically or
 genetically on 
 themes - are you? In which case it would be good to think
 about one - it'll 
 open your mind  give you new perspectives.
 
 This should be a group where people are not too frightened
 to play around 
 with ideas.
 
 Brad: Mike Tintner wrote: ...how would you design
 a play machine - a 
 machine
  that can play around as a child does?
 
  I wouldn't.  IMHO that's just another waste of
 time and effort (unless 
  it's being done purely for research purposes). 
 It's a diversion of 
  intellectual and financial resources that those
 serious about building an 
  AGI any time in this century cannot afford.  I firmly
 believe if we had 
  not set ourselves the goal of developing human-style
 intelligence 
  (embodied or not) fifty years ago, we would already
 have a working, 
  non-embodied AGI.
 
  Turing was wrong (or at least he was wrongly
 interpreted).  Those who 
  extended his imitation test to humanoid, embodied AI
 were even more wrong. 
  We *do not need embodiment* to be able to build a
 powerful AGI that can be 
  of immense utility to humanity while also surpassing
 human intelligence in 
  many ways.  To be sure, we want that AGI to be
 empathetic with human 
  intelligence, but we do not need to make it equivalent
 (i.e., just like 
  us).
 
  I don't want to give the impression that a
 non-Turing intelligence will be 
  easy to design and build.  It will probably require at
 least another 
  twenty years of two steps forward, one step
 back effort.  So, if we are 
  going to develop a non-human-like, non-embodied AGI
 within the first 
  quarter of this century, we are going to have to
 just say no to Turing 
  and start to use human intelligence as an inspiration,
 not a destination.
 
  Cheers,
 
  Brad
 
 
 
  Mike Tintner wrote:
  Just a v. rough, first thought. An essential
 requirement of  an AGI is 
  surely that it must be able to play - so how would
 you design a play 
  machine - a machine that can play around as a
 child does?
 
  You can rewrite the brief as you choose, but my
 first thoughts are - it 
  should be able to play with
  a) bricks
  b)plasticine
  c) handkerchiefs/ shawls
  d) toys [whose function it doesn't know]
  and
  e) draw.
 
  Something that should be soon obvious is that a
 robot will be vastly more 
  flexible than a computer, but if you want to do it
 all on computer, fine.
 
  How will it play - manipulate things every which
 way?
  What will be the criteria of learning - of having
 done something 
  interesting?
  How do infants, IOW, play?
 
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 9:22 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 Actually, kittens play because it's fun. Evolution has equipped them with the 
 rewarding sense of fun because it optimizes their fitness as hunters. But 
 kittens are adaptation executors, evolution is the fitness optimizer. It's a 
 subtle but important distinction.

 See http://www.overcomingbias.com/2007/11/adaptation-exec.html


Saying that play is not adaptive requires some backing (I expect it
plays some role, so you need to be more convincing).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Abram Demski
Mike,

I agree with Brad somewhat, because I do not think copying human (or
animal) intellect is the goal. It is a means to the end of general
intelligence.

However, that certainly doesn't stop me from participating in a
thought experiment.

I think the big thing with artificial play is figuring out a good
goal-creation scheme. My definition of play directly follows from
this intuition: play is activity that results from a system that is
rapidly changing its goals. In other words, play is behavior that is
goal-oriented, but barely.

The definition should probably be somewhat more specific-- when
playing, people and animals don't just adopt totally arbitrary goals;
we seem to prefer interesting goals. This is because there is a
hidden biological agenda-- learning. But, learning is not *our* goal.
Our goal is whatever arbitrary goal we have adopted for the purpose of
play.

One system I know of does something like this-- the PURR-PUSS
system. Its rule is simple: if an unexpected event happens once, then
the system will adopt the goal of trying to get it to happen again, by
recreating the circumstances that led to it the first time. In
carrying out the attempt, it should be able to greatly refine its
concept of the circumstances that led to it-- because many of its
attempts to recreate the event will probably fail. Many of these
curiosity-based goals may be active at once. Since this is the
system's only motivational factor, it could be called an artificial
playing system (at least by my definition).
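
For concreteness, a minimal toy sketch of that rule in Python (just an
illustration, not PURR-PUSS itself; ToyWorld, the "lever" event and the 0.7
sampling rate are invented for the example):

import random

class ToyWorld:
    """Stub environment: the surprising event fires only when 'lever' is included."""
    def try_circumstances(self, circumstances):
        return "noise" if "lever" in circumstances else None

class CuriosityAgent:
    """Adopts 'make the surprising event happen again' goals and refines
    its notion of which circumstances actually mattered."""
    def __init__(self):
        self.seen = set()    # events that are no longer surprising
        self.goals = {}      # event -> believed relevant circumstances

    def observe(self, event, circumstances):
        if event not in self.seen:
            self.seen.add(event)
            self.goals[event] = set(circumstances)   # curiosity goal adopted

    def act(self, world):
        for event, believed in self.goals.items():
            # Try to recreate the event from a random subset of the believed
            # circumstances; keep only subsets that still produce it.
            attempt = {c for c in believed if random.random() < 0.7}
            if world.try_circumstances(attempt) == event:
                self.goals[event] = attempt          # refined concept

agent = CuriosityAgent()
agent.observe("noise", {"lever", "red light", "morning"})
world = ToyWorld()
for _ in range(50):
    agent.act(world)
print(agent.goals["noise"])   # usually narrows toward {'lever'}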

-Abram

On Mon, Aug 25, 2008 at 8:59 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Brad,

 That's sad.  The suggestion is for a mental exercise, not a full-scale
 project. And play is fundamental to the human mind-and-body - it
 characterises our more mental as well as more physical activities - drawing,
 designing, scripting, humming and singing scat in the bath,
 dreaming/daydreaming  much more. It is generally acknowledged by
 psychologists to be an essential dimension of creativity - which is the goal
 of AGI. It is also an essential dimension of animal behaviour and animal
 evolution.  Many of the smartest companies have their play areas.

 But I'm not aware of any program or computer design for play - as distinct
 from elaborating systematically and methodically or genetically on themes
 - are you? In which case it would be good to think about one - it'll open
 your mind  give you new perspectives.

 This should be a group where people are not too frightened to play around
 with ideas.

 Brad: Mike Tintner wrote: ...how would you design a play machine - a
 machine

 that can play around as a child does?

 I wouldn't.  IMHO that's just another waste of time and effort (unless
 it's being done purely for research purposes).  It's a diversion of
 intellectual and financial resources that those serious about building an
 AGI any time in this century cannot afford.  I firmly believe if we had not
 set ourselves the goal of developing human-style intelligence (embodied or
 not) fifty years ago, we would already have a working, non-embodied AGI.

 Turing was wrong (or at least he was wrongly interpreted).  Those who
 extended his imitation test to humanoid, embodied AI were even more wrong.
 We *do not need embodiment* to be able to build a powerful AGI that can be
 of immense utility to humanity while also surpassing human intelligence in
 many ways.  To be sure, we want that AGI to be empathetic with human
 intelligence, but we do not need to make it equivalent (i.e., just like
 us).

 I don't want to give the impression that a non-Turing intelligence will be
 easy to design and build.  It will probably require at least another twenty
 years of two steps forward, one step back effort.  So, if we are going to
 develop a non-human-like, non-embodied AGI within the first quarter of this
 century, we are going to have to just say no to Turing and start to use
 human intelligence as an inspiration, not a destination.

 Cheers,

 Brad



 Mike Tintner wrote:

 Just a v. rough, first thought. An essential requirement of  an AGI is
 surely that it must be able to play - so how would you design a play machine
 - a machine that can play around as a child does?

 You can rewrite the brief as you choose, but my first thoughts are - it
 should be able to play with
 a) bricks
 b)plasticine
 c) handkerchiefs/ shawls
 d) toys [whose function it doesn't know]
 and
 e) draw.

 Something that should be soon obvious is that a robot will be vastly more
 flexible than a computer, but if you want to do it all on computer, fine.

 How will it play - manipulate things every which way?
 What will be the criteria of learning - of having done something
 interesting?
 How do infants, IOW, play?




Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam

Hi Vlad,

Thanks for taking the time to read my article and pose excellent questions. My 
attempts at answers below.

--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
 What is the point of building general intelligence if all
 it does is
 takes the future from us and wastes it on whatever happens
 to act as
 its goal?

Indeed. Personally, I have no desire to build anything smarter than humans. 
That's a deal with the devil, so to speak, and one I believe most ordinary 
folks would be afraid to endorse, especially if they were made aware of the 
risks. The Singularity is not an inevitability, if we demand approaches that 
are safe in principle. And self-modifying approaches are not safe, assuming 
that they could work.

I do however revel in the possibility of creating something that we must admit 
is intelligent in a general sense. Achieving such a goal would go a long way 
towards understanding our own abilities. So for me it's about research and 
understanding, with applications towards improving the quality of life. I 
advocate the slow and steady evolutionary approach because we can control the 
process (if not the agent) at each step of the way. We can stop the process at 
any point, study it, and make decisions about when and how to proceed.

I'm all for limiting the intelligence of our creations before they ever get to 
the point that they can build their own or modify themselves. I'm against 
self-modifying approaches, largely because I don't believe it's possible to 
constrain their actions in the way Eliezer hopes. Iterative, recursive 
processes are generally emergent and unpredictable (the interesting ones, 
anyway). Not sure what kind of guarantees you could make for such systems in 
light of such emergent unpredictability.
 
 The problem with powerful AIs is that they could get their
 goals wrong
 and never get us the chance to fix that. And thus one of
 the
 fundamental problems that Friendliness theory needs to
 solve is giving
 us a second chance, building in deep down in the AI process
 the
 dynamic that will make it change itself to be what it was
 supposed to
 be. All the specific choices and accidental outcomes need
 to descend
 from the initial conditions, be insensitive to what went
 horribly
 wrong. This ability might be an end in itself, the whole
 point of
 building an AI, when considered as applying to the dynamics
 of the
 world as a whole and not just AI aspect of it. After all,
 we may make
 mistakes or be swayed by unlucky happenstance in all
 matters, not just
 in a particular self-vacuous matter of building AI.

I don't deny the possibility of disaster. But my stance is, if the only 
approach you have to mitigate disaster is being able to control the AI itself, 
well, the game is over before you even start it. It seems profoundly naive to 
me that anyone could, even in principle, guarantee a super-intelligent AI to 
renormalize, in whatever sense that means. Then you have the difference 
between theory and practice... just forget it. Why would anyone want to gamble 
on that?

  Right, in a way that suggests you didn't grasp
 what I was saying,
  and that may be a failure on my part.
 
 That's why I was exploring -- I didn't
 get what you meant, and I
 hypothesized a coherent concept that seemed to fit what you
 said. I
 still don't understand that concept.

Maybe I'll try again some other time if I can increase my own clarity on the 
concept. 

 http://machineslikeus.com/news/design-bad-or-why-artificial-intelligence-needs-artificial-life
 
 
 (answering to the article)
 
 Creating *an* intelligence might be good in itself, but not
 good
 enough and too likely with negative side effects like
 wiping out the
 humanity to sum out positive in the end. It is a tasty
 cookie with
 black death in it.

With the evolutionary approach, there is no self-modification. The agent never 
has access to its own code, because it's a simulation, not a program. So you 
don't have these hard take-off scenarios. However, it is very slow and that 
isn't appealing. AI folks want intelligence and they want it now. If the 
Singularity occurs to the detriment of the human race, it will be because of 
this rush to be the first to build something intelligent. I take some comfort 
in my belief that quick approaches simply won't succeed, but I admit I'm not 
100% confident in that belief.

 You can't assert that we are not closer to AI than 50
 years ago --
 it's just unclear how closer we are. Great many
 techniques were
 developed in these years, and some good lessons learned the
 wrong way.
 Is it useful? Most certainly some of it, but how can we
 tell...

Fair enough. It's a minor point though.

 Intelligence was created by a blind idiot evolutionary
 process that
 has no foresight and no intelligence. Of course it can be
 designed.
 Intelligence is all that evolution is, but immensely
 faster, better
 and flexible.

In certain domains, 

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson
Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.  As such, every AGI will necessarily spend a 
lot of time playing.
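
A trivial sketch of that reading of play - test strategies where mistakes
are cheap, then commit where they are not (the target value, penalty and
trial counts below are all invented for illustration):

import random

TARGET = 0.62                      # hidden property of the world

def sandbox(action):
    """'Play': the simulator gives noisy feedback and failures cost nothing."""
    return -abs(action - TARGET) + random.gauss(0, 0.05)

def real_world(action):
    """The real environment severely penalizes a bad choice."""
    return 1.0 if abs(action - TARGET) < 0.1 else -100.0

# Test many candidate strategies in the sandbox, where failure is free...
candidates = [random.random() for _ in range(200)]
best = max(candidates, key=lambda a: sum(sandbox(a) for _ in range(10)))

# ...and only commit the best one where failure is expensive.
print(best, real_world(best))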


If you have some other particular definition, then perhaps I could 
understand your response if you were to define the term.


OTOH, if this is interpreted as being a machine that doesn't do anything 
BUT play (using my supplied definition), then your response has some 
merit, but even that can be very useful.  Almost all of mathematics, 
e.g., is derived out of such play.


I have a strong suspicion that machines that don't have a "play mode" 
can never proceed past the reptilian level of mentation.  (Here I'm 
talking about thought processes that are mediated via the "reptile 
brain" in entities like mammals.  Actual reptiles may have some more 
advanced faculties of which I'm unaware.)  (Note that, e.g., shrews don't 
have much play capability, but they have SOME.)



Brad Paulsen wrote:
Mike Tintner wrote: ...how would you design a play machine - a 
machine that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building 
an AGI any time in this century cannot afford.  I firmly believe if we 
had not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more 
wrong.  We *do not need embodiment* to be able to build a powerful AGI 
that can be of immense utility to humanity while also surpassing human 
intelligence in many ways.  To be sure, we want that AGI to be 
empathetic with human intelligence, but we do not need to make it 
equivalent (i.e., just like us).


I don't want to give the impression that a non-Turing intelligence 
will be easy to design and build.  It will probably require at least 
another twenty years of two steps forward, one step back effort.  
So, if we are going to develop a non-human-like, non-embodied AGI 
within the first quarter of this century, we are going to have to 
just say no to Turing and start to use human intelligence as an 
inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI 
is surely that it must be able to play - so how would you design a 
play machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - 
it should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on 
computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?








Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-25 Thread Abram Demski
Matt,

What is your opinion on Goedel machines?

http://www.idsia.ch/~juergen/goedelmachine.html

--Abram

On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Eric Burton [EMAIL PROTECTED] wrote:


These have profound impacts on AGI design. First, AIXI is (provably) not 
computable,
which means there is no easy shortcut to AGI. Second, universal intelligence 
is not
computable because it requires testing in an infinite number of 
environments. Since
there is no other well accepted test of intelligence above human level, it 
casts doubt on
the main premise of the singularity: that if humans can create agents with 
greater than
human intelligence, then so can they.

I don't know for sure that these statements logically follow from one
another.

 They don't. I cannot prove that there is no non-evolutionary model of 
 recursive self improvement (RSI). Nor can I prove that there is. But it is a 
 question we need to answer before an evolutionary model becomes technically 
 feasible, because an evolutionary model is definitely unfriendly.

Higher intelligence bootstrapping itself has already been proven on
Earth. Presumably it can happen in a simulation space as well, right?

 If you mean the evolution of humans, that is not an example of RSI. One 
 requirement of friendly AI is that an AI cannot alter its human-designed 
 goals. (Another is that we get the goals right, which is unsolved). However, 
 in an evolutionary environment, the parents do not get to choose the goals of 
 their children. Evolution chooses goals that maximize reproductive fitness, 
 regardless of what you want.

 I have challenged this list as well as the singularity and SL4 lists to come 
 up with an example of a mathematical, software, biological, or physical 
 example of RSI, or at least a plausible argument that one could be created, 
 and nobody has. To qualify, an agent has to modify itself or create a more 
 intelligent copy of itself according to an intelligence test chosen by the 
 original. The following are not examples of RSI:

 1. Evolution of life, including humans.
 2. Emergence of language, culture, writing, communication technology, and 
 computers.
 3. A chess playing (or tic-tac-toe, or factoring, or SAT solving) program 
 that makes modified copies of itself by
 randomly flipping bits in a compressed representation of its source
 code, and playing its copies in death matches.
 4. Selective breeding of children for those that get higher grades in school.
 5. Genetic engineering of humans for larger brains.

 1 fails because evolution is smarter than all of human civilization if you 
 measure intelligence in bits of memory. A model of evolution uses 10^37 bits 
 (10^10 bits of DNA per cell x 10^14 cells in the human body x 10^10 humans x 
 10^3 ratio of biomass to human mass). Human civilization has at most 10^25 
 bits (10^15 synapses in the human brain x 10^10 humans).

 2 fails because individual humans are not getting smarter with each 
 generation, at least not nearly as fast as civilization is advancing. Rather, 
 there are more humans, and we are getting better organized through 
 specialization of tasks. Human brains are not much different than they were 
 10,000 years ago.

 3 fails because there are no known classes of problems that are provably hard 
 to solve but easy to verify. Tic-tac-toe and chess have bounded complexity. 
 It has not been proven that factoring is harder than multiplication. We don't 
 know that P != NP, and even if we did, many NP-complete problems have special 
 cases that are easy to solve, and we don't know how to program the parent to 
 avoid these cases through successive generations.

 4 fails because there is no evidence that, above a certain level (about IQ 
 200), childhood intelligence correlates with adult success. The problem 
 is that adults of average intelligence can't agree on how success should be 
 measured*.

 5 fails for the same reason.

 *For example, the average person recognizes Einstein as a genius not because 
 they are
 awed by his theories of general relativity, but because other people
 have said so. If you just read his papers (without understanding their great 
 insights) and knew that he never learned to drive a car, you might conclude 
 differently.

  -- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

I'm not saying play isn't adaptive. I'm saying that kittens play not because 
they're optimizing their fitness, but because they're intrinsically motivated 
to (it feels good). The reason it feels good has nothing to do with the kitten, 
but with the evolutionary process that designed that adaptation.

It may seem like a minor distinction, but it helps to understand why, for 
example, people have sex with birth control. We don't have sex to maximize our 
genetic fitness, but because it feels good (or a thousand other reasons). We 
are adaptation executors, not fitness optimizers.

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 
  Actually, kittens play because it's fun. Evolution
 has equipped them with the rewarding sense of fun because it
 optimizes their fitness as hunters. But kittens are
 adaptation executors, evolution is the fitness optimizer.
 It's a subtle but important distinction.
 
  See
 http://www.overcomingbias.com/2007/11/adaptation-exec.html
 
 
 Saying that play is not adaptive requires some backing (I
 expect it
 plays some role, so you need to be more convincing).
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner

Terren,

Your broad distinctions are fine, but I feel you are not emphasizing the 
area of most interest for AGI, which is *how* we adapt rather than why. 
Interestingly, your blog uses the example of a screwdriver - Kauffman uses 
the same in Chap 12 of Reinventing the Sacred as an example of human 
creativity/divergence - i.e. our capacity to find infinite uses for a 
screwdriver.


Do we think we could write an algorithm, an effective procedure, to 
generate a possibly infinite list of all possible uses of screwdrivers in 
all possible circumstances, some of which do not yet exist? I don't think we 
could get started.


What emerges here, v. usefully, is that the capacity for play overlaps 
with classically defined, and a shade more rigorous and targeted, "divergent 
thinking" - e.g. find as many uses as you can for a screwdriver, rubber teat, 
needle, etc.


...How would you design a divergent (as well as play) machine that can deal 
with the above open-ended problems? (Again surely essential for an AGI)


With full general intelligence, the problem more typically starts with the 
function-to-be-fulfilled - e.g. how do you open this paint can? - and only 
then do you search for a novel tool, like a screwdriver or another can lid.




Terren: Actually, kittens play because it's fun. Evolution has equipped 
them with the rewarding sense of fun because it optimizes their fitness as 
hunters. But kittens are adaptation executors, evolution is the fitness 
optimizer. It's a subtle but important distinction.


See http://www.overcomingbias.com/2007/11/adaptation-exec.html

Terren

They're adaptation executors, not fitness optimizers.

--- On Mon, 8/25/08, Matt Mahoney [EMAIL PROTECTED] wrote:

Kittens play with small moving objects because it teaches
them to be better hunters. Play is not a goal in itself, but
a subgoal that may or may not be a useful part of a
successful AGI design.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 8:59:06 AM
Subject: Re: [agi] How Would You Design a Play Machine?

Brad,

That's sad.  The suggestion is for a mental exercise,
not a full-scale
project. And play is fundamental to the human mind-and-body
- it
characterises our more mental as well as more physical
activities -
drawing, designing, scripting, humming and singing scat in
the bath,
dreaming/daydreaming  much more. It is generally
acknowledged by
psychologists to be an essential dimension of creativity -
which is the goal
of AGI. It is also an essential dimension of animal
behaviour and animal
evolution.  Many of the smartest companies have their play
areas.

But I'm not aware of any program or computer design for
play - as distinct
from elaborating systematically and methodically or
genetically on
themes - are you? In which case it would be good to think
about one - it'll
open your mind  give you new perspectives.

This should be a group where people are not too frightened
to play around
with ideas.

Brad: Mike Tintner wrote: ...how would you design
a play machine - a
machine
 that can play around as a child does?

 I wouldn't.  IMHO that's just another waste of
time and effort (unless
 it's being done purely for research purposes).
It's a diversion of
 intellectual and financial resources that those
serious about building an
 AGI any time in this century cannot afford.  I firmly
believe if we had
 not set ourselves the goal of developing human-style
intelligence
 (embodied or not) fifty years ago, we would already
have a working,
 non-embodied AGI.

 Turing was wrong (or at least he was wrongly
interpreted).  Those who
 extended his imitation test to humanoid, embodied AI
were even more wrong.
 We *do not need embodiment* to be able to build a
powerful AGI that can be
 of immense utility to humanity while also surpassing
human intelligence in
 many ways.  To be sure, we want that AGI to be
empathetic with human
 intelligence, but we do not need to make it equivalent
(i.e., just like
 us).

 I don't want to give the impression that a
non-Turing intelligence will be
 easy to design and build.  It will probably require at
least another
 twenty years of two steps forward, one step
back effort.  So, if we are
 going to develop a non-human-like, non-embodied AGI
within the first
 quarter of this century, we are going to have to
just say no to Turing
 and start to use human intelligence as an inspiration,
not a destination.

 Cheers,

 Brad



 Mike Tintner wrote:
 Just a v. rough, first thought. An essential
requirement of  an AGI is
 surely that it must be able to play - so how would
you design a play
 machine - a machine that can play around as a
child does?

 You can rewrite the brief as you choose, but my
first thoughts are - it
 should be able to play with
 a) bricks
 b)plasticine
 c) handkerchiefs/ shawls
 d) toys [whose function it doesn't know]
 and
 e) draw.

 Something that should be soon obvious is that a

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 11:17 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 I'm not saying play isn't adaptive. I'm saying that kittens play not
 because they're optimizing their fitness, but because they're intrinsically
 motivated to (it feels good). The reason it feels good has nothing to do
 with the kitten, but with the evolutionary process that designed that 
 adaption.

 It may seem like a minor distinction, but it helps to understand why,
 for example, people have sex with birth control. We don't have sex to
 maximize our genetic fitness, but because it feels good (or a thousand
 other reasons). We are adaption executers, not fitness optimizers.


The word "because" was misplaced. Cats hunt mice because they were
designed to, and they were designed to because it's adaptive. Saying
that a particular cat instance hunts because it feels good is not very
explanatory, like saying that it hunts because such is its nature, or
because the laws of physics drive the cat's physical configuration
through the hunting dynamics. Evolutionary design, on the other hand,
is the point of explanation for the complex adaptation, the simple
regularity in Nature that causally produced the phenomenon we are
explaining.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] The Necessity of Embodiment

2008-08-25 Thread William Pearson
2008/8/25 Terren Suydam [EMAIL PROTECTED]:

 --- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
 wrong. This ability might be an end in itself, the whole
 point of
 building an AI, when considered as applying to the dynamics
 of the
 world as a whole and not just AI aspect of it. After all,
 we may make
 mistakes or be swayed by unlucky happenstance in all
 matters, not just
 in a particular self-vacuous matter of building AI.

 I don't deny the possibility of disaster. But my stance is, if the only 
 approach you have to mitigate disaster is being able to control the AI 
 itself, well, the game is over before you even start it. It seems profoundly 
 naive to me that anyone could, even in principle, guarantee a 
 super-intelligent AI to renormalize, in whatever sense that means. Then you 
 have the difference between theory and practice... just forget it. Why would 
 anyone want to gamble on that?

You may be interested in Gödel machines. I think this roughly fits
the template that Eliezer is looking for: something that reliably
self-modifies to be better.

http://www.idsia.ch/~juergen/goedelmachine.html

Although he doesn't like explicit utility functions, the "provably
better" part is something he wants. What you would accept as axioms
for the proofs upon which humanity's fate rests, though, I really don't know.
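
Very roughly, the rule is: only switch to a rewritten version of yourself
once a proof has been found that the switch beats keeping the current code
and continuing the search. A schematic sketch in Python (the proof search is
stubbed out; this is not Schmidhuber's implementation):

class StubProofSearcher:
    """Placeholder: a real Goedel machine searches an axiomatic description of
    itself and its environment for a proof that a candidate rewrite increases
    expected future utility."""
    def next_candidate(self, code):
        candidate = code + "  # hypothetically optimized"
        proved_better = False          # the stub never finds such a proof
        return candidate, proved_better

def godel_machine_step(code, searcher):
    # Core rule: execute a self-rewrite only when it is *proved* better than
    # keeping the old code and continuing the proof search.
    candidate, proved_better = searcher.next_candidate(code)
    return candidate if proved_better else code

code = "def act(observation): return 0"
code = godel_machine_step(code, StubProofSearcher())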

Personally, I think strong self-modification is not going to be useful:
the very act of trying to understand the way the code for an
intelligence is assembled will change the way that some of that code
is assembled. That is, I think that intelligences have to be weakly
self-modifying, in the same way bits of the brain rewire themselves
locally and subconsciously; so too, AI will need to have the same sort
of changes in order to keep up with humans. Computers at the moment
can do lots of things better than humans (logic, Bayesian stats), but
are really lousy at adapting and managing themselves, so the blind
spots of infallible computers are always exploited by slow and
error-prone, but changeable, humans.

  Will Pearson




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Mike,

As may be obvious by now, I'm not that interested in designing cognition. I'm 
interested in designing simulations in which intelligent behavior emerges.

But the way you're using the word 'adapt', in a cognitive sense of playing with 
goals, is different from the way I was using 'adaptation', which is the result 
of an evolutionary process. 

Terren

--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: Re: [agi] How Would You Design a Play Machine?
 To: agi@v2.listbox.com
 Date: Monday, August 25, 2008, 3:41 PM
 Terren,
 
 Your broad distinctions are fine, but I feel you are not
 emphasizing the 
 area of most interest for AGI, which is *how* we adapt
 rather than why. 
 Interestingly, your blog uses the example of a screwdriver
 - Kauffman uses 
 the same in Chap 12 of Reinventing the Sacred as an example
 of human 
 creativity/divergence - i.e. our capacity to find infinite
 uses for a 
 screwdriver.
 
 Do we think we could write an algorithm, an effective
 procedure, to 
 generate a possibly infinite list of all possible uses of
 screwdrivers in 
 all possible circumstances, some of which do not yet exist?
 I don't think we 
 could get started.
 
 What emerges here, v. usefully, is that the
 capacity for play overlaps 
 with classically-defined, and a shade more rigorous and
 targeted,  divergent 
 thinking, e.g. find as many uses as you can for a
 screwdriver, rubber teat, 
 needle etc.
 
 ...How would you design a divergent (as well as play)
 machine that can deal 
 with the above open-ended problems? (Again surely essential
 for an AGI)
 
 With full general intelligence, the problem more typically
 starts with the 
 function-to-be-fulfilled - e.g. how do you open this paint
 can? - and only 
 then do you search for a novel tool, like a screwdriver or
 another can lid.
 
 
 
 Terren: Actually, kittens play because it's fun.
 Evolution has equipped 
 them with the rewarding sense of fun because it optimizes
 their fitness as 
 hunters. But kittens are adaptation executors, evolution is
 the fitness 
 optimizer. It's a subtle but important distinction.
 
  See
 http://www.overcomingbias.com/2007/11/adaptation-exec.html
 
  Terren
 
  They're adaptation executors, not fitness
 optimizers.
 
  --- On Mon, 8/25/08, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  Kittens play with small moving objects because it
 teaches
  them to be better hunters. Play is not a goal in
 itself, but
  a subgoal that may or may not be a useful part of
 a
  successful AGI design.
 
   -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
  - Original Message 
  From: Mike Tintner
 [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Monday, August 25, 2008 8:59:06 AM
  Subject: Re: [agi] How Would You Design a Play
 Machine?
 
  Brad,
 
  That's sad.  The suggestion is for a mental
 exercise,
  not a full-scale
  project. And play is fundamental to the human
 mind-and-body
  - it
  characterises our more mental as well as more
 physical
  activities -
  drawing, designing, scripting, humming and singing
 scat in
  the bath,
  dreaming/daydreaming  much more. It is
 generally
  acknowledged by
  psychologists to be an essential dimension of
 creativity -
  which is the goal
  of AGI. It is also an essential dimension of
 animal
  behaviour and animal
  evolution.  Many of the smartest companies have
 their play
  areas.
 
  But I'm not aware of any program or computer
 design for
  play - as distinct
  from elaborating systematically and methodically
 or
  genetically on
  themes - are you? In which case it would be good
 to think
  about one - it'll
  open your mind  give you new perspectives.
 
  This should be a group where people are not too
 frightened
  to play around
  with ideas.
 
  Brad: Mike Tintner wrote: ...how would
 you design
  a play machine - a
  machine
   that can play around as a child does?
  
   I wouldn't.  IMHO that's just another
 waste of
  time and effort (unless
   it's being done purely for research
 purposes).
  It's a diversion of
   intellectual and financial resources that
 those
  serious about building an
   AGI any time in this century cannot afford. 
 I firmly
  believe if we had
   not set ourselves the goal of developing
 human-style
  intelligence
   (embodied or not) fifty years ago, we would
 already
  have a working,
   non-embodied AGI.
  
   Turing was wrong (or at least he was wrongly
  interpreted).  Those who
   extended his imitation test to humanoid,
 embodied AI
  were even more wrong.
   We *do not need embodiment* to be able to
 build a
  powerful AGI that can be
   of immense utility to humanity while also
 surpassing
  human intelligence in
   many ways.  To be sure, we want that AGI to
 be
  empathetic with human
   intelligence, but we do not need to make it
 equivalent
  (i.e., just like
   us).
  
   I don't want to give the impression that
 a
  non-Turing intelligence will be
   easy to design and build.  It will 

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

 Saying
 that a particular cat instance hunts because it feels good
 is not very explanatory

Even if I granted that, saying that a particular cat plays to increase its 
hunting skills is incorrect. It's an important distinction because by analogy 
we must talk about particular AGI instances. When we talk about, for instance, 
whether an AGI will play, will it play because it's trying to optimize its 
fitness, or because it is motivated in some other way?  We have to be that 
precise if we're talking about design.

Terren

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 The word because was misplaced. Cats hunt mice
 because they were
 designed to, and they were designed to, because it's
 adaptive. Saying
 that a particular cat instance hunts because it feels good
 is not very
 explanatory, like saying that it hunts because such is its
 nature or
 because the laws of physics drive the cat physical
 configuration
 through the hunting dynamics. Evolutionary design, on the
 other hand,
 is the point of explanation for the complex adaptation, the
 simple
 regularity in the Nature that causally produced the
 phenomenon we are
 explaining.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Jonathan El-Bizri
On Mon, Aug 25, 2008 at 12:52 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:


 The word because was misplaced. Cats hunt mice because they were
 designed to, and they were designed to, because it's adaptive.


And the adaptation they have evolved into uses a pleasure process as a
motivator.

Saying
 that a particular cat instance hunts because it feels good is not very
 explanatory, like saying that it hunts because such is its nature or
 because the laws of physics drive the cat physical configuration
 through the hunting dynamics.


Not at all. It identifies the process by which a cat is driven to hunt,
and also to practice - i.e., play hunting. This is opposed to hunting
due to reflex, like, say, a Venus flytrap. I am reminded of a possibly
apocryphal story about Picasso:

--
A woman asks Picasso to draw something for her on a napkin. He puts down
a few lines and says, "That will be $10,000."

"What!" says the woman. "That only took you five seconds to draw."

"No, that took me 40 years to draw."
--

Cats have evolved to see the process as a goal or reward in itself, over
and above the requirements for food: if cats just hunted because they
were hungry, they would never spend their downtime during kittenhood
practicing and watching other cats hunt, and wouldn't be any good at
hunting.

And the result has many more advantages than simply optimising its
hunting strategies: it has evolved a cat that bonds with its fellow
kittens, learns to cooperate, and ultimately becomes a better hunter
because it sees the process of hunting as a game.

Jonathan El-Bizri





Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 12:19 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Saying
 that a particular cat instance hunts because it feels good
 is not very explanatory

 Even if I granted that, saying that a particular cat plays to increase
 its hunting skills is incorrect. It's an important distinction because
 by analogy we must talk about particular AGI instances. When we
 talk about, for instance, whether an AGI will play, will it play because
 it's trying to optimize its fitness, or because it is motivated in some
 other way?  We have to be that precise if we're talking about design.

Of course. Different optimization processes at work, different causes.
Let's say (ignoring whether it's actually so, for the sake of
illustration) that the cat plays because play provides it with a
developmental advantage through training its nervous system, giving it
better hunting skills, and so an adaptation that drives the cat to play
was chosen *by evolution*. The cat doesn't play because *it* reasons
that play would give it superior hunting skills; the cat plays because
of the emotional drive installed by evolution (or a more general drive
inherent in its cognitive dynamics). When an AGI plays to get better at
some skill, it may be either a result of the programmer's advice, in
which case play happens because the *programmer* says so, or a result
of its own conclusion that play helps with skills, in which case, if
skills are desirable, play inherits the desirability. In the last case,
play happens because the AGI decides so, which in turn happens because
there is a causal link from play to a desirable state of having
superior skills.
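
That inheritance of desirability along a causal link can be made
concrete with a toy sketch (purely illustrative; the numbers, the names
and the flat dictionary model are invented, not a claim about any real
design): an action's value is backed up from the goal states it is
believed to cause.

    # Illustrative only: an action inherits desirability from the goal
    # states it is believed to make more likely. Values are made up.

    goal_value = {"superior_hunting_skills": 1.0}

    # The agent's causal model: action -> {outcome: probability}
    causal_model = {
        "play":  {"superior_hunting_skills": 0.6},
        "sleep": {"superior_hunting_skills": 0.1},
    }

    def derived_desirability(action):
        # Back up value from outcomes to the action that causes them.
        return sum(p * goal_value.get(outcome, 0.0)
                   for outcome, p in causal_model[action].items())

    print(derived_desirability("play"))   # 0.6 -> play becomes desirable
    print(derived_desirability("sleep"))  # 0.1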

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren: As may be obvious by now, I'm not that interested in designing 
cognition. I'm interested in designing simulations in which intelligent 
behavior emerges.But the way you're using the word 'adapt', in a cognitive 
sense of playing with goals, is different from the way I was using 
'adaptation', which is the result of an evolutionary process.


Two questions: 1)  how do you propose that your simulations will avoid the 
kind of criticisms you've been making of other systems of being too guided 
by programmers' intentions? How can you set up a simulation without making 
massive, possibly false assumptions about the nature of evolution?


2) Have you thought about the evolution of play in animals?

(We play BTW with just about every dimension of activities - goals, rules, 
tools, actions, movements.. ).








Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam

Hi Will,

I don't doubt that provable-friendliness is possible within limited, 
well-defined domains that can be explicitly defined and hard-coded. I know 
chess programs will never try to kill me.

I don't believe however that you can prove friendliness within a framework that 
has the robustness required to make sense of a dynamic, unstable world. The 
basic problem, as I see it, is that Friendliness is a moving target, and 
context dependent. It cannot be defined within the kind of rigorous logical 
frameworks required to prove such a concept.

Terren

--- On Mon, 8/25/08, William Pearson [EMAIL PROTECTED] wrote:
 You may be interested in goedel machines. I think this
 roughly fits
 the template that Eliezer is looking for, something that
 reliably self
 modifies to be better.
 
 http://www.idsia.ch/~juergen/goedelmachine.html
 
 Although he doesn't like explicit utility functions,
 the provably
 better is something he want. Although what you would accept
 as axioms
 for the proofs upon which humanity fate rests I really
 don't know.
 
 Personally I think strong self-modification is not going to
 be useful,
 the very act of trying to understand the way the code for
 an
 intelligence is assembled will change the way that some of
 that code
 is assembled. That is I think that intelligences have to be
 weakly
 self modifying, in the same way bits of the brain rewire
 themselves
 locally and subconciously, so to, AI  will  need to have
 the same sort
 of changes in order to keep up with humans. Computers at
 the moment
 can do lots of things better that humans (logic, bayesian
 stats), but
 are really lousy at adapting and managing themselves so the
 blind
 spots of infallible computers are always exploited by slow
 and error
 prone, but changeable, humans.
 
   Will Pearson
 
 


  




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

If an AGI played because it recognized that it would improve its skills in some 
domain, then I wouldn't call that play, I'd call it practice. Those are 
overlapping but distinct concepts. 

Play, as distinct from practice, is its own reward - the reward felt by a 
kitten. The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play, the sense in which playing fosters adaptivity 
of goals. If you really want to interpret goal-satisfaction in play, it must be 
a meta-goal of mastering one's environment - and that is such a broadly defined 
goal that I don't see how one could specify it to a seed AI. I believe that's 
why evolution used the trick of making it fun.
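
One way to put the distinction in implementation terms - a toy contrast
only, not a claim about how any existing system works - is that practice
is driven by an extrinsic, task-defined reward, while play-like
behaviour can be driven by an intrinsic signal (novelty, in this sketch)
with no task specified at all:

    # Toy contrast between 'practice' (extrinsic, task-defined reward)
    # and 'play' (intrinsic reward such as novelty). Illustrative only.

    from collections import Counter

    visit_counts = Counter()

    def extrinsic_reward(state, task_goal):
        # Practice: reward comes from a designer-specified goal.
        return 1.0 if state == task_goal else 0.0

    def intrinsic_reward(state):
        # Play-like drive: unfamiliar states are rewarding in themselves.
        visit_counts[state] += 1
        return 1.0 / visit_counts[state]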

Terren

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Of course. Different optimization processes at work,
 different causes.
 Let's say (ignoring if it's actually so for the
 sake of illustration)
 that cat plays because it provides it with developmental
 advantage
 through training its nervous system, giving it better
 hunting skills,
 and so an adaptation that drives cat to play was chosen *by
 evolution*. Cat doesn't play because *it* reasons that
 it would give
 it superior hunting skills, cat plays because of the
 emotional drive
 installed by evolution (or a more general drive inherent in
 its
 cognitive dynamics). When AGI plays to get better at some
 skill, it
 may be either a result of programmer's advice, in which
 case play
 happens because *programmer* says so, or as a result of its
 own
 conclusion that play helps with skills, and if skills are
 desirable,
 play inherits the desirability. In the last case, play
 happens because
 AGI decides so, which in turn happens because there is a
 causal link
 from play to a desirable state of having superior skills.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Mike,

Comments below...

--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Two questions: 1)  how do you propose that your simulations
 will avoid the 
 kind of criticisms you've been making of other systems
 of being too guided 
 by programmers' intentions? How can you set up a
 simulation without making 
 massive, possibly false assumptions about the nature of
 evolution?

Because I don't care about individual agents. Agents that fail to meet the 
requirements the environment demands, die. There's going to be a lot of death 
in my simulations. The risk I take is that nothing ever survives and I fail to 
demonstrate the feasibility of the approach.
 
 2) Have you thought about the evolution of play in animals?
 
 (We play BTW with just about every dimension of
 activities - goals, rules, 
 tools, actions, movements.. ).

Not much. Play is such an advanced concept in intelligence, and my aims are far 
lower than that.  I don't realistically expect to survive to see the evolution 
of human intelligence using the evolutionary approach I'm talking about.

Terren


  




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 1:26 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 If an AGI played because it recognized that it would improve its skills
 in some domain, then I wouldn't call that play, I'd call it practice. Those
 are overlapping but distinct concepts.

 Play, as distinct from practice, is its own reward - the reward felt by a
 kitten. The spirit of Mike's question, I think, was about identifying the
 essential goalless-ness of play, the sense in which playing fosters
 adaptivity of goals. If you really want to interpret goal-satisfaction in 
 play,
 it must be a meta-goal of mastering one's environment - and that is such
 a broadly defined goal that I don't see how one could specify it to a seed
 AI. I believe that's why evolution used the trick of making it fun.


What do you mean by "trick"? The fun of playing is evolutionarily
encoded; no tricks. You can try to encode it into a seed AI by adding a
reference to an actual kitten in the right way, saying "fun is that
thing over there!" without saying what it is explicitly, and providing
this AI with a kitten. How to do it technically is of course a Friendly
AI-complete problem, but its solution doesn't need to include all the
fine points of the fun concept itself. On this subject, see:

http://www.overcomingbias.com/2008/08/mirrors-and-pai.html
-- in what sense an AI can act as a mirror for a complex concept instead
of a pencil sketch explicitly hacked together by programmers;

http://www.overcomingbias.com/2008/08/unnatural-categ.html
-- why the morality concept needs to be transferred in all its details,
and can't be learned from a few examples;

http://www.overcomingbias.com/2008/08/computations.html
-- what a real-life concept may look like.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Jonathan El-Bizri
On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] wrote:


 If an AGI played because it recognized that it would improve its skills in
 some domain, then I wouldn't call that play, I'd call it practice. Those are
 overlapping but distinct concepts.


The evolution of play is how nature has convinced us to practice skills of a
general but un-predefinable type. Would it make sense to think of practice
as the narrow AI version of play?

Part of play is the specification of arbitrary goals and limitations within
the overlying process. Games without rules aren't 'fun' to people or
kittens.



  Play, as distinct from practice, is its own reward - the reward felt by a
 kitten. The spirit of Mike's question, I think, was about identifying the
 essential goalless-ness of play, the sense in which playing fosters
 adaptivity of goals. If you really want to interpret goal-satisfaction in
 play, it must be a meta-goal of mastering one's environment - and that is
 such a broadly defined goal that I don't see how one could specify it to a
 seed AI. I believe that's why evolution used the trick of making it fun.


But making it 'fun' doesn't answer the question of what the implicit goals
are. Piaget's theories of assimilation can bring us closer to this; I am
of the mind that they encompass at least part of the intellectual drive toward
play and investigation.

Jonathan El-Bizri





Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread David Hart
Where is the hard dividing line between designed cognition and designed
simulation (where intelligent behavior is intended to be emergent in both
cases)? Even if an approach is taken where everything possible is done to allow
a 'natural' type evolution of behavior, the simulation design and parameters
will still influence the outcome, sometimes in unknown and unknowable ways.
Any amount of guidance in such a simulation (e.g. to help avoid so many of
the useless eddies in a fully open-ended simulation) amounts to designed
cognition.

That being said, I'm particularly interested in the OCF being used as a
platform for 'pure simulation' (Alife and more sophisticated game
theoretical simulations), and finding ways to work the resulting experience
and methods into the OCP design, which is itself a hybrid approach (designed
cognition + designed simulation) intended to take advantage of the benefits
of both.

-dave

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Terren:As may be obvious by now, I'm not that interested in designing
 cognition. I'm interested in designing simulations in which intelligent
 behavior emerges.But the way you're using the word 'adapt', in a cognitive
 sense of playing with goals, is different from the way I was using
 'adaptation', which is the result of an evolutionary process.

 Two questions: 1)  how do you propose that your simulations will avoid the
 kind of criticisms you've been making of other systems of being too guided
 by programmers' intentions? How can you set up a simulation without making
 massive, possibly false assumptions about the nature of evolution?

 2) Have you thought about the evolution of play in animals?

 (We play BTW with just about every dimension of activities - goals,
 rules, tools, actions, movements.. ).











Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren: The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play..


Well, the key thing for me (although it was, technically, a play-ful 
question :) )  is the distinction between programmed/planned exploration of 
a basically known environment and ad hoc exploration of a deeply unknown 
environment. In many ways, it follows on from my previous thread on 
Philosophy of Learning in  AGI, which asked - how do you learn an unfamiliar 
subject/skill/ activity - could any definite set of principles guide you? 
(This, I presume, is what Ben is somehow dealing with).


If you're an infant, or even often an adult, you don't know what this 
strange object is for or how to manipulate it - so how do you go about 
moving it and testing its properties? How do you go about moving your hand, 
(or manipulator if you're a robot)? {I'd be interested in Bob M's input 
here] - exploring its  properties and capacities for movement too? What are 
the principles if any that should constrain you?


Equally, if you're exploring an environment - a new kind of room, or a new 
kind of territory like a garden, wood, forest, how do you go about moving 
through it, deciding on paths, orienting yourself, mapping etc.?  Remember 
that these are initially alien environments, so the adult or AGI equivalent 
is exploring a strange planet, or  videogame world with alien kinds of laws.


Play - divergent thinking - exploration - these are all overlapping 
dimensions of a general intelligence developing its intelligence, and 
central to AGI.


And for the more abstractly inclined, I should point out that these 
questions easily translate into the most abstract forms - like how do you 
explore a new area of, or for, logic, or maths? How do you go about 
exploring, or developing, a maths of, say, abstract art?








Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson

Jonathan El-Bizri wrote:



On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] wrote:



If an AGI played because it recognized that it would improve its
skills in some domain, then I wouldn't call that play, I'd call it
practice. Those are overlapping but distinct concepts.


The evolution of play is how nature has convinced us to practice 
skills of a general but un-predefinable type. Would it make sense to 
think of practice as the narrow AI version of play?
No.  Because in practice one is honing skills with a definite chosen 
purpose (and usually no instinctive guide), whereas in play one is 
honing skills without the knowledge that one is doing so.  It's very 
different, e.g., to play a game of chess, and to practice playing chess.


Part ...

Jonathan El-Bizri






Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Eric Burton
Is friendliness really so context-dependent? Do you have to be human to
act friendly, to the exclusion of acting busy, greedy, angry, etc.? I
think friendliness is a trait we project onto things pretty readily,
implying it's wired in at some fundamental level. It comes from the
social circuits; it's about being considerate or innocuous. But I don't
know.

On 8/25/08, Terren Suydam [EMAIL PROTECTED] wrote:

 Hi Will,

 I don't doubt that provable-friendliness is possible within limited,
 well-defined domains that can be explicitly defined and hard-coded. I know
 chess programs will never try to kill me.

 I don't believe however that you can prove friendliness within a framework
 that has the robustness required to make sense of a dynamic, unstable world.
 The basic problem, as I see it, is that Friendliness is a moving target,
 and context dependent. It cannot be defined within the kind of rigorous
 logical frameworks required to prove such a concept.

 Terren

 --- On Mon, 8/25/08, William Pearson [EMAIL PROTECTED] wrote:
 You may be interested in goedel machines. I think this
 roughly fits
 the template that Eliezer is looking for, something that
 reliably self
 modifies to be better.

 http://www.idsia.ch/~juergen/goedelmachine.html

 Although he doesn't like explicit utility functions,
 the provably
 better is something he want. Although what you would accept
 as axioms
 for the proofs upon which humanity fate rests I really
 don't know.

 Personally I think strong self-modification is not going to
 be useful,
 the very act of trying to understand the way the code for
 an
 intelligence is assembled will change the way that some of
 that code
 is assembled. That is I think that intelligences have to be
 weakly
 self modifying, in the same way bits of the brain rewire
 themselves
 locally and subconciously, so to, AI  will  need to have
 the same sort
 of changes in order to keep up with humans. Computers at
 the moment
 can do lots of things better that humans (logic, bayesian
 stats), but
 are really lousy at adapting and managing themselves so the
 blind
 spots of infallible computers are always exploited by slow
 and error
 prone, but changeable, humans.

   Will Pearson












Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam

Eric,

We're talking Friendliness (capital F), a convention suggested by Eliezer 
Yudkowsky, that signifies the sense in which an AI does no harm to humans.

Yes, it's context dependent. "Do no harm" is the mantra within the medical 
community, but clearly there are circumstances in which you do a little harm to 
achieve greater health in the long run. Chemotherapy is a perfect example. 
Would we trust an AI if it proposed something like chemotherapy? Before we 
understood that to be a valid treatment, would we really believe it was being 
Friendly?  You want me to drink *what*?

Or take any number of ethical dilemmas, in which it's ok to steal food if it's 
to feed your kids, or killing ten people to save twenty, etc. How do you define 
Friendliness in these circumstances? Depends on the context.
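
A toy way to see the problem - illustrative only, with invented names
and numbers, and certainly not a proposed theory of Friendliness - is
that a context-free rule and a context-sensitive judgement can disagree
about the very same action:

    # Illustrative only: 'do no harm' as a fixed rule vs. the same
    # action evaluated against its context. Values are made up.

    def fixed_rule(action):
        # Context-free: any immediate harm counts as un-Friendly.
        return action["immediate_harm"] == 0

    def contextual_judgement(action, context):
        # Context-sensitive: weigh harm against expected later benefit.
        return context["expected_benefit"] > action["immediate_harm"]

    chemo = {"immediate_harm": 5}
    cancer_context = {"expected_benefit": 50}

    print(fixed_rule(chemo))                            # False
    print(contextual_judgement(chemo, cancer_context))  # True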

Terren

--- On Mon, 8/25/08, Eric Burton [EMAIL PROTECTED] wrote:
 Is friendliness really so context-dependent? Do you have to
 be human
 to act friendly at the exception of acting busy, greedy,
 angry, etc? I
 think friendliness is a trait we project onto things pretty
 readily
 implying it's wired at some fundamental level. It comes
 from the
 social circuits, it's about being considerate or
 inocuous. But I don't
 know
 
 On 8/25/08, Terren Suydam [EMAIL PROTECTED]
 wrote:
 
  Hi Will,
 
  I don't doubt that provable-friendliness is
 possible within limited,
  well-defined domains that can be explicitly defined
 and hard-coded. I know
  chess programs will never try to kill me.
 
  I don't believe however that you can prove
 friendliness within a framework
  that has the robustness required to make sense of a
 dynamic, unstable world.
  The basic problem, as I see it, is that
 Friendliness is a moving target,
  and context dependent. It cannot be defined within the
 kind of rigorous
  logical frameworks required to prove such a concept.
 
  Terren
 
  --- On Mon, 8/25/08, William Pearson
 [EMAIL PROTECTED] wrote:
  You may be interested in goedel machines. I think
 this
  roughly fits
  the template that Eliezer is looking for,
 something that
  reliably self
  modifies to be better.
 
  http://www.idsia.ch/~juergen/goedelmachine.html
 
  Although he doesn't like explicit utility
 functions,
  the provably
  better is something he want. Although what you
 would accept
  as axioms
  for the proofs upon which humanity fate rests I
 really
  don't know.
 
  Personally I think strong self-modification is not
 going to
  be useful,
  the very act of trying to understand the way the
 code for
  an
  intelligence is assembled will change the way that
 some of
  that code
  is assembled. That is I think that intelligences
 have to be
  weakly
  self modifying, in the same way bits of the brain
 rewire
  themselves
  locally and subconciously, so to, AI  will  need
 to have
  the same sort
  of changes in order to keep up with humans.
 Computers at
  the moment
  can do lots of things better that humans (logic,
 bayesian
  stats), but
  are really lousy at adapting and managing
 themselves so the
  blind
  spots of infallible computers are always exploited
 by slow
  and error
  prone, but changeable, humans.
 
Will Pearson
 
 
 
 
 
 
 
 
 
 


  




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Jonathan,

I disagree - play without rules can certainly be fun. Running just to run, 
jumping just to jump. Play doesn't have to be a game, per se. It's simply a 
purposeless expression of the joy of being alive. It turns out of course that 
play is helpful for achieving certain goals that we interpret as being 
installed by evolution. But we don't play to achieve goals, we do it because 
it's fun. As Mike said, this very discussion is a kind of play, and while we 
can certainly identify goals that we try to accomplish in the course of hashing 
these things out, there's an element in it, for me anyway, of just doing it 
because I love doing it. I suspect that's true for others here. I hope so, 
anyway.

Of course, those that are dogmatically functionalist will view such language as 
'fun' as totally irrelevant. That's ok. The cool thing about AI is that 
eventually, it will shed light on whether subjective experience (to 
functionalists, an inconvenience to be done away with) is critical to 
intelligence.

To address your second question, the implicit goal is always reproduction. If 
there is one basic reductionist element to all of life, it is that. Making play 
fun is a way of getting us to play at all, so that we are more likely to 
reproduce. There's a limit however to the usefulness and accuracy of reducing 
everything to reproduction. 

Terren

--- On Mon, 8/25/08, Jonathan El-Bizri [EMAIL PROTECTED] wrote:
Part of play is the specification of arbitrary goals and limitations within the 
overlying process. Games without rules aren't 'fun' to people or kittens. 

 


Play, as distinct from practice, is its own reward - the reward felt by a 
kitten. The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play, the sense in which playing fosters adaptivity 
of goals. If you really want to interpret goal-satisfaction in play, it must be 
a meta-goal of mastering one's environment - and that is such a broadly defined 
goal that I don't see how one could specify it to a seed AI. I believe that's 
why evolution used the trick of making it fun.



But making it 'fun' doesn't answer the question of what the implicit goals are. 
Piaget's theories of assimilation can bring us closer to this, I am of the mind 
that they encompass at least part of the intellectual drive toward play and 
investigation.


Jonathan El-Bizri






  

  





  




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi David,

 Any amount of guidance in such a simulation (e.g. to help avoid so many
of the useless
 eddies in a fully open-ended simulation) amounts to
designed cognition.


No, it amounts to guided evolution. The difference between a designed 
simulation and a designed cognition is the focus on the agent itself. In the 
latter, you design the agent and turn it loose, testing it to see if it does 
what you want it to. In the former (the simulation), you turn a bunch of 
candidate agents loose and let them compete to do what you want them to. The 
ones that don't, die. You're specifying the environment, not the agent. If you 
do it right, you don't even have to specify the goals.  With designed 
cognition, you must specify the goals, either directly (un-embodied), or in 
some meta-fashion (embodied). 
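
A minimal sketch of that division of labour, under invented names (the
agent encoding, the mutation scheme and the survival test are all
placeholders, not a description of any particular system): the
experimenter writes the environment and the selection step, and agent
behaviour is whatever happens to survive.

    # Minimal selection loop: the environment and the survival test are
    # designed; the agents are not. All details here are placeholders.

    import random

    def random_agent():
        # Placeholder genome; a real simulation would encode behaviour.
        return [random.random() for _ in range(10)]

    def evolve(environment, population_size=100, generations=50):
        population = [random_agent() for _ in range(population_size)]
        for _ in range(generations):
            survivors = [a for a in population if environment(a)]
            if not survivors:   # the risk: everything dies
                return []
            # Refill by copying survivors with small mutations.
            population = [
                [g + random.gauss(0, 0.01) for g in random.choice(survivors)]
                for _ in range(population_size)
            ]
        return population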

Terren

--- On Mon, 8/25/08, David Hart [EMAIL PROTECTED] wrote:
From: David Hart [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Monday, August 25, 2008, 6:04 PM

Where is the hard dividing line between designed cognition and designed 
simulation (where intelligent behavior is intended to be emergent in both 
cases)? Even if an approach is taken where everything possible is done allow a 
'natural' type evolution of behavior, the simulation design and parameters will 
still influence the outcome, sometimes in unknown and unknowable ways. Any 
amount of guidance in such a simulation (e.g. to help avoid so many of the 
useless eddies in a fully open-ended simulation) amounts to designed cognition.


That being said, I'm particularly interested in the OCF being used as a 
platform for 'pure simulation' (Alife and more sophisticated game theoretical 
simulations), and finding ways to work the resulting experience and methods 
into the OCP design, which is itself a hybrid approach (designed cognition + 
designed simulation) intended to take advantage of the benefits of both.


-dave

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Terren:As may be obvious by now, I'm not that interested in designing 
cognition. I'm interested in designing simulations in which intelligent 
behavior emerges.But the way you're using the word 'adapt', in a cognitive 
sense of playing with goals, is different from the way I was using 
'adaptation', which is the result of an evolutionary process.




Two questions: 1)  how do you propose that your simulations will avoid the kind 
of criticisms you've been making of other systems of being too guided by 
programmers' intentions? How can you set up a simulation without making 
massive, possibly false assumptions about the nature of evolution?




2) Have you thought about the evolution of play in animals?



(We play BTW with just about every dimension of activities - goals, rules, 
tools, actions, movements.. ).


















  

  





  

