Re: [agi] How Would You Design a Play Machine?

2008-08-30 Thread David Clark
You make the statement below as if it were a fact, and I don't believe it is a fact at all.

If a disembodied AGI has models suggested by an embodied person, then that
concept can have meaning in a real-world setting without the AGI actually
having a body at all.  If a disembodied AGI has a hypothesis about the real
world and doesn't have a direct way to test whether it is true, then it could
just ask a human to do so on its behalf.  Disabled persons are not stupid
or useless just because some or most of their ability to interact with the
world is impaired.

If a climate model has algorithms and data from the real world, do you argue
that the result will be nothing but semantics-free gibberish?

I know that some systems (specifically, systems without models or much
human interaction) have had grounding problems, but your statement below
asserts something that is far from proven fact.

I also don't share your conclusions about the "concept of self" and the claim
that an unembodied agent means ungrounded symbols, and you have neither
explained nor proven them.

Your saying something is so doesn't necessarily make it true.

-- David Clark

- Original Message - 
From: Terren Suydam [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 9:18 AM
Subject: Re: [agi] How Would You Design a Play Machine?


To an unembodied agent, the "concept" of self is indistinguishable from any
other concept it works with. I use "concept" in quotes because to the
unembodied agent, it is not a concept at all, but merely a symbol with no
semantic context attached. All such an agent can do is perform operations on
ungrounded symbols - at best, the result of which can appear to be
intelligent within some domain (e.g., a chess program).






Re: [agi] How Would You Design a Play Machine?

2008-08-30 Thread Mike Tintner
David: I know that some systems (specifically, systems without models or much
human interaction) have had grounding problems, but your statement below
asserts something that is far from proven fact. I also don't share your
conclusions about the "concept of self" and the claim that an unembodied agent
means ungrounded symbols, and you have neither explained nor proven them.
Your saying something is so doesn't necessarily make it true.

Terren: To an unembodied agent, the "concept" of self is indistinguishable
from any other concept it works with. I use "concept" in quotes because to the
unembodied agent, it is not a concept at all, but merely a symbol with no
semantic context attached. All such an agent can do is perform operations on
ungrounded symbols - at best, the result of which can appear to be
intelligent within some domain (e.g., a chess program).


David,

MAN: But enough of talking about me, darling. Let's talk about you... What 
do you think about me?


And how is the computer going to get the joke without having a self - one that
has been in conversations, has felt the physical, emotional urge to talk about
itself, has had to wait impatiently while others talked about themselves, and
has a gut that can laugh?


MAN: You're not a human being, David. You're just a machine. You talk 
robotically, you walk robotically, you think robotically. You don't have any 
feelings.


And how's it going to understand any of that? How's it going to know that 
the man is exaggerating?


MAN: I have terrible problems of self-control whenever I see a doughnut.

And how is it going to understand that, especially "self-control"?

Or:

Suppose Bob's goal is to create a human-level AI. He thinks he knows how to do
it, but completing his approach is likely to take him an indeterminate number
of years of work, during which he will have trouble feeding himself.
Consider two options Bob has:
A) Spend 10 years hacking in his basement, based on his AI ideas
B) Spend those 10 years working as a financial trader, and donate 50% of his
profits to others creating AI

How's a computer going to understand the pressures on Bob, and why they 
reflect pressures on Ben?


One can go on in this vein covering all of human and animal affairs and 
life. That doesn't leave a lot.







Re: [agi] How Would You Design a Play Machine?

2008-08-29 Thread Terren Suydam

--- On Fri, 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 I don't see why an un-embodied system couldn't
 successfully use the
 concept of self in its models. It's just another
 concept, except that
 it's linked to real features of the system.

To an unembodied agent, the "concept" of self is indistinguishable from any other 
concept it works with. I use "concept" in quotes because to the unembodied 
agent, it is not a concept at all, but merely a symbol with no semantic context 
attached. All such an agent can do is perform operations on ungrounded symbols 
- at best, the result of which can appear to be intelligent within some domain 
(e.g., a chess program).

 Even though this particular
 AGI never
 heard about any of those other tools being used for cutting
 bread (and
 is not self-aware in any sense), it still can (when asked
 for advice)
 make a reasonable suggestion to try the T2
 (because of the
 similarity) = coming up with a novel idea 
 demonstrating general
 intelligence.

Sounds like magic to me. You're taking something that we humans can do and 
sticking it in as a black box into a hugely simplified agent in a way that 
imparts no understanding about how we do it.  Maybe you left that part out for 
brevity - care to elaborate?

Terren


  




Re: [agi] How Would You Design a Play Machine?

2008-08-29 Thread Jiri Jelinek
Terren,

to the unembodied agent, it is not a concept at all, but merely a symbol with 
no semantic context attached

It's an issue when trying to learn from NL only, but you can inject
semantics (critical for grounding) when teaching through a
formal_language[-based interface], get the thinking algorithms working,
and possibly focus on NL-to-formal_language conversions later.

To an unembodied agent, the concept of self is indistinguishable from any 
other concept it works with.

An AGI should be able to use tools (external/internal applications),
and it can learn to view itself (or just some of its modules) as its
tool(s).
You can design an interface [possibly just for advanced users] for
mapping learned concepts/actions to the interface of available tools.
Just as it can learn how to use a command-line calculator, it can
learn how to use itself as a tool. Then it can learn that an alias to
use for that tool is "I"/"me".
By design, it can also clearly distinguish between using a particular
tool in theory and in practice.
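
A minimal Python sketch of that idea (the class, the tool names, and the
aliases here are invented for illustration and are not part of any existing
system): a registry maps learned concept names to tool interfaces, the agent
registers one of its own modules under the aliases "I"/"me", and a dry_run
flag separates reasoning about a tool "in theory" from using it "in practice".

    from typing import Callable, Dict

    class ToolRegistry:
        """Maps learned concept names (and aliases) to callable tool interfaces."""
        def __init__(self) -> None:
            self._tools: Dict[str, Callable[..., object]] = {}
            self._aliases: Dict[str, str] = {}

        def register(self, name: str, interface: Callable[..., object]) -> None:
            self._tools[name] = interface

        def alias(self, alias: str, name: str) -> None:
            self._aliases[alias.lower()] = name

        def use(self, name_or_alias: str, *args, dry_run: bool = False):
            key = self._aliases.get(name_or_alias.lower(), name_or_alias)
            tool = self._tools[key]
            if dry_run:
                return f"would call {key} with {args}"   # the tool "in theory"
            return tool(*args)                           # the tool "in practice"

    # The agent registers an external tool and one of its own modules as tools.
    registry = ToolRegistry()
    registry.register("calculator", lambda expr: eval(expr))        # toy command-line calculator
    registry.register("self.summarizer", lambda text: text[:40] + "...")
    registry.alias("I", "self.summarizer")
    registry.alias("me", "self.summarizer")

    print(registry.use("calculator", "2*21"))                        # 42
    print(registry.use("I", "some long input text", dry_run=True))   # theory, not practice
    print(registry.use("me", "some long input text " * 3))           # actual invocation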

 All such an agent can do is perform operations on ungrounded symbols - at 
 best, the result of which can appear to be intelligent within some domain 
 (e.g., a chess program).

You can ground symbols when using semantics-supporting input formats. I don't
see why it would have to be specific to a single domain. You can use
very general data-representation structures and fill them with data from
many domains. You just have to get the KR right (unlike CYC). Easy
to say, I know, but I don't see a good reason why it couldn't (in
principle) work, and I'm working on figuring that out.

 Even though this particular
 AGI never
 heard about any of those other tools being used for cutting
 bread (and
 is not self-aware in any sense), it still can (when asked
 for advice)
 make a reasonable suggestion to try the T2
 (because of the
 similarity) = coming up with a novel idea 
 demonstrating general
 intelligence.

 Sounds like magic to me. You're taking something that we humans can do and 
 sticking it in as a black box into a hugely simplified agent in a way that 
 imparts no understanding about how we do it.  Maybe you left that part out 
 for brevity - care to elaborate?

It must sound like magic when assuming "no semantic context
attached", but that doesn't have to be the case. With the right teaching
methods, the system gets semantics, can make models, and can apply
knowledge learned from scenario 1 to scenario 2 in unique ways. What
do the right teaching methods mean? For example, when learning an
action concept (e.g. "buy"), it needs to grasp [at least some of] the roles
involved (e.g. seller, buyer, goods, price, ..) and how the
relationships between the role-players change at relevant stages. You
can design a user-friendly interface for teaching the system in meaningful
ways so it can later think using queryable models and understand
relationships [and their changes] between concepts, etc. Sorry about the
brevity (busy schedule).
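
A minimal sketch of what such a formally taught action concept might look
like (purely illustrative; the structure and names are my assumptions, not
Jiri's actual KR): the concept "buy" carries role slots plus a per-stage
description of what holds for each role-player, which simple queries can then
inspect.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ActionConcept:
        """An action concept taught through a formal interface: roles plus staged effects."""
        name: str
        roles: List[str]
        # Each stage maps a role to the fact that holds for it at that stage.
        stages: List[Dict[str, str]] = field(default_factory=list)

        def query(self, stage_index: int, role: str) -> str:
            """Answer: what holds for this role at this stage?"""
            return self.stages[stage_index].get(role, "unknown")

    buy = ActionConcept(
        name="buy",
        roles=["buyer", "seller", "goods", "price"],
        stages=[
            {"buyer": "wants goods, has price", "seller": "has goods", "goods": "owned by seller"},
            {"buyer": "transfers price to seller", "seller": "transfers goods to buyer"},
            {"buyer": "has goods", "seller": "has price", "goods": "owned by buyer"},
        ],
    )

    print(buy.query(0, "goods"))   # owned by seller
    print(buy.query(2, "goods"))   # owned by buyer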

Regards,
Jiri Jelinek

PS: we might be slightly off-topic in this thread.. (?)




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Bob Mottram
2008/8/27 Mike Tintner [EMAIL PROTECTED]:
You on your side insist that you don't have to have such precisely defined 
goals
 - your intuitive (and by definition, ill-defined) sense of intelligence will
 do.


As a child I don't believe that I set out with the goal of becoming a
software developer.  Indeed, such jobs barely even existed at the
time.  However, through play and experience I may have noticed that I
had certain skills, and later noticed that these might be useful in
particular kinds of situations.  This doesn't seem to be a situation
in which there was a well defined goal tree in advance, which I was
simply moving incrementally towards - although many people might like
to give such a whiggish impression in biographies or CVs.  Rather
there were various ideas and technologies developing at the time, some
of which were transmitted to me and were able to use me as a P2P host
for further propagation.




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Mike Tintner
Just in case there is any confusion, "ill-defined" is, in this particular 
context, in no way pejorative. The crux of a General Intelligence for me 
is that it is necessarily a machine that works with more or less ill-defined 
goals to solve ill-structured problems. Bob's self-description is to a 
greater or lesser extent true of how most of us conduct most of our 
activities and lives. The test of a GI, artificial or natural, is how well 
it *creates* goal definitions and structures for solving problems, and the 
actual solutions, ad hoc.


(I still think, of course, that current AGI should have a not-so-ill-structured 
definition of its problem-solving goals.)



Bob: You on your side insist that you don't have to have such precisely 
defined goals
- your intuitive (and by definition, ill-defined) sense of intelligence 
will

do.



As a child I don't believe that I set out with the goal of becoming a
software developer.  Indeed, such jobs barely even existed at the
time.  However, through play and experience I may have noticed that I
had certain skills, and later noticed that these might be useful in
particular kinds of situations.  This doesn't seem to be a situation
in which there was a well defined goal tree in advance, which I was
simply moving incrementally towards - although many people might like
to give such a whiggish impression in biographies or CVs.  Rather
there were various ideas and technologies developing at the time, some
of which were transmitted to me and were able to use me as a P2P host
for further propagation.




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Bob Mottram
2008/8/28 Mike Tintner [EMAIL PROTECTED]:
 (I still think  of course that current AGI should have a not-so-ill
 structured definition of its problem-solving goals).


It's certainly true that an AGI could be endowed with well defined
goals.  Some people also begin from an early age with well defined
goals.  However, you then need to look more carefully at where these
goals originated.  Children who have well defined goals about what
they wish to do with their lives are often simply downloading these
from parents.  Similarly in the AGI case you could have high level
goals artificially inserted by a human programmer.  In both natural
and artificial cases you could then ask whether these systems are
truly intelligent, or merely acting as clever executors.




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Terren Suydam

Hi Jiri, 

Comments below...

--- On Thu, 8/28/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 That's difficult to reconcile if you don't
 believe embodiment is all that important.
 
 Not really. We might be qualia-driven, but for our AGIs
 it's perfectly
 ok (and only natural) to be driven by given
 goals.

I've argued elsewhere that goals that are not grounded in an AGI's experience 
impart no meaning. Either an agent has some kind of embodied experience, in 
which case the specified goal is not grounded in anything the agent can relate 
to, or it is not embodied at all, in which case it is a mindless automaton.
 
 question I would pose to you non-embodied advocates is:
 how in the world will you motivate your creation? I suppose
 that you won't. You'll just tell it what to do
 (specify its goals) and it will do it..
 
 Correct. AGIs driven by human-like-qualia would be less
 safe  harder
 to control. Human-like-qualia are too high-level to be
 safe. When
 implementing qualia (not that we know hot to do it ;-))
  increasing
 granularity for safety, you would IMO end up with basically
 giving
 the goals - which is of course easier without messing
 with qualia
 implementation. Forget qualia as a motivation for our AGIs.
 Our AGIs
 are supposed to work for us, not for themselves.
 
So much talk about Friendliness implies that the AGI will have no ability to 
choose its own goals. It seems that AGI researchers are usually looking to 
create clever slaves. That may fit your notion of general intelligence, but not 
mine.

Terren


  




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Brad Paulsen

Eric,

It was a real-life near-death experience (auto accident).

I'm aware of the tryptamine compound and its presence in hallucinogenic 
drugs such as LSD.  According to Wikipedia, it is not related to the NDE 
drug of choice which is Ketamine (Ketalar or ketamine HCL -- street name 
back in the day was Special K).  Ketamine is a chemical secreted into the 
brain when your body detects an over-generation of Glutamate. Glutamate 
(i.e., the food flavor enhancer, MSG) is a neurotransmitter released in 
massive quantities when your senses lead your brain to believe it is in 
mortal danger.  It's your brain's way of, literally, trying to think its 
(your) way out of danger - fast.  Trouble is, too much Glutamate can 
irreparably damage the brain, hence the Ketamine push and the NDE experience.


Ketamine is a Schedule 3 drug.  Today, it is primarily used as an 
anesthetic in surgery performed on geriatric adults, children and animals 
(by vets).  It takes a much higher dose than that used for anesthetic 
purposes to achieve the NDE experience.  Back in the day, it was used as an 
adjunct to psychotherapy.  The Russians claimed it worked wonders for all 
sorts of addiction, especially alcoholism.  I do not recommend use of 
Ketamine unsupervised by a qualified medical practitioner.  Just like LSD, 
people have been known to react badly (bad trip).  But, then, I don't 
recommend near-fatal auto accidents either.  ;-)


Cheers,
Brad

Eric Burton wrote:

Hi,


Err ... I don't have to mention that I didn't stay dead, do I?  Good.


Was this the archetypal death/rebirth experience found in, for instance,
tryptamine ecstasy, or a real-life near-death experience?

Eric B




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Jiri Jelinek
Terren,

is not embodied at all, in which case it is a mindless automaton

Researchers and philosophers define mind and intelligence in many
different ways => their classifications of particular AI systems
differ. What really counts, though, are the problem-solving abilities of the
system, not how it's labeled according to a particular definition of
mind.

 So much talk about Friendliness implies that the AGI will have no ability to 
 choose its own goals.

Developer's choice.. My approach:
Main goals - definitely not;
Sub goals - sure, with restrictions though.

It seems that AGI researchers are usually looking to create clever slaves.

We are talking about our machines.
What else are they supposed to be?

clever slaves. That may fit your notion of general intelligence, but not mine.

To me, general intelligence is a cross-domain ability to gain
knowledge in one context and correctly apply it in another [in terms
of problem solving]. The source of the primary goal(s) (/problem(s) to
solve) doesn't (from my perspective) have anything to do with the
level of system's intelligence. It doesn't make it more or less
intelligent. It's just a separate thing. The system gets the initial
goal [from whatever source] and *then* it's time to apply its
intelligence.

Regards,
Jiri Jelinek




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Terren Suydam

Jiri,

I think where you're coming from is a perspective that doesn't consider or 
doesn't care about the prospect of a conscious intelligence, an awake being 
capable of self reflection and free will (or at least the illusion of it).

I don't think any kind of algorithmic approach, which is to say, un-embodied, 
will ever result in conscious intelligence. But an embodied agent that is able 
to construct ever-deepening models of its experience such that it eventually 
includes itself in its models, well, that is another story. I think btw that is 
a valid description of humans.

We may argue about whether consciousness (mindfulness) is necessary for general 
intelligence. I think it is, and that informs much of my perspective. When I 
say something like mindless automaton, I'm implicitly suggesting that it 
won't be intelligent in a general sense, although it could be in a narrow sense 
(like a chess program).

Terren


--- On Thu, 8/28/08, Jiri Jelinek [EMAIL PROTECTED] wrote:

 From: Jiri Jelinek [EMAIL PROTECTED]
 Subject: Re: [agi] How Would You Design a Play Machine?
 To: agi@v2.listbox.com
 Date: Thursday, August 28, 2008, 10:39 PM
 Terren,
 
 is not embodied at all, in which case it is a mindless
 automaton
 
 Researchers and philosophers define mind and intelligence
 in many
 different ways = their classifications of particular AI
 systems
 differ. What really counts though are problem solving
 abilities of the
 system. Not how it's labeled according to a particular
 definition of
 mind.
 
  So much talk about Friendliness implies that the AGI
 will have no ability to choose its own goals.
 
 Developer's choice.. My approach:
 Main goals - definitely not;
 Sub goals - sure, with restrictions though.
 
 It seems that AGI researchers are usually looking to
 create clever slaves.
 
 We are talking about our machines.
 What else are they supposed to be?
 
 clever slaves. That may fit your notion of general
 intelligence, but not mine.
 
 To me, general intelligence is a cross-domain ability to
 gain
 knowledge in one context and correctly apply it in another
 [in terms
 of problem solving]. The source of the primary goal(s)
 (/problem(s) to
 solve) doesn't (from my perspective) have anything to
 do with the
 level of system's intelligence. It doesn't make it
 more or less
 intelligent. It's just a separate thing. The system
 gets the initial
 goal [from whatever source] and *then* it's time to
 apply its
 intelligence.
 
 Regards,
 Jiri Jelinek
 
 


Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Jiri Jelinek
Terren,

 I don't think any kind of algorithmic approach, which is to say, un-embodied, 
 will ever result in conscious intelligence. But an embodied agent that is 
 able to construct ever-deepening models of its experience such that it 
 eventually includes itself in its models, well, that is another story.

I don't see why an un-embodied system couldn't successfully use the
concept of self in its models. It's just another concept, except that
it's linked to real features of the system.

 We may argue about whether consciousness (mindfulness) is necessary for 
 general intelligence. I think it is, and that informs much of my perspective.

General intelligence can IMO be demonstrated even when the system
under evaluation doesn't [ATM] understand particular concepts like
"self", and even if it doesn't [ATM] have the ability to perceive a
relationship between itself and its actual environment (= the stuff often
associated with consciousness). In fact, it can know relatively
little. Let's say I need to cut bread but don't have a knife. I
only have a few other tools, one of which (let's call it T2) has
parameters similar to a knife's. Even though this particular AGI has never
heard of any of those other tools being used for cutting bread (and
is not self-aware in any sense), it can still (when asked for advice)
make a reasonable suggestion to try the T2 (because of the
similarity) => coming up with a novel idea & demonstrating general
intelligence.
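
A minimal sketch of that substitution step (illustrative only; the tools, the
feature vectors, and the similarity measure are my assumptions): each tool is
described by a few parameters, and when the requested tool is unavailable the
system suggests the most similar one it knows, even if it has never seen that
tool used for the task.

    from math import sqrt
    from typing import Dict

    # Toy feature descriptions: (edge sharpness, length in cm, rigidity), on rough scales.
    TOOLS: Dict[str, tuple] = {
        "knife":  (0.9, 20.0, 0.9),
        "spoon":  (0.1, 15.0, 0.8),
        "T2":     (0.8, 18.0, 0.9),   # e.g. a sturdy ruler with a sharpened edge
        "sponge": (0.0, 10.0, 0.1),
    }

    def distance(a: tuple, b: tuple) -> float:
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def suggest_substitute(wanted: str, available: list) -> str:
        """Suggest the available tool whose parameters are closest to the wanted one."""
        target = TOOLS[wanted]
        return min(available, key=lambda t: distance(TOOLS[t], target))

    # "I need to cut bread but have no knife" -> the system suggests trying T2.
    print(suggest_substitute("knife", ["spoon", "T2", "sponge"]))   # T2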

Regards,
Jiri Jelinek




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Eric Burton
Brad, scary stuff. Dissociatives/NMDA inhibitors were secret option
number three! ;D

On 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 Terren,

 I don't think any kind of algorithmic approach, which is to say,
 un-embodied, will ever result in conscious intelligence. But an embodied
 agent that is able to construct ever-deepening models of its experience
 such that it eventually includes itself in its models, well, that is
 another story.

 I don't see why an un-embodied system couldn't successfully use the
 concept of self in its models. It's just another concept, except that
 it's linked to real features of the system.

 We may argue about whether consciousness (mindfulness) is necessary for
 general intelligence. I think it is, and that informs much of my
 perspective.

 General intelligence can IMO be demonstrated even when the system
 under evaluation doesn't [ATM] understand particular concepts like
 self and even if it doesn't [ATM] have the ability to perceive a
 relationship between self and its actual environment (=stuff often
 associated with consciousness). In fact, it can know relatively
 little. Let's say I need to cut a bread, but don't have a knife. I
 only have a few other tools, one of which (let's call it T2) has
 similar parameters to a knife. Even though this particular AGI never
 heard about any of those other tools being used for cutting bread (and
 is not self-aware in any sense), it still can (when asked for advice)
 make a reasonable suggestion to try the T2 (because of the
 similarity) = coming up with a novel idea  demonstrating general
 intelligence.

 Regards,
 Jiri Jelinek




Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Brad Paulsen

Terren,

OK, you hooked me.  A virgin is something I haven't been called (or even 
been associated with) in about forty-five years.  So, I feel compelled to 
defend my non-virginity at all costs.  I'm 58 now.  You do the math (don't 
forget to subtract for the 30 years I was married). ;-)  My widowed 
girlfriend of the last eight years is a mother of two 
30-something-year-olds (a boy and a girl) and four grandchildren, ages 11 
(going on 16) down to 2.  All girls!  The woman is post-menopausal and 
insatiable!  A little Astroglide (thank you, NASA!) and we're ready to 
rumble.  No birth control required!  Sex under 50?  OK.  Sex after 50?  To 
the moon!!  The Bradster is one lucky puppy.  So there!


I thought orgasms were cool, too.  Until I died.  Now THAT was cool.  So, 
for orgasms, it's sort of a quantity vs. quality thing for me these days. 
I'll eventually get to do that dying thing again (probably just once, 
though).  But between now and then, I hope to have lots and lots of 
orgasms!  Not as cool as dying, but a bit easier to come by. (I won't say 
it if you don't think it!) ;-)


Err ... I don't have to mention that I didn't stay dead, do I?  Good.

I don't recall whether or not I said one could describe an orgasm to a 
virgin in lieu of experiencing the real thing.  But, the AGI I have in 
mind is of the non-Turing/Loebner, non-orgasmic type, so the description 
will just have to do.  In my design, this is required only so the AGI can 
empathize with human experience.  It may need to know what a happy ending 
is, but it doesn't have to have one.  Who knows, though?  Maybe we've 
finally discovered that it's not Microsoft's fault we have to re-boot 
Windows at least twice a day.  Maybe a re-boot is sort of like an orgasm 
for Windows?  Explains that little happy chiming sound it makes during 
boot-up, right?  Maybe, just maybe, Windows was, to quote Steely Dan, 
programmed by fellows with compassion and vision.


Anyhow, that example fits with views I've expressed in the context of 
explaining how my AGI design requires empathy on the part of the AGI so it 
can empathize with human experiences without having to actually have 
them.  So, maybe I did say that.  Since I have no intention of developing a 
Turing/Loebner AGI, the ability to empathize is all my design really needs. 
 And, it may not even need that.  Benign indifference may be enough.  My 
design is still evolving even as I work on the implementation (it's a big 
job and I'm only one man).


If I do my job right, my AGI will have no sense of self.  I achieve that, 
mostly, by building a non-embodied AGI.  Embodiment leads directly to a 
sense of self, which leads inexorably to an "I am me and you're not" world 
view.  I don't know about you, but an AGI with a sense of self gives me the 
willies.  Turns out, by NOT bestowing a sense of self on a 
non-Turing/Loebner AGI, one does away with a great many rather sticky 
problems in the area of morality and ethics.  How do I know what it's like 
to not have a sense of self?  Ah...  That's where the dying-but-not-really-dying 
part fits the puzzle.  Talk about experiences that are hard to 
explain!  But, that's another topic for another thread.


Now, to the meaty stuff...  You wrote: ... the really interesting question 
I would pose to you non-embodied advocates is: how in the world will you 
motivate your creation?


Some animals and all humans are motivated to maximize pleasure and minimize 
pain.  This requires the existence of a brain and a nervous system, 
preferably both peripheral and central.  In animals other than humans and 
some higher-order primates and mammals, motivation is more typically called 
"instinct".  The difference?  Motivations are usually conscious and somewhat 
malleable.  Instincts are usually not.  To be sure, there is some gray 
area here, but not enough, I think, to alone derail my argument.  While 
human motivations may appear more complex, this is almost always because 
they are more abstract.  They can usually be boiled down to fit the 
pleasure/pain model (e.g., reward/punishment).  There has been some 
interesting recent work on altruism reported in the cog sci literature. 
When I can lay hands on some URIs, I'll post them here.


With that conceptual background established, my reply is that your question 
contains the implicit assumption we non-embodied advocates are planning 
to build Turing/Loebner AGIs.  Some of us may be.  I am not.  Since my AGI 
model is not of the T/L variety, motivation does NOT apply.  But, I'm 
prepared to meet you halfway and cop to instinct.  My AGI WILL have at 
least one overriding instinct.  I've discussed it here recently (but it 
seemed most people who commented on my post didn't fully get it).  Here 
it is:


My AGI will be equipped with an instinctual drive to resolve cognitive 
dissonance (simulated, of course) engendered by its own inability to 
understand or answer queries posed by humans (or other AGIs).  I hasten 

Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Mike Tintner
Actually, exploring this further - human thinking is v. fundamentally different 
from the computational kind or most AGI conceptions - because it is massively 
and structurally metacognitive, self-examining (which comes under being a 
machine that works by self-control).

Interestingly, Minsky's model of mind in The Emotion Machine includes this with 
three levels above Deliberative Thinking:

Reflective Thinking
Self-Reflective Thinking
Self-Conscious Reflection

We don't just think about a problem, we simultaneously think about how we think 
about it, and consciously manage and take decisions about that thinking. We ask 
ourselves questions like:

- How long should we think about it?
- Should we follow our intuitions?
- Do we need examples?
- Should we visualise?
- Should we follow our feelings of confusion?
- Should we articulate our thoughts clearly and slowly, or just let them whizz 
along, half-articulated?
- How would so-and-so handle it?
- Should we examine that part of the problem, or will it take too long?
- Should we check the evidence?
- Should we give up, or compromise?
- Should we read a book for ideas? Or consult a dictionary/thesaurus?

Such questions are all parts of our inner thinking dialogue.

As Minsky says, we have many ways to think, and we consciously choose from among 
them - as a result, different people devote very different amounts of time and 
resources to thinking at different times. But Minsky wants to make all this 
into an automatic process - and it can't be - how you think about problematic 
problems is fundamentally problematic in itself, which is why thinking is such 
a hesitant business.
  David Hart: / MT: Is anyone trying to design a self-exploring robot or 
computer? Does this principle have a name?

  Interestingly, some views on AI advocate specifically prohibiting 
self-awareness and self-exploration as a precaution against the development of 
unfriendly AI. In my opinion, these views erroneously transfer familiar human 
motives onto 'alien' AGI cognitive architectures - there's a history of 
discussing this topic  on SL4 and other places.

  I believe however that most approaches to designing AGI (those that do not 
specifically prohibit self-aware and self-explorative behaviors) take for 
granted, and indeed intentionally promote, self-awareness and self-exploration 
at most stages of AGI development. In other words, efficient and effective 
recursive self-improvement (RSI) requires self-awareness and self-exploration. 
If any term exists to describe a 'self-exploring robot or computer', that term 
is RSI. Coining a lesser term for 'self-exploring AI' may be useful in some 
proto-AGI contexts, but I suspect that 'RSI' is ultimately a more useful and 
meaningful term.

  -dave




Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Ben Goertzel


 If I do my job right, my AGI will have no sense of self.



I have doubts that is possible, though I'm sure you can make an AGI with a
very different sense of self than any human has.

My reasoning:

1)
To get to a high level of intelligence likely requires some serious
self-analysis and self-modification  (whether conscious or unconscious ...
in a young human child it's likely largely unconscious)

2)
In order to do self-analysis and self-modification, having and maintaining a
model of oneself seems the most effective strategy

An AGI however could clearly be more effective at self-model management than
humans are...

It will be interesting, one day, to discover which properties of the human
mind are generic across limited-resources minds and which are more
particular to human minds...

-- Ben G





Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Ben Goertzel
An interesting thing to keep in mind when discussing play, though, is
**subgoal alienation**

When G1 arises as a subgoal of G, it may nevertheless happen that G1
survives as a goal even if G disappears, or that G1 remains important even
if G loses importance.  One may wish to design AGI systems to minimize this
phenomenon, but it certainly occurs strongly in humans.

Play may be an example of this.  We may retain the desire to play games that
originated as practice for G, even though we have no interest in G anymore.

And, subgoal alienation may occur on the evolutionary as well as the
individual level.
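
A minimal sketch of subgoal alienation (the class, numbers, and goal names are
illustrative assumptions, not Ben's formalism): each goal keeps its own
importance weight once created, so dropping or down-weighting the parent G does
not automatically remove or down-weight the subgoal G1.

    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class Goal:
        name: str
        importance: float
        parent: Optional[str] = None   # the goal this one was derived from, if any

    class GoalSystem:
        def __init__(self) -> None:
            self.goals: Dict[str, Goal] = {}

        def add(self, name: str, importance: float, parent: Optional[str] = None) -> None:
            self.goals[name] = Goal(name, importance, parent)

        def drop(self, name: str) -> None:
            # Subgoals are NOT dropped with their parent -- this is the alienation.
            self.goals.pop(name, None)

    gs = GoalSystem()
    gs.add("G", importance=1.0)                # e.g. "get good at military strategy"
    gs.add("G1", importance=0.6, parent="G")   # e.g. "get good at chess", derived from G
    gs.drop("G")                               # G disappears...
    print("G1" in gs.goals, gs.goals["G1"].importance)   # ...but G1 survives with its own weight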

ben g

On Tue, Aug 26, 2008 at 9:49 AM, Ben Goertzel [EMAIL PROTECTED] wrote:



 Examples of the kind of similarity I'm thinking of:

 -- The analogy btw chess or go and military strategy

 -- The analogy btw roughhousing and actual fighting

 In logical terms, these are intensional rather than extensional
 similarities

 ben


 On Tue, Aug 26, 2008 at 9:38 AM, Mike Tintner [EMAIL PROTECTED]wrote:

 Ben: If an intelligent system has a goal G which is time-consuming or
 difficult to achieve ...
 it may then synthesize another goal G1 which is easier to achieve
 We then have the uncertain syllogism

 Achieving G implies reward
 G1 is similar to G

 Ben,

 The be-all and end-all here though, I presume is similarity. Is it a
 logic-al concept?  Finding similarities - rough likenesses as opposed to
 rational, precise, logicomathematical commonalities - is actually, I would
 argue, a process of imagination and (though I can't find a ready term)
 physical/embodied improvisation. Hence rational, logical, computing
 approaches have failed to produce any new (in the normal sense of
 surprising)  metaphors or analogies or be creative.

 Maybe you could give an example of what you mean by similarity





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome  - Dr Samuel Johnson





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Ben Goertzel
I wrote a blog post enlarging a little on the ideas I developed in my
response to the playful AGI thread...

See

http://multiverseaccordingtoben.blogspot.com/2008/08/logic-of-play.html

Some of the new content I put there:


Still, I have to come back to the tendency of play to give rise to goal
drift ... this is an interesting twist that apparently relates to the
wildness and spontaneity that exists in much playing. Yes, most particular
forms of play do seem to arise via the syllogism I've given above. Yet,
because it involves activities that originate as simulacra of goals that go
BEYOND what the mind can currently do, play also seems to have an innate
capability to drive the mind BEYOND its accustomed limits ... in a way that
often transcends the goal G that the play-goal G1 was designed to
emulate

This brings up the topic of meta-goals: goals that have to do explicitly
with goal-system maintenance and evolution. It seems that playing is in fact
a meta-goal, quite separately from the fact of each instance of playing
generally involving an imitation of some other specific real-life goal.
Playing is a meta-goal that should be valued by organisms that value growth
and spontaneity ... including growth of their goal systems in unpredictable,
adaptive ways


-- Ben G

On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel [EMAIL PROTECTED] wrote:


 About play... I would argue that it emerges in any sufficiently
 generally-intelligent system
 that is faced with goals that are difficult for it ... as a consequence of
 other general cognitive
 processes...

 If an intelligent system has a goal G which is time-consuming or difficult
 to achieve ...

 it may then synthesize another goal G1 which is easier to achieve

 We then have the uncertain syllogism

 Achieving G implies reward
 G1 is similar to G
 |-
 Achieving G1 implies reward

 As links between goal-achievement and reward are to some extent modified by
 uncertain
 inference (or analogous process, implemented e.g. in neural nets), we thus
 have the
 emergence of play ... in cases where G1 is much easier to achieve than G
 ...

 Of course, if working toward G1 is actually good practice for working
 toward G, this may give the intelligent
 system (if it's smart and mature enough to strategize) or evolution impetus
 to create
 additional bias toward the pursuit of G1

 In this view, play is a quite general structural phenomenon ... and the
 play that human kids do with blocks and sticks and so forth is a special
 case, oriented toward ultimate goals G involving physical manipulation

 And the knack in gaining anything from play is in appropriate
 similarity-assessment ... i.e. in measuring similarity between G and G1 in
 such a way that achieving G1 actually teaches things useful for achieving G

 So for any goal-achieving system that has long-term goals which it can't
 currently effectively work directly toward, play may be an effective
 strategy...

 In this view, we don't really need to design an AI system with play in
 mind.  Rather, if it can explicitly or implicitly carry out the above
 inference, concept-creation and subgoaling processes, play should emerge
 from its interaction w/ the world...

 ben g



 On Tue, Aug 26, 2008 at 8:20 AM, David Hart [EMAIL PROTECTED] wrote:

 On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Is anyone trying to design a self-exploring robot or computer? Does this
 principle have a name?


 Interestingly, some views on AI advocate specifically prohibiting
 self-awareness and self-exploration as a precaution against the development
 of unfriendly AI. In my opinion, these views erroneously transfer familiar
 human motives onto 'alien' AGI cognitive architectures - there's a history
 of discussing this topic  on SL4 and other places.

 I believe however that most approaches to designing AGI (those that do not
 specifically prohibit self-aware and self-explorative behaviors) take for
 granted, and indeed intentionally promote, self-awareness and
 self-exploration at most stages of AGI development. In other words,
 efficient and effective recursive self-improvement (RSI) requires
 self-awareness and self-exploration. If any term exists to describe a
 'self-exploring robot or computer', that term is RSI. Coining a lesser term
 for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
 suspect that 'RSI' is ultimately a more useful and meaningful term.

 -dave




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome  - Dr Samuel Johnson





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Mike Tintner
Ben,

Again, this provokes some playful developments.

As I think you may have more or less noted, the goals of the whole thread and 
of most people responding are somewhat ill-defined, (which in this context is 
fine).

(And the following relates to the adjacent thread too). The human mind doesn't 
start with - isn't started by - goals; (nor should any AGI),  it starts with 
*drives.*

You have drives for food, warmth, activity (as you note, for mental 
exercise/activity) and more, which are extremely general and can each be 
satisfied in an infinity of ways.

You then have to *specify* goals for your drives, which are still very general, 
albeit a level more specific - they do point to some kind of action - and then 
have to be more and more precisely specified.  "I'm hungry... right, I want 
Chinese... right, I'll go to Chang's" - and then you specify strategies, 
tactics and moves.
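
A minimal sketch of that drive-to-goal refinement (purely illustrative; the
levels and example entries are my assumptions): a very general drive is
progressively specified into a goal, a strategy, and concrete moves, each level
narrower than the last.

    # Progressive specification of a drive: hungry -> Chinese food -> Chang's -> order.
    refinement = [
        ("drive", "reduce hunger"),                  # extremely general; satisfiable in many ways
        ("goal", "eat Chinese food"),                # more specific; points toward a kind of action
        ("strategy", "go to Chang's restaurant"),    # a concrete plan
        ("moves", ["walk to Chang's", "order noodles", "eat"]),  # the actual steps
    ]

    for level, content in refinement:
        print(f"{level:>8}: {content}")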

But humans, again and again, plunge into many activities with mixed, conflicting 
drives and ill- or *unspecified* goals. What exactly were you or I doing 
in formulating our different posts?  Goals were often being redefined ad hoc, 
or formulated for the first time down the line, after we'd started.

And this is a characteristic - sometimes a failing, sometimes an adaptive 
advantage - of much, if not most, human activity. We enter many activities with 
confused goals, and often fail to satisfactorily define them at all. I 
criticise current AGI, as you know (and remember it consists of highly 
developed, highly advanced projects), for having no very practical definition 
(and therefore goal) of intelligence or the problems it wants to solve. You on 
your side insist that you don't have to have such precisely defined goals - 
your intuitive (and by definition, ill-defined) sense of intelligence will do. 
The specific argument doesn't matter here - the point is it illustrates how the 
goals of a general intelligence are, and have to be, continually played with: 
a) sometimes not defined at all, b) sometimes half- or ill-defined, c) usually 
mixed, and d) continuously provisional and in *creative development* - with the 
frequent disadvantage, evidenced by a trillion undergrad essays, that goals may 
be way too ill-defined.




  Ben:I wrote a blog post enlarging a little on the ideas I developed in my 
response to the playful AGI thread...

  See

  http://multiverseaccordingtoben.blogspot.com/2008/08/logic-of-play.html

  Some of the new content I put there:

  
  Still, I have to come back to the tendency of play to give rise to goal drift 
... this is an interesting twist that apparently relates to the wildness and 
spontaneity that exists in much playing. Yes, most particular forms of play do 
seem to arise via the syllogism I've given above. Yet, because it involves 
activities that originate as simulacra of goals that go BEYOND what the mind 
can currently do, play also seems to have an innate capability to drive the 
mind BEYOND its accustomed limits ... in a way that often transcends the goal G 
that the play-goal G1 was designed to emulate

  This brings up the topic of meta-goals: goals that have to do explicitly with 
goal-system maintenance and evolution. It seems that playing is in fact a 
meta-goal, quite separately from the fact of each instance of playing generally 
involving an imitation of some other specific real-life goal. Playing is a 
meta-goal that should be valued by organisms that value growth and spontaneity 
... including growth of their goal systems in unpredictable, adaptive ways
  

  -- Ben G


  On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel [EMAIL PROTECTED] wrote:


About play... I would argue that it emerges in any sufficiently 
generally-intelligent system
that is faced with goals that are difficult for it ... as a consequence of 
other general cognitive
processes...

If an intelligent system has a goal G which is time-consuming or difficult 
to achieve ...

it may then synthesize another goal G1 which is easier to achieve

We then have the uncertain syllogism

Achieving G implies reward
G1 is similar to G
|-
Achieving G1 implies reward

As links between goal-achievement and reward are to some extent modified by 
uncertain
inference (or analogous process, implemented e.g. in neural nets), we thus 
have the
emergence of play ... in cases where G1 is much easier to achieve than G 
...

Of course, if working toward G1 is actually good practice for working 
toward G, this may give the intelligent
system (if it's smart and mature enough to strategize) or evolution impetus 
to create
additional bias toward the pursuit of G1

In this view, play is a quite general structural phenomenon ... and the 
play that human kids do with blocks and sticks and so forth is a special case, 
oriented toward ultimate goals G involving physical manipulation

And the knack in gaining anything from play is in appropriate 

Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Eric Burton
Hi,

 Err ... I don't have to mention that I didn't stay dead, do I?  Good.

Was this the archetypal death/rebirth experience found in, for instance,
tryptamine ecstasy, or a real-life near-death experience?

Eric B




Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Charles Hixson
Admittedly I don't have any proof, but I don't see any reason to doubt 
my assertions.  There's nothing in them that appears to me to be 
specific to any particular implementation of an (almost) AGI.


OTOH, you didn't define play, so I'm still presuming that you accept the 
definition that I proffered.  But then you also didn't explicitly accept 
it, so I'm not certain.  To quote myself: 

Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.


There's nothing about that statement that appears to me to be specific 
to any particular implementation.  It seems *to me*, and again I 
acknowledge that I have no proof of this, that any (approaching) AGI of 
any construction would necessarily engage in this activity.


P.S.:  I'm being specific about (approaching) AGI as I doubt the 
possibility, and especially the feasibility, of constructing an actual 
AGI, rather than something which merely approaches being an AGI at the 
limit.  I'm less certain about an actual AGI, but I suspect that it, 
also, would need to play for the same reasons.



Brad Paulsen wrote:

Charles,

By now you've probably read my reply to Tintner's reply.  I think that 
probably says it all (and them some!).


What you say holds IFF you are planning on building an airplane that 
flies just like a bird.  In other words, if you are planning on 
building a human-like AGI (that could, say, pass the Turing test).  
My position is, and has been for decades, that attempting to pass the 
Turing test (or win either of the two, one-time-only, Loebner Prizes) 
is a waste of precious time and intellectual resources.


Thought experiments?  No problem.  Discussing ideas?  No problem. 
Human-like AGI?  Big problem.


Cheers,
Brad

Charles Hixson wrote:
Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.  As such, every AGI will necessarily 
spend a lot of time playing.


If you have some other particular definition, then perhaps I could 
understand your response if you were to define the term.


OTOH, if this is interpreted as being a machine that doesn't do 
anything BUT play (using my supplied definition), then your response 
has some merit, but even that can be very useful.  Almost all of 
mathematics, e.g., is derived out of such play.


I have a strong suspicion that machines that don't have a play mode 
can never proceed past the reptilian level of mentation.  (Here I'm 
talking about thought processes that are mediated via the reptile 
brain in entities like mammals.  Actual reptiles may have some more 
advanced faculties of which I'm unaware.  (Note that, e.g., shrews 
don't have much play capability, but they have SOME.)



Brad Paulsen wrote:
Mike Tintner wrote: ...how would you design a play machine - a 
machine that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort 
(unless it's being done purely for research purposes).  It's a 
diversion of intellectual and financial resources that those serious 
about building an AGI any time in this century cannot afford.  I 
firmly believe if we had not set ourselves the goal of developing 
human-style intelligence (embodied or not) fifty years ago, we would 
already have a working, non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those 
who extended his imitation test to humanoid, embodied AI were even 
more wrong.  We *do not need embodiment* to be able to build a 
powerful AGI that can be of immense utility to humanity while also 
surpassing human intelligence in many ways.  To be sure, we want 
that AGI to be empathetic with human intelligence, but we do not 
need to make it equivalent (i.e., just like us).


I don't want to give the impression that a non-Turing intelligence 
will be easy to design and build.  It will probably require at least 
another twenty years of two steps forward, one step back effort.  
So, if we are going to develop a non-human-like, non-embodied AGI 
within the first quarter of this century, we are going to have to 
just say no to Turing and start to use human intelligence as an 
inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI 
is surely that it must be able to play - so how would you design a 
play machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are 
- it should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be 
vastly more flexible than a computer, but if you want to do it all 
on computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?







Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Jiri Jelinek
On Tue, Aug 26, 2008 at 12:09 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Pleasure and pain are peculiar aspects of embodied experience - strictly 
speaking they are motivators and de-motivators, but what actually motivates us 
humans is the subjective feel ...
That's difficult to reconcile if you don't believe embodiment is all that 
important.

Not really. We might be qualia-driven, but for our AGIs it's perfectly
ok (and only natural) to be driven by given goals.

question I would pose to you non-embodied advocates is: how in the world will 
you motivate your creation? I suppose that you won't. You'll just tell it what 
to do (specify its goals) and it will do it..

Correct. AGIs driven by human-like qualia would be less safe & harder
to control. Human-like qualia are too high-level to be safe. When
implementing qualia (not that we know how to do it ;-)) & increasing
granularity for safety, you would IMO end up with basically "giving"
the goals - which is of course easier without messing with a qualia
implementation. Forget qualia as a motivation for our AGIs. Our AGIs
are supposed to work for us, not for themselves.

Regards,
Jiri Jelinek




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Bob Mottram
2008/8/24 Mike Tintner [EMAIL PROTECTED]:
 Just a v. rough, first thought. An essential requirement of  an AGI is
 surely that it must be able to play - so how would you design a play machine
 - a machine that can play around as a child does?




Play may be about characterising the state space.  As an embodied
entity you need to know which areas of the space are relatively
predictable and which are not.  Armed with this knowledge when
planning an action in future you can make a reasonable estimate of the
possible range of outcomes or affordances, which may be very useful in
practical situations.

You'll notice that play tends to be directed towards activities with
high novelty.  With enough experience through play an unfamiliar or
novel situation can be decomposed into a set of more predictable
outcomes.  Eventually the novelty wears off because prediction matches
observation, and so the system moves on.  Finding new novel situations
to explore may involve the deliberate introduction of random or risky
(seemingly mal-adaptive) behavior.
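
A minimal sketch of this novelty-driven loop, assuming a drastically
simplified world (the NoveltyDrivenExplorer class, the toy_world() stand-in
and the decay constants below are all invented for illustration, not
anyone's actual system): prediction error acts as the intrinsic reward, so
the agent keeps probing the parts of the state space it cannot yet predict
and loses interest once prediction matches observation.

import random

class NoveltyDrivenExplorer:
    """Toy agent whose intrinsic reward is prediction error (novelty)."""

    def __init__(self, actions):
        self.actions = actions
        self.predictions = {}                      # action -> last observed outcome
        self.novelty = {a: 1.0 for a in actions}   # estimated surprise per action

    def choose_action(self):
        # Sample actions in proportion to how hard they still are to predict.
        weights = [self.novelty[a] + 0.01 for a in self.actions]
        return random.choices(self.actions, weights=weights)[0]

    def update(self, action, outcome):
        surprise = 0.0 if self.predictions.get(action) == outcome else 1.0
        self.predictions[action] = outcome
        # Novelty wears off as prediction starts matching observation.
        self.novelty[action] = 0.9 * self.novelty[action] + 0.1 * surprise

def toy_world(action):
    # Stand-in environment: low-numbered actions are deterministic,
    # high-numbered ones stay noisy (and therefore stay novel).
    return action * 2 if action < 3 else random.randint(0, 10)

agent = NoveltyDrivenExplorer(actions=list(range(5)))
for _ in range(500):
    a = agent.choose_action()
    agent.update(a, toy_world(a))
print(agent.novelty)   # deterministic actions end with low novelty; noisy ones stay high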




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 8:09 AM, Terren Suydam [EMAIL PROTECTED] wrote:
 I know we've gotten a little off-track here from play, but the really
 interesting question I would pose to you non-embodied advocates is:
 how in the world will you motivate your creation?  I suppose that you
 won't. You'll just tell it what to do (specify its goals) and it will do it,
 because it has no autonomy at all. Am I guilty of anthropomorphizing
 if I say autonomy is important to intelligence?


This is fuzzy, mysterious and frustrating. Unless you *functionally*
explain what you mean by autonomy and embodiment, the conversation
degrades to a kind of meaningless philosophy that occupied some smart
people for thousands of years without any results.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Mike Tintner


Bob M: Play may be about characterising the state space.  As an embodied
entity you need to know which areas of the space are relatively
predictable and which are not.  Armed with this knowledge when
planning an action in future you can make a reasonable estimate of the
possible range of outcomes or affordances, which may be very useful in
practical situations.  You'll notice that play tends to be directed
towards activities with high novelty.  With enough experience through
play an unfamiliar or novel situation can be decomposed into a set of
more predictable outcomes.


What I was particularly interested in asking you is the following: part of 
the condition of being human is that you have to not just explore the 
outside world, but your own body and brain. And in fact it's potentially 
endless, because the degrees of freedom and range of possibilities for both 
are vast. So there is room to never stop exploring and developing your golf 
swing, say, or working out new ways to dredge out well-buried memories, and 
integrate them into new structures - for example, we can all develop a 
memory for dialogue, say, or for physical structures, (incl. from the past). 
Clearly, play, along with development generally, is a part of 
self-(one's-own-system)-exploration.


Now robots too have similarly vast if not quite so vast possibilities of 
movement and thought. So in principle it sounds like a good, if not 
long-term essential idea to have them play and explore themselves as humans 
do. In principle, it would be a good idea for a pure AGI computer to explore 
its own vast possibilities/ways-of-thinking. Is anyone trying to design a 
self-exploring robot or computer? Does this principle have a name?







Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread David Hart
On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Is anyone trying to design a self-exploring robot or computer? Does this
 principle have a name?


Interestingly, some views on AI advocate specifically prohibiting
self-awareness and self-exploration as a precaution against the development
of unfriendly AI. In my opinion, these views erroneously transfer familiar
human motives onto 'alien' AGI cognitive architectures - there's a history
of discussing this topic  on SL4 and other places.

I believe however that most approaches to designing AGI (those that do not
specifically prohibit self-aware and self-explorative behaviors) take for
granted, and indeed intentionally promote, self-awareness and
self-exploration at most stages of AGI development. In other words,
efficient and effective recursive self-improvement (RSI) requires
self-awareness and self-exploration. If any term exists to describe a
'self-exploring robot or computer', that term is RSI. Coining a lesser term
for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
suspect that 'RSI' is ultimately a more useful and meaningful term.

-dave





Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Mike Tintner

Terren:I know we've gotten a little off-track here from play, but the really

interesting question I would pose to you non-embodied advocates is:
how in the world will you motivate your creation?


Again, I think you're missing out the most important aspect of having a body 
(is there a good definition of this? I think roboticists make some kind 
of deal of it). A body IS play, in a broad sense. It's first of all 
continuously *roving* - continuously moving, continuously thinking, 
*whether something is called for or not* (unlike machines which only act to 
order). Frankly, the idea that a human or animal body and brain are 
programmed in an *extended* way - for a minute of continuous action, say, as 
opposed to short routines/habits tossed together - can't be taken seriously: 
we have a major problem concentrating, following a train of thought or 
sticking to a train of movement, for that long. Our mind is continuously 
going off at tangents. The plus side of that is that we are highly adaptable 
and flexible - very ready to get a new handle on things.


The second, still more important advantage of a body (the part, I think, 
that roboticists stress) is that it incorporates a vast range of 
possibilities which surely *do not have to be laboriously pre-specified* - 
vast ranges of possible movement and thought that can be playfully explored 
as required, rather than explicitly coded for beforehand. Start moving your 
hand around, twiddling your fingers independently and together, and twisting 
the whole unit, every which way. It's never-ending. And a good deal of it 
will be novel. So the basic general principle of learning any new movement, 
presumably, is to have a stab at it - stick your hand out at the object in a 
loosely appropriate shape, and then play around with your grip/handling - 
explore your body's range of possibilities. There's no beforehand.


Ditto the brain has a vast capacity for ranges of free *non-pre-specified* 
association - start thinking of - visualising - your screwdriver. Now think 
of similar *shapes*. You should find you can keep going for a good while - a 
stream of new, divergent associations, not convergently, algorithmically 
pre-arranged ones (as Kauffman insists). The brain is designed for free, 
unprogrammed association in a way that computers clearly haven't been - or 
haven't been to date. It can freely handle and play with ideas as the hand 
can objects.


God/Evolution clearly looked at Matt's bill for an army of programmers to 
develop an AGI, and decided He couldn't afford it - he'd try something 
simpler and more ingenious. Play around first, program routines second, 
develop culture and AI third.


P.S. The whole concept of an unembodied intelligence is a nonsense. There 
is *no such thing*.  The real distinction, presumably, is between embodied 
intelligences that can control their bodies, like humans, and those, like 
computers to date, that can't (or barely). Unembodied intelligences don't 
and *can't* exist.


*Self-control* - being able to control your body - is perhaps the most vital 
dimension of having a body in the sense of the standard debate. Without 
that, you can't understand the distinction between inert matter and life - 
one of the most fundamental early distinctions in understanding the world. 
Without that, I doubt that you can really understand anything.








Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel
Note that in this view play has nothing to do with having a body.  An AGI
concerned solely with mathematical theorem proving would also be able to
play...

On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel [EMAIL PROTECTED] wrote:


 About play... I would argue that it emerges in any sufficiently
 generally-intelligent system
 that is faced with goals that are difficult for it ... as a consequence of
 other general cognitive
 processes...

 If an intelligent system has a goal G which is time-consuming or difficult
 to achieve ...

 it may then synthesize another goal G1 which is easier to achieve

 We then have the uncertain syllogism

 Achieving G implies reward
 G1 is similar to G
 |-
 Achieving G1 implies reward

 As links between goal-achievement and reward are to some extent modified by
 uncertain
 inference (or analogous process, implemented e.g. in neural nets), we thus
 have the
 emergence of play ... in cases where G1 is much easier to achieve than G
 ...

 Of course, if working toward G1 is actually good practice for working
 toward G, this may give the intelligent
 system (if it's smart and mature enough to strategize) or evolution impetus
 to create
 additional bias toward the pursuit of G1

 In this view, play is a quite general structural phenomenon ... and the
 play that human kids do with blocks and sticks and so forth is a special
 case, oriented toward ultimate goals G involving physical manipulation

 And the knack in gaining anything from play is in appropriate
 similarity-assessment ... i.e. in measuring similarity between G and G1 in
 such a way that achieving G1 actually teaches things useful for achieving G

 So for any goal-achieving system that has long-term goals which it can't
 currently effectively work directly toward, play may be an effective
 strategy...

 In this view, we don't really need to design an AI system with play in
 mind.  Rather, if it can explicitly or implicitly carry out the above
 inference, concept-creation and subgoaling processes, play should emerge
 from its interaction w/ the world...

 ben g



 On Tue, Aug 26, 2008 at 8:20 AM, David Hart [EMAIL PROTECTED] wrote:

 On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Is anyone trying to design a self-exploring robot or computer? Does this
 principle have a name?


 Interestingly, some views on AI advocate specifically prohibiting
 self-awareness and self-exploration as a precaution against the development
 of unfriendly AI. In my opinion, these views erroneously transfer familiar
 human motives onto 'alien' AGI cognitive architectures - there's a history
 of discussing this topic  on SL4 and other places.

 I believe however that most approaches to designing AGI (those that do not
 specifically prohibit self-aware and self-explorative behaviors) take for
 granted, and indeed intentionally promote, self-awareness and
 self-exploration at most stages of AGI development. In other words,
 efficient and effective recursive self-improvement (RSI) requires
 self-awareness and self-exploration. If any term exists to describe a
 'self-exploring robot or computer', that term is RSI. Coining a lesser term
 for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
 suspect that 'RSI' is ultimately a more useful and meaningful term.

 -dave




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome  - Dr Samuel Johnson





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel
About play... I would argue that it emerges in any sufficiently
generally-intelligent system
that is faced with goals that are difficult for it ... as a consequence of
other general cognitive
processes...

If an intelligent system has a goal G which is time-consuming or difficult
to achieve ...

it may then synthesize another goal G1 which is easier to achieve

We then have the uncertain syllogism

Achieving G implies reward
G1 is similar to G
|-
Achieving G1 implies reward

As links between goal-achievement and reward are to some extent modified by
uncertain
inference (or analogous process, implemented e.g. in neural nets), we thus
have the
emergence of play ... in cases where G1 is much easier to achieve than G
...

Of course, if working toward G1 is actually good practice for working toward
G, this may give the intelligent
system (if it's smart and mature enough to strategize) or evolution impetus
to create
additional bias toward the pursuit of G1

In this view, play is a quite general structural phenomenon ... and the play
that human kids do with blocks and sticks and so forth is a special case,
oriented toward ultimate goals G involving physical manipulation

And the knack in gaining anything from play is in appropriate
similarity-assessment ... i.e. in measuring similarity between G and G1 in
such a way that achieving G1 actually teaches things useful for achieving G

So for any goal-achieving system that has long-term goals which it can't
currently effectively work directly toward, play may be an effective
strategy...

In this view, we don't really need to design an AI system with play in
mind.  Rather, if it can explicitly or implicitly carry out the above
inference, concept-creation and subgoaling processes, play should emerge
from its interaction w/ the world...
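
A minimal sketch of the subgoaling process described above, assuming goals
can be compared with a crude numeric similarity (the Goal dataclass, the
word-overlap similarity() and the example goals are illustrative
assumptions, not part of the proposal): when the hard goal G is out of
reach, the system picks an easier goal G1 and lets it inherit a discounted
reward expectation across the similarity link - the uncertain syllogism in
code.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Goal:
    name: str
    difficulty: float        # 0 = trivial, 1 = currently out of reach
    expected_reward: float   # strength of the link between achieving it and reward

def similarity(g: Goal, h: Goal) -> float:
    # Crude stand-in for intensional similarity assessment: word overlap.
    a, b = set(g.name.split()), set(h.name.split())
    return len(a & b) / max(len(a), len(b))

def synthesize_play_goal(hard_goal: Goal, candidates: List[Goal]) -> Optional[Goal]:
    """Achieving G implies reward; G1 is similar to G; hence, uncertainly,
    achieving G1 implies reward - so pursue the most similar easier goal."""
    easier = [g for g in candidates if g.difficulty < hard_goal.difficulty]
    if not easier:
        return None
    g1 = max(easier, key=lambda g: similarity(hard_goal, g))
    g1.expected_reward = similarity(hard_goal, g1) * hard_goal.expected_reward
    return g1

G = Goal("win the real battle", difficulty=0.95, expected_reward=1.0)
toys = [Goal("win the chess battle", 0.4, 0.0), Goal("bake some bread", 0.3, 0.0)]
print(synthesize_play_goal(G, toys))   # the chess-like goal inherits most of the reward link

The knack mentioned above lives entirely in the similarity function: if it
measures the wrong kind of likeness, the cheap goal teaches nothing about
the expensive one.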

ben g


On Tue, Aug 26, 2008 at 8:20 AM, David Hart [EMAIL PROTECTED] wrote:

 On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Is anyone trying to design a self-exploring robot or computer? Does this
 principle have a name?


 Interestingly, some views on AI advocate specifically prohibiting
 self-awareness and self-exploration as a precaution against the development
 of unfriendly AI. In my opinion, these views erroneously transfer familiar
 human motives onto 'alien' AGI cognitive architectures - there's a history
 of discussing this topic  on SL4 and other places.

 I believe however that most approaches to designing AGI (those that do not
 specifically prohibit self-aware and self-explorative behaviors) take for
 granted, and indeed intentionally promote, self-awareness and
 self-exploration at most stages of AGI development. In other words,
 efficient and effective recursive self-improvement (RSI) requires
 self-awareness and self-exploration. If any term exists to describe a
 'self-exploring robot or computer', that term is RSI. Coining a lesser term
 for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
 suspect that 'RSI' is ultimately a more useful and meaningful term.

 -dave




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel
Examples of the kind of similarity I'm thinking of:

-- The analogy btw chess or go and military strategy

-- The analogy btw roughhousing and actual fighting

In logical terms, these are intensional rather than extensional similarities

ben

On Tue, Aug 26, 2008 at 9:38 AM, Mike Tintner [EMAIL PROTECTED]wrote:

  Ben:If an intelligent system has a goal G which is time-consuming or
 difficult to achieve ...
 it may then synthesize another goal G1 which is easier to achieve
 We then have the uncertain syllogism

 Achieving G implies reward
 G1 is similar to G

 Ben,

 The be-all and end-all here though, I presume is similarity. Is it a
 logic-al concept?  Finding similarities - rough likenesses as opposed to
 rational, precise, logicomathematical commonalities - is actually, I would
 argue, a process of imagination and (though I can't find a ready term)
 physical/embodied improvisation. Hence rational, logical, computing
 approaches have failed to produce any new (in the normal sense of
 surprising)  metaphors or analogies or be creative.

 Maybe you could give an example of what you mean by similarity





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Russell Wallace
On Tue, Aug 26, 2008 at 2:38 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 The be-all and end-all here though, I presume is similarity. Is it a
 logic-al concept?  Finding similarities - rough likenesses as opposed to
 rational, precise, logicomathematical commonalities - is actually, I would
 argue, a process of imagination and (though I can't find a ready term)
 physical/embodied improvisation. Hence rational, logical, computing
 approaches have failed to produce any new (in the normal sense of
 surprising)  metaphors or analogies or be creative.

 Maybe you could give an example of what you mean by similarity

See AM, Eurisko, Copycat.




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Terren Suydam

That's a fair criticism. I did explain what I mean by embodiment in a previous 
post, and what I mean by autonomy in the article of mine I referenced. But I do 
recognize that in both cases there is still some ambiguity, so I will withdraw 
the question until I can formulate it in more concise terms. 

Terren

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 
 This is fuzzy, mysterious and frustrating. Unless you
 *functionally*
 explain what you mean by autonomy and embodiment, the
 conversation
 degrades to a kind of meaningless philosophy that occupied
 some smart
 people for thousands of years without any results.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Terren Suydam

I don't think it's necessary to be self-aware to do self-modifications. 
Self-awareness implies that the entity has a model of the world that separates 
self from other, but this kind of distinction is not necessary to do 
self-modifications. It could act on itself without the awareness that it was 
acting on itself.  (Goedelian machines would qualify, imo).

The reverse is true, as well. Humans are self aware but we cannot improve 
ourselves in the dangerous ways we talk about with the hard-takeoff scenarios 
of the Singularity. We ought to be worried about self-modifying agents, yes, 
but self-aware agents that can't modify themselves are much less worrying. 
They're all around us.

--- On Tue, 8/26/08, David Hart [EMAIL PROTECTED] wrote:
From: David Hart [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Tuesday, August 26, 2008, 8:20 AM

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Is anyone trying to design a self-exploring robot or computer? Does this 
principle have a name?
Interestingly, some views on AI advocate specifically prohibiting 
self-awareness and self-exploration as a precaution against the development of 
unfriendly AI. In my opinion, these views erroneously transfer familiar human 
motives onto 'alien' AGI cognitive architectures - there's a history of 
discussing this topic  on SL4 and other places.


I believe however that most approaches to designing AGI (those that do not 
specifically prohibit self-aware and self-explorative behaviors) take for 
granted, and indeed intentionally promote, self-awareness and self-exploration 
at most stages of AGI development. In other words, efficient and effective 
recursive self-improvement (RSI) requires self-awareness and self-exploration. 
If any term exists to describe a 'self-exploring robot or computer', that term 
is RSI. Coining a lesser term for 'self-exploring AI' may be useful in some 
proto-AGI contexts, but I suspect that 'RSI' is ultimately a more useful and 
meaningful term.


-dave


Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread John LaMuth
- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, August 26, 2008 6:49 AM
  Subject: Re: [agi] How Would You Design a Play Machine?


  Examples of the kind of similarity I'm thinking of:

  -- The analogy btw chess or go and military strategy

  -- The analogy btw roughhousing and actual fighting

  In logical terms, these are intensional rather than extensional similarities

  ben

  ###

  ***

  Ben

  You have rightfully nailed this issue down, as one is serious and the other 
is not to be taken this way (a meta-order perspective)...

  The same goes for humor and comedy -- the meta-message being don't take me 
seriously

  That is why I segregated analogical humor separately (from routine 
seriousness) in my 2nd patent 7236963
  www.emotionchip.net 

  This specialized meta-order-type of disqualification is built directly into 
the schematics ...

  You are correct -- it all hinges on intentions...

  John LaMuth

  www.ethicalvalues.com 







Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Brad Paulsen

Mike,

So you feel that my disagreement with your proposal is sad?  That's quite 
an ego you have there, my friend.  You asked for input and you got it.  The 
fact that you didn't like my input doesn't make me or the effort I spent 
composing it sad.  I haven't read all of the replies to your post yet, 
but judging by the index listing in my e-mail client, it has already 
drained a considerable amount of time and intellectual energy from the 
members of this list.  You want sad?  That's sad.


Nice try at ignoring the substance of what I wrote while continuing to 
advance your own views.  I did NOT say THINKING about your idea, or any idea 
for that matter, was a waste of time.  Indeed, the second sentence of my 
reply contained the following ...(unless [studying human play is] being 
done purely for research purposes).  I did think about your idea.  I 
concluded what it proposes (not the idea itself) is, in fact, a waste of 
time for people who want to design and build a working AGI before 
mid-century.  I'm sure some list members will agree with you.  I'm also 
sure some will agree with me.  But, most will have their own views on this 
issue.  That's the way it works.


The AGI I (and many others) have in mind will be to human intelligence what 
an airplane is to a bird.  For many of the same reasons airplanes don't 
play like birds do, my AGI won't play (or create) like humans do.  And, 
just as the airplane flies BETTER THAN the bird (for human purposes), my 
AGI will create BETTER THAN any human (for human purposes).


You wrote, "[Play] is generally acknowledged by psychologists to be an 
essential dimension of creativity - which is the goal of AGI."


Wrong.  ONE of the goals (not THE goal) of AGI is *inspired* by human 
creativity.  Indeed, I am counting on the creativity of the first 
generation of AGIs to help humans build (or keep humans away from building) 
the second generation of AGIs.  But... neither generation has to (and, 
IMHO, shouldn't) have human-style creativity.


In fact, I suggest we not use the word creativity when discussing 
AGI-type knowledge synthesis because that is a term that has been applied 
solely to human-style intelligence.  Perhaps, idea mining would be a 
better way to describe what I think about when I think about AGI-style 
creativity.  Knowledge synthesis also works for me and has a greater 
syllable count.  Either phrase fits the mechanism I have in mind for an AGI 
that works with MASSIVE quantities of data, using well-studied and 
established data mining techniques, to discover important (to humans and, 
eventually, AGIs themselves) associations.  It would have been impossible 
to build this type of idea mining capability into an AI before the mid 
1990's (before the Internet went public).  It's possible now.  Indeed, 
Google is encouraging it by publishing an open source REST (if memory 
serves) API to the Googleverse.  No human intelligence would be capable of 
doing such data mining without the aid of a computer and, even then, it's 
not easy for the human intellect (associations between massive amounts of 
data are often, themselves, still quite massive - ask the CIA or the NSA or 
Google).
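
A minimal sketch of what such idea mining could look like at toy scale,
assuming nothing more than co-occurrence counting (the four-document corpus,
the pmi() helper and the top-3 ranking are invented for illustration and are
unrelated to Google's actual API): count which terms co-occur across
documents, score the pairs by pointwise mutual information, and surface the
unexpectedly strong associations as candidate "ideas".

import math
from collections import Counter
from itertools import combinations

# Toy stand-in for a massive document collection.
corpus = [
    {"kitten", "play", "hunting"},
    {"play", "novelty", "learning"},
    {"kitten", "hunting", "prey"},
    {"novelty", "learning", "prediction"},
]

term_counts = Counter()
pair_counts = Counter()
for doc in corpus:
    term_counts.update(doc)
    pair_counts.update(combinations(sorted(doc), 2))

n_docs = len(corpus)

def pmi(a, b):
    """Pointwise mutual information of two terms over the toy corpus."""
    p_a = term_counts[a] / n_docs
    p_b = term_counts[b] / n_docs
    p_ab = pair_counts[(a, b)] / n_docs if (a, b) in pair_counts else 0.0
    return math.log(p_ab / (p_a * p_b), 2) if p_ab > 0 else float("-inf")

# Surface the strongest associations the counts support.
ranked = sorted(pair_counts, key=lambda p: pmi(*p), reverse=True)
for pair in ranked[:3]:
    print(pair, round(pmi(*pair), 2))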


Certainly play is "...fundamental to the human mind-and-body."  My point 
was simply that this should have little or no interest to those of us 
attempting to build a working, non-human-style AGI.  We can discuss it all 
we like (however, I don't intend to continue doing so after this reply -- 
I've stated my case).  Such discussion may be worthwhile (if only to show 
up its inherent wrongness) but spending any time attempting to design or 
build an AGI containing a simulation of human-style play (or creativity) is 
not.  There are only so many minutes in a day and only so many days in a 
life.  The human-style (Turing test) approach to AI has been tried.  It 
failed (not in every respect, of course, but the Loebner Prizes - the $25K 
and $100K prizes - established in 1990 remain unclaimed).   I don't intend 
to spend one more minute or hour of my life trying to win the Loebner Prize.


The enormous amount of intellectual energy spent (largely wasted), from the 
mid 1950's to the end of the 1980's, trying to create a human-like AI is a 
true tragedy.  But, perhaps, even more tragic is that unquestioningly 
holding up Turing's imitation game as the gold standard of AI created 
what we call in the commercial software industry a reference problem.  To 
get new clients to buy your software, you need a good reference from 
former/current clients.  Anyone who has attempted to get funding for an AGI 
project since the mid-1990s will attest that the (unintentional but 
nevertheless real) damage caused by Turing and his followers continues to 
have a very real, negative effect on the field of AI/AGI.  I have done, and 
will continue to do, my best to see that this same mistake is not repeated 
in this century's quest to build a beneficial (to humanity) AGI. 
Unfortunately, we 

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Brad Paulsen

Charles,

By now you've probably read my reply to Tintner's reply.  I think that 
probably says it all (and then some!).


What you say holds IFF you are planning on building an airplane that flies 
just like a bird.  In other words, if you are planning on building a 
human-like AGI (that could, say, pass the Turing test).  My position is, 
and has been for decades, that attempting to pass the Turing test (or win 
either of the two, one-time-only, Loebner Prizes) is a waste of precious 
time and intellectual resources.


Thought experiments?  No problem.  Discussing ideas?  No problem. 
Human-like AGI?  Big problem.


Cheers,
Brad

Charles Hixson wrote:
Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.  As such, every AGI will necessarily spend a 
lot of time playing.


If you have some other particular definition, then perhaps I could 
understand your response if you were to define the term.


OTOH, if this is interpreted as being a machine that doesn't do anything 
BUT play (using my supplied definition), then your response has some 
merit, but even that can be very useful.  Almost all of mathematics, 
e.g., is derived out of such play.


I have a strong suspicion that machines that don't have a play mode 
can never proceed past the reptilian level of mentation.  (Here I'm 
talking about thought processes that are mediated via the reptile 
brain in entities like mammals.  Actual reptiles may have some more 
advanced faculties of which I'm unaware.  (Note that, e.g., shrews don't 
have much play capability, but they have SOME.)



Brad Paulsen wrote:
Mike Tintner wrote: ...how would you design a play machine - a 
machine that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building 
an AGI any time in this century cannot afford.  I firmly believe if we 
had not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more 
wrong.  We *do not need embodiment* to be able to build a powerful AGI 
that can be of immense utility to humanity while also surpassing human 
intelligence in many ways.  To be sure, we want that AGI to be 
empathetic with human intelligence, but we do not need to make it 
equivalent (i.e., just like us).


I don't want to give the impression that a non-Turing intelligence 
will be easy to design and build.  It will probably require at least 
another twenty years of two steps forward, one step back effort.  
So, if we are going to develop a non-human-like, non-embodied AGI 
within the first quarter of this century, we are going to have to 
just say no to Turing and start to use human intelligence as an 
inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI 
is surely that it must be able to play - so how would you design a 
play machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - 
it should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on 
computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?












Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Brad Paulsen
Mike Tintner wrote: ...how would you design a play machine - a machine 
that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort (unless it's 
being done purely for research purposes).  It's a diversion of intellectual 
and financial resources that those serious about building an AGI any time 
in this century cannot afford.  I firmly believe if we had not set 
ourselves the goal of developing human-style intelligence (embodied or not) 
fifty years ago, we would already have a working, non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more wrong. 
 We *do not need embodiment* to be able to build a powerful AGI that can 
be of immense utility to humanity while also surpassing human intelligence 
in many ways.  To be sure, we want that AGI to be empathetic with human 
intelligence, but we do not need to make it equivalent (i.e., just like 
us).


I don't want to give the impression that a non-Turing intelligence will be 
easy to design and build.  It will probably require at least another twenty 
years of two steps forward, one step back effort.  So, if we are going to 
develop a non-human-like, non-embodied AGI within the first quarter of this 
century, we are going to have to just say no to Turing and start to use 
human intelligence as an inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI is 
surely that it must be able to play - so how would you design a play 
machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - it 
should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on computer, 
fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?









Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner

Brad,

That's sad.  The suggestion is for a mental exercise, not a full-scale 
project. And play is fundamental to the human mind-and-body - it 
characterises our more mental as well as more physical activities - 
drawing, designing, scripting, humming and singing scat in the bath, 
dreaming/daydreaming and much more. It is generally acknowledged by 
psychologists to be an essential dimension of creativity - which is the goal 
of AGI. It is also an essential dimension of animal behaviour and animal 
evolution.  Many of the smartest companies have their play areas.


But I'm not aware of any program or computer design for play - as distinct 
from elaborating systematically and methodically or genetically on 
themes - are you? In which case it would be good to think about one - it'll 
open your mind and give you new perspectives.


This should be a group where people are not too frightened to play around 
with ideas.


Brad: Mike Tintner wrote: ...how would you design a play machine - a 
machine

that can play around as a child does?

I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building an 
AGI any time in this century cannot afford.  I firmly believe if we had 
not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more wrong. 
We *do not need embodiment* to be able to build a powerful AGI that can be 
of immense utility to humanity while also surpassing human intelligence in 
many ways.  To be sure, we want that AGI to be empathetic with human 
intelligence, but we do not need to make it equivalent (i.e., just like 
us).


I don't want to give the impression that a non-Turing intelligence will be 
easy to design and build.  It will probably require at least another 
twenty years of two steps forward, one step back effort.  So, if we are 
going to develop a non-human-like, non-embodied AGI within the first 
quarter of this century, we are going to have to just say no to Turing 
and start to use human intelligence as an inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI is 
surely that it must be able to play - so how would you design a play 
machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - it 
should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly more 
flexible than a computer, but if you want to do it all on computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?
















Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Matt Mahoney
Kittens play with small moving objects because it teaches them to be better 
hunters. Play is not a goal in itself, but a subgoal that may or may not be a 
useful part of a successful AGI design.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 8:59:06 AM
Subject: Re: [agi] How Would You Design a Play Machine?

Brad,

That's sad.  The suggestion is for a mental exercise, not a full-scale 
project. And play is fundamental to the human mind-and-body - it 
characterises our more mental as well as more physical activities - 
drawing, designing, scripting, humming and singing scat in the bath, 
dreaming/daydreaming  much more. It is generally acknowledged by 
psychologists to be an essential dimension of creativity - which is the goal 
of AGI. It is also an essential dimension of animal behaviour and animal 
evolution.  Many of the smartest companies have their play areas.

But I'm not aware of any program or computer design for play - as distinct 
from elaborating systematically and methodically or genetically on 
themes - are you? In which case it would be good to think about one - it'll 
open your mind  give you new perspectives.

This should be a group where people are not too frightened to play around 
with ideas.

Brad: Mike Tintner wrote: ...how would you design a play machine - a 
machine
 that can play around as a child does?

 I wouldn't.  IMHO that's just another waste of time and effort (unless 
 it's being done purely for research purposes).  It's a diversion of 
 intellectual and financial resources that those serious about building an 
 AGI any time in this century cannot afford.  I firmly believe if we had 
 not set ourselves the goal of developing human-style intelligence 
 (embodied or not) fifty years ago, we would already have a working, 
 non-embodied AGI.

 Turing was wrong (or at least he was wrongly interpreted).  Those who 
 extended his imitation test to humanoid, embodied AI were even more wrong. 
 We *do not need embodiment* to be able to build a powerful AGI that can be 
 of immense utility to humanity while also surpassing human intelligence in 
 many ways.  To be sure, we want that AGI to be empathetic with human 
 intelligence, but we do not need to make it equivalent (i.e., just like 
 us).

 I don't want to give the impression that a non-Turing intelligence will be 
 easy to design and build.  It will probably require at least another 
 twenty years of two steps forward, one step back effort.  So, if we are 
 going to develop a non-human-like, non-embodied AGI within the first 
 quarter of this century, we are going to have to just say no to Turing 
 and start to use human intelligence as an inspiration, not a destination.

 Cheers,

 Brad



 Mike Tintner wrote:
 Just a v. rough, first thought. An essential requirement of  an AGI is 
 surely that it must be able to play - so how would you design a play 
 machine - a machine that can play around as a child does?

 You can rewrite the brief as you choose, but my first thoughts are - it 
 should be able to play with
 a) bricks
 b)plasticine
 c) handkerchiefs/ shawls
 d) toys [whose function it doesn't know]
 and
 e) draw.

 Something that should be soon obvious is that a robot will be vastly more 
 flexible than a computer, but if you want to do it all on computer, fine.

 How will it play - manipulate things every which way?
 What will be the criteria of learning - of having done something 
 interesting?
 How do infants, IOW, play?





Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner


Matt: Kittens play with small moving objects because it teaches them to be 
better hunters. Play is not a goal in itself, but a subgoal that may or may 
not be a useful part of a successful AGI design.


Certainly, crude imitation of, and preparation for, adult activities is one 
aspect of play. But pure exploration - experimentation - and embroidery also 
are important. An infant dropping and throwing things and handling things every 
which way. Doodling - creating lines that go off and twist and turn in every 
direction. Babbling - playing around with sounds. Sputtering - playing 
around with silly noises - kids love that, no? (Even some of us adults too). 
Playing with stories and events - and alternative endings, beginnings and 
middles.  Make believe. Playing around with the rules of invented games.


Human development allots a great deal of time for such play. 







Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Actually, kittens play because it's fun. Evolution has equipped them with the 
rewarding sense of fun because it optimizes their fitness as hunters. But 
kittens are adaptation executors, evolution is the fitness optimizer. It's a 
subtle but important distinction.

See http://www.overcomingbias.com/2007/11/adaptation-exec.html

Terren

They're adaptation executors, not fitness optimizers. 

--- On Mon, 8/25/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 Kittens play with small moving objects because it teaches
 them to be better hunters. Play is not a goal in itself, but
 a subgoal that may or may not be a useful part of a
 successful AGI design.
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
 - Original Message 
 From: Mike Tintner [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Monday, August 25, 2008 8:59:06 AM
 Subject: Re: [agi] How Would You Design a Play Machine?
 
 Brad,
 
 That's sad.  The suggestion is for a mental exercise,
 not a full-scale 
 project. And play is fundamental to the human mind-and-body
 - it 
 characterises our more mental as well as more physical
 activities - 
 drawing, designing, scripting, humming and singing scat in
 the bath, 
 dreaming/daydreaming  much more. It is generally
 acknowledged by 
 psychologists to be an essential dimension of creativity -
 which is the goal 
 of AGI. It is also an essential dimension of animal
 behaviour and animal 
 evolution.  Many of the smartest companies have their play
 areas.
 
 But I'm not aware of any program or computer design for
 play - as distinct 
 from elaborating systematically and methodically or
 genetically on 
 themes - are you? In which case it would be good to think
 about one - it'll 
 open your mind  give you new perspectives.
 
 This should be a group where people are not too frightened
 to play around 
 with ideas.
 
 Brad: Mike Tintner wrote: ...how would you design
 a play machine - a 
 machine
  that can play around as a child does?
 
  I wouldn't.  IMHO that's just another waste of
 time and effort (unless 
  it's being done purely for research purposes). 
 It's a diversion of 
  intellectual and financial resources that those
 serious about building an 
  AGI any time in this century cannot afford.  I firmly
 believe if we had 
  not set ourselves the goal of developing human-style
 intelligence 
  (embodied or not) fifty years ago, we would already
 have a working, 
  non-embodied AGI.
 
  Turing was wrong (or at least he was wrongly
 interpreted).  Those who 
  extended his imitation test to humanoid, embodied AI
 were even more wrong. 
  We *do not need embodiment* to be able to build a
 powerful AGI that can be 
  of immense utility to humanity while also surpassing
 human intelligence in 
  many ways.  To be sure, we want that AGI to be
 empathetic with human 
  intelligence, but we do not need to make it equivalent
 (i.e., just like 
  us).
 
  I don't want to give the impression that a
 non-Turing intelligence will be 
  easy to design and build.  It will probably require at
 least another 
  twenty years of two steps forward, one step
 back effort.  So, if we are 
  going to develop a non-human-like, non-embodied AGI
 within the first 
  quarter of this century, we are going to have to
 just say no to Turing 
  and start to use human intelligence as an inspiration,
 not a destination.
 
  Cheers,
 
  Brad
 
 
 
  Mike Tintner wrote:
  Just a v. rough, first thought. An essential
 requirement of  an AGI is 
  surely that it must be able to play - so how would
 you design a play 
  machine - a machine that can play around as a
 child does?
 
  You can rewrite the brief as you choose, but my
 first thoughts are - it 
  should be able to play with
  a) bricks
  b)plasticine
  c) handkerchiefs/ shawls
  d) toys [whose function it doesn't know]
  and
  e) draw.
 
  Something that should be soon obvious is that a
 robot will be vastly more 
  flexible than a computer, but if you want to do it
 all on computer, fine.
 
  How will it play - manipulate things every which
 way?
  What will be the criteria of learning - of having
 done something 
  interesting?
  How do infants, IOW, play?
 
 
 


  




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 9:22 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 Actually, kittens play because it's fun. Evolution has equipped them with the 
 rewarding sense of fun because it optimizes their fitness as hunters. But 
 kittens are adaptation executors, evolution is the fitness optimizer. It's a 
 subtle but important distinction.

 See http://www.overcomingbias.com/2007/11/adaptation-exec.html


Saying that play is not adaptive requires some backing (I expect it
plays some role, so you need to be more convincing).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Abram Demski
Mike,

I agree with Brad somewhat, because I do not think copying human (or
animal) intellect is the goal. It is a means to the end of general
intelligence.

However, that certainly doesn't stop me from participating in a
thought experiment.

I think the big thing with artificial play is figuring out a good
goal-creation scheme. My definition of play directly follows from
this intuition: play is activity that results from a system that is
rapidly changing its goals. In other words, play is behavior that is
goal-oriented, but barely.

The definition should probably be somewhat more specific-- when
playing, people and animals don't just adopt totally arbitrary goals;
we seem to prefer interesting goals. This is because there is a
hidden biological agenda-- learning. But, learning is not *our* goal.
Our goal is whatever arbitrary goal we have adopted for the purpose of
play.

One system I know of does something like this-- the PURR-PUSS
system. Its rule is simple: if an unexpected event happens once, then
the system will adopt the goal of trying to get it to happen again, by
recreating the circumstances that led to it the first time. In
carrying out the attempt, it should be able to greatly refine its
concept of the circumstances that led to it-- because many of its
attempts to recreate the event will probably fail. Many of these
curiosity-based goals may be active at once. Since this is the
system's only motivational factor, it could be called an artificial
playing system (at least by my definition).
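
A minimal sketch of that curiosity rule, under the loose assumption that the
circumstances can be reduced to a hashable context key (the
CuriosityDrivenAgent class and its methods are illustrative only, not the
actual PURR-PUSS design): a surprising event makes the system adopt the goal
of reproducing it; failed attempts refine its model of the circumstances,
and success retires the goal.

import random
from collections import defaultdict

class CuriosityDrivenAgent:
    """Toy version of the 'recreate the surprising event' rule."""

    def __init__(self):
        self.expected = defaultdict(set)   # context -> events seen in that context
        self.goals = []                    # (context, event) pairs to reproduce

    def observe(self, context, event):
        if event not in self.expected[context]:
            # Unexpected event: adopt the goal of making it happen again.
            self.goals.append((context, event))
        self.expected[context].add(event)

    def act(self, recreate):
        """Pursue the oldest curiosity goal; 'recreate' tries to re-enter a
        context and returns whatever event actually happens there."""
        if not self.goals:
            return None
        context, wanted = self.goals[0]
        outcome = recreate(context)
        if outcome == wanted:
            self.goals.pop(0)              # reproduced it - curiosity satisfied
        else:
            # A failed attempt still refines the concept of the circumstances.
            self.expected[context].add(outcome)
        return outcome

# Tiny usage: a ball that squeaks only sometimes when squeezed.
agent = CuriosityDrivenAgent()
agent.observe("squeeze ball", "squeak")    # surprising the first time
while agent.goals:
    agent.act(lambda ctx: random.choice(["squeak", "silence"]))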

-Abram

On Mon, Aug 25, 2008 at 8:59 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Brad,

 That's sad.  The suggestion is for a mental exercise, not a full-scale
 project. And play is fundamental to the human mind-and-body - it
 characterises our more mental as well as more physical activities - drawing,
 designing, scripting, humming and singing scat in the bath,
 dreaming/daydreaming  much more. It is generally acknowledged by
 psychologists to be an essential dimension of creativity - which is the goal
 of AGI. It is also an essential dimension of animal behaviour and animal
 evolution.  Many of the smartest companies have their play areas.

 But I'm not aware of any program or computer design for play - as distinct
 from elaborating systematically and methodically or genetically on themes
 - are you? In which case it would be good to think about one - it'll open
 your mind  give you new perspectives.

 This should be a group where people are not too frightened to play around
 with ideas.

 Brad: Mike Tintner wrote: ...how would you design a play machine - a
 machine

 that can play around as a child does?

 I wouldn't.  IMHO that's just another waste of time and effort (unless
 it's being done purely for research purposes).  It's a diversion of
 intellectual and financial resources that those serious about building an
 AGI any time in this century cannot afford.  I firmly believe if we had not
 set ourselves the goal of developing human-style intelligence (embodied or
 not) fifty years ago, we would already have a working, non-embodied AGI.

 Turing was wrong (or at least he was wrongly interpreted).  Those who
 extended his imitation test to humanoid, embodied AI were even more wrong.
 We *do not need embodiment* to be able to build a powerful AGI that can be
 of immense utility to humanity while also surpassing human intelligence in
 many ways.  To be sure, we want that AGI to be empathetic with human
 intelligence, but we do not need to make it equivalent (i.e., just like
 us).

 I don't want to give the impression that a non-Turing intelligence will be
 easy to design and build.  It will probably require at least another twenty
 years of two steps forward, one step back effort.  So, if we are going to
 develop a non-human-like, non-embodied AGI within the first quarter of this
 century, we are going to have to just say no to Turing and start to use
 human intelligence as an inspiration, not a destination.

 Cheers,

 Brad



 Mike Tintner wrote:

 Just a v. rough, first thought. An essential requirement of  an AGI is
 surely that it must be able to play - so how would you design a play machine
 - a machine that can play around as a child does?

 You can rewrite the brief as you choose, but my first thoughts are - it
 should be able to play with
 a) bricks
 b)plasticine
 c) handkerchiefs/ shawls
 d) toys [whose function it doesn't know]
 and
 e) draw.

 Something that should be soon obvious is that a robot will be vastly more
 flexible than a computer, but if you want to do it all on computer, fine.

 How will it play - manipulate things every which way?
 What will be the criteria of learning - of having done something
 interesting?
 How do infants, IOW, play?



 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?; Powered by
 

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson
Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.  As such, every AGI will necessarily spend a 
lot of time playing.


If you have some other particular definition, then perhaps I could 
understand your response if you were to define the term.


OTOH, if this is interpreted as being a machine that doesn't do anything 
BUT play (using my supplied definition), then your response has some 
merit, but even that can be very useful.  Almost all of mathematics, 
e.g., is derived out of such play.


I have a strong suspicion that machines that don't have a play mode 
can never proceed past the reptilian level of mentation.  (Here I'm 
talking about thought processes that are mediated via the reptile 
brain in entities like mammals.  Actual reptiles may have some more 
advanced faculties of which I'm unaware.)  (Note that, e.g., shrews don't 
have much play capability, but they have SOME.)



Brad Paulsen wrote:
Mike Tintner wrote: ...how would you design a play machine - a 
machine that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building 
an AGI any time in this century cannot afford.  I firmly believe if we 
had not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more 
wrong.  We *do not need embodiment* to be able to build a powerful AGI 
that can be of immense utility to humanity while also surpassing human 
intelligence in many ways.  To be sure, we want that AGI to be 
empathetic with human intelligence, but we do not need to make it 
equivalent (i.e., just like us).


I don't want to give the impression that a non-Turing intelligence 
will be easy to design and build.  It will probably require at least 
another twenty years of two steps forward, one step back effort.  
So, if we are going to develop a non-human-like, non-embodied AGI 
within the first quarter of this century, we are going to have to 
just say no to Turing and start to use human intelligence as an 
inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI 
is surely that it must be able to play - so how would you design a 
play machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - 
it should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on 
computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

I'm not saying play isn't adaptive. I'm saying that kittens play not because 
they're optimizing their fitness, but because they're intrinsically motivated 
to (it feels good). The reason it feels good has nothing to do with the kitten, 
but with the evolutionary process that designed that adaption.

It may seem like a minor distinction, but it helps to understand why, for 
example, people have sex with birth control. We don't have sex to maximize our 
genetic fitness, but because it feels good (or a thousand other reasons). We 
are adaptation executors, not fitness optimizers.

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 
  Actually, kittens play because it's fun. Evolution
 has equipped them with the rewarding sense of fun because it
 optimizes their fitness as hunters. But kittens are
 adaptation executors, evolution is the fitness optimizer.
 It's a subtle but important distinction.
 
  See
 http://www.overcomingbias.com/2007/11/adaptation-exec.html
 
 
 Saying that play is not adaptive requires some backing (I
 expect it
 plays some role, so you need to be more convincing).
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 
 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com


  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner

Terren,

Your broad distinctions are fine, but I feel you are not emphasizing the 
area of most interest for AGI, which is *how* we adapt rather than why. 
Interestingly, your blog uses the example of a screwdriver - Kauffman uses 
the same in Chap 12 of Reinventing the Sacred as an example of human 
creativity/divergence - i.e. our capacity to find infinite uses for a 
screwdriver.


Do we think we could write an algorithm, an effective procedure, to 
generate a possibly infinite list of all possible uses of screwdrivers in 
all possible circumstances, some of which do not yet exist? I don't think we 
could get started.


What emerges here, v. usefully, is that the capacity for play overlaps 
with classically-defined, and a shade more rigorous and targeted,  divergent 
thinking, e.g. find as many uses as you can for a screwdriver, rubber teat, 
needle etc.


...How would you design a divergent (as well as play) machine that can deal 
with the above open-ended problems? (Again surely essential for an AGI)


With full general intelligence, the problem more typically starts with the 
function-to-be-fulfilled - e.g. how do you open this paint can? - and only 
then do you search for a novel tool, like a screwdriver or another can lid.




Terren: Actually, kittens play because it's fun. Evolution has equipped 
them with the rewarding sense of fun because it optimizes their fitness as 
hunters. But kittens are adaptation executors, evolution is the fitness 
optimizer. It's a subtle but important distinction.


See http://www.overcomingbias.com/2007/11/adaptation-exec.html

Terren

They're adaptation executors, not fitness optimizers.

--- On Mon, 8/25/08, Matt Mahoney [EMAIL PROTECTED] wrote:

Kittens play with small moving objects because it teaches
them to be better hunters. Play is not a goal in itself, but
a subgoal that may or may not be a useful part of a
successful AGI design.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 8:59:06 AM
Subject: Re: [agi] How Would You Design a Play Machine?

Brad,

That's sad.  The suggestion is for a mental exercise,
not a full-scale
project. And play is fundamental to the human mind-and-body
- it
characterises our more mental as well as more physical
activities -
drawing, designing, scripting, humming and singing scat in
the bath,
dreaming/daydreaming & much more. It is
acknowledged by
psychologists to be an essential dimension of creativity -
which is the goal
of AGI. It is also an essential dimension of animal
behaviour and animal
evolution.  Many of the smartest companies have their play
areas.

But I'm not aware of any program or computer design for
play - as distinct
from elaborating systematically and methodically or
genetically on
themes - are you? In which case it would be good to think
about one - it'll
open your mind & give you new perspectives.

This should be a group where people are not too frightened
to play around
with ideas.

Brad: Mike Tintner wrote: ...how would you design
a play machine - a
machine
 that can play around as a child does?

 I wouldn't.  IMHO that's just another waste of
time and effort (unless
 it's being done purely for research purposes).
It's a diversion of
 intellectual and financial resources that those
serious about building an
 AGI any time in this century cannot afford.  I firmly
believe if we had
 not set ourselves the goal of developing human-style
intelligence
 (embodied or not) fifty years ago, we would already
have a working,
 non-embodied AGI.

 Turing was wrong (or at least he was wrongly
interpreted).  Those who
 extended his imitation test to humanoid, embodied AI
were even more wrong.
 We *do not need embodiment* to be able to build a
powerful AGI that can be
 of immense utility to humanity while also surpassing
human intelligence in
 many ways.  To be sure, we want that AGI to be
empathetic with human
 intelligence, but we do not need to make it equivalent
(i.e., just like
 us).

 I don't want to give the impression that a
non-Turing intelligence will be
 easy to design and build.  It will probably require at
least another
 twenty years of two steps forward, one step
back effort.  So, if we are
 going to develop a non-human-like, non-embodied AGI
within the first
 quarter of this century, we are going to have to
just say no to Turing
 and start to use human intelligence as an inspiration,
not a destination.

 Cheers,

 Brad



 Mike Tintner wrote:
 Just a v. rough, first thought. An essential
requirement of  an AGI is
 surely that it must be able to play - so how would
you design a play
 machine - a machine that can play around as a
child does?

 You can rewrite the brief as you choose, but my
first thoughts are - it
 should be able to play with
 a) bricks
 b)plasticine
 c) handkerchiefs/ shawls
 d) toys [whose function it doesn't know]
 and
 e) draw.

 Something that should be soon obvious

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 11:17 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 I'm not saying play isn't adaptive. I'm saying that kittens play not
 because they're optimizing their fitness, but because they're intrinsically
 motivated to (it feels good). The reason it feels good has nothing to do
 with the kitten, but with the evolutionary process that designed that 
 adaptation.

 It may seem like a minor distinction, but it helps to understand why,
 for example, people have sex with birth control. We don't have sex to
 maximize our genetic fitness, but because it feels good (or a thousand
 other reasons). We are adaptation executors, not fitness optimizers.


The word "because" was misplaced. Cats hunt mice because they were
designed to, and they were designed to because it's adaptive. Saying
that a particular cat instance hunts because it feels good is not very
explanatory, like saying that it hunts because such is its nature or
because the laws of physics drive the cat's physical configuration
through the hunting dynamics. Evolutionary design, on the other hand,
is the point of explanation for the complex adaptation, the simple
regularity in Nature that causally produced the phenomenon we are
explaining.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Mike,

As may be obvious by now, I'm not that interested in designing cognition. I'm 
interested in designing simulations in which intelligent behavior emerges.

But the way you're using the word 'adapt', in a cognitive sense of playing with 
goals, is different from the way I was using 'adaptation', which is the result 
of an evolutionary process. 

Terren

--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:

 From: Mike Tintner [EMAIL PROTECTED]
 Subject: Re: [agi] How Would You Design a Play Machine?
 To: agi@v2.listbox.com
 Date: Monday, August 25, 2008, 3:41 PM
 Terren,
 
 Your broad distinctions are fine, but I feel you are not
 emphasizing the 
 area of most interest for AGI, which is *how* we adapt
 rather than why. 
 Interestingly, your blog uses the example of a screwdriver
 - Kauffman uses 
 the same in Chap 12 of Reinventing the Sacred as an example
 of human 
 creativity/divergence - i.e. our capacity to find infinite
 uses for a 
 screwdriver.
 
 Do we think we could write an algorithm, an effective
 procedure, to 
 generate a possibly infinite list of all possible uses of
 screwdrivers in 
 all possible circumstances, some of which do not yet exist?
 I don't think we 
 could get started.
 
 What emerges here, v. usefully, is that the
 capacity for play overlaps 
 with classically-defined, and a shade more rigorous and
 targeted,  divergent 
 thinking, e.g. find as many uses as you can for a
 screwdriver, rubber teat, 
 needle etc.
 
 ...How would you design a divergent (as well as play)
 machine that can deal 
 with the above open-ended problems? (Again surely essential
 for an AGI)
 
 With full general intelligence, the problem more typically
 starts with the 
 function-to-be-fulfilled - e.g. how do you open this paint
 can? - and only 
 then do you search for a novel tool, like a screwdriver or
 another can lid.
 
 
 
 Terren: Actually, kittens play because it's fun.
 Evolution has equipped 
 them with the rewarding sense of fun because it optimizes
 their fitness as 
 hunters. But kittens are adaptation executors, evolution is
 the fitness 
 optimizer. It's a subtle but important distinction.
 
  See
 http://www.overcomingbias.com/2007/11/adaptation-exec.html
 
  Terren
 
  They're adaptation executors, not fitness
 optimizers.
 
  --- On Mon, 8/25/08, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  Kittens play with small moving objects because it
 teaches
  them to be better hunters. Play is not a goal in
 itself, but
  a subgoal that may or may not be a useful part of
 a
  successful AGI design.
 
   -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 
  - Original Message 
  From: Mike Tintner
 [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Monday, August 25, 2008 8:59:06 AM
  Subject: Re: [agi] How Would You Design a Play
 Machine?
 
  Brad,
 
  That's sad.  The suggestion is for a mental
 exercise,
  not a full-scale
  project. And play is fundamental to the human
 mind-and-body
  - it
  characterises our more mental as well as more
 physical
  activities -
  drawing, designing, scripting, humming and singing
 scat in
  the bath,
  dreaming/daydreaming & much more. It is
 generally
  acknowledged by
  psychologists to be an essential dimension of
 creativity -
  which is the goal
  of AGI. It is also an essential dimension of
 animal
  behaviour and animal
  evolution.  Many of the smartest companies have
 their play
  areas.
 
  But I'm not aware of any program or computer
 design for
  play - as distinct
  from elaborating systematically and methodically
 or
  genetically on
  themes - are you? In which case it would be good
 to think
  about one - it'll
  open your mind & give you new perspectives.
 
  This should be a group where people are not too
 frightened
  to play around
  with ideas.
 
  Brad: Mike Tintner wrote: ...how would
 you design
  a play machine - a
  machine
   that can play around as a child does?
  
   I wouldn't.  IMHO that's just another
 waste of
  time and effort (unless
   it's being done purely for research
 purposes).
  It's a diversion of
   intellectual and financial resources that
 those
  serious about building an
   AGI any time in this century cannot afford. 
 I firmly
  believe if we had
   not set ourselves the goal of developing
 human-style
  intelligence
   (embodied or not) fifty years ago, we would
 already
  have a working,
   non-embodied AGI.
  
   Turing was wrong (or at least he was wrongly
  interpreted).  Those who
   extended his imitation test to humanoid,
 embodied AI
  were even more wrong.
   We *do not need embodiment* to be able to
 build a
  powerful AGI that can be
   of immense utility to humanity while also
 surpassing
  human intelligence in
   many ways.  To be sure, we want that AGI to
 be
  empathetic with human
   intelligence, but we do not need to make it
 equivalent
  (i.e., just like
   us).
  
   I don't want to give the impression that
 a
  non-Turing intelligence will be
   easy to design and build

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

 Saying
 that a particular cat instance hunts because it feels good
 is not very explanatory

Even if I granted that, saying that a particular cat plays to increase its 
hunting skills is incorrect. It's an important distinction because by analogy 
we must talk about particular AGI instances. When we talk about, for instance, 
whether an AGI will play, will it play because it's trying to optimize its 
fitness, or because it is motivated in some other way?  We have to be that 
precise if we're talking about design.

Terren

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 The word because was misplaced. Cats hunt mice
 because they were
 designed to, and they were designed to, because it's
 adaptive. Saying
 that a particular cat instance hunts because it feels good
 is not very
 explanatory, like saying that it hunts because such is its
 nature or
 because the laws of physics drive the cat physical
 configuration
 through the hunting dynamics. Evolutionary design, on the
 other hand,
 is the point of explanation for the complex adaptation, the
 simple
 regularity in the Nature that causally produced the
 phenomenon we are
 explaining.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 
 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com


  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Jonathan El-Bizri
On Mon, Aug 25, 2008 at 12:52 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:


 The word because was misplaced. Cats hunt mice because they were
 designed to, and they were designed to, because it's adaptive.


And the adaptation they have evolved into uses a pleasure process as a
motivator.

Saying
 that a particular cat instance hunts because it feels good is not very
 explanatory, like saying that it hunts because such is its nature or
 because the laws of physics drive the cat physical configuration
 through the hunting dynamics.


Not at all. It describes the process that drives a cat to hunt, and also to
practice - i.e., play hunting. This is opposed to hunting due to reflex,
like, say, a Venus flytrap. I am reminded of a possibly apocryphal story
about Picasso:

--
A woman asks Picasso to draw something for her on a napkin. He puts down a
few lines, and says, "That will be $10,000."

"What!" says the woman, "That only took you five seconds to draw."

"No, that took me 40 years to draw."
-

Cats have evolved to see the process as a goal or reward in itself, over and
above the requirements for food: If cats just hunted because they were
hungry, they would never spend their downtime during kittenhood practicing
and watching other cats hunt, and wouldn't be any good at hunting.

And the result has many more advantages than simply optimising its hunting
strategies: it has evolved a cat that bonds with its fellow kittens, learns
to cooperate, and ultimately, becomes a better hunter because it sees the
process of hunting as a game.

Jonathan El-Bizri



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 12:19 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Saying
 that a particular cat instance hunts because it feels good
 is not very explanatory

 Even if I granted that, saying that a particular cat plays to increase
 its hunting skills is incorrect. It's an important distinction because
 by analogy we must talk about particular AGI instances. When we
 talk about, for instance, whether an AGI will play, will it play because
 it's trying to optimize its fitness, or because it is motivated in some
 other way?  We have to be that precise if we're talking about design.

Of course. Different optimization processes at work, different causes.
Let's say (ignoring whether it's actually so, for the sake of illustration)
that the cat plays because play provides it with a developmental advantage
through training its nervous system, giving it better hunting skills,
and so an adaptation that drives the cat to play was chosen *by
evolution*. The cat doesn't play because *it* reasons that playing would give
it superior hunting skills; it plays because of the emotional drive
installed by evolution (or a more general drive inherent in its
cognitive dynamics). When an AGI plays to get better at some skill, it
may be either a result of the programmer's advice, in which case play
happens because the *programmer* says so, or a result of its own
conclusion that play helps with skills, and if skills are desirable,
play inherits the desirability. In the last case, play happens because
the AGI decides so, which in turn happens because there is a causal link
from play to a desirable state of having superior skills.
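
As a toy illustration of that last case (all names and numbers here are
invented, not a claim about any particular AGI design), desirability can be
propagated backward along believed causal links, so that play is valued only
because of what it is believed to lead to:

causal_links = {
    "play": ["better_hunting_skill"],     # believed effects of each action or state
    "better_hunting_skill": ["food"],
}
intrinsic_value = {"food": 1.0}           # states valued for their own sake

def desirability(node, discount=0.9, seen=frozenset()):
    """Value of a node: its intrinsic value plus discounted value of its believed effects."""
    if node in seen:                      # guard against circular beliefs
        return 0.0
    value = intrinsic_value.get(node, 0.0)
    for effect in causal_links.get(node, ()):
        value += discount * desirability(effect, discount, seen | {node})
    return value

# Play has no intrinsic value here, yet it scores 0.81 purely by inheritance.
print(round(desirability("play"), 3))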

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren: As may be obvious by now, I'm not that interested in designing 
cognition. I'm interested in designing simulations in which intelligent 
behavior emerges. But the way you're using the word 'adapt', in a cognitive 
sense of playing with goals, is different from the way I was using 
'adaptation', which is the result of an evolutionary process.


Two questions: 1)  how do you propose that your simulations will avoid the 
kind of criticisms you've been making of other systems of being too guided 
by programmers' intentions? How can you set up a simulation without making 
massive, possibly false assumptions about the nature of evolution?


2) Have you thought about the evolution of play in animals?

(We play BTW with just about every dimension of activities - goals, rules, 
tools, actions, movements.. ).






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

If an AGI played because it recognized that it would improve its skills in some 
domain, then I wouldn't call that play, I'd call it practice. Those are 
overlapping but distinct concepts. 

Play, as distinct from practice, is its own reward - the reward felt by a 
kitten. The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play, the sense in which playing fosters adaptivity 
of goals. If you really want to interpret goal-satisfaction in play, it must be 
a meta-goal of mastering one's environment - and that is such a broadly defined 
goal that I don't see how one could specify it to a seed AI. I believe that's 
why evolution used the trick of making it fun.

Terren

--- On Mon, 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Of course. Different optimization processes at work,
 different causes.
 Let's say (ignoring if it's actually so for the
 sake of illustration)
 that cat plays because it provides it with developmental
 advantage
 through training its nervous system, giving it better
 hunting skills,
 and so an adaptation that drives cat to play was chosen *by
 evolution*. Cat doesn't play because *it* reasons that
 it would give
 it superior hunting skills, cat plays because of the
 emotional drive
 installed by evolution (or a more general drive inherent in
 its
 cognitive dynamics). When AGI plays to get better at some
 skill, it
 may be either a result of programmer's advice, in which
 case play
 happens because *programmer* says so, or as a result of its
 own
 conclusion that play helps with skills, and if skills are
 desirable,
 play inherits the desirability. In the last case, play
 happens because
 AGI decides so, which in turn happens because there is a
 causal link
 from play to a desirable state of having superior skills.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 
 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com


  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Mike,

Comments below...

--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Two questions: 1)  how do you propose that your simulations
 will avoid the 
 kind of criticisms you've been making of other systems
 of being too guided 
 by programmers' intentions? How can you set up a
 simulation without making 
 massive, possibly false assumptions about the nature of
 evolution?

Because I don't care about individual agents. Agents that fail to meet the 
requirements the environment demands, die. There's going to be a lot of death 
in my simulations. The risk I take is that nothing ever survives and I fail to 
demonstrate the feasibility of the approach.
 
 2) Have you thought about the evolution of play in animals?
 
 (We play BTW with just about every dimension of
 activities - goals, rules, 
 tools, actions, movements.. ).

Not much. Play is such an advanced concept in intelligence, and my aims are far 
lower than that.  I don't realistically expect to survive to see the evolution 
of human intelligence using the evolutionary approach I'm talking about.

Terren


  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 1:26 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 If an AGI played because it recognized that it would improve its skills
 in some domain, then I wouldn't call that play, I'd call it practice. Those
 are overlapping but distinct concepts.

 Play, as distinct from practice, is its own reward - the reward felt by a
 kitten. The spirit of Mike's question, I think, was about identifying the
 essential goalless-ness of play, the sense in which playing fosters
 adaptivity of goals. If you really want to interpret goal-satisfaction in 
 play,
 it must be a meta-goal of mastering one's environment - and that is such
 a broadly defined goal that I don't see how one could specify it to a seed
 AI. I believe that's why evolution used the trick of making it fun.


What do you mean by "trick"? The fun of playing is evolutionarily encoded,
no tricks. You can try to encode it into a seed AI by adding a
reference to an actual kitten in the right way, saying "fun is that
thing over there!" without saying what it is explicitly, and providing
this AI with a kitten. How to do it technically is of course a
Friendly AI-complete problem, but its solution doesn't need to include
all the fine points of the fun concept itself. On this subject, see:

http://www.overcomingbias.com/2008/08/mirrors-and-pai.html
-- in what sense AI can be a mirror for a complex concept instead of
a pencil sketch explicitly hacked together by programmers;

http://www.overcomingbias.com/2008/08/unnatural-categ.html
-- why morality concept needs to be transferred in all details, and
can't be learned from few examples;

http://www.overcomingbias.com/2008/08/computations.html
-- what a real-life concept may look like.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Jonathan El-Bizri
On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] wrote:


 If an AGI played because it recognized that it would improve its skills in
 some domain, then I wouldn't call that play, I'd call it practice. Those are
 overlapping but distinct concepts.


The evolution of play is how nature has convinced us to practice skills of a
general but un-predefinable type. Would it make sense to think of practice
as the narrow AI version of play?

Part of play is the specification of arbitrary goals and limitations within
the overlying process. Games without rules aren't 'fun' to people or
kittens.



 Play, as distinct from pactice, is its own reward - the reward felt by a
 kitten. The spirit of Mike's question, I think, was about identifying the
 essential goalless-ness of play, the sense in which playing fosters
 adaptivity of goals. If you really want to interpret goal-satisfaction in
 play, it must be a meta-goal of mastering one's environment - and that is
 such a broadly defined goal that I don't see how one could specify it to a
 seed AI. I believe that's why evolution used the trick of making it fun.


But making it 'fun' doesn't answer the question of what the implicit goals
are. Piaget's theories of assimilation can bring us closer to this; I am of
the mind that they encompass at least part of the intellectual drive toward
play and investigation.

Jonathan El-Bizri



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread David Hart
Where is the hard dividing line between designed cognition and designed
simulation (where intelligent behavior is intended to be emergent in both
 cases)? Even if an approach is taken where everything possible is done to allow
a 'natural' type evolution of behavior, the simulation design and parameters
will still influence the outcome, sometimes in unknown and unknowable ways.
Any amount of guidance in such a simulation (e.g. to help avoid so many of
the useless eddies in a fully open-ended simulation) amounts to designed
cognition.

That being said, I'm particularly interested in the OCF being used as a
platform for 'pure simulation' (Alife and more sophisticated game
theoretical simulations), and finding ways to work the resulting experience
and methods into the OCP design, which is itself a hybrid approach (designed
cognition + designed simulation) intended to take advantage of the benefits
of both.

-dave

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Terren: As may be obvious by now, I'm not that interested in designing
 cognition. I'm interested in designing simulations in which intelligent
 behavior emerges. But the way you're using the word 'adapt', in a cognitive
 sense of playing with goals, is different from the way I was using
 'adaptation', which is the result of an evolutionary process.

 Two questions: 1)  how do you propose that your simulations will avoid the
 kind of criticisms you've been making of other systems of being too guided
 by programmers' intentions? How can you set up a simulation without making
 massive, possibly false assumptions about the nature of evolution?

 2) Have you thought about the evolution of play in animals?

 (We play BTW with just about every dimension of activities - goals,
 rules, tools, actions, movements.. ).





 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren: The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play..


Well, the key thing for me (although it was, technically, a play-ful 
question :) )  is the distinction between programmed/planned exploration of 
a basically known environment and ad hoc exploration of a deeply unknown 
environment. In many ways, it follows on from my previous thread on 
Philosophy of Learning in AGI, which asked - how do you learn an unfamiliar 
subject/skill/activity - could any definite set of principles guide you? 
(This, I presume, is what Ben is somehow dealing with).


If you're an infant, or even often an adult, you don't know what this 
strange object is for or how to manipulate it - so how do you go about 
moving it and testing its properties? How do you go about moving your hand 
(or manipulator if you're a robot)? [I'd be interested in Bob M's input 
here] - exploring its properties and capacities for movement too? What are 
the principles if any that should constrain you?


Equally, if you're exploring an environment - a new kind of room, or a new 
kind of territory like a garden, wood, forest, how do you go about moving 
through it, deciding on paths, orienting yourself, mapping etc.?  Remember 
that these are initially alien environments, so the adult or AGI equivalent 
is exploring a strange planet, or  videogame world with alien kinds of laws.


Play - divergent thinking - exploration - these are all overlapping 
dimensions of a general intelligence developing its intelligence, and 
central to AGI.


And for the more abstractly inclined, I should point out that these 
questions easily translate into the most abstract forms - like how do you 
explore a new area of, or for, logic, or maths? How do you go about 
exploring, or developing, a maths of, say, abstract art?






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson

Jonathan El-Bizri wrote:



On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:



If an AGI played because it recognized that it would improve its
skills in some domain, then I wouldn't call that play, I'd call it
practice. Those are overlapping but distinct concepts.


The evolution of play is how nature has convinced us to practice 
skills of a general but un-predefinable type. Would it make sense to 
think of practice as the narrow AI version of play?
No.  Because in practice one is honing skills with a definite chosen 
purpose (and usually no instinctive guide), whereas in play one is 
honing skills without the knowledge that one is doing so.  It's very 
different, e.g., to play a game of chess, and to practice playing chess.


Part ...

Jonathan El-Bizri




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi Johnathon,

I disagree, play without rules can certainly be fun. Running just to run, 
jumping just to jump. Play doesn't have to be a game, per se. It's simply a 
purposeless expression of the joy of being alive. It turns out of course that 
play is helpful for achieving certain goals that we interpret as being 
installed by evolution. But we don't play to achieve goals, we do it because 
it's fun. As Mike said, this very discussion is a kind of play, and while we 
can certainly identify goals that we try to accomplish in the course of hashing 
these things out, there's an element in it, for me anyway, of just doing it 
because I love doing it. I suspect that's true for others here. I hope so, 
anyway.

Of course, those that are dogmatically functionalist will view such language as 
'fun' as totally irrelevant. That's ok. The cool thing about AI is that 
eventually, it will shed light on whether subjective experience (to 
functionalists, an inconvenience to be done away with) is critical to 
intelligence.

To address your second question, the implicit goal is always reproduction. If 
there is one basic reductionist element to all of life, it is that. Making play 
fun is a way of getting us to play at all, so that we are more likely to 
reproduce. There's a limit however to the usefulness and accuracy of reducing 
everything to reproduction. 

Terren

--- On Mon, 8/25/08, Jonathan El-Bizri [EMAIL PROTECTED] wrote:
Part of play is the specification of arbitrary goals and limitations within the 
overlying process. Games without rules aren't 'fun' to people or kittens. 

 


Play, as distinct from pactice, is its own reward - the reward felt by a 
kitten. The spirit of Mike's question, I think, was about identifying the 
essential goalless-ness of play, the sense in which playing fosters adaptivity 
of goals. If you really want to interpret goal-satisfaction in play, it must be 
a meta-goal of mastering one's environment - and that is such a broadly defined 
goal that I don't see how one could specify it to a seed AI. I believe that's 
why evolution used the trick of making it fun.



But making it 'fun' doesn't answer the question of what the implicit goals are. 
Piaget's theories of assimilation can bring us closer to this, I am of the mind 
that they encompass at least part of the intellectual drive toward play and 
investigation.


Jonathan El-Bizri






  

  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam

Hi David,

 Any amount of guidance in such a simulation (e.g. to help avoid so many
of the useless
 eddies in a fully open-ended simulation) amounts to
designed cognition.


No, it amounts to guided evolution. The difference between a designed 
simulation and a designed cognition is the focus on the agent itself. In the 
latter, you design the agent and turn it loose, testing it to see if it does 
what you want it to. In the former (the simulation), you turn a bunch of 
candidate agents loose and let them compete to do what you want them to. The 
ones that don't, die. You're specifying the environment, not the agent. If you 
do it right, you don't even have to specify the goals.  With designed 
cognition, you must specify the goals, either directly (un-embodied), or in 
some meta-fashion (embodied). 
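
A minimal sketch, in Python, of that specify-the-environment-not-the-agent 
loop (the 1-D world, the right_bias parameter and all the numbers are 
invented purely for illustration, not a description of the simulations being 
discussed):

import random

def random_agent():
    # An "agent" is nothing but a bias toward stepping right in a 1-D world.
    return {"right_bias": random.random()}

def survives(agent, steps=30, goal=10):
    """The environment's survival rule is the only thing the designer specifies."""
    position = 0
    for _ in range(steps):
        position += 1 if random.random() < agent["right_bias"] else -1
    return position >= goal

def mutate(agent):
    bias = agent["right_bias"] + random.gauss(0, 0.1)
    return {"right_bias": min(1.0, max(0.0, bias))}

population = [random_agent() for _ in range(50)]
for generation in range(20):
    alive = [a for a in population if survives(a)]          # most candidates die
    if not alive:
        alive = [random_agent() for _ in range(5)]          # nothing survived: reseed
    population = [mutate(random.choice(alive)) for _ in range(50)]

print("mean right_bias after selection:",
      round(sum(a["right_bias"] for a in population) / len(population), 2))

The only designed input is the survival test; whatever policy the surviving 
agents end up with emerges from selection, not from a goal anyone wrote down.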

Terren

--- On Mon, 8/25/08, David Hart [EMAIL PROTECTED] wrote:
From: David Hart [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Monday, August 25, 2008, 6:04 PM

Where is the hard dividing line between designed cognition and designed 
simulation (where intelligent behavior is intended to be emergent in both 
cases)? Even if an approach is taken where everything possible is done allow a 
'natural' type evolution of behavior, the simulation design and parameters will 
still influence the outcome, sometimes in unknown and unknowable ways. Any 
amount of guidance in such a simulation (e.g. to help avoid so many of the 
useless eddies in a fully open-ended simulation) amounts to designed cognition.


That being said, I'm particularly interested in the OCF being used as a 
platform for 'pure simulation' (Alife and more sophisticated game theoretical 
simulations), and finding ways to work the resulting experience and methods 
into the OCP design, which is itself a hybrid approach (designed cognition + 
designed simulation) intended to take advantage of the benefits of both.


-dave

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Terren: As may be obvious by now, I'm not that interested in designing 
cognition. I'm interested in designing simulations in which intelligent 
behavior emerges. But the way you're using the word 'adapt', in a cognitive 
sense of playing with goals, is different from the way I was using 
'adaptation', which is the result of an evolutionary process.




Two questions: 1)  how do you propose that your simulations will avoid the kind 
of criticisms you've been making of other systems of being too guided by 
programmers' intentions? How can you set up a simulation without making 
massive, possibly false assumptions about the nature of evolution?




2) Have you thought about the evolution of play in animals?



(We play BTW with just about every dimension of activities - goals, rules, 
tools, actions, movements.. ).


















  

  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com