To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 9:18 AM
Subject: Re: [agi] How Would You Design a Play Machine?
To an unembodied agent, the "concept" of self is indistinguishable from any
other concept it works with. I use "concept" in quotes because to the
unembodied agent, it is not a concept
David: I know that some systems (specifically systems without models or a
lot of human interaction) have had grounding problems, but your statement
below seems to state something that is far from proven fact.
Your conclusions about the concept of self and the unembodied agent equate
"unembodied" with "ungrounded"
--- On Fri, 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
I don't see why an un-embodied system couldn't successfully use the
concept of self in its models. It's just another concept, except that
it's linked to real features of the system.
To an unembodied agent, the concept of self is
Terren,
to the unembodied agent, it is not a concept at all, but merely a symbol with
no semantic context attached
It's an issue when trying to learn from NL only, but you can inject
semantics (critical for grounding) when teaching through a
formal_language[-based interface], get the thinking
2008/8/27 Mike Tintner [EMAIL PROTECTED]:
You on your side insist that you don't have to have such precisely defined
goals - your intuitive (and by definition, ill-defined) sense of
intelligence will do.
As a child I don't believe that I set out with the goal of becoming a
software developer.
Just in case there is any confusion, "ill-defined" in this particular
context is in no way pejorative. The crux of a General Intelligence for me
is that it is necessarily a machine that works with more or less ill-defined
goals to solve ill-structured problems. Bob's self-description is to a
2008/8/28 Mike Tintner [EMAIL PROTECTED]:
(I still think of course that current AGI should have a not-so-ill
structured definition of its problem-solving goals).
It's certainly true that an AGI could be endowed with well-defined
goals. Some people also begin from an early age with well
Hi Jiri,
Comments below...
--- On Thu, 8/28/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
That's difficult to reconcile if you don't believe embodiment is all that
important.
Not really. We might be qualia-driven, but for our AGIs it's perfectly
ok (and only natural) to be driven by given
Eric,
It was a real-life near-death experience (auto accident).
I'm aware of the tryptamine compound and its presence in hallucinogenic
drugs such as LSD. According to Wikipedia, it is not related to the NDE
drug of choice, which is ketamine (Ketalar or ketamine HCl -- street name
back in
Terren,
is not embodied at all, in which case it is a mindless automaton
Researchers and philosophers define mind and intelligence in many
different ways, so their classifications of particular AI systems
differ. What really counts, though, are the problem-solving abilities of
the system. Not how it's
sense
(like a chess program).
Terren
--- On Thu, 8/28/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
From: Jiri Jelinek [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Thursday, August 28, 2008, 10:39 PM
Terren,
is not embodied at all
Terren,
I don't think any kind of algorithmic approach, which is to say, un-embodied,
will ever result in conscious intelligence. But an embodied agent that is
able to construct ever-deepening models of its experience such that it
eventually includes itself in its models, well, that is
Brad, scary stuff. Dissociatives/NMDA antagonists were secret option
number three! ;D
On 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
Terren,
I don't think any kind of algorithmic approach, which is to say,
un-embodied, will ever result in conscious intelligence. But an embodied
agent that
Terren,
OK, you hooked me. A virgin is something I haven't been called (or even
been associated with) in about forty-five years. So, I feel compelled to
defend my non-virginity at all costs. I'm 58 now. You do the math (don't
forget to subtract for the 30 years I was married). ;-) My
Actually, exploring this further - human thinking is v. fundamentally different
from the computational kind or most AGI conceptions - because it is massively
and structurally metacognitive, self-examining (which comes under being a
machine that works by self-control).
Interestingly, Minsky's
If I do my job right, my AGI will have no sense of self.
I have doubts that is possible, though I'm sure you can make an AGI with a
very different sense of self than any human has.
My reasoning:
1)
To get to a high level of intelligence likely requires some serious
self-analysis and
An interesting thing to keep in mind when discussing play, though, is
**subgoal alienation**
When G1 arises as a subgoal of G, it may nevertheless happen that G1
survives as a goal even if G disappears, or that G1 remains important even
if G loses importance. One may wish to design AGI systems
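Ben's subgoal-alienation idea can be sketched in toy code. The `Goal` class, the importance values, and the update rules below are my own illustrative assumptions, not part of any actual AGI design:

```python
# Toy sketch of subgoal alienation: G1 is created to serve G, but its
# importance is maintained independently thereafter, so it can outlive G.
# All names and rules here are illustrative only.

class Goal:
    def __init__(self, name, importance, parent=None):
        self.name = name
        self.importance = importance
        self.parent = parent  # supergoal that spawned this goal, if any

active_goals = {}

def spawn_subgoal(parent, name):
    # A subgoal inherits its initial importance from its parent...
    g1 = Goal(name, parent.importance, parent)
    active_goals[name] = g1
    return g1

def drop_goal(name):
    # ...but removing the parent does NOT remove or down-weight the
    # subgoal: its importance is now tracked on its own.
    del active_goals[name]

g = Goal("hunt", importance=0.9)
active_goals["hunt"] = g
g1 = spawn_subgoal(g, "stalk-moving-objects")  # i.e., play
drop_goal("hunt")

# The subgoal survives the disappearance of its supergoal:
print("stalk-moving-objects" in active_goals)  # True
```

Whether one wants this behavior is exactly the design question Ben raises: the alienated subgoal keeps its importance even though nothing links it to a live supergoal anymore.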
I wrote a blog post enlarging a little on the ideas I developed in my
response to the playful AGI thread...
See
http://multiverseaccordingtoben.blogspot.com/2008/08/logic-of-play.html
Some of the new content I put there:
Still, I have to come back to the tendency of play to give rise to goal
Ben,
Again, this provokes some playful developments.
As I think you may have more or less noted, the goals of the whole thread
and of most people responding are somewhat ill-defined (which in this
context is fine).
(And the following relates to the adjacent thread too). The human mind
Hi,
Err ... I don't have to mention that I didn't stay dead, do I? Good.
Was this the archetypal death/rebirth experience found in, for instance,
tryptamine ecstasy, or a real-life near-death experience?
Eric B
Admittedly I don't have any proof, but I don't see any reason to doubt
my assertions. There's nothing in them that appears to me to be
specific to any particular implementation of an (almost) AGI.
OTOH, you didn't define play, so I'm still presuming that you accept the
definition that I
On Tue, Aug 26, 2008 at 12:09 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Pleasure and pain are peculiar aspects of embodied experience - strictly
speaking they are motivators and de-motivators, but what actually motivates us
humans is the subjective feel ...
That's difficult to reconcile if you
2008/8/24 Mike Tintner [EMAIL PROTECTED]:
Just a v. rough, first thought. An essential requirement of an AGI is
surely that it must be able to play - so how would you design a play machine
- a machine that can play around as a child does?
Play may be about characterising the state space.
On Tue, Aug 26, 2008 at 8:09 AM, Terren Suydam [EMAIL PROTECTED] wrote:
I know we've gotten a little off-track here from play, but the really
interesting question I would pose to you non-embodied advocates is:
how in the world will you motivate your creation? I suppose that you
won't. You'll
Bob M: Play may be about characterising the state space. As an embodied
entity you need to know which areas of the space are relatively
predictable and which are not. Armed with this knowledge, when
planning an action in future you can make a reasonable estimate of the
possible range of
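Bob's idea, learning which regions of the state space are predictable through play, could be sketched roughly as follows. The discretised regions and the variance measure are my own assumptions, not anything Bob specified:

```python
# Toy sketch: track outcome variance per region of a discretised state
# space. Low-variance regions are "predictable"; planning can then bound
# the expected range of outcomes of an action in a given region.
from collections import defaultdict
from statistics import pvariance

outcomes = defaultdict(list)  # region -> observed outcomes

def observe(region, outcome):
    outcomes[region].append(outcome)

def predictability(region):
    obs = outcomes[region]
    if len(obs) < 2:
        return None  # not enough play in this region yet
    return pvariance(obs)  # lower variance = more predictable

# "Play": sample both regions cheaply before the knowledge matters.
for o in (1.0, 1.1, 0.9, 1.0):
    observe("ground", o)
for o in (0.2, 3.5, -1.0, 2.2):
    observe("ice", o)

print(predictability("ground") < predictability("ice"))  # True
```

The agent that has played in both regions now "knows" that actions on ground have a tight outcome range while actions on ice do not, which is exactly the estimate Bob describes using at planning time.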
On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Is anyone trying to design a self-exploring robot or computer? Does this
principle have a name?
Interestingly, some views on AI advocate specifically prohibiting
self-awareness and self-exploration as a precaution against the development
of
Terren: I know we've gotten a little off-track here from play, but the really
interesting question I would pose to you non-embodied advocates is:
how in the world will you motivate your creation?
Again, I think you're missing out the most important aspect of having a body
(is there a good
Note that in this view play has nothing to do with having a body. An AGI
concerned solely with mathematical theorem proving would also be able to
play...
On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
About play... I would argue that it emerges in any sufficiently
generally-intelligent system that is faced with goals that are difficult
for it ... as a consequence of other general cognitive processes...
If an intelligent system has a goal G which is time-consuming or difficult
to achieve ...
Examples of the kind of similarity I'm thinking of:
-- The analogy btw chess or go and military strategy
-- The analogy btw roughhousing and actual fighting
In logical terms, these are intensional rather than extensional similarities
ben
On Tue, Aug 26, 2008 at 9:38 AM, Mike Tintner [EMAIL
On Tue, Aug 26, 2008 at 2:38 PM, Mike Tintner [EMAIL PROTECTED] wrote:
The be-all and end-all here though, I presume, is similarity. Is it a
logic-al concept? Finding similarities - rough likenesses as opposed to
rational, precise, logicomathematical commonalities - is actually, I would
argue,
That's a fair criticism. I did explain what I mean by embodiment in a previous
post, and what I mean by autonomy in the article of mine I referenced. But I do
recognize that in both cases there is still some ambiguity, so I will withdraw
the question until I can formulate it in more concise
but self-aware agents that can't modify themselves are much less worrying.
They're all around us.
--- On Tue, 8/26/08, David Hart [EMAIL PROTECTED] wrote:
From: David Hart [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Tuesday, August 26, 2008
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 6:49 AM
Subject: Re: [agi] How Would You Design a Play Machine?
Examples of the kind of similarity I'm thinking of:
-- The analogy btw chess or go and military strategy
Mike,
So you feel that my disagreement with your proposal is sad? That's quite
an ego you have there, my friend. You asked for input and you got it. The
fact that you didn't like my input doesn't make me or the effort I spent
composing it sad. I haven't read all of the replies to your
Charles,
By now you've probably read my reply to Tintner's reply. I think that
probably says it all (and then some!).
What you say holds IFF you are planning on building an airplane that flies
just like a bird. In other words, if you are planning on building a
human-like AGI (that could,
Mike Tintner wrote: ...how would you design a play machine - a machine
that can play around as a child does?
I wouldn't. IMHO that's just another waste of time and effort (unless it's
being done purely for research purposes). It's a diversion of intellectual
and financial resources that
Brad,
That's sad. The suggestion is for a mental exercise, not a full-scale
project. And play is fundamental to the human mind-and-body - it
characterises our more mental as well as more physical activities -
drawing, designing, scripting, humming and singing scat in the bath,
Matt: Kittens play with small moving objects because it teaches them to be
better hunters. Play is not a goal in itself, but a subgoal that may or may
not be a useful part of a successful AGI design.
Certainly, crude imitation of, and preparation for, adult activities is one
aspect of play.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 8:59:06 AM
Subject: Re: [agi] How Would You Design a Play Machine?
Brad
On Mon, Aug 25, 2008 at 9:22 PM, Terren Suydam [EMAIL PROTECTED] wrote:
Actually, kittens play because it's fun. Evolution has equipped them with the
rewarding sense of fun because it optimizes their fitness as hunters. But
kittens are adaptation executors, evolution is the fitness
Mike,
I agree with Brad somewhat, because I do not think copying human (or
animal) intellect is the goal. It is a means to the end of general
intelligence.
However, that certainly doesn't stop me from participating in a
thought experiment.
I think the big thing with artificial play is figuring
Play is a form of strategy testing in an environment that doesn't
severely penalize failures. As such, every AGI will necessarily spend a
lot of time playing.
If you have some other particular definition, then perhaps I could
understand your response if you were to define the term.
OTOH, if
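A minimal sketch of that view of play, strategy testing where failures cost nothing, under my own made-up sandbox payoff function (nothing in the code comes from the thread itself):

```python
# Toy sketch: "play" as strategy testing in a sandbox where failures are
# free, before committing to the best-found strategy in a real, costly
# environment. The payoff function is an illustrative stand-in.

def sandbox_trial(strategy):
    # Failure in the sandbox is free: we just observe the score.
    return -(strategy - 0.7) ** 2  # payoff unknown to the agent

# Play: test many candidate strategies, since bad outcomes cost nothing.
candidates = [i / 50 for i in range(51)]
best = max(candidates, key=sandbox_trial)
print(best)  # 0.7
```

Because the sandbox imposes no penalty, exhaustive experimentation is rational, which is one way to read the claim that any AGI facing such environments will necessarily spend a lot of time playing.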
I'm not saying play isn't adaptive. I'm saying that kittens play not because
they're optimizing their fitness, but because they're intrinsically motivated
to (it feels good). The reason it feels good has nothing to do with the kitten,
but with the evolutionary process that designed that
On Mon, Aug 25, 2008 at 11:17 PM, Terren Suydam [EMAIL PROTECTED] wrote:
I'm not saying play isn't adaptive. I'm saying that kittens play not
because they're optimizing their fitness, but because they're intrinsically
motivated to (it feels good). The reason it feels good has nothing to do
'adaptation', which is the result
of an evolutionary process.
Terren
--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:
From: Mike Tintner [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Monday, August 25, 2008, 3:41 PM
Terren
Saying that a particular cat instance hunts because it feels good is not
very explanatory
Even if I granted that, saying that a particular cat plays to increase its
hunting skills is incorrect. It's an important distinction because by analogy
we must talk about particular AGI instances.
On Mon, Aug 25, 2008 at 12:52 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
The word "because" was misplaced. Cats hunt mice because they were
designed to, and they were designed to because it's adaptive.
And the adaptation they have evolved into uses a pleasure process as a
motivator.
Saying
On Tue, Aug 26, 2008 at 12:19 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Saying that a particular cat instance hunts because it feels good is not
very explanatory
Even if I granted that, saying that a particular cat plays to increase
its hunting skills is incorrect. It's an important
Terren: As may be obvious by now, I'm not that interested in designing
cognition. I'm interested in designing simulations in which intelligent
behavior emerges. But the way you're using the word 'adapt', in a cognitive
sense of playing with goals, is different from the way I was using
If an AGI played because it recognized that it would improve its skills in
some domain, then I wouldn't call that play, I'd call it practice. Those
are overlapping but distinct concepts.
Play, as distinct from practice, is its own reward - the reward felt by a
kitten. The spirit of Mike's
Hi Mike,
Comments below...
--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:
Two questions: 1) how do you propose that your simulations will avoid the
kind of criticisms you've been making of other systems of being too guided
by programmers' intentions? How can you set up a
On Tue, Aug 26, 2008 at 1:26 AM, Terren Suydam [EMAIL PROTECTED] wrote:
If an AGI played because it recognized that it would improve its skills
in some domain, then I wouldn't call that play, I'd call it practice.
Those are overlapping but distinct concepts.
Play, as distinct from practice,
On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] wrote:
If an AGI played because it recognized that it would improve its skills in
some domain, then I wouldn't call that play, I'd call it practice. Those are
overlapping but distinct concepts.
The evolution of play is how
Where is the hard dividing line between designed cognition and designed
simulation (where intelligent behavior is intended to be emergent in both
cases)? Even if an approach is taken where everything possible is done to
allow a 'natural' type evolution of behavior, the simulation design and
Terren: The spirit of Mike's question, I think, was about identifying the
essential goalless-ness of play...
Well, the key thing for me (although it was, technically, a play-ful
question :) ) is the distinction between programmed/planned exploration of
a basically known environment and ad hoc
Jonathan El-Bizri wrote:
On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] wrote:
If an AGI played because it recognized that it would improve its
skills in some domain, then I wouldn't call that play, I'd call it
practice. Those are
Hi Jonathan,
I disagree, play without rules can certainly be fun. Running just to run,
jumping just to jump. Play doesn't have to be a game, per se. It's simply a
purposeless expression of the joy of being alive. It turns out of course that
play is helpful for achieving certain goals that we
[EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Monday, August 25, 2008, 6:04 PM
Where is the hard dividing line between designed cognition and designed
simulation (where intelligent behavior is intended to be emergent in both
cases)? Even