Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Brad Paulsen
Mike Tintner wrote: ...how would you design a play machine - a machine that can play around as a child does? I wouldn't. IMHO that's just another waste of time and effort (unless it's being done purely for research purposes). It's a diversion of intellectual and financial resources that

Re: [agi] Re: I Made a Mistake

2008-08-25 Thread Valentina Poletti
Chill down, Jim, he took it back. On 8/24/08, Jim Bromer [EMAIL PROTECTED] wrote: Intolerance of another person's ideas through intimidation or ridicule is intellectual repression. You won't elevate a discussion by promoting a program of anti-intellectual repression. Intolerance of a person

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Valentina Poletti
In other words, Vladimir, you are suggesting that an AGI must be at some level controlled by humans, therefore not 'fully-embodied', in order to prevent non-friendly AGI as the outcome. Therefore humans must somehow be able to control its goals, correct? Now, what if controlling those goals

Re: [agi] rpi.edu

2008-08-25 Thread Brad Paulsen
Eric, http://www.cogsci.rpi.edu/research/rair/asc_rca/ Sorry, couldn't answer your question based on a quick read. Cheers, Brad Eric Burton wrote: Does anyone know if Rensselaer Institute is still on track to crack the Turing Test by 2009? There was a Slashdot article or two about their

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 1:07 PM, Valentina Poletti [EMAIL PROTECTED] wrote: In other words, Vladimir, you are suggesting that an AGI must be at some level controlled by humans, therefore not 'fully-embodied', in order to prevent non-friendly AGI as the outcome. Controlled in Friendliness

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Brad, That's sad. The suggestion is for a mental exercise, not a full-scale project. And play is fundamental to the human mind-and-body - it characterises our more mental as well as more physical activities - drawing, designing, scripting, humming and singing scat in the bath,

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Matt Mahoney
Kittens play with small moving objects because it teaches them to be better hunters. Play is not a goal in itself, but a subgoal that may or may not be a useful part of a successful AGI design. -- Matt Mahoney, [EMAIL PROTECTED] - Original Message From: Mike Tintner [EMAIL

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Valentina Poletti
On 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Mon, Aug 25, 2008 at 1:07 PM, Valentina Poletti [EMAIL PROTECTED] wrote: In other words, Vladimir, you are suggesting that an AGI must be at some level controlled by humans, therefore not 'fully-embodied', in order to prevent

Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-25 Thread Matt Mahoney
John, I have looked at your patent and various web pages. You list a lot of nice sounding ethical terms (honor, love, hope, peace, etc) but give no details on how to implement them. You have already admitted that you have no experimental results, haven't actually built anything, and have no

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Matt: Kittens play with small moving objects because it teaches them to be better hunters. Play is not a goal in itself, but a subgoal that may or may not be a useful part of a successful AGI design. Certainly, crude imitation of, and preparation for, adult activities is one aspect of play.

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 6:23 PM, Valentina Poletti [EMAIL PROTECTED] wrote: On 8/25/08, Vladimir Nesov [EMAIL PROTECTED] wrote: Why would anyone suggest creating a disaster, as you pose the question? Also agree. As far as you know, has anyone, including Eliezer, suggested any method or

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam
Actually, kittens play because it's fun. Evolution has equipped them with the rewarding sense of fun because it optimizes their fitness as hunters. But kittens are adaptation executors, evolution is the fitness optimizer. It's a subtle but important distinction. See

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 9:22 PM, Terren Suydam [EMAIL PROTECTED] wrote: Actually, kittens play because it's fun. Evolution has equipped them with the rewarding sense of fun because it optimizes their fitness as hunters. But kittens are adaptation executors, evolution is the fitness

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Abram Demski
Mike, I agree with Brad somewhat, because I do not think copying human (or animal) intellect is the goal. It is a means to the end of general intelligence. However, that certainly doesn't stop me from participating in a thought experiment. I think the big thing with artificial play is figuring

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam
Hi Vlad, Thanks for taking the time to read my article and pose excellent questions. My attempts at answers below. --- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam What is the point of building general intelligence if all it does is

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson
Play is a form of strategy testing in an environment that doesn't severely penalize failures. As such, every AGI will necessarily spend a lot of time playing. If you have some other particular definition, then perhaps I could understand your response if you were to define the term. OTOH, if

Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-25 Thread Abram Demski
Matt, What is your opinion on Goedel machines? http://www.idsia.ch/~juergen/goedelmachine.html --Abram On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote: Eric Burton [EMAIL PROTECTED] wrote: These have profound impacts on AGI design. First, AIXI is (provably) not

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam
I'm not saying play isn't adaptive. I'm saying that kittens play not because they're optimizing their fitness, but because they're intrinsically motivated to (it feels good). The reason it feels good has nothing to do with the kitten, but with the evolutionary process that designed that
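
A toy sketch of that distinction (in Python; purely illustrative - the options, reward weights, and fitness payoffs are invented for the example): each agent only ever chases whatever its built-in reward says feels good, while the outer loop, standing in for evolution, selects among reward functions by the survival payoff of the resulting behavior. Nothing inside the loop ever reasons about fitness.

    import random

    def make_agent(reward):
        """An 'adaptation executor': it only chases its built-in sense of what feels good."""
        def act(options):
            return max(options, key=lambda o: reward.get(o, 0.0))
        return act

    def fitness(agent):
        """The 'fitness optimizer' view: survival payoff of the behavior, invisible to the agent."""
        payoff = {"play": 1.0, "sleep": 0.2, "wander": 0.0}  # invented payoffs: play hones hunting
        return payoff[agent(["play", "sleep", "wander"])]

    # Evolution selects among reward functions, not among plans to maximize fitness.
    population = [{"play": random.random(), "sleep": random.random(), "wander": random.random()}
                  for _ in range(20)]
    for generation in range(50):
        survivors = sorted(population, key=lambda r: fitness(make_agent(r)), reverse=True)[:10]
        offspring = [{k: v + random.gauss(0, 0.1) for k, v in random.choice(survivors).items()}
                     for _ in range(10)]
        population = survivors + offspring

    best = max(population, key=lambda r: fitness(make_agent(r)))
    print("evolved reward weights:", best)  # 'play' ends up feeling best, though no agent ever aimed at fitness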

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren, Your broad distinctions are fine, but I feel you are not emphasizing the area of most interest for AGI, which is *how* we adapt rather than why. Interestingly, your blog uses the example of a screwdriver - Kauffman uses the same in Chap 12 of Reinventing the Sacred as an example of

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 11:17 PM, Terren Suydam [EMAIL PROTECTED] wrote: I'm not saying play isn't adaptive. I'm saying that kittens play not because they're optimizing their fitness, but because they're intrinsically motivated to (it feels good). The reason it feels good has nothing to do

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread William Pearson
2008/8/25 Terren Suydam [EMAIL PROTECTED]: --- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam wrong. This ability might be an end in itself, the whole point of building an AI, when considered as applying to the dynamics of the world

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam
Hi Mike, As may be obvious by now, I'm not that interested in designing cognition. I'm interested in designing simulations in which intelligent behavior emerges. But the way you're using the word 'adapt', in a cognitive sense of playing with goals, is different from the way I was using

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam
"Saying that a particular cat instance hunts because it feels good is not very explanatory." Even if I granted that, saying that a particular cat plays to increase its hunting skills is incorrect. It's an important distinction because by analogy we must talk about particular AGI instances.

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Jonathan El-Bizri
On Mon, Aug 25, 2008 at 12:52 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: The word "because" was misplaced. Cats hunt mice because they were designed to, and they were designed to, because it's adaptive. And the adaptation they have evolved into uses a pleasure process as a motivator. Saying

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 12:19 AM, Terren Suydam [EMAIL PROTECTED] wrote: "Saying that a particular cat instance hunts because it feels good is not very explanatory." Even if I granted that, saying that a particular cat plays to increase its hunting skills is incorrect. It's an important

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren: As may be obvious by now, I'm not that interested in designing cognition. I'm interested in designing simulations in which intelligent behavior emerges. But the way you're using the word 'adapt', in a cognitive sense of playing with goals, is different from the way I was using

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam
Hi Will, I don't doubt that provable-friendliness is possible within limited, well-defined domains that can be explicitly defined and hard-coded. I know chess programs will never try to kill me. I don't believe, however, that you can prove friendliness within a framework that has the

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam
If an AGI played because it recognized that it would improve its skills in some domain, then I wouldn't call that play, I'd call it practice. Those are overlapping but distinct concepts. Play, as distinct from practice, is its own reward - the reward felt by a kitten. The spirit of Mike's

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam
Hi Mike, Comments below... --- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote: Two questions: 1) how do you propose that your simulations will avoid the kind of criticisms you've been making of other systems of being too guided by programmers' intentions? How can you set up a

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 1:26 AM, Terren Suydam [EMAIL PROTECTED] wrote: If an AGI played because it recognized that it would improve its skills in some domain, then I wouldn't call that play, I'd call it practice. Those are overlapping but distinct concepts. Play, as distinct from practice,

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Jonathan El-Bizri
On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] wrote: If an AGI played because it recognized that it would improve its skills in some domain, then I wouldn't call that play, I'd call it practice. Those are overlapping but distinct concepts. The evolution of play is how

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread David Hart
Where is the hard dividing line between designed cognition and designed simulation (where intelligent behavior is intended to be emergent in both cases)? Even if an approach is taken where everything possible is done to allow a 'natural' type of evolution of behavior, the simulation design and

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Mike Tintner
Terren: The spirit of Mike's question, I think, was about identifying the essential goalless-ness of play... Well, the key thing for me (although it was, technically, a play-ful question :) ) is the distinction between programmed/planned exploration of a basically known environment and ad hoc

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Charles Hixson
Jonathan El-Bizri wrote: On Mon, Aug 25, 2008 at 2:26 PM, Terren Suydam [EMAIL PROTECTED] wrote: If an AGI played because it recognized that it would improve its skills in some domain, then I wouldn't call that play, I'd call it practice. Those are

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Eric Burton
Is friendliness really so context-dependent? Do you have to be human to act friendly, to the exclusion of acting busy, greedy, angry, etc.? I think friendliness is a trait we project onto things pretty readily, implying it's wired at some fundamental level. It comes from the social circuits, it's

Re: [agi] The Necessity of Embodiment

2008-08-25 Thread Terren Suydam
Eric, We're talking Friendliness (capital F), a convention suggested by Eliezer Yudkowsky, that signifies the sense in which an AI does no harm to humans. Yes, it's context-dependent. "Do no harm" is the mantra within the medical community, but clearly there are circumstances in which you do a

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam
Hi Jonathan, I disagree - play without rules can certainly be fun. Running just to run, jumping just to jump. Play doesn't have to be a game, per se. It's simply a purposeless expression of the joy of being alive. It turns out of course that play is helpful for achieving certain goals that we

Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Terren Suydam
Hi David, Any amount of guidance in such a simulation (e.g. to help avoid so many of the useless eddies in a fully open-ended simulation) amounts to designed cognition. No, it amounts to guided evolution. The difference between a designed simulation and a designed cognition is the focus on