Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner
Dave Hart: MT: Sorry, I forgot to ask for what I most wanted to know - what form of RSI in any specific areas has been considered? To quote Charles Babbage, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." The best we can hope for is

Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Ben Goertzel
About recursive self-improvement ... yes, I have thought a lot about it, but don't have time to write a huge discourse on it here. One point is that if you have a system with N interconnected modules, you can approach RSI by having the system separately think about how to improve each module.
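
A minimal sketch of that module-by-module approach, assuming a hypothetical Module class with a propose_variant() search step and a whole-system benchmark (none of these names come from any real AGI codebase):

# Hypothetical sketch of module-by-module self-improvement.
# Module, propose_variant(), and system_benchmark() are illustrative stand-ins.

import random

class Module:
    def __init__(self, score=0.5):
        self.score = score
    def propose_variant(self):
        # Module-local search: perturb this module and return a candidate version.
        return Module(self.score + random.uniform(-0.1, 0.1))

def system_benchmark(modules):
    # Whole-system evaluation; here just the mean module score.
    return sum(m.score for m in modules.values()) / len(modules)

def improve_system(modules, rounds=20):
    for _ in range(rounds):
        for name in list(modules):
            baseline = system_benchmark(modules)
            old, candidate = modules[name], modules[name].propose_variant()
            modules[name] = candidate
            if system_benchmark(modules) <= baseline:  # revert if no system-level gain
                modules[name] = old
    return modules

modules = {"perception": Module(), "memory": Module(), "planning": Module()}
improve_system(modules)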

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser
Hi Terren, Obviously you need to complicate your original statement "I believe that ethics is *entirely* driven by what is best evolutionarily..." in such a way that we don't derive ethics from parasites. Saying that ethics is entirely driven by evolution is NOT the same as saying that

Re: [agi] How Would You Design a Play Machine?

2008-08-29 Thread Terren Suydam
--- On Fri, 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote: I don't see why an un-embodied system couldn't successfully use the concept of self in its models. It's just another concept, except that it's linked to real features of the system. To an unembodied agent, the concept of self is

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Terren Suydam
--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote: Saying that ethics is entirely driven by evolution is NOT the same as saying that evolution always results in ethics. Ethics is computationally/cognitively expensive to successfully implement (because a stupid implementation gets

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Eric Burton
A successful AGI should have n methods of data-mining its experience for knowledge, I think. Whether it should have n ways of generating those methods, or n sets of ways to generate ways of generating those methods, etc., I don't know. On 8/28/08, j.k. [EMAIL PROTECTED] wrote: On 08/28/2008 04:47 PM, Matt
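
A toy sketch of the layering being asked about: level 0 is a pool of methods that mine experience, level 1 is a generator that produces new level-0 methods. Every function here is a hypothetical stand-in:

# Level 0 = methods that mine experience; level 1 = a generator of new level-0 methods.
# All names are hypothetical.

def count_tokens(experience):
    counts = {}
    for token in experience:
        counts[token] = counts.get(token, 0) + 1
    return counts

def find_pairs(experience):
    return list(zip(experience, experience[1:]))   # adjacent co-occurrences

MINING_METHODS = [count_tokens, find_pairs]        # the "n methods"

def generate_method(window):
    # Level-1 generator: builds a new level-0 method parameterized by window size.
    def mine_ngrams(experience):
        return [tuple(experience[i:i + window]) for i in range(len(experience) - window + 1)]
    return mine_ngrams

MINING_METHODS.append(generate_method(3))          # a generated method joins the pool

experience = ["see", "red", "ball", "see", "red", "ball", "kick"]
for method in MINING_METHODS:
    print(method.__name__, method(experience))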

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser
OK. How about this . . . . Ethics is that behavior that, when shown by you, makes me believe that I should facilitate your survival. Obviously, it is then to your (evolutionary) benefit to behave ethically. Ethics can't be explained simply by examining interactions between individuals. It's

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Eric Burton
I remember Richard Dawkins saying that group selection is a lie. Maybe we should look past it now? It seems like a problem. On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote: OK. How about this . . . . Ethics is that behavior that, when shown by you, makes me believe that I should facilitate your

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Abram Demski
I like that argument. Also, it is clear that humans can invent better algorithms to do specialized things. Even if an AGI couldn't think up better versions of itself, it would be able to do the equivalent of equipping itself with fancy calculators. --Abram On Thu, Aug 28, 2008 at 9:04 PM, j.k.
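
A small illustration of the "fancy calculators" point, assuming a hypothetical dispatch table of exact external tools; the agent's own general reasoning only chooses which tool applies:

# Hypothetical sketch: an agent delegating well-specified subtasks to exact tools
# instead of improving its own general reasoning.

from fractions import Fraction
import math

TOOLS = {
    "exact_arithmetic": lambda a, b: Fraction(a) + Fraction(b),
    "prime_factors": lambda n: [p for p in range(2, n + 1)
                                if n % p == 0 and all(p % q for q in range(2, p))],
    "sqrt": math.sqrt,
}

def solve(task, *args):
    # The agent decides *which* tool applies; the tool does the heavy lifting exactly.
    return TOOLS[task](*args)

print(solve("exact_arithmetic", "1/3", "1/6"))  # 1/2
print(solve("prime_factors", 91))               # [7, 13]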

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser
Group selection (as the term is used in evolutionary biology) does not seem to be experimentally supported (and there have been a lot of recent experiments looking for such an effect). It would be nice if people could let the idea drop unless there is actually some proof for it other

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Charles Hixson
Dawkins tends to see a truth, and then overstate it. What he says isn't usually exactly wrong, so much as one-sided. This may be an exception. Some meanings of group selection don't appear to map onto reality. Others map very weakly. Some have reasonable explanatory power. If you don't

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Matt Mahoney
Group selection is not dead, just weaker than individual selection. Altruism in many species is evidence for its existence. http://en.wikipedia.org/wiki/Group_selection In any case, evolution of culture and ethics in humans is primarily memetic, not genetic. Taboos against nudity are nearly

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.
On 08/29/2008 10:09 AM, Abram Demski wrote: I like that argument. Also, it is clear that humans can invent better algorithms to do specialized things. Even if an AGI couldn't think up better versions of itself, it would be able to do the equivalent of equipping itself with fancy calculators.

[agi] Frame Semantics

2008-08-29 Thread Mike Tintner
Advances in Frame Semantics: Corpus and Computational Approaches and Insights Theme Session to be held at ICLC 11, Berkeley, CA Date: July 28 - August 3, 2009 Organizer: Miriam R. L. Petruck Theme Session Description: Fillmore (1975) introduced the notion of a frame into linguistics over

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]: On 08/28/2008 04:47 PM, Matt Mahoney wrote: The premise is that if humans can create agents with above human intelligence, then so can they. What I am questioning is whether agents at any intelligence level can do this. I don't believe that agents at any

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Matt Mahoney
It seems that the debate over recursive self improvement depends on what you mean by improvement. If you define improvement as intelligence as defined by the Turing test, then RSI is not possible because the Turing test does not test for superhuman intelligence. If you mean improvement as more

Re: [agi] AGI-09 - Preliminary Call for Papers

2008-08-29 Thread Bill Hibbard
The special rate at the Crowne Plaza does not apply to the night of Monday, 9 March. If the post-conference workshops on Monday extend into the afternoon, it would be useful if the special rate was available on Monday night. Thanks, Bill --- agi

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.
On 08/29/2008 01:29 PM, William Pearson wrote: 2008/8/29 j.k. [EMAIL PROTECTED]: An AGI with an intelligence the equivalent of a 99.-percentile human might be creatable, recognizable and testable by a human (or group of humans) of comparable intelligence. That same AGI at some later

Re: [agi] AGI-09 - Preliminary Call for Papers

2008-08-29 Thread Ben Goertzel
Hi Bill, Bruce Klein is the one dealing with this aspect of AGI-09, so I've cc'd this message to him. To get a special rate we need to reserve a block of rooms in advance. So we'd need to estimate in advance the number of rooms needed for Monday night, which will be many fewer than needed for

Re: [agi] How Would You Design a Play Machine?

2008-08-29 Thread Jiri Jelinek
Terren, to the unembodied agent, it is not a concept at all, but merely a symbol with no semantic context attached It's an issue when trying to learn from NL only, but you can inject semantics (critical for grounding) when teaching through a formal_language[-based interface], get the thinking

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]: On 08/29/2008 01:29 PM, William Pearson wrote: 2008/8/29 j.k. [EMAIL PROTECTED]: An AGI with an intelligence the equivalent of a 99.-percentile human might be creatable, recognizable and testable by a human (or group of humans) of comparable

Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner
Ben, It looks like what you've thought about is aspects of the information processing side of RSI but not the knowledge side. IOW you have thought about the technical side but not about how you progress from one domain of knowledge about the world to another, or from one subdomain to another.

Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Ben Goertzel
On Fri, Aug 29, 2008 at 6:53 PM, Mike Tintner [EMAIL PROTECTED] wrote: Ben, It looks like what you've thought about is aspects of the information processing side of RSI but not the knowledge side. IOW you have thought about the technical side but not about how you progress from one domain of

Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Matt Mahoney
Mike Tintner wrote: You may have noticed that AGI-ers are staggeringly resistant to learning new domains. Remember, you are dealing with human brains. You can only write into long-term memory at a rate of 2 bits per second. :-) AGI spans just about every field of science, from ethics to
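
Taking the 2-bits-per-second figure at face value, a back-of-the-envelope calculation (assuming roughly 16 waking hours a day over 30 years of sustained learning; both numbers are illustrative assumptions) gives a sense of scale:

# Back-of-the-envelope check of the "2 bits per second into long-term memory" figure.
# The waking hours and years below are assumptions for illustration only.

bits_per_second = 2
waking_seconds_per_day = 16 * 3600
days = 30 * 365                      # ~30 years of sustained learning

total_bits = bits_per_second * waking_seconds_per_day * days
print(f"{total_bits:.2e} bits  (~{total_bits / 8 / 1e6:.0f} MB)")
# ~1.26e9 bits, i.e. on the order of 150 MB accumulated over three decades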

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.
On 08/29/2008 03:14 PM, William Pearson wrote: 2008/8/29 j.k. [EMAIL PROTECTED]: ... The human-level AGI running a million times faster could simultaneously interact with tens of thousands of scientists at their pace, so there is no reason to believe it need be starved for interaction to the
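
The arithmetic behind that claim, using the figures assumed in the discussion (a millionfold subjective speedup split evenly across 10,000 scientists; the even split is an added assumption):

# Arithmetic behind the parallel-interaction claim. The speedup and number of
# scientists are the figures assumed in the discussion, not measurements.

speedup = 1_000_000        # subjective seconds of thought per wall-clock second
scientists = 10_000        # conversations carried on simultaneously

subjective_seconds_per_partner = speedup / scientists
print(subjective_seconds_per_partner)   # 100.0 subjective seconds per scientist
# Each human still gets the equivalent of 100 seconds of the AGI's thought for every
# real-time second of conversation, so interaction need not be a bottleneck.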

Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner
Matt: AGI spans just about every field of science, from ethics to quantum mechanics, child development to algorithmic information theory, genetics to economics. Just so. And every field of the arts. And history. And philosophy. And technology. Including social technology. And organizational