AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger
Ben Goertzel [mailto:[EMAIL PROTECTED]] wrote on 26 April 2008, 19:54: Yes, truly general AI is only possible in the case of infinite processing power, which is likely not physically realizable. How much generality can be achieved with how much processing power is not yet known -- math

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Pei Wang
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: What I wanted to say is that any intelligence has to be narrow in a sense if it wants to be powerful and useful. There must always be strong assumptions about the world deep in any algorithm of useful intelligence.

AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger
Performance is not an unimportant question. I assume that AGI necessarily has costs which grow exponentially with the number of states and actions, so that AGI will always be interesting only for toy domains. My assumption is that human intelligence is not truly general intelligence and therefore
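[Editor's illustration, not part of the original thread: the exponential-cost claim above is the standard tabular state-space blow-up. With n binary state variables a fully general (assumption-free) tabular agent must represent 2**n states, so a Q-table over states and actions grows exponentially in the description length. The function name `q_table_size` is my own.]

```python
# Sketch of the cost argument: a tabular agent that makes no
# assumptions about the world needs one entry per (state, action) pair.
def q_table_size(num_binary_vars: int, num_actions: int) -> int:
    """Entries in a tabular Q-function over a factored binary state space."""
    num_states = 2 ** num_binary_vars  # exponential in the number of variables
    return num_states * num_actions

# Doubling the number of state variables squares the state count:
for n in (10, 20, 30):
    print(n, q_table_size(n, 4))
```

This is why, as the post argues, fully general tabular methods stay confined to toy domains, and why practical systems bake in structural assumptions (factored models, function approximation) instead.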

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Pei Wang
If by truly general you mean absolutely general, I agree it is not possible, but that is not what we are after. Again, I hope you will find out what people are actually doing under the name AGI, and then make your argument against that, rather than against the AGI in your imagination. For example, I fully agree that

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Richard Loosemore
Jim Bromer wrote: Richard Loosemore [EMAIL PROTECTED] said: ... your tangled(*) system might be just as vulnerable to the problem as those thousands upon thousands of examples of complex systems that are *not* understandable... To the best of my knowledge, nobody has *ever* used intuitive

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Richard Loosemore
Ben Goertzel wrote: Richard, Richard Loosemore wrote: 2) Even if you do come back to me and say that the symbols inside Novamente all contain all four characteristics, I can only say so what? a second time ;-). The question I was asking when I laid down those four characteristics was

AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger
Ok. Maybe we mean different things by the name AGI. I agree that traditional AI is just the beginning. And even if human intelligence is no proof of what I mean by AGI, it is clear that human intelligence is far more powerful than any AI so far. But perhaps only for subtle

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Ben Goertzel
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote: Ben Goertzel [mailto:[EMAIL PROTECTED]] wrote on 26 April 2008, 19:54: Yes, truly general AI is only possible in the case of infinite processing power, which is likely not physically realizable. How much

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Richard, Question: How many systems do you know of in which the system elements are governed by a mechanism that has all four of these, AND where the system as a whole has a large-scale behavior that has been shown (by any method of showing except detailed simulation of the system) to arise

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Russell Wallace
On Sun, Apr 27, 2008 at 2:44 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Question: How many systems do you know of in which the system elements are governed by a mechanism that has all four of these, AND where the system as a whole has a large-scale behavior that has been shown (by any

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Mike Tintner
Matthias: a state description could be: ...I am in a kitchen. The door is open. It has two windows. There is a sink. And three cupboards. Two chairs. A fly is on the right window. The sun is shining. The color of the chair is... etc. etc.
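[Editor's illustration, not from the thread: Tintner's quoted kitchen description can be rendered as a flat attribute map, which makes the point of the example concrete. All names below (`kitchen_state`, the attribute keys) are mine.]

```python
# The kitchen scene as a flat propositional state description.
kitchen_state = {
    "location": "kitchen",
    "door": "open",
    "windows": 2,
    "cupboards": 3,
    "chairs": 2,
    "fly_on_right_window": True,   # "A fly is on the right window."
    "sun_shining": True,           # "The sun is shining."
}

# Each additional boolean attribute doubles the number of distinct
# states such a description can take, which is the "etc. etc." problem.
bool_attrs = [k for k, v in kitchen_state.items() if isinstance(v, bool)]
print(len(bool_attrs), "boolean attributes so far")
```

The open-ended "etc. etc." in the quote is exactly where the exponential-state argument elsewhere in this thread bites: a complete enumeration never terminates.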

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Richard Loosemore
Ben Goertzel wrote: Richard, Question: How many systems do you know of in which the system elements are governed by a mechanism that has all four of these, AND where the system as a whole has a large-scale behavior that has been shown (by any method of showing except detailed simulation of

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Richard Loosemore
Russell Wallace wrote: On Sun, Apr 27, 2008 at 2:44 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Question: How many systems do you know of in which the system elements are governed by a mechanism that has all four of these, AND where the system as a whole has a large-scale behavior that has

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
No: I am specifically asking for some system other than an AGI system, because I am looking for an external example of someone overcoming the complex systems problem. The specific criteria you've described would seem to apply mainly to living systems ... and we just don't have that much

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Russell Wallace
On Sun, Apr 27, 2008 at 6:08 PM, Ben Goertzel [EMAIL PROTECTED] wrote: Certainly, the failure of the Biosphere experiment is evidence in your favor. There, the scientists failed to predict basic high-level properties of a pretty simple closed ecosystem, based on their knowledge of the

Re: Six reasons why complexity is unavoidable [WAS Re: [agi] Core of intelligence ...]

2008-04-27 Thread Mark Waser
Several posts on this coming in quick succession -- you might want to read all of them before replying to any of them. I've just realized that much of the problem that we're all having in this discussion is the lack of realization of how much of a bottom-up design Richard is assuming vs. how

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread William Pearson
2008/4/27 Dr. Matthias Heger [EMAIL PROTECTED]: Ben Goertzel [mailto:[EMAIL PROTECTED]] wrote on 26 April 2008, 19:54: Yes, truly general AI is only possible in the case of infinite processing power, which is likely not physically realizable. How much generality can be achieved with

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Mark Waser
Engineering in the real world is nearly always a mixture of rigor and intuition. Just like analysis of complex biological systems is. AIEe! NO! You are clearly not an engineer because a true engineer just wouldn't say this. Engineering should *NEVER* involve intuition. Engineering

AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger
Mike Tintner wrote What is totally missing is a philosophical and semiotic perspective. A philosopher looks at things very differently and asks essentially: how much information can we get about a given subject (and the world generally)? A semioticist asks: how much and what kinds of

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Richard Loosemore
Russell Wallace wrote: On Sun, Apr 27, 2008 at 6:08 PM, Ben Goertzel [EMAIL PROTECTED] wrote: Certainly, the failure of the Biosphere experiment is evidence in your favor. There, the scientists failed to predict basic high-level properties of a pretty simple closed ecosystem, based on their

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
On Sun, Apr 27, 2008 at 5:51 PM, Mark Waser [EMAIL PROTECTED] wrote: Engineering in the real world is nearly always a mixture of rigor and intuition. Just like analysis of complex biological systems is. AIEe! NO! You are clearly not an engineer because a true engineer just

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
I don't agree with Mark Waser that we can engineer the complexity out of intelligence. I agree with Richard Loosemore that intelligent systems are intrinsically complex systems in the Santa Fe Institute type sense However, I don't agree with Richard as to the *extent* of the complexity problem.

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Mark Waser
Engineering should *NEVER* involve intuition. Engineering does not require exact answers as long as you have error bars but the second that you revert to intuition and guesses, it is *NOT* engineering anymore. Well, we may be using the word intuition differently. Given your examples, we

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Rules of thumb are not intuition ... but applying them requires intuition... unlike applying rigorous methods... However even the most rigorous science requires rules of thumb (hence intuition) to do the problem set-up before the calculations start... ben On Sun, Apr 27, 2008 at 6:56 PM, Mark

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Mark Waser
I don't agree with Mark Waser that we can engineer the complexity out of intelligence. I agree with Richard Loosemore that intelligent systems are intrinsically complex systems in the Santa Fe Institute type sense I hate to do this but . . . . Richard's definition of complexity is *NOT* the

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
I said and repeat that we can engineer the complexity out of intelligence in the Richard Loosemore sense. I did not say and do not believe that we can engineer the complexity out of intelligence in the Santa Fe Institute sense. OK, gotcha... Yeah... IMO, complexity in the sense you ascribe

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Richard Loosemore
I just want to make one observation on this whole thread, since I have no time for anything else tonight. People are riding roughshod over the things that I have actually said. In some cases this involves making extrapolations to ideas that people THINK that I was saying, but which I have

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Actually, I have to clarify that my knowledge of this totally digressive topic is about 12 years obsolete. Maybe it's all done differently now... However, one wouldn't bother to use this formula if the soil was too different in composition from the soil around Vegas. So in reality the

Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Russell Wallace
On Sun, Apr 27, 2008 at 11:09 PM, Richard Loosemore [EMAIL PROTECTED] wrote: It was no such evidence: Biosphere 2 had almost nothing in the way of complexity, compared with AGI systems, and it was controlled by trial and error in such a way that it failed. Hey, great example of how to