Ben Goertzel [mailto:[EMAIL PROTECTED]] wrote on 26 April 2008 19:54:
Yes, truly general AI is only possible in the case of infinite
processing power, which is likely not physically realizable.
How much generality can be achieved with how much
processing power is not yet known -- math
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
What I wanted to say is that any intelligence has
to be narrow in a sense if it wants to be powerful and useful. There must
always be strong assumptions about the world deep in any algorithm of useful
intelligence.
Performance is not an unimportant question. I assume that AGI necessarily
has costs which grow exponentially with the number of states and actions, so
that AGI will always be interesting only for toy domains.
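Matthias's cost claim can be made concrete with a minimal sketch, assuming a tabular agent that stores one value per state-action pair (the function and numbers below are illustrative, not from the thread):

```python
# Illustration of the exponential-cost claim: if a state is a combination of
# binary features, a full state-action value table has one entry per
# (state, action) pair, so its size grows exponentially in the feature count.

def table_size(num_binary_features: int, num_actions: int) -> int:
    """Number of entries in a full tabular state-action value table."""
    num_states = 2 ** num_binary_features  # every feature combination is a distinct state
    return num_states * num_actions

for n in (10, 20, 30):
    print(n, table_size(n, num_actions=4))
# 10 ->        4096 entries
# 20 ->     4194304 entries
# 30 ->  4294967296 entries
```

Thirty binary features already push a naive table past four billion entries, which is the sense in which purely tabular approaches stay confined to toy domains.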
My assumption is that human intelligence is not truly general intelligence
and therefore
If by truly general you mean absolutely general, I agree it is not
possible, but that is not what we are after. Again, I hope you will find out
what people are actually doing under the name AGI, and then make your argument
against it, rather than against the AGI in your imagination.
For example, I fully agree that
Jim Bromer wrote:
Richard Loosemore [EMAIL PROTECTED] said:
... your tangled(*) system might be just as vulnerable
to the problem as those thousands upon thousands of examples of complex
systems that are *not* understandable...
To the best of my knowledge, nobody has *ever* used intuitive
Ben Goertzel wrote:
Richard,
Richard Loosemore wrote:
2) Even if you do come back to me and say that the symbols inside
Novamente all contain all four characteristics, I can only say "so what"
a second time ;-). The question I was asking when I laid down those
four characteristics was
Ok. Maybe we mean different things under the name AGI.
I agree that traditional AI is just the beginning. And even if human
intelligence is no proof of what I mean by AGI,
it is clear that human intelligence is far more powerful than any AI
to date. But perhaps only for subtle
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Ben Goertzel [mailto:[EMAIL PROTECTED]] wrote on 26 April 2008 19:54:
Yes, truly general AI is only possible in the case of infinite
processing power, which is
likely not physically realizable.
How much
Richard,
Question: How many systems do you know of in which the system elements
are governed by a mechanism that has all four of these, AND where the system
as a whole has a large-scale behavior that has been shown (by any method of
showing except detailed simulation of the system) to arise
On Sun, Apr 27, 2008 at 2:44 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Question: How many systems do you know of in which the system elements
are governed by a mechanism that has all four of these, AND where the system
as a whole has a large-scale behavior that has been shown (by any
Matthias: a state description could be:
...I am in a kitchen. The door is open. It has two windows. There is a
sink. And three cupboards. Two chairs. A fly is on
the right window. The sun is shining. The color of the chair is... etc.
etc.
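A minimal sketch of how such an informal description might be encoded as a factored state; the attribute names and values here are my own illustration, not Matthias's:

```python
# One way to encode the informal kitchen description as a factored
# (attribute-value) state. Attribute names are illustrative only.
kitchen_state = {
    "location": "kitchen",
    "door": "open",
    "windows": 2,
    "cupboards": 3,
    "chairs": 2,
    "fly_on": "right window",
    "weather": "sunny",
}

# A factored encoding lets an agent test individual facts without
# enumerating whole states:
print(kitchen_state["door"] == "open")  # prints True
```

The point of the factoring is that each attribute can be inspected on its own, whereas treating every full description as one atomic state forces the exponential enumeration discussed earlier in the thread.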
Ben Goertzel wrote:
Richard,
Question: How many systems do you know of in which the system elements
are governed by a mechanism that has all four of these, AND where the system
as a whole has a large-scale behavior that has been shown (by any method of
showing except detailed simulation of
Russell Wallace wrote:
On Sun, Apr 27, 2008 at 2:44 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Question: How many systems do you know of in which the system elements
are governed by a mechanism that has all four of these, AND where the system
as a whole has a large-scale behavior that has
No: I am specifically asking for some system other than an AGI system,
because I am looking for an external example of someone overcoming the
complex systems problem.
The specific criteria you've described would seem to apply mainly to living
systems ... and we just don't have that much
On Sun, Apr 27, 2008 at 6:08 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Certainly, the failure of the Biosphere experiment is evidence in your favor.
There, the scientists failed to predict basic high-level properties of
a pretty simple
closed ecosystem, based on their knowledge of the
Several posts on this are coming in quick succession -- you might want to read
all of them before replying to any of them.
I've just realized that much of the problem we're all having in this
discussion comes from not realizing how much of a bottom-up design
Richard is assuming vs. how
2008/4/27 Dr. Matthias Heger [EMAIL PROTECTED]:
Ben Goertzel [mailto:[EMAIL PROTECTED]] wrote on 26 April 2008 19:54:
Yes, truly general AI is only possible in the case of infinite
processing power, which is
likely not physically realizable.
How much generality can be achieved with
Engineering in the real world is nearly always a mixture of rigor and
intuition. Just like analysis of complex biological systems is.
AIEe! NO! You are clearly not an engineer because a true engineer
just wouldn't say this.
Engineering should *NEVER* involve intuition. Engineering
Mike Tintner wrote:
What is totally missing is a philosophical and semiotic perspective. A
philosopher looks at things very differently and asks, essentially: how much
information can we get about a given subject (and the world generally)? A
semioticist asks: how much and what kinds of
Russell Wallace wrote:
On Sun, Apr 27, 2008 at 6:08 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Certainly, the failure of the Biosphere experiment is evidence in your favor.
There, the scientists failed to predict basic high-level properties of
a pretty simple
closed ecosystem, based on their
On Sun, Apr 27, 2008 at 5:51 PM, Mark Waser [EMAIL PROTECTED] wrote:
Engineering in the real world is nearly always a mixture of rigor and
intuition. Just like analysis of complex biological systems is.
AIEe! NO! You are clearly not an engineer because a true engineer
just
I don't agree with Mark Waser that we can engineer the complexity out
of intelligence.
I agree with Richard Loosemore that intelligent systems are
intrinsically complex systems in the Santa Fe Institute type sense.
However, I don't agree with Richard as to the *extent* of the
complexity problem.
Engineering should *NEVER* involve intuition. Engineering does not
require exact answers as long as you have error bars, but the second that
you revert to intuition and guesses, it is *NOT* engineering anymore.
Well, we may be using the word intuition differently.
Given your examples, we
Rules of thumb are not intuition ... but applying them requires
intuition... unlike applying rigorous methods...
However, even the most rigorous science requires rules of thumb (hence
intuition) to do the problem set-up before the calculations start...
ben
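Mark's criterion above -- inexact answers are fine as long as they carry error bars -- can be sketched with standard first-order error propagation (the quantities and numbers below are illustrative, not from the thread):

```python
import math

def mul_with_error(a, da, b, db):
    """Product of two independent measurements with 1-sigma error bars,
    propagated to first order: relative errors add in quadrature."""
    value = a * b
    rel = math.sqrt((da / a) ** 2 + (db / b) ** 2)
    return value, value * rel

# e.g. force = mass * g, with uncertainty in both the mass and local g
value, err = mul_with_error(12.0, 0.5, 9.81, 0.02)
print(f"{value:.2f} +/- {err:.2f}")  # prints "117.72 +/- 4.91"
```

The answer is not exact, but the error bar quantifies exactly how inexact it is, which is the distinction Mark draws between engineering and guessing.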
On Sun, Apr 27, 2008 at 6:56 PM, Mark
I don't agree with Mark Waser that we can engineer the complexity out
of intelligence.
I agree with Richard Loosemore that intelligent systems are
intrinsically complex systems in the Santa Fe Institute type sense.
I hate to do this but...
Richard's definition of complexity is *NOT* the
I said and repeat that we can engineer the complexity out of intelligence
in the Richard Loosemore sense.
I did not say and do not believe that we can engineer the complexity out
of intelligence in the Santa Fe Institute sense.
OK, gotcha...
Yeah... IMO, complexity in the sense you ascribe
I just want to make one observation on this whole thread, since I have
no time for anything else tonight.
People are riding roughshod over the things that I have actually said.
In some cases this involves making extrapolations to ideas that people
THINK that I was saying, but which I have
Actually, I have to clarify that my knowledge of this totally
digressive topic is about
12 years obsolete. Maybe it's all done differently now...
However, one wouldn't bother to use this formula if the soil was too
different
in composition from the soil around Vegas. So in reality the
On Sun, Apr 27, 2008 at 11:09 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
It was no such evidence: Biosphere 2 had almost nothing in the way of
complexity, compared with AGI systems, and it was controlled by trial and
error in such a way that it failed.
Hey, great example of how to