2008/4/29 Ed Porter [EMAIL PROTECTED]:
But I agree the project is really quite ambitious in that it is trying to
create an embodied robot with a real AGI for a brain.
It may well make major contributions to AGI.
It sounds like a promising start, but it should also be noted that there have ...
There's been a lot of argument (some of it from me, indeed) about what
type of intelligence is necessary for AGI. Let me take a shot at
resolving it.
Suppose we say there are two types of intelligence (not in any
rigorous sense, just in broad classification):
Deliberative. Able to prove theorems, solve the Busy Beaver problem
for small N, write and prove properties of small functions, ...
On Tue, Apr 29, 2008 at 10:12 AM, Bob Mottram [EMAIL PROTECTED] wrote:
In biological terms D came from S. If you read about the history of
numbers, or abstract concepts such as money, they have a clear origin
in S but eventually transcended it. Even within the D realm S terms
are still ...
Bob: Particularly I'd be interested in having
the robot learn a model of its own body kinematics - the beginnings of
a sense of self - based on data mining its sensory data and also using
experimental movements to confirm or refute hypotheses, which might to
a naive observer look like play.
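A minimal sketch of what that self-model learning loop might look like in
Python (the two-joint planar arm, the link lengths, and the trig feature set
are illustrative assumptions, not anything Bob specified):

    import numpy as np

    # Hypothetical 2-joint planar arm: the "ground truth" the body supplies.
    L1, L2 = 0.30, 0.25  # assumed link lengths in metres

    def true_hand_position(q):
        """Where the hand actually ends up for joint angles q = (q1, q2)."""
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    # "Play": issue random experimental movements and record what is sensed.
    rng = np.random.default_rng(0)
    joints = rng.uniform(-np.pi, np.pi, size=(500, 2))
    sensed = np.array([true_hand_position(q) for q in joints])

    # Data-mine the sensory record: fit a linear self-model on trig features,
    # one simple hypothesis about the body's kinematics.
    feats = np.column_stack([np.cos(joints[:, 0]), np.sin(joints[:, 0]),
                             np.cos(joints.sum(axis=1)),
                             np.sin(joints.sum(axis=1))])
    weights, *_ = np.linalg.lstsq(feats, sensed, rcond=None)

    def predicted_hand_position(q):
        f = np.array([np.cos(q[0]), np.sin(q[0]),
                      np.cos(q[0] + q[1]), np.sin(q[0] + q[1])])
        return f @ weights

    # Confirm or refute the hypothesis on a fresh experimental movement.
    q_test = rng.uniform(-np.pi, np.pi, size=2)
    err = np.linalg.norm(predicted_hand_position(q_test)
                         - true_hand_position(q_test))
    print(f"prediction error on a new movement: {err:.6f} m")

The "play" here is just random motor babbling; the hypothesis is the linear
model over trig features, confirmed or refuted by its prediction error on
movements it has not seen.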
I disagree with your breakdown. There are several key divides:
Concrete vs abstract
Continuous vs discrete
Spatial vs symbolic
Deliberative vs reactive
I can be very deliberative, thinking in 2-d pictures (when designing a machine
part in my head, for example). I know lots of people who are ...
Russell,
This is a definite start and I'm just trying to put together a reasoned
thesis on this area. You're absolutely right that this is essential to
understanding AGI - General Intelligence - and literally no one has more
than tiny fragments of understanding here, either in AI/AGI ...
Moving on from my previous post, the key distinction between the literate
mentality and the new multimediate mentality is between PRE-SEMIOTIC and
SEMIOTIC.
The presemiotic person starts from the POV of his specialist sign system
and medium when thinking about solving particular ...
Mike,
I derived a few things from your response - even enjoyed it. One point
passed over too quickly was the question of "How knowable is the world?"
I take this to be a rhetorical question meant to suggest that we need
all of it to be considered intelligent. This suggestion seems to be ...
This is all pretty old stuff for mainstream AI -- see Herb Simon and bounded
rationality. What needs work is the cross-modal interaction, and
understanding the details of how the heuristics arise in the first place from
the pressures of real-time processing constraints and deliberative ...
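Simon's satisficing is easy to make concrete; here is a minimal Python
sketch of choosing under a real-time budget - take the first option that
clears an aspiration level, or the best seen when time runs out (the
function names, the budget, and the toy task are my own illustrative
assumptions, not Simon's formulation):

    import random
    import time

    def satisfice(options, evaluate, aspiration, budget_s=0.01):
        """Return the first option meeting the aspiration level, or the
        best seen so far once the time budget is exhausted."""
        deadline = time.monotonic() + budget_s
        best, best_score = None, float("-inf")
        for opt in options:
            score = evaluate(opt)
            if score >= aspiration:
                return opt, score          # good enough: stop searching
            if score > best_score:
                best, best_score = opt, score
            if time.monotonic() >= deadline:
                break                      # time pressure forces a choice
        return best, best_score

    # Toy task: find a number near 0.9 without scoring every candidate.
    candidates = [random.random() for _ in range(100_000)]
    choice, score = satisfice(candidates, lambda x: -abs(x - 0.9),
                              aspiration=-0.01)
    print(choice, score)

The heuristic character falls out of the constraint: with a tight budget the
agent must commit to "good enough" rather than optimal, which is exactly
Simon's point.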
Sorry to intrude, but I think the formula "complexity is the border
between order and chaos" resolves this dispute nicely...
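A toy illustration of that border, using the standard logistic map as the
example system (the map and the parameter values are my choice, not anything
from this thread):

    # Logistic map x -> r*x*(1-x): ordered at low r, chaotic at r = 4,
    # with the richest structure near the boundary around r = 3.57.
    def orbit(r, x=0.5, burn=500, keep=8):
        for _ in range(burn):
            x = r * x * (1 - x)
        out = []
        for _ in range(keep):
            x = r * x * (1 - x)
            out.append(round(x, 3))
        return out

    for r in (2.8, 3.5, 3.57, 4.0):  # order -> cycles -> edge -> chaos
        print(r, orbit(r))

At low r the orbit settles to a fixed point or a short cycle, at r = 4 it
never repeats, and the most intricate behaviour sits near the boundary.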
Choice 1: The operators end up being clean and modular in their design,
which means that if we were able to examine them from the outside, we would
be able to understand ...
Stan
I'm putting together a detailed paper on this, so overall it will be best to
wait for that.
My posts today give the barest beginning to my thinking, which is that you
start to understand the semiotic requirements for a general intelligence by
thinking about the *things* that it must ...
Josh,
Gigerenzer doesn't sound like old stuff or irrelevant to me, with my
limited knowledge (and also seems like a pretty good example of how very
much more practical it can be to think imaginatively than mathematically,
no?):
"how do real people make good decisions under the usual ..."
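Gigerenzer's own fast-and-frugal answer to that question fits in a few
lines; here is a minimal Python sketch of his "take the best" heuristic (the
cue names, validities, and the two toy cities are illustrative assumptions,
not his data):

    # "Take the best": compare two options cue by cue, in order of cue
    # validity, and decide on the first cue that discriminates.
    CUES = [            # (name, validity), sorted best-first; assumed values
        ("is_capital", 0.9),
        ("has_airport", 0.8),
        ("on_river", 0.6),
    ]

    def take_the_best(a, b):
        """Guess which option is 'bigger' from the first discriminating cue."""
        for cue, _validity in CUES:
            va, vb = a.get(cue), b.get(cue)
            if va != vb:
                return a if va else b   # stop: no weighing of more evidence
        return a                        # nothing discriminates: plain guess

    city_x = {"name": "X", "is_capital": True, "has_airport": True}
    city_y = {"name": "Y", "is_capital": False, "has_airport": True}
    print(take_the_best(city_x, city_y)["name"])  # decides on the first cue: X

The point is that it stops at the first discriminating cue instead of
weighing all the evidence, and in Gigerenzer's studies that frugality often
cost surprisingly little accuracy.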
Mark Waser wrote:
If I understand Richard correctly, he is assuming that it is
necessary to make symbols themselves complex and that each symbol
needs his four forces of doom: Memory, Development, Identity, and
Non-Linearity.
I have no problem with the first three but am not so sure that I ...
I'm afraid that I'm losing track of your major point but . . . .
First off, you are violating your own definition of complexity . . . .
You said -- A system is deemed complex if the smallest size of a theory
that will explain that system is so large that, for today's human minds, the ...
This is poppycock. The people who are really good at something like that do
something as simple but much more general. They have an associative memory of
lots of balls they have seen and tried to catch. This includes not only the
tracked sight of the ball, but things like the feel of the ...
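A minimal Python sketch of that "catch from associative memory" idea -
recall the motor response from the most similar remembered episodes instead
of computing a trajectory (the four-number ball state, the linear toy
physics, and k = 5 are all illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)

    # Remembered episodes: sensed ball states (x, y, vx, vy) and the hand
    # positions that caught them. Toy physics: ball lands at position + velocity.
    past_states = rng.uniform(-1, 1, size=(1000, 4))
    past_catches = past_states[:, :2] + past_states[:, 2:]

    def recall_catch(state, k=5):
        """Blend the responses of the k most similar remembered episodes."""
        dists = np.linalg.norm(past_states - state, axis=1)
        nearest = np.argsort(dists)[:k]
        return past_catches[nearest].mean(axis=0)

    new_ball = np.array([0.2, 0.5, 0.1, -0.3])
    print("move hand toward:", recall_catch(new_ball))
    print("ball actually at:", new_ball[:2] + new_ball[2:])

Nothing here knows any physics; the memory of similar past catches carries
it, which is the generality being claimed.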
Richard,
These last two messages with replies to Mark's questions state your
position more clearly than much of your prior writing (although I
didn't keep track of later discussions too closely). I think it's
important to show in the same example all the controversial aspects:
relatively simple ...
On Apr 29, 2008, at 1:46 AM, Russell Wallace wrote:
Suppose we say there are two types of intelligence (not in any
rigorous sense, just in broad classification):
Deliberative. Able to prove theorems, solve the Busy Beaver problem
for small N, write and prove properties of small functions, ...