2008/8/14 Ed Porter [EMAIL PROTECTED]:
A 'Frankenrobot' with a biological brain
I doubt that there will be much practical application of biological
neuron powered robots, since the overhead of keeping the biology alive
would be too troublesome (requiring feeding and removal of waste
products),
Jim: I know that
there are no solid reasons to believe that some kind of embodiment is
absolutely necessary for the advancement of AGI.
I want to concentrate on one dimension of this: precisely the solid
dimension. My guess would be that this is a dimension of AGI that has been
barely thought
One of the worst problems of early AI was that systems over-generalized
when they tried to use a general rule on a specific case. Actually they
over-generalized, under-generalized, and under-specified problem
solutions, but over-generalization was the most notable failure because they
relied primarily on word
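The over-generalization failure above can be made concrete with a toy sketch. This is a hypothetical illustration, not anyone's actual system: the rule base, the entities, and the `infer` helper are all invented for the example.

```python
# Hypothetical sketch of over-generalization in a naive rule system.
# A general rule ("birds can fly") fires on a specific case (a penguin)
# with no mechanism for exceptions -- the failure mode described above.

rules = {"bird": ["can_fly"]}  # general rule, no exception handling

facts = {
    "tweety": ["bird"],
    "pingu": ["bird", "penguin"],  # penguins are birds, but cannot fly
}

def infer(entity):
    """Apply every general rule triggered by the entity's categories."""
    conclusions = []
    for category in facts[entity]:
        conclusions.extend(rules.get(category, []))
    return conclusions

print(infer("pingu"))  # ['can_fly'] -- the rule over-generalizes
```

The point of the sketch is that nothing in the rule machinery distinguishes a case the rule covers from one it merely superficially matches.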
On Thu, Aug 14, 2008 at 10:25:57AM +0100, Bob Mottram wrote:
I doubt that there will be much practical application of biological
neuron powered robots, since the overhead of keeping the biology alive
would be too troublesome (requiring feeding and removal of waste products),
Actually, better
2008/8/14 Mike Tintner [EMAIL PROTECTED]:
What it comes down to is: what can you learn about any object[s] from flat
drawings of them? Cardboard cutouts?
This is essentially the same problem as in computer vision. The
objects that you're looking at are three dimensional, but a camera
image is
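The 3-D-to-2-D loss mentioned here can be shown with a minimal pinhole-projection sketch. The focal length and point coordinates are arbitrary example values: infinitely many 3-D points along one ray collapse to the same image point, which is exactly the information a single camera image discards.

```python
# Minimal pinhole-camera projection: a 3D point (x, y, z) maps to the
# image plane at (f*x/z, f*y/z).  Depth z is lost in the projection --
# many 3D points share one 2D image point.  Values are illustrative.

def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

p1 = project((1.0, 2.0, 4.0))
p2 = project((2.0, 4.0, 8.0))  # same ray, twice the depth
assert p1 == p2                # both collapse to the same 2D point
print(p1)                      # (0.25, 0.5)
```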
I realized that I made a very important error in my brief description
of prejudice. Prejudice is the inappropriate application of
over-generalizations, typically critical ones, to a group. The
prejudice is triggered by a superficial characteristic that
most of the members
This is also a problem in animal vision. Each eye is 2-D. (That is
not entirely true, but from a practical point of view it is true.)
As far as Flatland or Hollywood land goes, we only live on the earth, so
that means you can't understand anything about space, right?
Well, your ideas about the
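The "each eye is 2-D" observation above is what makes stereo vision interesting: two flat images recover depth from disparity. Here is a minimal sketch of the standard relation z = f * B / d; the focal length, baseline, and disparity values are made-up illustrative numbers, not measurements.

```python
# Depth from stereo disparity: z = f * B / d, where f is the focal
# length in pixels, B the baseline between the two eyes/cameras, and
# d the disparity (pixel shift of the same feature between the two
# images).  All numbers below are arbitrary illustrations.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

# Roughly eye-like 6.5 cm baseline; assumed focal length and disparity.
z = depth_from_disparity(700.0, 0.065, 35.0)
print(round(z, 2))  # 1.3 (metres)
```

Larger disparity means nearer objects, which is why two 2-D sensors suffice to recover the third dimension for anything that appears in both views.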
Jim: This is also a problem in animal vision. Each eye is 2-D. (That is
not entirely true, but from a practical point of view it is true.)
As far as Flatland or Hollywood land goes, we only live on the earth, so
that means you can't understand anything about space, right?
Logic running wild,
On Thu, Aug 14, 2008 at 6:59 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Jim: I know that
there are no solid reasons to believe that some kind of embodiment is
absolutely necessary for the advancement of AGI.
I want to concentrate on one dimension of this: precisely the solid
dimension. My
2008/8/14 Ciro Aisa [EMAIL PROTECTED]:
On Thu, Aug 14, 2008 at 10:25:57AM +0100, Bob Mottram wrote:
I doubt that there will be much practical application of biological
neuron powered robots, since the overhead of keeping the biology alive
would be too troublesome (requiring feeding and removal
Having information about all the details of 3D scenes leaves the agent
about as limited as having only 2D camera snapshots, or verbal
descriptions, if it is not able to extract a language of causal models
from this information. A static description of a scene, however precise,
is no use if you can
Ben: as discussed already ad nauseam, I do not think that robust
perception/action is necessarily the best place to start in making an AGI.
However, our current work on embodying Novamente and OpenCog does involve 3D
virtual worlds ... and, of course, my planned work with Xiamen University
Well I am definitely a philosopher-scientist and not a PR guy ;-)
Perhaps the confusion is just that I don't think there is any one
exclusively correct approach. I think 3D robotic or virtual embodiment are
very convenient approaches so I am following these paths w/ OpenCog and
Novamente (with
2008/8/14 Mike Tintner [EMAIL PROTECTED]:
But - correct me - when you engineer the 3D shape, you are merely applying
previous, existing knowledge about other objects to do so - which is a useful
but narrow AI function. You are not actually discovering anything new about
this particular object?
This looks like it could be an interesting thread.
However, I disagree with your distinction between ad hoc and post hoc.
The programmer may see things from the high-level maze view, but the
program itself typically deals with the mess. So, I don't think
there is a real distinction to be made
Of course I have considered these issues before.
On Thu, Aug 14, 2008 at 8:41 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Jim: This is also a problem in animal vision. Each eye is 2-D. (That is
not entirely true, but from a practical point of view it is true.)
As far as Flatland or Hollywood
Sorry if my phrasing or tone were off, I probably shoulda got more sleep
last night!
I am not at all frustrated by discussions of the specific role that 3D
perception and visualization plays in human or humanlike cognition. Very
interesting, worthwhile topic!
I get frustrated by
On Thu, Aug 14, 2008 at 12:59 PM, Abram Demski [EMAIL PROTECTED] wrote:
A more worrisome problem is that B may be contradictory in and of
itself. If (1) I can as a human meaningfully explain logical system X,
and (2) logical system X can meaningfully explain anything that humans
can, then (3)
Jim,
You are right to call me on that. I need to provide an argument that,
if no logic satisfying B exists, human-level AGI is impossible.
B1: A foundational logic for a human-level intelligence should be
capable of expressing any concept that a human can meaningfully
express.
If a broad enough
On Thu, Aug 14, 2008 at 3:06 PM, Abram Demski [EMAIL PROTECTED] wrote:
Jim,
You are right to call me on that. I need to provide an argument that,
if no logic satisfying B exists, human-level AGI is impossible.
I don't know why I am being so aggressive these days. I don't start
out intending
The paradox (I assume that is what you were pointing to) is based on
your idealized presentation. Not only was your presentation
idealized, but it was also exaggerated.
I sometimes wonder why idealizations can be so effective in some
cases. An idealization is actually an imperfect way of
On Thu, Aug 14, 2008 at 4:26 PM, Jim Bromer [EMAIL PROTECTED] wrote:
On Thu, Aug 14, 2008 at 3:06 PM, Abram Demski [EMAIL PROTECTED] wrote:
Jim,
You are right to call me on that. I need to provide an argument that,
if no logic satisfying B exists, human-level AGI is impossible.
I don't know
2008/8/14 Ciro Aisa [EMAIL PROTECTED]:
On Thu, Aug 14, 2008 at 10:25:57AM +0100, Bob Mottram wrote:
I doubt that there will be much practical application of biological
neuron powered robots, since the overhead of keeping the biology alive
would be too troublesome (requiring feeding and
The training issue is a real one, but presumably over time electronics that
would be part of these wetware/hardware combination brains could be
developed to train the wetware/hardware machines --- under the control and
guidance of external systems at the factory --- relatively rapidly, so that
in say