Valentina wrote:
>Sorry if I'm commenting a little late to this: just read the thread. Here
>is a question. I assume we all agree that intelligence can be defined as
>the ability to achieve goals. My question concerns the establishment of
>those goals. As human beings we move in a world of limitations (life span,
>ethical laws, etc.) and have inherent goals (pleasure vs pain) given by
>evolution. An AGI in a different embodiment might not have any of that,
>just a pure meta system of obtaining goals, which I assume, we partly give
>the AGI and partly it establishes. Now, as I understand, the point of
>Singularity is that of building an AGI more intelligent than humans so it
>could solve problems for us that we cannot solve. That entails that the
>goal system of the AGI and ours must be interconnected somehow. I find it
>difficult to understand how that can be achieved with an AGI with a
>different type of embodyment. I.e. planes are great in achieving flights,
>but are quite useless to birds as their goal system is quite different.
>Can anyone clarify?

And in a post a little later clarified the question:
>[...] I am asking what is the use of a non-embodied AGI, given
>it would necessarily have a different goal system from that of humans [...]

Well, there is a cultural notion of an oracle AI: a big brain in a box
that you can ask questions.  Honestly, given the way people use Google,
we almost have that now!  It's a perfectly good and worthwhile goal
for an AGI system.  It doesn't need a body itself, but maybe it has
a way of understanding or modeling what a body might do out in the
real world, and so come up with answers to questions about physical
things.  Maybe simulations are good enough.  Or maybe you would want
to say that simulated embodiment is itself a significant kind of
embodiment.

It could be, though, that embodiment is part of what people mean
by intelligence.  I personally suspect that might be a hidden sort
of gotcha lurking in the field of AGI.  It keeps happening that
computers become able to do something that once seemed sufficient
for intelligence, only for someone to say, "well, they aren't
intelligent because they can't do this X here," and you don't know
whether it will eventually come down to something involving reaching
out into the world and doing something.  There was that proposed
coffee test, for instance: a machine going into a stranger's house
and making a cup of coffee.

And it seems to me that people use the idea of intelligence as
something that distinguishes people from other animals.  We can do
things that animals can't, and it isn't necessarily a single, real
thing; we just have a loose word, "intelligence," that tries to
get at that difference.  Maybe "intelligence" isn't one specific
thing, but rather a collection of different features that work
together, and not always the same collection.  There's just
this difference.  But if intelligence is some difference between
us and animals, then intelligence may presuppose skills and abilities
that animals have, specifically sensation and movement, or more
generally, embodiment.  That is, maybe embodiment really is part
of what people mean by intelligence after all.

People often seem to assume that embodiment has to be roughly
humanoid to really count, but I like to think that a more natural
environment for a computer-based intelligence is the computer
desktop.  It would certainly be possible for an intelligent program
to run the other programs that we use.  We are intelligent, and we
use computer programs; if you had an intelligent program, wouldn't
the most natural first thing for it to do be running programs in an
environment close to its own?  And that seems to me like at least
some form of embodiment: there is interaction, there is an effect
on the world, and there are things to sense, in a way.  Since it's
all computer data, it arrives in a form that is already well
digested for the program.  Of course, the real world is messy, and
maybe dealing with that mess is an important part of intelligence;
but if that isn't always true, you really could have a desktop AGI
in some useful sense.  Also, with video and phone hardware attached,
desktops do have real access to the world, so they need not be
completely isolated.
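As a toy illustration of that "desktop as environment" idea (the names
and structure here are entirely my own invention, not anyone's actual
AGI design), a program whose senses and actions are simply other
programs might be sketched like this:

```python
# Hypothetical sketch: an agent whose "percepts" are program outputs
# and whose "actions" are program invocations.  The desktop/shell is
# the environment; everything arrives as already-digested text.
import subprocess

def sense(command):
    """Observe the environment by capturing another program's output."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.stdout

def act(command):
    """Affect the environment by running a program; report success."""
    return subprocess.run(command).returncode

# The agent perceives by reading text another program produces...
percept = sense(["echo", "hello from the environment"])
# ...and acts by launching programs of its own.
status = act(["true"])
print(percept.strip(), status)
```

Nothing here is intelligent, of course; the point is only that sensing
and acting within a computing environment needs no camera or limbs.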

And of course, extending from the desktop to robots is a natural
step.  Plenty of robots are controlled directly by people through
computers (though generally via little controller devices).  If you
had an intelligent program that controlled other programs, it could
simply step into those existing systems and control the robots.
Would that mean it is then embodied when it wasn't before?  At that
point, I'm not sure how valuable the concept of embodiment really is.

andi


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Powered by Listbox: http://www.listbox.com
