Terren Suydam wrote:
Brad,
I'm not entirely certain this was directed to me, since it seems to be a
response to both things I said and things Mike Tintner said. My comments below,
where (hopefully) appropriate.
--- On Mon, 8/4/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:
Ah, excuse me. Don't humans (i.e., computer programmers, script writers) "ground" virtual reality worlds? Isn't that just a way of "simulating" human (or some other abstract) reality?
Humans create simulated environments, but I'm not sure what you mean when you say humans "ground" them. If I have some kind of intelligent agent running around in a simulation, then that simulation *is* the agent's "real world". Consider, for instance, that we could actually be intelligent agents in some super-intelligent alien's simulation. We have no access to that alien's world - all we know is what we perceive through our senses. What would it add to say that the alien who created the simulation "grounds" our world?
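Terren's point - that the simulation simply *is* the agent's real world - can be sketched in a few lines of code. This is a toy illustration with invented names (GridSim, Agent, sense, and step are all hypothetical, not anyone's actual system): the agent's entire epistemic access to its world is whatever its sensor interface returns, while the simulator's hidden state plays the role of the inaccessible "alien's world".

```python
class GridSim:
    """A minimal simulated world: the agent sits on a 1-D line of cells."""
    def __init__(self, cells, start=0):
        self._cells = cells   # hidden "alien-level" state the agent never sees
        self._pos = start

    def sense(self):
        # The agent's entire epistemic access: the symbol under it.
        return self._cells[self._pos]

    def step(self, move):
        # Clamp movement to the ends of the line.
        self._pos = max(0, min(len(self._cells) - 1, self._pos + move))


class Agent:
    """Grounds its symbols only in what sense() delivers - nothing else."""
    def __init__(self, sim):
        self._sim = sim
        self.memory = []

    def act(self):
        percept = self._sim.sense()   # experience, not description
        self.memory.append(percept)
        self._sim.step(+1)            # trivial policy: move right


sim = GridSim(["a", "b", "c"])
agent = Agent(sim)
for _ in range(3):
    agent.act()
# agent.memory is built solely from percepts: ["a", "b", "c"]
```

From inside the loop, nothing distinguishes "real" cells from simulated ones - which is the sense in which asking who "grounds" the simulation adds nothing for the agent.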
Terren,
My comments were directed to the thread, not to you personally. For future reference, I always begin comments directed at specific persons in a thread by addressing those persons by name. I wish everyone would. Absent such a personal salutation, assume my comments are meant for all readers of, and posters to, the thread.
Unfortunately, there is no single definition of the word "grounding" (or "symbol
grounding") that is widely accepted in the AI (or Cognitive Science) community.
It is, however, fairly well acknowledged that:
"The symbols in an autonomous hybrid symbolic+sensorimotor system -- a
Turing-scale robot consisting of both a symbol system and a sensorimotor system
that reliably connects its internal symbols to the external objects they refer
to, so it can interact with them Turing-indistinguishably from the way a person
does -- would be grounded. But whether its symbols would have meaning rather
than just grounding is something that even the robotic Turing Test -- hence
cognitive science itself -- cannot determine, or explain."
(http://en.wikipedia.org/wiki/Symbol_grounding).
To me, the key phrase in that quote is that grounding may be achieved if an AGI "...interacts Turing-indistinguishably from the way a person does...". And, of course, the last sentence puts even the accomplishment of that goal in doubt, to the extent that grounding a symbol may not necessarily give it "meaning." So far, we have found no way to tell. But I think most folks on this list would be awestruck by an AGI that could pass such a test next week, next year, or next decade. As things currently stand, fifty years post-Turing, we aren't even close.
I think the Turing test has been holding back AI/AGI for decades. It sets up a
nearly impossible (in fact, it may *be* impossible) standard by which all AI
efforts are (and have been) judged. Sometimes I think Turing was playing a
cruel joke ("I'll make sure no one is able to build an AI that anyone will take
seriously by defining a test that sounds reasonable but is, in fact, impossible
to pass. Now, that's comedy!"). I know of no law that says an AGI *must* be
Turing-indistinguishable from a human in the way it interacts with the world
(and real humans living in it). In my view of AGI, there is only a need for the
AGI to be grounded in *some* world (including worlds that can only be described)
that is *compatible* with ours. This does not *require* human-like senses
(indeed, it may not require *any* senses) nor does this require the ability to
pass the Turing test.
How is grounding using AGI-human interaction different from getting the experiential information from a third party once removed (i.e., from the virtual reality program's programmer)? Except that the former method might be more direct and efficient.
Experiential information cannot be given, it must be experienced, in exactly the same way that you cannot give a virgin the experience of sex by talking about it. Grounding requires experience (otherwise the term is meaningless). An agent in a virtual environment experiences that environment in the same way that we experience ours. There is no fundamental difference there.
I disagree. You can give a virgin an understanding of the experience of having sex by talking about (i.e., describing) it. Do you really want to throw the human emotion of empathy out with the bathwater? I think it's a rather important emotion. It means that one human being can understand the feelings of another without first having personally experienced what gave rise to those feelings. You don't have to be a rape victim to understand rape. You don't have to be a black person to understand racism and prejudice. I could go on and on. But that's not my point (just a counter to yours). Here's my point...
There is an implicit assumption in everything you've said in your reply. It is
that an AGI, to be an AGI, must be able to experience the human (or VR avatar)
world in a way identical (or, at least, very similar) to how a real human (or VR
avatar) does. Take away that assumption and you have no argument. So, I
repeat: there is *no rule* written *anywhere* that says this is a correct (or
beneficial) assumption. Turing was not a deity. He could be wrong. In this
case, I submit, he was wrong -- and tragically so. Whatever his intentions,
that one paper has retarded the growth of AI for decades by sending the knights
out to slay the wrong dragon.
AGIs don't have to think, act, or experience anything (exactly) like humans do. They will be extremely helpful to humanity if they think, act, and experience in a way that is *compatible* with the way humans think, act, and experience. We can endow them with the ability to acquire knowledge and reason about it in a way that is *compatible with, but not necessarily identical to,* the way we do. When designing and developing an AGI that will be implemented using available computer technology, we should be concentrating 99% of our collective effort on leveraging the things those computers do well. Now (not 10, 20, or 30 years in the future). Where this intersects with something humans also do, that's a bonus, not, IMHO, the primary goal.
People blind or deaf from birth probably have a very different "internal idea" (grounding) of colors or sounds (respectively) than people born with normal vision and hearing. That doesn't mean they can't productively interact with the latter group. It happens every day.
Of course. But congenitally blind people still do experience through other senses. Their constructions are still grounded in something, if without the benefit of visual gestalt. They can communicate effectively with others in spite of their handicap to the extent they can relate others' meanings to the experience they do have, limited however it is.
You just made my point.
It's also not fair to use Harry's statements and expose them to Vlad's requests for clarification as a counterexample of "not grounding." Vlad and Harry are just human. Humans get tired, don't feel well (headaches, etc.). There are a multitude of things that could cause a human to write "fuzzily" (or, perhaps, for Vlad to read or think "fuzzily"). The AGI a human creates can, however, be built to not suffer from "fuzziness" when describing things it believes or knows (without having to be grounded in human reality through direct self-experience). In that case, Vlad would not have to ask for clarification, and that "test" goes out the window.
This was Mike's rebuttal, no comments here.
I'm not sure what you mean. I wrote the above paragraph. And the following
paragraph.
Grounding is a potential problem IFF your AGI is, actually, an AGHI, where the H stands for Human. There's nothing wrong with borrowing the good features of human intelligence, but an uncritical aping of all aspects of human intelligence just because we think highly of ourselves is doomed. At least I hope it is. Frankly, the possibility of an AGHI scares the crap out of me. Personally, I'm in this to build an AGI that is about as far from a human copy (with or without improvements) as possible. Better, faster, less prone to breakdown. And, eventually, a whole lot smarter.
I'm not advocating uncritical aping of humans, but I think grounding
would be an issue for all possible minds. The foundational insight of
constructivist philosophy is that any and all knowledge must be structured,
ultimately, within a center of experience. Knowledge does not exist,
period, except within a mind. A poem read by a hundred minds results in
a hundred different constructions.
Well, it's a good thing there are alternative philosophies to constructivism, isn't it? ;-) When I think about constructivism, the phrase "learn by (supervised) doing" comes immediately to mind. As a former teacher (college-level), I can tell you from my own experience that humans definitely learn much more (more deeply, at least) by doing than by any other method currently known to educational science. But they also (and usually initially) must learn from description (i.e., hearing about personal or collective experiences -- some of which will be negative -- so they don't have to repeat them). The shop teacher who gives a high school kid a hunk of metal, points to the band saw, and says, "Go ahead, learn by doing," will be lucky to be making burgers at McDonald's after the first student's hand goes flying across the room.
But, again, from an AGI point of view...
What's a good way for humans to learn may be an absolutely horrible, costly, and
inefficient way for an AGI to learn. IMHO, requiring that an AGI have
human-like senses, so it can learn by doing, is an extremely costly and
inefficient way for an AGI to acquire and process knowledge. We must *always*
start with the strengths of the deployment platform. Learning by doing is not,
again IMHO, the best way to go when the deployment platform is a computer.
We don't need no stinkin' grounding.
I disagree, of course, but approaches to AI that don't involve embodiment
or internally grounded knowledge might be successful in unexpected ways even
if they never achieve the status of AGI. Obviously some existing narrow
approaches have become extremely valuable (of course, they're no longer
considered AI ;-) ).
Tell me about it. :-) I built expert system shells and engines for a living in the 1980s. When I started, expert systems were lauded as the first example of commercially successful AI. Five years later, expert systems were "just another computer program." Sic transit gloria mundi!
Cheers,
Brad
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com