Terren Suydam wrote:
I don't know, how do you do it? :-] A human baby that grows up with virtual
reality hardware surgically implanted (never to experience anything but a
virtual reality) will have the same issues, right?
There is no difference in principle between "real" reality and virtual reality. All we have is our senses and
our ability to internally structure a world from the data that comes from them. That is how an AGI must do it
too: internally structure a world. To a virtually-embodied AGI, the "virtual" world would be its "real" world.
It wouldn't have access to our "real" world.
Terren
--- On Mon, 8/4/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
How will the virtual AGI distinguish between what is virtual and real, and
whether any information in any medium presents a "realistic picture," "good
likeness," is "true to life" or "a gross distortion," and whether any
proposal will "really work" or whether it itself is "grounded" or a
"fictional character"?
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com
Ah, excuse me. Don't humans (i.e., computer programmers, script writers)
"ground" virtual reality worlds? Isn't that just a way of "simulating" human
(or some other abstract) reality?
If so, why not simply tell the AGI what it needs to know about the physical
world? Better yet (for human sensibilities), why not simply program the AGI to
ask a human when it determines it needs further information? How is grounding
through AGI-human interaction different from getting the experiential information
from a third party once removed (i.e., from the virtual reality program's
programmer), except that the former method might be more direct and efficient?
People blind or deaf from birth probably have a very different "internal idea"
(grounding) of colors or sounds (respectively) than people born with normal
vision and hearing. That doesn't mean they can't productively interact with the
latter group. It happens every day.
It's also not fair to use Harry's statements, and Vlad's requests for
clarification of them, as a counterexample demonstrating "not grounding." Vlad and Harry are
just human. Humans get tired, don't feel well (headaches, etc.). There are a
multitude of things that could cause a human to write "fuzzily" (or, perhaps,
for Vlad to read or think "fuzzily"). The AGI a human creates can, however, be
built to not suffer from "fuzziness" when describing things it believes or knows
(without having to be grounded in human reality through direct self-experience).
In that case, Vlad would not have to ask for clarification and that "test"
goes out the window.
Grounding is a potential problem IFF your AGI is, actually, an AGHI, where the H
stands for Human. There's nothing wrong with borrowing the good features of
human intelligence, but an uncritical aping of all aspects of human intelligence
just because we think highly of ourselves is doomed. At least I hope it is.
Frankly, the possibility of an AGHI scares the crap out of me. Personally, I'm
in this to build an AGI that is about as far from a human copy (with or without
improvements) as possible. Better, faster, less prone to breakdown. And,
eventually, a whole lot smarter.
We don't need no stinkin' grounding.
Cheers,
Brad