Brad,
I'm not entirely certain this was directed to me, since it seems to respond
both to things I said and to things Mike Tintner said. My comments are below,
where (hopefully) appropriate.
--- On Mon, 8/4/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:
> Ah, excuse me. Don't humans (i.e., computer
> programmers, script writers)
> "ground" virtual reality worlds? Isn't that
> just a way of "simulating" human
> (or some other abstract) reality?
Humans create simulated environments, but I'm not sure what you mean when you
say humans "ground" them. If I have some kind of intelligent agent running
around in a simulation, then that simulation *is* the agent's "real world".
Consider for instance that we could actually be intelligent agents in some
super-intelligent alien's simulation. We have no access to that alien's world -
all we know is what we perceive through our senses. What would it add to say
that the alien who created the simulation "grounds" our world?
> How is grounding
> using AGI-human interaction different from getting the
> experiential information
> from a third-party once removed (i.e., from the virtual
> reality program's
> programmer)? Except that the former method might be more
> direct and efficient.
Experiential information cannot be given, it must be experienced, in exactly
the same way that you cannot give a virgin the experience of sex by talking
about it. Grounding requires experience (otherwise the term is meaningless). An
agent in a virtual environment experiences that environment in the same way
that we experience ours. There is no fundamental difference there.
> People blind or deaf from birth probably have a very
> different "internal idea"
> (grounding) of colors or sounds (respectively) than people
> born with normal
> vision and hearing. That doesn't mean they can't
> productively interact with the
> latter group. It happens every day.
Of course. But congenitally blind people still experience the world through
their other senses. Their constructions are still grounded in something, even
without the benefit of visual gestalt. They can communicate effectively with
others in spite of their handicap, to the extent that they can relate others'
meanings to the experience they do have, however limited it is.
> It's also not fair to use Harry's statements and
> expose them to Vlad's requests
> for clarification as a counterexample of "not
> grounding." Vlad and Harry are
> just human. Humans get tired, don't feel well
> (headaches, etc.). There are a
> multitude of things that could cause a human to write
> "fuzzily" (or, perhaps,
> for Vlad to read or think "fuzzily"). The AGI a
> human creates can, however, be
> built to not suffer from "fuzziness" when
> describing things it believes or knows
> (without having to be grounded in human reality through
> direct self-experience).
> In that case, Vlad would not have to ask for
> clarification and that "test"
> goes out the window.
This was Mike's rebuttal, no comments here.
> Grounding is a potential problem IFF your AGI is, actually,
> an AGHI, where the H
> stands for Human. There's nothing wrong with borrowing
> the good features of
> human intelligence, but an uncritical aping of all aspects
> of human intelligence
> just because we think highly of ourselves is doomed. At
> least I hope it is.
> Frankly, the possibility of an AGHI scares the crap out of
> me. Personally, I'm
> in this to build an AGI that is about as far from a human
> copy (with or without
> improvements) as possible. Better, faster, less prone to
> breakdown. And,
> eventually, a whole lot smarter.
I'm not advocating uncritical aping of humans, but I think grounding would be
an issue for all possible minds. The foundational insight of constructivist
philosophy is that any and all knowledge must be structured, ultimately, within
a center of experience. Knowledge does not exist, period, except within a mind.
A poem read by a hundred minds results in a hundred different constructions.
> We don't need no stinkin' grounding.
I disagree, of course, but approaches to AI that don't involve embodiment or
internally grounded knowledge might be successful in unexpected ways even if
they never achieve the status of AGI. Obviously some existing narrow approaches
have become extremely valuable (of course, they're no longer considered AI ;-)).
Terren
> Cheers,
>
> Brad
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com