comments below...
--- On Sat, 8/23/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> The last post by Eliezer provides handy imagery for this point
> (http://www.overcomingbias.com/2008/08/mirrors-and-pai.html). You
> can't have an AI of perfect emptiness, without any goals at all,
> because it won't start doing *anything*, or anything right, unless
> the urge is already there
> (http://www.overcomingbias.com/2008/06/no-universally.html).
Of course, that's what the evolutionary process is for: you use selective
pressure to shape the behavior of the agents. The way I imagine it, you start
with very primitive intelligences and increase the difficulty of the simulation
to elicit increasingly intelligent behavior.
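As a rough illustration of what I mean, here is a toy sketch of that loop: a population under selective pressure, with the simulation's difficulty ratcheted up whenever the best agent masters the current level. The task, thresholds, and parameters are all invented for illustration, not a proposal for the real thing.

```python
import random

def fitness(genome, difficulty):
    # Toy task: reward genomes whose weights sum close to a target
    # that grows with difficulty. Purely illustrative.
    return -abs(sum(genome) - difficulty)

def evolve(pop_size=30, genome_len=5, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    difficulty = 1.0
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(g, difficulty),
                        reverse=True)
        # Selective pressure: the top half survives and reproduces.
        survivors = scored[:pop_size // 2]
        children = [[w + rng.gauss(0, 0.1) for w in parent]  # mutation
                    for parent in survivors]
        pop = survivors + children
        # Ratchet up difficulty once the current level is mastered.
        if fitness(scored[0], difficulty) > -0.1:
            difficulty += 1.0
    return difficulty

final_difficulty = evolve()
```

The point of the escalating `difficulty` is that the environment, not the designer, sets the bar the agents must clear: the selective criterion stays fixed while the problem it is applied to gets harder.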
> But you can have an AI that has a bootstrapping mechanism that tells
> it where to look for goal content, tells it to absorb it and embrace it.
Yes, but in this scenario, the AI does not structure the goals itself. It is
not fully embodied. Of course, we will probably argue about how important that
is.
> Evolution has nothing to do with it, except in the sense that it was
> one process that implemented the bedrock of goal system, making a
> first step that initiated any kind of moral progress. But evolution
> certainly isn't an adequate way to proceed from now on.
I assume you make this assertion based on how much time/computation would be
required, and on the lack of control we have over the process. In other words,
at the end of this process we could never have a provably friendly AI. We
cannot dictate its morals, any more than we can dictate morals to our fellow
humans.
However, going down the path of "provably friendly AI" is fraught with its own
concerns. Going into what those concerns are is a whole different topic, but
for me that road is a dead end.
> Basically, non-embodied interaction as you described it is
> extracognitive interaction, workaround that doesn't comply with a
> protocol established by cognitive algorithm. If you can do that, fine,
> but cognitive algorithm is there precisely because we can't build a
> mature AI by hand, by directly reaching into the AGI's mind, it needs
> a subcognitive process that will assemble its cognition for us. It is
> basically the same problem with general intelligence and with
> Friendliness: you can neither assemble an AGI that already knows all
> the stuff and possesses human-level skills, nor an AGI that has proper
> humane goals. You can only create a metacognitive metamoral process
> that will collect both from the environment.
I'm not trying to wave a magic wand and pretend we can conjure something
intelligent out of thin air. Of course there needs to be some underlying
cognitive process... did something I say lead you to believe I thought
otherwise?
I'm saying that we don't specify that process. We let it emerge through large
numbers of generations of simulated evolution. Now that's going to be a very
unpopular idea in this forum, but it comes out of what I think are valid
philosophical criticisms of designed (or metacognitive/metamoral if you wish)
intelligence.
Terren