Hi Eliezer,

It looks like Williams' book is more about the perils of Asimov's
Laws than about hard-wiring. As logical constraints, Asimov's Laws
suffer from the grounding problem, as does any analysis of brains
as purely logical. Brains are statistical
(or, if you prefer, "fuzzy"), and logic must emerge from statistical
processes. That is, symbols must be grounded in sensory experience,
reason and planning must be grounded in learning, and goals must be
grounded in values.
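
To make the statistical-to-logical step concrete, here is a
minimal sketch in Python (the names and numbers are hypothetical
illustrations, not from any real system): a discrete symbol is
asserted only when an aggregate of many noisy sensory readings
crosses a threshold.

  # Minimal sketch: a discrete "symbol" grounded in noisy sensory
  # statistics.  All names here are hypothetical illustrations.
  import random

  def sense_brightness(n_samples=100):
      """Return the mean of many noisy sensory readings."""
      true_brightness = 0.8
      readings = [true_brightness + random.gauss(0, 0.2)
                  for _ in range(n_samples)]
      return sum(readings) / len(readings)

  def symbol_is_bright(threshold=0.5):
      """The boolean symbol 'bright' emerges as a statistical
      summary of many fuzzy measurements."""
      return sense_brightness() > threshold

  # Logic (the boolean symbol) is grounded in statistics (the samples).
  print("bright?", symbol_is_bright())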

Also, while I advocate hard-wiring certain values into intelligent
machines, I recognize that such machines will evolve (there is a
section on "Evolving God" in my book). And as Ben says, once things
evolve there can be no absolute guarantees. But I think that a
machine whose primary values are for the happiness of all humans
will not learn behaviors that work against human interests. Ask
any mother whether she would rewire her brain to want to eat her
children. Designing machines with primary values for the happiness
of all humans essentially defers their values to the values of
humans, so that machine values will adapt to evolving circumstances
as human values adapt.
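
As a toy illustration of that deferral (hypothetical names and a
deliberately simplified loop, not the design in my book): the
reward function is fixed and hard-wired, but the happiness model
it consults is learned from human feedback, so the machine's
effective values track human values as they change.

  # Toy sketch of hard-wired values that defer to humans.  The
  # reward function never changes; the happiness model it consults
  # is learned, so effective values track human values.  All names
  # are hypothetical illustrations.

  class HappinessModel:
      """Learned estimate of human happiness."""
      def __init__(self):
          self.estimate = 0.5

      def update(self, human_feedback, rate=0.1):
          # Move the estimate toward the latest human feedback.
          self.estimate += rate * (human_feedback - self.estimate)

  def hard_wired_reward(model):
      """The fixed part: reward is always the current estimate
      of human happiness, and nothing else."""
      return model.estimate

  model = HappinessModel()
  for feedback in [0.9, 0.8, 0.95]:   # humans report how they feel
      model.update(feedback)
      print("reward:", hard_wired_reward(model))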

Cheers,
Bill

On Tue, 14 Jan 2003, Eliezer S. Yudkowsky wrote:

> Roger "localroger" Williams has published online at kuro5hin his novel
> "The Metamorphosis of Prime Intellect", the first member of the new "Seed
> AI Programmer Screws Up Friendliness" genre of science fiction.  I would
> like to recommend it to, at the very least, Ben Goertzel, Bill Hibbard,
> and Kevin Copple, since the novel not only illustrates vividly some of the
> problems with "hard-wiring", but also some of the problems that
> "experiential learning" as an answer to hard-wiring does *not* solve.  One
> of the Big Lessons in AI is that just because you've solved one piece of
> the problem doesn't mean you can stop there.
>
> http://www.kuro5hin.org/prime-intellect
>
> Roger Williams has stressed for the record that this story is meant to
> emphasize the *importance* of thinking through the Singularity, not to
> predict dystopia; also that the story was written in 1994, and set
> in 1988; hence, if the story fails to make any mention of more recent
> thinking on the Singularity, it may be excused.
>
> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
