Hi David,

> The problem here, I guess, is the conflict between Platonic expectations of
> perfection and the messiness of the real world.

I never said perfection, and in my book I make it clear that
the task of a super-intelligent machine learning behaviors
to promote human happiness will be very messy. That's why
it needs to be super-intelligent.

> The only systems we know of that generate the most happiness, freedom, and
> prosperity are democracy and free enterprise.  Both systems are messy and
> far from perfect.  They both generate a lot of unhappiness and poverty in
> their operation.  Both need regulation and control mechanisms (rule of law)
> to inhibit their unrestricted action.  The goal is to find a balance between
> the social justice goal of wealth redistribution and the social welfare goal
> of wealth generation through unrestricted innovation.  They in a sense need
> the messiness in order to generate the benefits; designing systems to
> generate happiness has always been a recipe for totalitarianism.  When the
> system does not allow balance or failure, when no company, say, can go
> bankrupt or fail, then no company can succeed, change, or take risks.  That's
> socialism, and that's what's wrong with it.
>
> The problem with the issue we are discussing here is that the worst-case
> scenario for handing power to unrestricted, super-capable AI entities is
> very bad, indeed.  So what we are looking for is not really building an
> ethical structure or moral sense at all.  Failure is not an option.  The
> only way to prevent the worst-case scenarios that have been mentioned by
> discussants is not to design moral values and hope, but to build in
> hard-wired, Three Laws-type rules that cannot be overridden.  And then, on
> top of that, build in social, competitive systems that use the presence of
> multiple AIs, dependence on humans as suppliers or intermediaries, ethical,
> legal, and even game-theory (remember the movie /War Games/?) strictures,
> and even punishment systems up to and including shut-down capabilities.

The problem with laws is that they are inevitably ambiguous.
They are analogous to the expert system approach to AI, which
cannot cope with the messiness of the real world. Human laws
require intelligent judges to resolve their ambiguities. Who
will supply the intelligent judgement for applying laws to
super-intelligent machines?

I agree wholeheartedly that the stakes are high, but I think
the safer approach is to build ethics into the fundamental
driver of super-intelligent machines, which will be their
reinforcement values.
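
To make the distinction concrete, here is a toy Python sketch
(purely illustrative; the rule predicate and the reward signal
are made up, with the reward standing in for learned recognition
of human happiness). A hard-wired rule is a fixed predicate that
someone must disambiguate at design time, while reinforcement
values shape behavior through what the agent learns to want.

# Toy illustration: a hard-wired rule versus reinforcement values
# as the agent's fundamental driver.

import random

ACTIONS = ["help", "ignore", "harm"]

# Approach 1: hard-wired rule.
# The rule is a fixed predicate; any ambiguity in "harm" must be
# resolved by whoever wrote it, before the agent ever acts.
def rule_allows(action):
    return action != "harm"   # brittle: only catches the literal case

# Approach 2: reinforcement values.
# The agent's values are its reward signal; this stand-in function
# plays the role of learned recognition of human happiness.
def happiness_reward(action):
    return {"help": 1.0, "ignore": 0.0, "harm": -1.0}[action]

def learn_values(episodes=1000, epsilon=0.1, alpha=0.1):
    """Simple bandit-style value learning over the toy action set."""
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)        # explore
        else:
            a = max(values, key=values.get)   # exploit current values
        r = happiness_reward(a)
        values[a] += alpha * (r - values[a])  # incremental value update
    return values

if __name__ == "__main__":
    print("Rule filter permits:", [a for a in ACTIONS if rule_allows(a)])
    learned = learn_values()
    print("Learned values:     ",
          {a: round(v, 2) for a, v in learned.items()})

The toy example itself does not matter; what matters is where
the ambiguity lives. In the rule it is frozen at design time,
while in the values it is resolved by learning, which is exactly
what a super-intelligent machine is good at.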

Cheers,
Bill
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI  53706
[EMAIL PROTECTED]  608-263-4427  fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html
