On 10/26/07, Benjamin Goertzel wrote:
>
> My understanding is that it's more like this (taking some liberties)
>
> X0 = me
> Y0 = what X0 thinks is good for the world
> X1 = what X0 wants to be
> Y1 = what X1 would think is good for the world
> X2 = what X1 would want to be
> Y2 = what X2 would think is good for the world.
> ...
>
> You can sensibly argue that CEV is poorly-defined, or that it's
> not likely to converge to anything given a normal human as an initial
> condition, or that it is likely to converge to totally different things
> for different sorts of people, giving no clear message...
>
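Taken literally, that list is an iterated extrapolation. Just to make the structure concrete, here is a toy Python sketch, where idealize() and endorse_values() are purely hypothetical stand-ins for "what X wants to be" and "what X thinks is good for the world"; whether such a loop ever settles is exactly the convergence question raised above.

def extrapolate(person, idealize, endorse_values, max_steps=100):
    """Iterate X(n+1) = idealize(X(n)) and Y(n) = endorse_values(X(n)),
    stopping once the endorsed values stop changing."""
    x = person                        # X0 = me
    y = endorse_values(x)             # Y0 = what X0 thinks is good for the world
    for _ in range(max_steps):
        x = idealize(x)               # X(n+1) = what X(n) wants to be
        y_next = endorse_values(x)    # Y(n+1) = what X(n+1) would think is good
        if y_next == y:               # fixed point: further idealisation changes nothing
            return y_next
        y = y_next
    return None                       # no convergence within max_steps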


Certainly, different societies have different ideals that they claim
to be aiming for.

But just take our Western societies as an example.
We pass many laws to control human behaviour, and we punish offenders
as an additional deterrent: murder, rape, theft, and so on. Few
people would claim that humans should commit these 'crimes', yet
obviously many people *do* commit them. So how does an AI decide what
humans *should* want when a significant minority appear to want to
commit crimes? Popularity doesn't make something right or good.

For example, we pass laws to stop cars speeding through residential
areas where pedestrians are likely to be present, down to 20 mph in
areas where children are likely to be playing. Nearly everyone agrees
that these laws are good, because they reduce the risk of people being
killed. Yet drivers routinely ignore speed limits every time they get
in their cars, and thousands are killed every year. So, what do humans
really want? Is a certain level of death and injury quite acceptable?

There are plans to use GPS, a map of speed-limit zones and onboard
computer control to make it impossible for cars to exceed the speed
limit. Is this good? It saves lives and makes human behaviour conform
to laws that everyone agrees are a *good* thing.
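In outline such a system is just a map lookup and a cap. A minimal
Python sketch, where speed_limit_for() and the throttle object are
hypothetical stand-ins for the onboard map database and the engine
control, might look like this:

def govern_speed(gps_position, current_speed_mph, speed_limit_for, throttle):
    """Cap the car's speed at the posted limit for its current location."""
    limit = speed_limit_for(gps_position)   # e.g. 20 mph near a school
    if current_speed_mph > limit:
        throttle.reduce_to(limit)            # the driver simply cannot exceed the limit
    return limit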

An all-powerful AI might well decide that it is easier to adjust human
brains towards what it considers good than to go around placing
innumerable fences around human behaviour, fences that could well
drive humans mad with frustration.

BillK
