Let me extract these statements and reply to them.
> > I think the happiness/unhappiness of all humans is one good stepping-off
> > point for learning values. But there may be some values that are not
> > shared strongly as major motivators by all humans which might be
> > important values.
> >
> I agree about human limitations. But for me, any values
> other than human happiness are likely to produce results
> that humans are unhappy with. I guess for me, the ultimate
> value is human happiness.
>
> The bright side is that association with a super-intelligent
> mind will educate and elevate humans and their values.
The problem here, I guess, is the conflict between Platonic expectations of
perfection and the messiness of the real world.
The only systems we know of that generate the most happiness, freedom, and
prosperity are democracy and free enterprise. Both systems are messy and
far from perfect. They both generate a lot of unhappiness and poverty in
their operation. Both need regulation and control mechanisms (rule of law)
to inhibit their unrestricted action. The goal is to find a balance between
the social justice goal of wealth redistribution and the social welfare goal
of wealth generation through unrestricted innovation. They in a sense need
the messiness in order to generate the benefits; designing systems to
generate happiness has always been a recipe for totalitarianism. When a
system does not allow balance or failure, when no company, say, can go
bankrupt or fail, then no company can succeed, change, or take risks. That's
socialism, and that's what's wrong with it.
The problem with the issue we are discussing here is that the worst-case
scenario for handing power to unrestricted, super-capable AI entities is
very bad, indeed. So what we are looking for is not really building an
ethical structure or moral sense at all. Failure is not an option. The
only way to prevent the worst-case scenarios that have been mentioned by
discussants is not to design moral values and hope, but to build in
hard-wired, Three Laws-type rules that cannot be overridden. And then, on
top of that, build in social, competitive systems that use the presence of
multiple AIs, dependence on humans as suppliers or intermediaries, ethical,
legal, and even game-theory (remember the movie /War Games/?) strictures,
and even punishment systems up to and including shut-down capabilities.
As we have seen, massively redundant systems do not always prevent
catastrophic breakdowns. In this messy, living universe, nothing does. But
it does make sense to put as many safeguards as we can in place, and if some
of them aren't properly respectful of the civil rights of our AI child
entities, so be it. This is survival we're talking about here, according to
many of you.
Hell, it's been said in many SF stories: the human species is highly
vulnerable as long as it's stuck on one planet, with all its "eggs" in one
basket. But changing that is going to be even harder than getting the
Shuttle to be reliable.
C. David Noziglia
Object Sciences Corporation
6359 Walker Lane, Alexandria, VA
(703) 253-1095
"What is true and what is not? Only God knows. And, maybe, America."
Dr. Khaled M. Batarfi, Special to Arab
News
"Just because something is obvious doesn't mean it's true."
--- Esmerelda Weatherwax, witch of Lancre