Hi Stephen,

Thanks for your reply. I think you and I agree on most of
this, but I just want to add a couple of comments.

> > The greatest danger in the development of intelligent machines
> > is that they will be built by corporations with learning values
> > focused narrowly on corporate profits (this corresponds very
> > closely with current applications of machine learning to financial
> > investing). Or they will be built by militaries with learning
> > values focused on killing enemies and preserving lives of
> > friendly soldiers.
>
> The latter is much more likely in my opinion. Another discussion is
> whether the US military has sufficient ethics to train and use an AGI on
> behalf of US citizens and I believe that it does.

Given more than 200 years of civilian control over
the U.S. military, my concerns are not based on
doubts about U.S. military ethics. They are based on
the observation that intelligence is the ultimate
source of power, so machines much more intelligent
than humans potentially pose great danger to us.

> > It is important to generate public resistance before wealthy
> > organizations build intelligent machines with learning values
> > focused on narrow interests, rather than the happiness of all
> > humans. Military applications provide an opportunity to make
> > a clear analogy with nuclear, chemical and especially biological
> > weapons, where the public and responsible leaders already
> > understand the importance of banning such technologies.
>
> The analogy of comparing AGI with weapons of mass destruction/impact
> is relevant to the degree that both are dangerous in the hands of our
> enemies, but fails in that AGI is potentially the greatest, most
> beneficial technology - so it likely will not be banned, rather
> regulated.  So government regulation of AGI is another issue to discuss;
> I favor it, many others distrust the US government.

I devote a chapter of my book to regulation of machine
intelligence (chapter title: Public Education and
Control). I think an outright ban on intelligent machines
is unlikely, given their ultimate ability to create wealth
without work for everyone, and the many other benefits
beyond that, such as a companion who can develop and
explain a level of mathematics that humans would never
discover. I want intelligent machines, but I want them
to be safe for humans.

> > There will eventually be a terrific political battle over the
> > values of intelligent machines. Powerful corporations will
> > want machines that serve their narrow interests, and national
> > security will motivate many to argue for unrestricted military
> > applications. On the other hand, democracy, education and the
> > free flow of information are increasing (although there are
> > certainly challenges). Hopefully as the technology matures, a
> > "Ralph Nader" of machine intelligence will raise the general
> > public awareness.
>
> I entirely agree. Although I do not have Green political beliefs, I find
> many of Ralph Nader's arguments persuasive.  One can imagine an AGI not
> having a party line or dogma, but whose reasoning powers are objective.
> It will be interesting to see what an evolving AGI contributes to
> political debate (or to military ethics).

I wasn't thinking of Nader's role as the Green presidential
nominee in 2000, but of his role in educating the public about
automobile safety. At the Green convention, he was introduced
using the words "One million Americans are alive who would
have been killed in auto accidents without Ralph Nader".

Good luck with DARPA.

Cheers,
Bill
