From the Machine Intelligence Research Institute:
/Bill Hibbard is an Emeritus Senior Scientist at the University of Wisconsin-Madison Space
Science and Engineering Center, currently working on issues of AI safety and unintended
behaviors. He has a BA in Mathematics and MS and PhD in Computer Sciences, all from the
University of Wisconsin-Madison. He is the author of Super-Intelligent Machines, “Avoiding
Unintended AI Behaviors,” “Decision Support for Safe AI Design,” and “Ethical Artificial
Intelligence.” He is also principal author of the Vis5D, Cave5D, and VisAD open source
visualization systems./
...
/Bill: The central point of my 2002 book was the need for public education about, and
control over, above-human-level AI. The current public discussion by Stephen Hawking, Bill
Gates, Elon Musk, Ray Kurzweil, and others about the dangers of AI is very healthy, as it
educates the public. The same is true of the Singularity Summits organized by the
Singularity Institute (MIRI's predecessor), which I thought were the best thing the
Institute did./

/In the US, people cannot own automatic weapons, guns of greater than .50 caliber, or
explosives without a license. It would be absurd to license such things while allowing
unregulated development of above-human-level AI. As the public is educated about AI, I
think some form of regulation is inevitable./

/However, as they say, the devil will be in the details, and humans will be unable to
compete with future AI on details; complex details will be AI's forte. So formulating
effective regulation will be a political challenge. The Glass-Steagall Act of 1933,
regulating banking, was 37 pages long. The Dodd-Frank Act of 2010, which regulated
banking 77 years later, was 848 pages long. An army of lawyers drafted that bill, many
employed to protect the interests of groups affected by it. The increasing complexity of
laws reflects efforts by regulated entities to lighten the burden of regulation. The
stakes in regulating AI will be huge, and we can expect armies of lawyers, aided by the
very AI systems being regulated, to produce very complex laws./

/In the second chapter of my book, I conclude that ethical rules are inevitably ambiguous,
and I base my proposed safe AI design on human values expressed in a utility function
rather than on rules. Consider the current case before the US Supreme Court to interpret
the meaning of the words “established by the state” in the context of the 363,086 words of
the Affordable Care Act. This is a good example of the ambiguity of rules. Once AI
regulations become law, armies of lawyers, aided by AI, will be engaged in debates over
their interpretation and application./
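To make the rules-versus-utility contrast concrete, here is a toy sketch. It is not Hibbard's actual design; the state fields, actions, and weights below are all invented for illustration. The point is that a hard-threshold rule flips discontinuously at an arbitrary boundary, while a utility function trades off degrees of risk and benefit smoothly.

```python
# Toy contrast between rule-based and utility-based decision making.
# All names and numbers here are illustrative assumptions, not from
# any published safe-AI design.

def rule_based_action(state):
    """A brittle rule: behavior flips at a hard, arbitrary threshold."""
    if state["harm_risk"] > 0.5:
        return "halt"
    if state["task_progress"] < 1.0:
        return "continue"
    return "idle"

def utility(state, action):
    """A scalar utility standing in for human values: reward task
    progress, penalize expected harm (the 2.0 weight is made up)."""
    progress_gain = {"continue": 0.3, "halt": 0.0, "idle": 0.0}[action]
    harm = state["harm_risk"] if action == "continue" else 0.0
    return progress_gain - 2.0 * harm

def utility_based_action(state, actions=("continue", "halt", "idle")):
    """Choose the action that maximizes utility; risk is weighed
    continuously rather than tripping a threshold."""
    return max(actions, key=lambda a: utility(state, a))

# A borderline state: harm risk just under the rule's 0.5 cutoff.
borderline = {"harm_risk": 0.49, "task_progress": 0.5}
print(rule_based_action(borderline))     # the rule still says "continue"
print(utility_based_action(borderline))  # utility of continuing is negative, so "halt"
```

The rule treats 0.49 and 0.51 as entirely different situations, which is exactly the kind of ambiguity at boundaries that the interview describes; the utility approach simply ranks outcomes.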
Artificial intelligence vs natural stupidity. Who will win?
Brent
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.