There has been a lot of interest in the idea of the technological 
singularity. There is even an operating system project by Microsoft carrying 
that name. I have been quite skeptical about the whole concept, so I emailed 
Mr. Jasen Murray of the Singularity Institute about some of the issues I had 
with it. He suggested that I read a chapter by one Eliezer Yudkowsky in a 
book that is apparently forthcoming. Here is the 
chapter: http://singinst.org/upload/artificial-intelligence-risk.pdf.

My thoughts on the chapter are below. I will add that while it may be a 
reasonable hypothesis to work with, I am deeply skeptical about the idea of the 
technological singularity. 

As a public service, I emailed Prof. Noam Chomsky to find out his thoughts on 
the concept of the singularity. I was very pleased to note (in his 
two-sentence reply to me yesterday) that he was similarly "skeptical". The 
fact that we are both "skeptical", having arrived at our conclusions entirely 
independently, says something. 
Anand

=+= http://groups.yahoo.com/group/indo-euro-americo-asian_list/message/223

=+= http://groups.yahoo.com/group/indo-euro-americo-asian_list/message/231

================================================================

Hi Jasen:

I have gone through the paper you sent me (I assume the book you mention is a 
collection of papers, and this is one of the chapters?). I am puzzled by some 
of the exposition in it. The paper suffers from quite a few problems, in my 
opinion. If I were reviewing this paper, I would give it a "Reject", simply 
because the author does not seem to appreciate the organizational perspective. 

What I would like to note (perhaps it is a new claim, but it is a rather 
obvious one) is that businesses are not interested in developing technologies 
that could spiral out of control; the potential damage to a business is too 
great. Ultimately, we must view technological systems, social systems and 
economic organizational systems as acting in conjunction. To be clear, the 
organizational perspective is a rather intuitive one, and one does not need to 
have studied organizational behavior deeply to understand it. Working in a 
business or a university for a certain period of time will provide the same 
intuitions. (The response paper by John Seely Brown and Paul Duguid 
(http://www.aaas.org/spp/rd/ch4.pdf) seems to have none of these problems.) 
This intuitive sense is missing from the paper.

I have included an extract from the paper in the section below. I recognize 
that the author is trying to draw some sort of analogy between communism and 
technology development: the authors of catastrophes need not be evil, and 
technology developers may be developing something harmful without being aware 
of it. However, he seems not to be aware of the organizational perspective. 

The reason for the problems with communism, from an organizational 
perspective, is that it is not an economically efficient way of structuring 
society. There were two schools of thought that argued that communism was 
doomed to failure. The first was the Austrian school, whose most famous 
economist was Hayek. Hayek argued that prices are unable to act as signals in 
such economies (hence the situation in Russia, where central planning produced 
huge inefficiencies). The second was a set of maverick economists, such as 
Stigler and Friedman, who argued that it would be best simply to leave the 
market unregulated. There were also some elegant refutations of communist 
ideas by Paul Samuelson, which underpin the theoretical response to 
communism/Marxism.

This part of the paper seems quite wrong-headed: "The folly of programming an 
AI to implement communism, or any other political system, is that you're 
programming means instead of ends. You're programming in a fixed decision, 
without that decision being re-evaluable after acquiring improved empirical 
knowledge about the results of communism." Communism is a form of economic 
organization. Artificial intelligence is a technology. Any sort of 
mix-and-match of economic organization and technology is possible: there are 
AI systems in China, a communist nation. It is entirely unclear what it even 
means to say that the folly of programming an AI to implement communism, or 
any other political system, is "that you're programming means instead of 
ends."

I would reject this paper if it came to my desk.
Anand


==

In the late 19th century, many honest and intelligent people advocated 
communism, all in the best of good intentions.  The people who first invented 
and spread and swallowed the communist meme were, in sober historical fact, 
idealists.  The first communists did not have the example of Soviet Russia to 
warn them.  At the time, without benefit of hindsight, it must have sounded 
like a pretty good idea.  After the revolution, when communists came into 
power and were corrupted by it, other motives may have come into play; but 
this itself was not something the first idealists predicted, however 
predictable it may have been.  It is important to understand that the authors 
of huge catastrophes need not be evil, nor even unusually stupid.  If we 
attribute every tragedy to evil or unusual stupidity, we will look at 
ourselves, correctly perceive that we are not evil or unusually stupid, and 
say:  "But that would never happen to us."

What the first communist revolutionaries thought would happen, as the 
empirical consequence of their revolution, was that people's lives would 
improve: laborers would no longer work long hours at backbreaking labor and 
make little money from it.  This turned out not to be the case, to put it 
mildly.  But what the first communists thought would happen was not so very 
different from what advocates of other political systems thought would be the 
empirical consequence of their favorite political systems.  They thought 
people would be happy.  They were wrong.

Now imagine that someone should attempt to program a "Friendly" AI to 
implement communism, or libertarianism, or anarcho-feudalism, or 
favoritepoliticalsystem, believing that this shall bring about utopia. 
People's favorite political systems inspire blazing suns of positive affect, 
so the proposal will sound like a really good idea to the proposer.

We could view the programmer's failure on a moral or ethical level - say that 
it is the result of someone trusting themselves too highly, failing to take 
into account their own fallibility, refusing to consider the possibility that 
communism might be mistaken after all.  But in the language of Bayesian 
decision theory, there's a complementary technical view of the problem.  From 
the perspective of decision theory, the choice for communism stems from 
combining an empirical belief with a value judgment.  The empirical belief is 
that communism, when implemented, results in a specific outcome or class of 
outcomes: people will be happier, work fewer hours, and possess greater 
material wealth.  This is ultimately an empirical prediction; even the part 
about happiness is a real property of brain states, though hard to measure. 
If you implement communism, either this outcome eventuates or it does not. 
The value judgment is that this outcome satisfices or is preferable to 
current conditions.  Given a different empirical belief about the actual 
real-world consequences of a communist system, the decision may undergo a 
corresponding change. 

We would expect a true AI, an Artificial General Intelligence, to be capable 
of changing its empirical beliefs.  (Or its probabilistic world-model, etc.) 
If somehow Charles Babbage had lived before Nicolaus Copernicus, and somehow 
computers had been invented before telescopes, and somehow the programmers of 
that day and age successfully created an Artificial General Intelligence, it 
would not follow that the AI would believe forever after that the Sun orbited 
the Earth.  The AI might transcend the factual error of its programmers, 
provided that the programmers understood inference rather better than they 
understood astronomy.  To build an AI that discovers the orbits of the 
planets, the programmers need not know the math of Newtonian mechanics, only 
the math of Bayesian probability theory.

The folly of programming an AI to implement communism, or any other political 
system, is that you're programming means instead of ends.  You're programming 
in a fixed decision, without that decision being re-evaluable after acquiring 
improved empirical knowledge about the results of communism.  You are giving 
the AI a fixed decision without telling the AI how to re-evaluate, at a 
higher level of intelligence, the fallible process which produced that 
decision.
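To make the extract's decision-theoretic point concrete, here is a minimal 
sketch of the "means versus ends" distinction it describes. All names, 
outcomes and probabilities below are invented for illustration; this is not 
code from the paper.

```python
# An agent programmed with a fixed decision (the "means") keeps choosing it
# no matter what it later learns about the world.
def fixed_policy(beliefs):
    return "implement_communism"

# An agent programmed with an end (a utility over outcomes) re-derives its
# decision from its current empirical beliefs whenever those beliefs change.
def expected_utility(action, beliefs, utility):
    # beliefs[action] maps each possible outcome to its probability.
    return sum(p * utility(outcome) for outcome, p in beliefs[action].items())

def choose(beliefs, utility, actions):
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# The value judgment: happy outcomes are valued, unhappy ones are not.
utility = {"people_happy": 1.0, "people_unhappy": 0.0}.get

actions = ["implement_communism", "keep_status_quo"]

# Initial (mistaken) empirical belief: communism almost surely makes
# people happy.
prior = {
    "implement_communism": {"people_happy": 0.9, "people_unhappy": 0.1},
    "keep_status_quo":     {"people_happy": 0.5, "people_unhappy": 0.5},
}

# After observing the actual results, the empirical belief reverses;
# the value judgment is unchanged.
posterior = {
    "implement_communism": {"people_happy": 0.1, "people_unhappy": 0.9},
    "keep_status_quo":     {"people_happy": 0.5, "people_unhappy": 0.5},
}

print(choose(prior, utility, actions))      # implement_communism
print(choose(posterior, utility, actions))  # keep_status_quo
print(fixed_policy(posterior))              # still implement_communism
```

The ends-programmed agent changes its choice when its empirical beliefs 
change; the means-programmed agent cannot, which is exactly the folly the 
extract describes.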

================================================================


      
