On 8/20/07, Sergey A. Novitsky <[EMAIL PROTECTED]> wrote:
> However, in general, I got disenchanted with the concept of Singularity, and
> for the following reasons:
> - Technology and intelligence are like a double-edged sword. They can serve
> both harmony and cooperation, and hostility and predation.

I -- as an example of a singularitarian -- certainly agree.

> - From what I can observe, the more power over the environment someone gets
> (it may be a single individual, an organization, or a country), the more
> there is a tendency to abuse that power (and the environment), get addicted
> to it, want more of it. I come from a country (Russia) where the government
> used to destroy its own people by millions. Now it looks like the progress
> of technology will make it possible to silence all dissent once and for all
> (in any country). Nanotech, for example, would allow killing with complete
> impunity. The US government now wants the ability to strike at any place in
> the world within one hour. As technology gets smarter, this time will
> decrease to one minute, then one second, then it gets automated, etc.
> (The Russian government would want such a thing as well.)

This tends to be the case with humans, yes.

> - Intelligence IS power. If power can be abused, it will be.

Often in the case of humans, yes, but there are effective ways to
mitigate these problems substantially.

> - Technological progress tends to increase inequality and create tensions
> between people (a good analogy from astronomy, where as one approaches a black
> hole, tidal forces increase). Similar stretching forces on the social level?

I'm not sure I agree that this is universally true, but even if we
assume that it is, stopping technological progress still isn't a
realistic plan, and so it follows that we have to deal with the
problems that come with it one way or another.

> - Current progress in medicine is at the service of the rich. Pharmaceutical
> companies would rather have thousands of people die than lose some of their
> profits.

True, but not very relevant to the topic at hand.

> - Science is immoral in its approaches to living beings, it has no
> compassion and no heart. (E.g. millions of mice were bred with certain genes
> knocked out in order to study the genome, other examples may be noted as
> well). Are we going to allow advanced AIs to do similar things with people?
> And if no, why do we allow doing these things ourselves with less
> intelligent beings?

For my part, I am not particularly allowing weaker beings to be
treated in such a way; I've been a vegan for many years, etc. (Though
these days I sometimes eat non-vegan food too.)

It's not that science is immoral; it's amoral, and most of the people
using it are selfish bastards, as humans usually are.

> - No sane government would allow developing an AI (or in general, 'some kind
> of power') which:
>   - Has a potential to restructure society and deprive the government of its
> power and privileges.
>   - Has a potential to install itself as a governing body.
>   - Has a potential to bring existing rulers to justice and judge them
> by laws different from the ones we have now.
>   Most governments will sooner or later outlaw all decent AI work (unless
> it's under the government's hood and serving the interests of the elite).

Even if governments have enough of a clue to get involved in
implementing the Singularity scenario (as they very well might,
though I won't be counting on it), it does not seem that it would be
in their best interest to prevent all positive scenarios. The
Singularity scenario can be implemented in such a way that everyone
wins, even the selfish bastards that currently have most of the power.
It is more advantageous for the current rulers to allow some sort of
"everyone wins" scenario to come to pass, than to exert a lot of
effort to e.g. prevent anyone from developing an advanced AI. They
would lose out a lot too if superintelligence was never developed.

These "everyone wins" scenarios are a complex topic, and I might write
a bit more about them in a later message, but suffice it to say for
now that with sufficiently advanced technology, there is no motive for
the current rulers to pay a high price for keeping their power to make
others' lives miserable, as they will be able to achieve any goal
they'd like without making anyone miserable. And the current ruling
class (i.e. rich people) is not composed solely of selfish bastards;
there are some moderately nice people as well, and they (or we?) will in
a technologically developed scenario most likely be able to barter
with the bastards to achieve an outcome where no human beings are
subjected to misery. The bastards are interested in causing misery
only when they can profit by it, and when it ceases to be profitable
(when their goals can be achieved through other means), they can be
bought out of causing misery for quite a reasonable price.

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=33573947-1d97d5
