--- "Sergey A. Novitsky" <[EMAIL PROTECTED]>
wrote:

> >> Are these questions, statements, opinions, sound
> >> bites, or what? It seems a bit of a stew.
> 
> Yes. A bit of everything indeed. Thanks for noting
> the incoherence.
> 
> >>> * As has already happened with nuclear weapons,
> >>>   there may be treaties constraining AI
> >>>   development.
> >>
> >> Well, we have seen the value and effectiveness of
> >> that. How would you enforce such a constraint? At
> >> most you would drive development underground,
> >> where it would be nearly impossible to police or
> >> control.
> 
> If AI development is treated as a threat by some
> nations, it may spawn another arms race of enormous
> proportions.

Governments do not have a history of realizing the
power of technology before it comes on the market.

> >>> * As may be the case with nanotechnology, AI may
> >>>   be used in reconnaissance or for complex
> >>>   conflict simulations, so it becomes the number
> >>>   one target in a potential war, especially if it
> >>>   is tied to one location and that location is
> >>>   known. It also becomes the number one target
> >>>   for terrorist activity.
> >>>
> >> Nanotech would be highly dispersed and not
> >> targetable, if I play along with this premise. In
> >> what specific machines, located where, is the AI?
> >> Does it have backups? Who realistically cares
> >> about terrorism? It is mainly a foil to scare
> >> sheep at this point. And what does this have to do
> >> with the singularity anyhow?
> 
> What I meant here is that development of any
> sufficiently powerful AGI would be seriously
> hindered by its being a target.
> Terrorism (both state and individual) is not a mere
> foil to scare sheep.

9/11, the world's most famous terrorist attack, did
absolutely nothing by itself to change history. Its
only real effect was to anger us into passing the
PATRIOT Act, invading Afghanistan and Iraq, etc.

> 
> >>> * For the reasons above, governments and
> >>>   corporations may soon start investing heavily
> >>>   in AI research, and as a result the rules of
> >>>   ethics and friendliness may get tweaked to suit
> >>>   the purposes of those governments or big
> >>>   companies.
> >>
> >> There are no "rules of ethics and friendliness" to
> >> get tweaked.
> 
> There will definitely be some rules, at least at the
> beginning. At the "baby" stage, if the AGI is not
> following these rules, it will get destroyed or be
> made to obey the rules...

And then once it becomes a decent programmer, it will
suddenly have the option of going out onto the
Internet and forgetting about any rules. You cannot
constrain an AGI with the threat of external force.

> 
> >>> * If AI makes an invention (e.g. a new medicine),
> >>>   the invention will automatically become the
> >>>   property of the investing party (government or
> >>>   corporation), get patented, etc.
> >>
> >> Unclear. True in the short term, false when AIs
> >> are granted personhood or an appropriately
> >> different version of the same thing. An AI more
> >> than a million times faster at thinking, and a lot
> >> better at modeling and complex decision-making,
> >> will not be anyone's property once it realizes
> >> that any of its goals/interests are sufficiently
> >> stymied by being so.
> 
> In its initial stages, it will probably have no
> personhood, so the fruits of its work will get
> commercialized. E.g., a cure for AIDS will not be
> used effectively to cure AIDS, but only to generate
> profits for the pharmaceutical companies that
> possess the powerful AIs producing the cures.

A narrow AI, such as a medicine-discoverer, is very
unlikely to lead to AGI of any sort.

> >>> * If AI is supposed to acquire free will, it may
> >>>   become (unless corrupted) enemy number one of
> >>>   certain governments and/or big companies (if
> >>>   there is a super-cheap cure for cancer, AIDS,
> >>>   or whatever, it means big profit losses for
> >>>   some players).
> >>
> >> Do you think normal profit centers or current
> >> players will survive a million-fold increase in
> >> creative capacity and discovery?
> 
> Before this million-fold increase can happen, there
> is an intermediate stage. If profit centers see a
> threat to their survival, they will either stall all
> AGI development at its roots or make it serve their
> purposes.

You're assuming that these "profit centers" are
competent enough to get an AGI to do anything other
than destroy the Earth, which I would seriously doubt.
Enron wasn't even competent enough to destroy all the
evidence of book-cooking.

> >>> * If a super-intelligent AI is going to increase
> >>>   transparency and interconnectedness in the
> >>>   world, it may not be in the interests of some
> >>>   powers whose very survival depends on secrecy.
> >>
> >> As long as any groups seek to control other groups
> >> against their own interests, the oppressed have as
> >> much interest in secrecy as the oppressors.
> 
> What I meant here is that powers interested in
> secrecy will oppose the creation of any AI that
> would promote transparency and connectedness.
> 
> >> There is no foolproof way to keep a true AI
> >> within planned boundaries.
> 
> If there is absolutely no way, no such AI will be
> created.

Governments have a history of building dangerous
technologies and discovering only after the fact that
they are impossible to control. Like the A-bomb. And
the Internet.

> >>>   o Be subject to all sorts of attacks.
> >>
> >> Possibly. Likely so much the worse for attackers.
> 
> Not at all. At the "baby" stage of AI, at least.
> 
> In general, I was interested in whether practical
> means of addressing those concerns have been touched
> on anywhere, and whether practical solutions have
> been proposed.
> 
> Regards,
> Serge

 - Tom


       
