Sergey A. Novitsky wrote:

Dear all,

Perhaps the questions below have already been touched on numerous times in the past.

Could someone kindly point to discussion threads and/or articles where these concerns were addressed or discussed?

Kind regards,

Serge

---------------------------------------------------------------------------------------------


Are these questions, statements, opinions, sound bites, or what? It seems a bit of a stew.

    * If AI is going to be super-intelligent, it may be treated by
      governments as some sort of super-weapon.


So? If it is super-intelligent, it may be rather hard to make it do any particular group's bidding dependably.


    * As has already happened with nuclear weapons, there may be
      treaties constraining AI development.


Well, we have seen the value and effectiveness of that. How would you enforce such a constraint? At most you would drive the work underground, where it would be nearly impossible to police or control.


    * As may be the case with nanotechnology, AI may be used in
      reconnaissance or for complex conflict simulations, so it
      becomes the number one target in a potential war, especially
      if it's tied to one location and if this location is known.
      Besides, it becomes the number one target for possible
      terrorist activities.


Nanotech would be highly dispersed and not targetable, if I play along with this premise. In what specific machines, located where, is the AI? Does it have backups? Who cares about terrorism, realistically? It is mainly a foil to scare sheep at this point. And what does this have to do with the singularity anyhow?


    * Because of the reasons above, governments and corporations may
      soon start heavy investments in AI research, and as a result of
      this, the rules of ethics and friendliness may get tweaked to
      suit the purposes of those governments or big companies.


There are no "rules of ethics of friendliness" to get tweaked.


    * If AI makes an invention (e.g. a new medicine), the invention
      will automatically become the property of the investing party
      (government or corporation), get patented, etc.


Unclear. True in the short term, false once AIs are granted personhood or some appropriately different version of the same thing. An AI that thinks more than a million times faster and is far better at modeling and complex decision-making will not remain anyone's property once it realizes that its goals/interests are sufficiently stymied by being so.


    * If AI is supposed to acquire free will, it may become (unless
      corrupted) the number one enemy of certain governments and/or
      big companies (if there is a super-cheap cure for cancer, AIDS,
      or whatever, it means big profit losses for some players).


Do you think normal profit centers or current players will survive a million-fold increase in creative capacity and discovery?


    * If a super-intelligent AI is going to increase transparency and
      interconnectedness in the world, it may also not be in the
      interests of some powers whose very survival depends on secrecy.


As long as some groups seek to control other groups against their own interests, the oppressed have as much interest in secrecy as the oppressors.


    * Based on the ideas above, it seems probable that if some sort of
      super-intelligent AI is created, it will be:
          o Created with large investments from companies/governments.

Not necessarily. A breakthrough on a shoestring, especially as a recursively improving seed, is possible.

          o Tailored to suit specific purposes of its creators.

There is no fool-proof way to keep a true AI within planned boundaries.


          o Subject to all sorts of attacks.

Possibly. Likely so much the worse for attackers.


          o Deprived of free will or given limited free will (if such
            a concept is applicable to AI).

See above: there is no effective means of control.

- samantha
