Dear all,

Perhaps the questions below have already been raised numerous times in the past.
Could someone kindly point me to discussion threads and/or articles where these concerns have been addressed or discussed?

Kind regards,
Serge
---------------------------------------------------------------------------------------------
* If AI is going to be super-intelligent, it may be treated by governments as a sort of super-weapon.
* As already happened with nuclear weapons, there may be treaties constraining AI development.
* As may be the case with nanotechnology, AI may be used in reconnaissance or for complex conflict simulations, so it becomes the number one target in a potential war, especially if it is tied to one location and that location is known. Besides, it becomes the number one target for possible terrorist activity.
* For the reasons above, governments and corporations may soon start investing heavily in AI research, and as a result, the rules of ethics and friendliness may get tweaked to suit the purposes of those governments or big companies.
* If an AI makes an invention (e.g. a new medicine), the invention will automatically become the property of the investing party (government or corporation), get patented, etc.
* If an AI is supposed to acquire free will, it may become (unless corrupted) the number one enemy of certain governments and/or big companies (if there is a super-cheap cure for cancer, AIDS, or whatever, it means big profit losses for some players).
* If a super-intelligent AI is going to increase transparency and interconnectedness in the world, that may also run against the interests of some powers whose very survival depends on secrecy.

Based on the ideas above, it seems probable that if some sort of super-intelligent AI is created, it will:
* Be created with large investments from companies/governments.
* Be tailored to suit the specific purposes of its creators.
* Be subject to all sorts of attacks.
* Be deprived of free will, or be given limited free will (if such a concept is applicable to AI).
----- This list is sponsored by AGIRI: http://www.agiri.org/email
