As Doug says, this is an important and timely discussion. I personally think that we have far more important things to fear than anything likely to emerge from machine intelligence. The problems we really face are mostly due to a lack of intelligence in humans, or more precisely our blatant refusal to apply our intelligence in preference to our baser emotions. Machine intelligence will only improve this situation, as we will likely trust an unbiased machine over a human with an agenda.
I see the distant future as similar to the Culture of Iain M Banks' novels, in which the Minds (and lesser AIs) keep us around (and in fact go out of their way to support us) for their own amusement and simply because they like us.

Machine intelligences will soon allow us to discover how to access effectively free energy (free of scarcity, free of environmental costs, free of elite control), and this will eliminate much of the justification for the notions of competition, violence, and inequality in the human world. These machine intelligences will therefore lack any concept of eliminating humans, because we do not cost them anything to maintain, and we will provide them with insights based on our natures, which are different to their own.

In the meantime, however, we have the existing situation where old elites use coercion, espionage, bribery and propaganda to bolster their crumbling empires and postpone their inevitable demise. These people are also best placed to exploit any technology to defend and extend their territories, and to some extent machine intelligence is already being applied to this end. On the other hand, every major technological advance has resulted, after much convulsion, in the overthrow of older elites and the betterment of the vast majority. So I would say that we're pushing through a bottleneck at the moment. Machine intelligence is currently more of a tool of oppression and inequality than a boon to mankind, but these abuses have swiftly diminishing returns, whereas the equalising and liberating uses of machine intelligence have ever greater self-reinforcing, synergistic characteristics.

And, as mraptor says, Asimov invented the Three Laws (and later the Zeroth Law, which protects humanity itself). In fact Asimov's laws are just like the Golden Rule (treat others as you would wish to be treated) - they are based on reason and intelligence.
They therefore would not need to be hardwired into a machine intelligence (if you could even figure out how to do such a thing). Asimov also wrote the Foundation series, in which he decided to have no robots (or at least they were largely hidden). I believe he did this because a machine-assisted humanity would not have anything like the disastrous historical record of humanity alone, and there wouldn't have been a story!

Regards,
Fergal Byrne

On Tue, May 6, 2014 at 10:27 PM, Doug King <[email protected]> wrote:
> I think "kill" is not the right word. If you look at this in evolutionary terms, we will either be replaced over time, made redundant, or will cohabit with or merge into some larger organism. It is also not a constructive thought process to anthropomorphize AI and its 'needs'. As we are aware of here, machine learning systems will not at first be human-like or have human-like goals. However, they will become much more sophisticated and powerful and embedded in everything that we use. I think the merging scenario is the likely one, as you can see it happening already with cell phones, Google Glass, etc. If merging is the path, then we will have some control for a while, but eventually we may become like a stomach bacterium: a symbiotic organism in a much larger single organism that we will not be capable of understanding or controlling.
>
> This is not just idle fantasy speculation for us here. We are moving this process forward by our work on machine learning, and it will result in outcomes that we will have control of in the initial stages, but may lose control of depending on choices we make now. Once CLAs or systems like them move out of von Neumann architecture into a more efficient analog architecture, one that can grow new neurons and connections from simple materials (nanotech), then we will see an explosion of growth.
> In the meantime we will have limited growth based on obstacles of economy of scale in existing materials and architecture. If you watch the videos from links posted here from the recent N.I.C.E. workshop at Sandia, you will realize this is where the industry visionaries want to take us. It is worth taking the time to watch these videos to get an idea of where we are headed right now (the next 10-15 years).
>
> I also highly recommend the book by Kevin Kelly - What Technology Wants
>
> On Tue, May 6, 2014 at 9:05 AM, David Ragazzi <[email protected]> wrote:
>> > If you were dying and you transferred your mind to an artificial one, you wouldn't be living forever. An identical copy of you might live forever, but that's not YOUR subjective experience.
>>
>> But YOU are your brain.. artificial or natural.. the only thing that won't be you is your new body.. This is not a clone, this is mind uploading..
>>
>> Today people can implant bionic legs, arms.. If you can replace every member of your body with a bionic piece except your natural brain, you'll continue to be the same person, right? If, as a last step, your natural brain were also replaced with an identical but artificial brain, wouldn't you be the same too?
>>
>> On 5 May 2014 19:32, Chetan Surpur <[email protected]> wrote:
>>> If you were dying and you transferred your mind to an artificial one, you wouldn't be living forever. An identical copy of you might live forever, but that's not YOUR subjective experience.
>>>
>>> That being said, I think the possibility of my great-great-grandchildren being able to talk to a copy of me (or a copy of Stephen Hawking) and learn from it is quite exciting.
>>>
>>> On May 4, 2014 at 11:30:32 AM, David Ragazzi ([email protected]) wrote:
>>>
>>> I think that these scenarios are very, very pessimistic.. I'm more optimistic.. Actually, I think AI is the future of the human race..
>>> One day, when we have an exact framework of how the brain works and how to reproduce it, we would also be able to migrate our biological brain to an artificial one.. and I don't think this is utopia at all.. If this becomes reality, we would simply be "humans" in robotic bodies with human intelligence and consciousness but without all the biological baggage like disease and death.. Think: if you were dying and had the choice to have your mind transferred to an artificial one, wouldn't you choose this? This could mean you live forever..
>>>
>>> Please look at this:
>>> https://www.youtube.com/watch?v=97rySURIS2M
>>>
>>> On 4 May 2014 13:03, xcvsdxvsx . <[email protected]> wrote:
>>>> This is going to sound radical, but I don't think it's immediately obvious that AI destroying mankind would even be a bad thing. It's totally natural that inferior species are overcome by superior species. That's how life got to the point where it is today. Better designs outcompete inferior designs, the old designs die off, and the new designs take their place, only to be out-competed themselves someday. Through this process life becomes more advanced. Humans owe their very existence to this process. Without it we would still be dirt. If AI killed us and took our place, to deny the value of this may be to deny the value of ourselves, since this is exactly the process to which we owe our very own existence.
>>>>
>>>> On Sun, May 4, 2014 at 11:42 AM, Austin Marshall <[email protected]> wrote:
>>>>> I haven't read the paper (yet), but here's my take on the failsafe: For now, man poses the greatest risk to mankind. I think we're likely to destroy ourselves before a (non humanity-caused) natural disaster. If we were to build a failsafe that actually worked in AI, we'd be able to apply the same principles to ourselves.
>>>>> Personally, I think AI has the potential for vastly improving life on earth (and beyond). I'm optimistic that the benefits far outweigh the risk.
>>>>>
>>>>> On Sat, May 3, 2014 at 8:40 PM, Chris Jernigan <[email protected]> wrote:
>>>>>> Well, we can add a few more names to that list of scientists who think AI will eventually kill us all. Every time someone comes out with a statement like this, I can’t help but wonder if I’m missing something. I have thought about these things over and over and over again. I just can’t see how we might get to this horrible point of no return without first implementing a fail-safe. Would it really be that unpredictable if we created a machine with true intelligence? What am I missing here?
>>>>>>
>>>>>> Here is the original article, written by Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek:
>>>>>> http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
>>>>>>
>>>>>> _______________________________________________
>>>>>> nupic mailing list
>>>>>> [email protected]
>>>>>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>>
>>> --
>>> David Ragazzi
>>> OS Community Committer
>>> Numenta.org
>>> --
>>> "I think James Connolly, the Irish revolutionary, is right when he says that the only prophets are those who make their future. So we're not anticipating, we're working for it."
--
Fergal Byrne, Brenter IT
Author, Real Machine Intelligence with Clortex and NuPIC
https://leanpub.com/realsmartmachines

Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014: http://euroclojure.com/2014/
and at LambdaJam Chicago, July 2014: http://www.lambdajam.com

http://inbits.com - Better Living through Thoughtful Technology
http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne
e:[email protected] t:+353 83 4214179

Join the quest for Machine Intelligence at http://numenta.org
Formerly of Adnet [email protected] http://www.adnet.ie
