> On Feb 16, 2017, at 12:27 AM, grarpamp <grarp...@gmail.com> wrote:
>> On Wed, Feb 15, 2017 at 8:39 PM, Razer <g...@riseup.net> wrote:
>> Garbage In Garbage Out...
>> "Will artificial intelligence get more aggressive and selfish the more
>> intelligent it becomes? A new report out of Google’s DeepMind AI division
>> suggests this is possible based on the outcome of millions of video game
>> sessions it monitored. The results of the two games indicate that as
>> artificial intelligence becomes more complex, it is more likely to take
extreme measures to ensure victory, including sabotage and greed."
> This is all foretold by Skynet circa Aug 29, 1997, and similar stories
> et al in the SF literature, and by human nature since eons. The only hope
> you have is that AI somehow outlearns and fully rejects the [im]morality
> of its original human programming. And provides a survivable unbeforethought
> and unimaginable solution therein.

There's always Asimov's Three Laws of Robotics ;)

Nick Bostrom doesn't seem to think it will be that easy, of course. His 
"Superintelligence" book is an interesting look at the problem. He's far more 
pessimistic, and I think more realistic, than Ray Kurzweil and some of the other 
"singularity" hype..
