I think these scenarios are very, very pessimistic. I'm more optimistic.
Actually, I think AI is the future of the human race. One day, when we have
an exact framework for how the brain works and how to reproduce it, we will
also be able to migrate our biological brains to artificial ones, and I
don't think this is utopian at all. If this becomes reality, we would simply
be "humans" in robotic bodies, with human intelligence and consciousness but
without all the biological baggage like disease and death. Think about it:
if you were dying and had the choice to have your mind transferred to an
artificial one, wouldn't you choose that? It could mean living forever.

Please take a look at this:
https://www.youtube.com/watch?v=97rySURIS2M


On 4 May 2014 13:03, xcvsdxvsx . <[email protected]> wrote:

> This is going to sound radical, but I don't think it's immediately obvious
> that AI destroying mankind would even be a bad thing. It's totally natural
> that inferior species are overcome by superior species. That's how life got
> to the point where it is today. Better designs outcompete inferior designs,
> the old designs die off, and the new designs take their place, only to be
> outcompeted themselves someday. Through this process life becomes more
> advanced. Humans owe their very existence to this process. Without it we
> would still be dirt. If AI killed us and took our place, to deny the value
> of this may be to deny the value of ourselves, since this is exactly the
> process to which we owe our very own existence.
>
>
> On Sun, May 4, 2014 at 11:42 AM, Austin Marshall <[email protected]>wrote:
>
>> I haven't read the paper (yet), but here's my take on the failsafe: for
>> now, man poses the greatest risk to mankind. I think we're likely to
>> destroy ourselves before a (non-humanity-caused) natural disaster. If we
>> were to build a failsafe that actually worked in AI, we'd be able to apply
>> the same principles to ourselves. Personally, I think AI has the potential
>> to vastly improve life on Earth (and beyond). I'm optimistic that the
>> benefits far outweigh the risks.
>>
>>
>> On Sat, May 3, 2014 at 8:40 PM, Chris Jernigan <
>> [email protected]> wrote:
>>
>>> Well, we can add a few more names to that list of scientists who think
>>> AI will eventually kill us all. Every time someone comes out with a
>>> statement like this, I can’t help but wonder if I’m missing something. I
>>> have thought about these things over and over and over again. I just can’t
>>> see how we might get to this horrible point of no return without first
>>> implementing a fail-safe. Would it really be that unpredictable if we
>>> created a machine with true intelligence? What am I missing here?
>>>
>>> Here is the original article, written by Stephen Hawking, Stuart
>>> Russell, Max Tegmark, and Frank Wilczek:
>>>
>>> http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
>>>
>>> _______________________________________________
>>> nupic mailing list
>>> [email protected]
>>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>>
>>>
>>
>>
>
>


-- 
David Ragazzi
OS Community Committer
Numenta.org
--
"I think James Connolly, the Irish revolutionary, is right when he says that
the only prophets are those who make their future. So we're not anticipating,
we're working for it."