I think "kill" is not the right word. Looked at in evolutionary terms, we
will either be replaced over time, made redundant, or will cohabit with or
merge into some larger organism. It is also not a constructive line of
thought to anthropomorphize AI and its "needs". As we are aware here,
machine learning systems will not at first be human-like or have human-like
goals. They will, however, become far more sophisticated and powerful, and
embedded in everything we use. I think the merging scenario is the likely
one; you can see it happening already with cell phones, Google Glass, etc.
If merging is the path, then we will have some control for a while, but
eventually we may become like a stomach bacterium: a symbiotic organism
within a much larger single organism that we are incapable of
understanding or controlling.

This is not just idle speculation for us here. We are moving this process
forward through our work on machine learning, and it will produce outcomes
that we control in the initial stages but may lose control of, depending on
the choices we make now. Once the CLA, or systems like it, moves off the
von Neumann architecture and onto a more efficient analog architecture, one
that can grow new neurons and connections from simple materials (nanotech),
we will see an explosion of growth. In the meantime, growth will be limited
by economies of scale in existing materials and architectures. If you watch
the videos linked here from the recent N.I.C.E. workshop at Sandia, you
will see that this is where the industry's visionaries want to take us. It
is worth taking the time to watch them to get an idea of where we are
headed over the next 10-15 years.

I also highly recommend Kevin Kelly's book, What Technology Wants.


On Tue, May 6, 2014 at 9:05 AM, David Ragazzi <[email protected]> wrote:

> > If you were dying and you transferred your mind to an artificial one,
> you wouldn't be living forever. An identical copy of you might live
> forever, but that's not YOUR subjective experience.
>
> But YOU are your brain, artificial or natural. The only thing that won't
> be you is your new body. This is not a clone; this is mind uploading.
>
> Today people can implant bionic legs and arms. If you could replace every
> part of your body with a bionic piece except your natural brain, you'd
> still be the same person, right? If, as a last step, your natural brain
> were also replaced with an identical but artificial brain, wouldn't you
> be the same too?
>
>
> On 5 May 2014 19:32, Chetan Surpur <[email protected]> wrote:
>
>> If you were dying and you transferred your mind to an artificial one, you
>> wouldn't be living forever. An identical copy of you might live forever,
>> but that's not YOUR subjective experience.
>>
>> That being said, I think the possibility of my great-great-grandchildren
>> being able to talk to a copy of me (or a copy of Stephen Hawking) and learn
>> from it, is quite exciting.
>>
>> On May 4, 2014 at 11:30:32 AM, David Ragazzi ([email protected])
>> wrote:
>>
>> I think these scenarios are very, very pessimistic. I'm more optimistic.
>> Actually, I think AI is the future of the human race. One day, when we
>> have an exact framework of how the brain works and how to reproduce it,
>> we will also be able to migrate our biological brains to artificial
>> ones, and I don't think this is utopia at all. If it becomes reality, we
>> would simply be "humans" in robotic bodies, with human intelligence and
>> consciousness but without all the biological baggage like disease and
>> death. Think: if you were dying and had the choice to have your mind
>> transferred to an artificial one, wouldn't you choose it? It could mean
>> living forever.
>>
>> Please look at this:
>> https://www.youtube.com/watch?v=97rySURIS2M
>>
>>
>> On 4 May 2014 13:03, xcvsdxvsx . <[email protected]> wrote:
>>
>>> This is going to sound radical, but I don't think it's immediately
>>> obvious that AI destroying mankind would even be a bad thing. It's
>>> totally natural for inferior species to be overcome by superior
>>> species. That's how life got to the point where it is today: better
>>> designs outcompete inferior designs, the old designs die off, and the
>>> new designs take their place, only to be outcompeted themselves
>>> someday. Through this process, life becomes more advanced. Humans owe
>>> their very existence to it; without it we would still be dirt. If AI
>>> killed us and took our place, to deny the value of that would be to
>>> deny the value of ourselves, since it is exactly the process to which
>>> we owe our own existence.
>>>
>>>
>>>> On Sun, May 4, 2014 at 11:42 AM, Austin Marshall <[email protected]> wrote:
>>>
>>>> I haven't read the paper (yet), but here's my take on the failsafe: For
>>>> now, man poses the greatest risk to mankind.  I think we're likely to
>>>> destroy ourselves before a (non humanity-caused) natural disaster.  If we
>>>> were to build a failsafe that actually worked in AI, we'd be able to apply
>>>> the same principles to ourselves.  Personally, I think AI has the potential
>>>> for vastly improving life on earth (and beyond).  I'm optimistic that the
>>>> benefits far outweigh the risk.
>>>>
>>>>
>>>>  On Sat, May 3, 2014 at 8:40 PM, Chris Jernigan <
>>>> [email protected]> wrote:
>>>>
>>>>>  Well, we can add a few more names to that list of scientists who
>>>>> think AI will eventually kill us all. Every time someone comes out with a
>>>>> statement like this, I can’t help but wonder if I’m missing something. I
>>>>> have thought about these things over and over and over again. I just can’t
>>>>> see how we might get to this horrible point of no return without first
>>>>> implementing a fail-safe. Would it really be that unpredictable if we
>>>>> created a machine with true intelligence? What am I missing here?
>>>>>
>>>>> Here is the original article, written by Hawking, Stuart Russell,
>>>>> Max Tegmark, and Frank Wilczek:
>>>>>
>>>>> http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
>>>>>
>>>>>  _______________________________________________
>>>>> nupic mailing list
>>>>> [email protected]
>>>>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>> --
>>  David Ragazzi
>> OS Community Committer
>> Numenta.org
>> --
>> "I think James Connolly, the Irish revolutionary, is right when he says that
>> the only prophets are those who make their future. So we're not
>> anticipating, we're working for it."
>>
>>
>>
>
>
> --
> David Ragazzi
> OS Community Committer
> Numenta.org
> --
> "I think James Connolly, the Irish revolutionary, is right when he says that
> the only prophets are those who make their future. So we're not
> anticipating, we're working for it."
>
>
