On 6/20/2017 11:58 AM, kirst...@saunalahti.fi wrote:
Are you taking the side "machines are innocent, blame individual persons"???

No, that's not what I said or implied.  You said that you agreed
with Gene, and I was also agreeing with Gene:

On 6/15/2017 1:10 PM, Eugene Halton wrote:
What "would motivate [AI systems] to kill us?"
Rationally-mechanically infantilized us.

There are many machines that are designed for neutral purposes,
such as cars and trucks.  They can be used for good or evil.

Many machines are deliberately designed for evil purposes.
For example, land mines, chemical weapons, nuclear bombs...
Those are inherently evil.  But they have no more intentionality
than a thermostat.  The evil is in the human design and use.

People talk about the possibility that machines might evolve
intentionality.  But there are no examples today.

The only examples that anyone has suggested are systems that
learn to be evil.  For example, a puppy's natural instinct is
to be a loving companion.  But it could be trained to be vicious.

That's all I was trying to say.  And I thought that I was
agreeing with Gene.

John