On 9/16/2019 7:49 PM, Alan Grayson wrote:


On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:



    On 9/16/2019 6:07 AM, Alan Grayson wrote:
    > My take on AI; it's no more dangerous than present day computers,
    > because it has no WILL, and can only do what it's told to do. I
    > suppose it could be told to do bad things, and if it has inherent
    > defenses, it can't be stopped, like Gort in The Day the Earth Stood
    > Still. AG

    The danger is not so much in AI being told to do bad things, but that
    in doing the good things it was told to do it uses unforeseen methods
    that have disastrous consequences.  It's as if Henry Ford was told to
    invent fast, convenient personal transportation...and created traffic
    jams and global warming.

    Brent


One could expect military applications, such as robots replacing human
infantry, whose job is to kill the enemy. So if their programming had a flaw,
accidental or intentional, these AI infantry could start killing indiscriminately.

Less likely than with human troops, who have built-in emotions of revenge and retaliation.

It would be hard to stop them, since they'd come with self-defense functions. AG

But we also know a lot more about their internal construction and functions.  We would probably even build in an Achilles heel.

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/cb65eb0e-bd08-fc2a-2a48-b4b1e11b86a0%40verizon.net.