> On 17 Sep 2019, at 10:33, Philip Thrift <[email protected]> wrote:
> 
> 
> 
> On Tuesday, September 17, 2019 at 2:15:52 AM UTC-5, Alan Grayson wrote:
> 
> 
> On Monday, September 16, 2019 at 10:17:24 PM UTC-6, Brent wrote:
> 
> 
> On 9/16/2019 7:49 PM, Alan Grayson wrote:
>> 
>> 
>> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:
>> 
>> 
>> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
>> > My take on AI; it's no more dangerous than present day computers, 
>> > because it has no WILL, and can only do what it's told to do. I 
>> > suppose it could be told to do bad things, and if it has inherent 
>> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
>> > Still. AG 
>> 
>> The danger is not so much in AI being told to do bad things, but that in 
>> doing the good things it was told to do it uses unforeseen methods that 
>> have disastrous consequences.  It's as if Henry Ford was told to invent 
>> fast, convenient personal transportation...and created traffic jams and 
>> global warming. 
>> 
>> Brent 
>> 
>> One could expect military applications, such as robots replacing human
>> infantry, whose job is to kill the enemy. So if their programming had a flaw, 
>> accidental or intentional, these AI infantry could start killing 
>> indiscriminately.
> 
>  Less likely than with human troops who have built in emotions of revenge and 
> retaliation.
> 
>> It would be hard to stop them since they'd come with self defense functions. 
>> AG
> 
> But we also know a lot more about their internal construction and functions.  
> We would probably even build in an Achilles heel.
> 
> Brent
> 
> I think you underestimate the evil that men can do, not to mention some bit 
> flips due to cosmic rays that could change their MO's entirely. AG 
> 
> 
> Properly-programmed robots would negotiate and avoid any war, killing, or 
> destruction altogether.

Properly-programmed robots are what we call conventional non-AI programs. Even 
there, there are many difficulties, and it is not economically sustainable.

As for AI programs themselves, if we treat them as we treat ourselves, conflicts 
will be inevitable. AIs are like kids, except that they “evolve” much more 
quickly.

The human factor is the biggest danger here.

Bruno




> 
> @philipthrift 
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected] 
> <mailto:[email protected]>.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/c9b03a6e-f714-470b-8690-29f40d716cc6%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/everything-list/c9b03a6e-f714-470b-8690-29f40d716cc6%40googlegroups.com?utm_medium=email&utm_source=footer>.
