Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-13 Thread scott Ford
Russell:

Yep, exactly: AIs that develop themselves. Think Person of Interest: it had a
list of all the Social Security numbers of people that needed help, and at
midnight did a complete refresh of that list.
A very interesting concept.

Scott

On Mon, May 11, 2020 at 10:06 PM Seymour J Metz  wrote:

> Well, Heinlein's explanation was bafflegab, but think neural nets; they
> have to be trained rather than programmed.
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>
> 
> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf
> of Russell Witt [025adb32e6d7-dmarc-requ...@listserv.ua.edu]
> Sent: Monday, May 11, 2020 9:52 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Colossus, Strangelove, etc. was: Developers say...
>
> But what about the AI that develops autonomously? Remember Mike (Mycroft)
> from The Moon is a Harsh Mistress (Heinlein) and TANSTAAFL (still true
> today - so many people forget). AI might not be "developed" directly, which
> then rules out having any "rules".
>
> Russell
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of scott Ford
> Sent: Monday, May 11, 2020 10:51 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Colossus, Strangelove, etc. was: Developers say...
>
> Joel,
>
> I agree. I am a huge sci-fi fan and believe in the sciences over utter
> stupidity.
> Lionel, your point is well taken. I am guilty too, but strong feelings like
> that are sometimes part of ADHD; it's called RSD (Rejection Sensitive
> Dysphoria).
> I have both ...
>
> Scott
>
> On Mon, May 11, 2020 at 11:22 AM Lionel B Dyck  wrote:
>
> > Joel - can we please keep politics out of this listserv. Personally I
> > wouldn't trust anyone in power to act against their own self interests
> > and that applies to politicians and anyone else with power (as in
> > money, influence, etc.).
> >
> > There are altruistic individuals in the world and when it comes to the
> > development of an AI robot one prays/hopes that those are the software
> > developers who implement the code for the three laws.
> >
> >
> > Lionel B. Dyck <
> > Website:
> > https://www.lbdsoftware.com
> >
> > "Worry more about your character than your reputation.  Character is
> > what you are, reputation merely what others think you are." - John
> > Wooden
> >
> > -Original Message-
> > From: IBM Mainframe Discussion List  On
> > Behalf Of Joel C. Ewing
> > Sent: Monday, May 11, 2020 10:12 AM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: Colossus, Strangelove, etc. was: Developers say...
> >
> > I've greatly enjoyed Asimov's vision of future possibilities, but when
> > I step back to reality it occurs to me that his perfect laws of
> > robotics would have to be implemented by fallible human programmers.
> > Even if well-intentioned, how would they unambiguously convey to a
> > robot the concepts of "human", "humanity", "hurt", and "injure" when
> > there have always been minorities or "others" that are treated by one
> > group of humans as sub-human to justify injuring them in the name of
> "protecting"
> > them or protecting humanity?  And then there is the issue of who might
> > make the decision to build sentient robots:   For example, who in our
> > present White House would you trust to pay any heed to logic or
> > scientific recommendations or long-term consequences, if they were
> > given the opportunity to construct less-constrained AI robots that
> > they perceived offered some short-term political advantage?
> >
> > Humanity was also fortunate that when the hardware of Asimov's Daneel
> > began to fail, that he failed gracefully, rather than becoming a
> > menace to humanity.
> > Joel C Ewing
> >
> > On 5/11/20 8:43 AM, scott Ford wrote:
> > > Well done, Joel. I agree, but I can't help but be curious about
> > > the future of AI.
> > > A bit of Isaac Asimov ...

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-12 Thread Phil Smith III
Shmuel wrote:

>I hated it; that level of AI on a 360/75? To say nothing of just reeking of 
>sympathetic magic.

>BTW, the wiki article got the origin of the name wrong; it was P-1 because it 
>ran in partition (remember those?) 1. Does anybody know whether Waterloo was 
>actually running MVT on their 75, as seems likely?



Yes, we were. I remember that long after the 360 was gone, a virtual machine 
called OSVS2 was still running some tiny thing that nobody felt like porting.

 

That book started badly, getting the bloody location of Waterloo wrong for no 
good reason (it didn't matter to the story, but they put it on the wrong 
highway), but it was at least entertaining.

 

...phsiii


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Seymour J Metz
Well, Heinlein's explanation was bafflegab, but think neural nets; they have to 
be trained rather than programmed.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Russell Witt [025adb32e6d7-dmarc-requ...@listserv.ua.edu]
Sent: Monday, May 11, 2020 9:52 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Colossus, Strangelove, etc. was: Developers say...

But what about the AI that develops autonomously? Remember Mike (Mycroft) from 
The Moon is a Harsh Mistress (Heinlein) and TANSTAAFL (still true today - so 
many people forget). AI might not be "developed" directly, which then rules out 
having any "rules".

Russell

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of scott Ford
Sent: Monday, May 11, 2020 10:51 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Colossus, Strangelove, etc. was: Developers say...

Joel,

I agree. I am a huge sci-fi fan and believe in the sciences over utter stupidity.
Lionel, your point is well taken. I am guilty too, but strong feelings like 
that are sometimes part of ADHD; it's called RSD (Rejection Sensitive 
Dysphoria).
I have both ...

Scott

On Mon, May 11, 2020 at 11:22 AM Lionel B Dyck  wrote:

> Joel - can we please keep politics out of this listserv. Personally I
> wouldn't trust anyone in power to act against their own self interests
> and that applies to politicians and anyone else with power (as in
> money, influence, etc.).
>
> There are altruistic individuals in the world and when it comes to the
> development of an AI robot one prays/hopes that those are the software
> developers who implement the code for the three laws.
>
>
> Lionel B. Dyck <
> Website: 
> https://www.lbdsoftware.com
>
> "Worry more about your character than your reputation.  Character is
> what you are, reputation merely what others think you are." - John
> Wooden
>
> -Original Message-
> From: IBM Mainframe Discussion List  On
> Behalf Of Joel C. Ewing
> Sent: Monday, May 11, 2020 10:12 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Colossus, Strangelove, etc. was: Developers say...
>
> I've greatly enjoyed Asimov's vision of future possibilities, but when
> I step back to reality it occurs to me that his perfect laws of
> robotics would have to be implemented by fallible human programmers.
> Even if well-intentioned, how would they unambiguously convey to a
> robot the concepts of "human", "humanity", "hurt", and "injure" when
> there have always been minorities or "others" that are treated by one
> group of humans as sub-human to justify injuring them in the name of 
> "protecting"
> them or protecting humanity?  And then there is the issue of who might
> make the decision to build sentient robots:   For example, who in our
> present White House would you trust to pay any heed to logic or
> scientific recommendations or long-term consequences, if they were
> given the opportunity to construct less-constrained AI robots that
> they perceived offered some short-term political advantage?
>
> Humanity was also fortunate that when the hardware of Asimov's Daneel
> began to fail, that he failed gracefully, rather than becoming a
> menace to humanity.
> Joel C Ewing
>
> On 5/11/20 8:43 AM, scott Ford wrote:
> > Well done, Joel. I agree, but I can't help but be curious about
> > the future of AI.
> > A bit of Isaac Asimov ...
> >
> > Scott
> >
> > On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:
> >
> >> And of course the whole point of Colossus, Dr Strangelove, War
> >> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc.
> >> was to try to make it clear to all the non-engineers and
> >> non-programmers (all of whom greatly outnumber us) why putting
> >> lethal force in the hands of any autonomous or even semi-autonomous
> >> machine is something with incredible potential to go wrong.  We all
> >> know that even if the hardware doesn't fail, which it inevitably
> >> will, that all software above a certain level of complexity is
> >> guaranteed to have bugs with unknown consequences.

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Russell Witt
But what about the AI that develops autonomously? Remember Mike (Mycroft) from 
The Moon is a Harsh Mistress (Heinlein) and TANSTAAFL (still true today - so 
many people forget). AI might not be "developed" directly, which then rules out 
having any "rules".

Russell

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of scott Ford
Sent: Monday, May 11, 2020 10:51 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Colossus, Strangelove, etc. was: Developers say...

Joel,

I agree. I am a huge sci-fi fan and believe in the sciences over utter stupidity.
Lionel, your point is well taken. I am guilty too, but strong feelings like 
that are sometimes part of ADHD; it's called RSD (Rejection Sensitive 
Dysphoria).
I have both ...

Scott

On Mon, May 11, 2020 at 11:22 AM Lionel B Dyck  wrote:

> Joel - can we please keep politics out of this listserv. Personally I 
> wouldn't trust anyone in power to act against their own self interests 
> and that applies to politicians and anyone else with power (as in 
> money, influence, etc.).
>
> There are altruistic individuals in the world and when it comes to the 
> development of an AI robot one prays/hopes that those are the software 
> developers who implement the code for the three laws.
>
>
> Lionel B. Dyck <
> Website: https://www.lbdsoftware.com
>
> "Worry more about your character than your reputation.  Character is 
> what you are, reputation merely what others think you are." - John 
> Wooden
>
> -Original Message-
> From: IBM Mainframe Discussion List  On 
> Behalf Of Joel C. Ewing
> Sent: Monday, May 11, 2020 10:12 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Colossus, Strangelove, etc. was: Developers say...
>
> I've greatly enjoyed Asimov's vision of future possibilities, but when 
> I step back to reality it occurs to me that his perfect laws of 
> robotics would have to be implemented by fallible human programmers.  
> Even if well-intentioned, how would they unambiguously convey to a 
> robot the concepts of "human", "humanity", "hurt", and "injure" when 
> there have always been minorities or "others" that are treated by one 
> group of humans as sub-human to justify injuring them in the name of 
> "protecting"
> them or protecting humanity?  And then there is the issue of who might
> make the decision to build sentient robots:   For example, who in our
> present White House would you trust to pay any heed to logic or 
> scientific recommendations or long-term consequences, if they were 
> given the opportunity to construct less-constrained AI robots that 
> they perceived offered some short-term political advantage?
>
> Humanity was also fortunate that when the hardware of Asimov's Daneel 
> began to fail, that he failed gracefully, rather than becoming a 
> menace to humanity.
> Joel C Ewing
>
> On 5/11/20 8:43 AM, scott Ford wrote:
> > Well done, Joel. I agree, but I can't help but be curious about
> > the future of AI.
> > A bit of Isaac Asimov ...
> >
> > Scott
> >
> > On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:
> >
> >> And of course the whole point of Colossus, Dr Strangelove, War 
> >> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. 
> >> was to try to make it clear to all the non-engineers and 
> >> non-programmers (all of whom greatly outnumber us) why putting 
> >> lethal force in the hands of any autonomous or even semi-autonomous 
> >> machine is something with incredible potential to go wrong.  We all 
> >> know that even if the hardware doesn't fail, which it inevitably 
> >> will, that all software above a certain level of complexity is 
> >> guaranteed to have bugs with unknown consequences.
> >> There is another equally cautionary genre in sci-fi about 
> >> society becoming so dependent on machines as to lose the knowledge 
> >> to understand and maintain the machines, resulting in total 
> >> collapse when the machines inevitably fail.  I still remember my 
> >> oldest sister
> reading E.M.
> >> Forster, "The Machine Stops" (1909), to me  when I was very young.
> >> Various Star Trek episodes used both of these themes as plots.
> >> People can also break down with lethal  side effects, but the 
> >> potential  damage one person can create is more easily contained by
> >> other people.  The only effective way to defend against a berserk lethal
> >> machine may be with another lethal machine, and Colossus-Guardian 
> >> suggests why that may be an even worse idea.

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Mike Schwab
I was on a chemistry site and posted asking about a source of
https://en.wikipedia.org/wiki/Thiotimoline .  Someone actually emailed
me that they had some, so I asked how quickly it dissolved in water.  They
didn't reply.

At the end of Asimov's own chemistry PhD thesis defense, the examiners asked
him about thiotimoline too, then grinned and announced he had passed.

On Mon, May 11, 2020 at 8:16 PM Seymour J Metz  wrote:
>
> Remember that it was fiction, and that Asimov's field was biochemistry. His 
> stories have a good deal of bafflegab in them, but the key question is 
> whether you enjoyed them; I did. I found the gimmick of laws that we don't 
> even know how to interpret, never mind implement, far less distracting than, 
> e.g., the faulty counting of electrons in "The Gods Themselves".
>
> Likewise the Galactic Empire stories. Did the handwaving prevent you from 
> enjoying them?
>
> And for all of you who enjoyed any of Asimov's stories, I strongly recommend 
> that you look up Thiotimoline on Wikipedia; put down your hot coffee and your 
> cat before you start reading those stories.
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>
> 
> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
> Joel C. Ewing [jcew...@acm.org]
> Sent: Monday, May 11, 2020 11:11 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Colossus, Strangelove, etc. was: Developers say...
>
> I've greatly enjoyed Asimov's vision of future possibilities, but when I
> step back to reality it occurs to me that his perfect laws of robotics
> would have to be implemented by fallible human programmers.  Even if
> well-intentioned, how would they unambiguously convey to a robot the
> concepts of "human", "humanity", "hurt", and "injure" when there have
> always been minorities or "others" that are treated by one group of
> humans as sub-human to justify injuring them in the name of "protecting"
> them or protecting humanity?  And then there is the issue of who might
> make the decision to build sentient robots:   For example, who in our
> present White House would you trust to pay any heed to logic or
> scientific recommendations or long-term consequences, if they were given
> the opportunity to construct less-constrained AI robots that they
> perceived offered some short-term political advantage?
>
> Humanity was also fortunate that when the hardware of Asimov's Daneel
> began to fail, that he failed gracefully, rather than becoming a menace
> to humanity.
> Joel C Ewing
>
> On 5/11/20 8:43 AM, scott Ford wrote:
> > Well done, Joel. I agree, but I can't help but be curious about the
> > future of AI.
> > A bit of Isaac Asimov ...
> >
> > Scott
> >
> > On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:
> >
> >> And of course the whole point of Colossus, Dr Strangelove, War
> >> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was to
> >> try to make it clear to all the non-engineers and non-programmers (all
> >> of whom greatly outnumber us) why putting lethal force in the hands of
> >> any autonomous or even semi-autonomous machine is something with
> >> incredible potential to go wrong.  We all know that even if the hardware
> >> doesn't fail, which it inevitably will, that all software above a
> >> certain level of complexity is guaranteed to have bugs with unknown
> >> consequences.
> >> There is another equally cautionary genre in sci-fi about society
> >> becoming so dependent on machines as to lose the knowledge to understand
> >> and maintain the machines, resulting in total collapse when the machines
> >> inevitably fail.  I still remember my oldest sister reading E.M.
> >> Forster, "The Machine Stops" (1909), to me  when I was very young.
> >> Various Star Trek episodes used both of these themes as plots.
> >> People can also break down with lethal  side effects, but the
> >> potential  damage one person can create is more easily contained by
> >> other people.  The only effective way to defend against a berserk lethal
> >> machine may be with another lethal machine, and Colossus-Guardian
> >> suggests why that may be an even worse idea.
> >> Joel C Ewing
> >>
> >> On 5/11/20 4:54 AM, Seymour J Metz wrote:
> >>> Strangelove was twisted because the times were twisted. We're ripe for a
> >> similar parody on our own times.
> >>>
> >>> --
> >>> Shmuel (Seymour J.) Metz

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Seymour J Metz
Remember that it was fiction, and that Asimov's field was biochemistry. His 
stories have a good deal of bafflegab in them, but the key question is whether 
you enjoyed them; I did. I found the gimmick of laws that we don't even know 
how to interpret, never mind implement, far less distracting than, e.g., the 
faulty counting of electrons in "The Gods Themselves".

Likewise the Galactic Empire stories. Did the handwaving prevent you from 
enjoying them?

And for all of you who enjoyed any of Asimov's stories, I strongly recommend 
that you look up Thiotimoline on Wikipedia; put down your hot coffee and your 
cat before you start reading those stories.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Joel C. Ewing [jcew...@acm.org]
Sent: Monday, May 11, 2020 11:11 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Colossus, Strangelove, etc. was: Developers say...

I've greatly enjoyed Asimov's vision of future possibilities, but when I
step back to reality it occurs to me that his perfect laws of robotics
would have to be implemented by fallible human programmers.  Even if
well-intentioned, how would they unambiguously convey to a robot the
concepts of "human", "humanity", "hurt", and "injure" when there have
always been minorities or "others" that are treated by one group of
humans as sub-human to justify injuring them in the name of "protecting"
them or protecting humanity?  And then there is the issue of who might
make the decision to build sentient robots:   For example, who in our
present White House would you trust to pay any heed to logic or
scientific recommendations or long-term consequences, if they were given
the opportunity to construct less-constrained AI robots that they
perceived offered some short-term political advantage?

Humanity was also fortunate that when the hardware of Asimov's Daneel
began to fail, that he failed gracefully, rather than becoming a menace
to humanity.
Joel C Ewing

On 5/11/20 8:43 AM, scott Ford wrote:
> Well done, Joel. I agree, but I can't help but be curious about the
> future of AI.
> A bit of Isaac Asimov ...
>
> Scott
>
> On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:
>
>> And of course the whole point of Colossus, Dr Strangelove, War
>> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was to
>> try to make it clear to all the non-engineers and non-programmers (all
>> of whom greatly outnumber us) why putting lethal force in the hands of
>> any autonomous or even semi-autonomous machine is something with
>> incredible potential to go wrong.  We all know that even if the hardware
>> doesn't fail, which it inevitably will, that all software above a
>> certain level of complexity is guaranteed to have bugs with unknown
>> consequences.
>> There is another equally cautionary genre in sci-fi about society
>> becoming so dependent on machines as to lose the knowledge to understand
>> and maintain the machines, resulting in total collapse when the machines
>> inevitably fail.  I still remember my oldest sister reading E.M.
>> Forster, "The Machine Stops" (1909), to me  when I was very young.
>> Various Star Trek episodes used both of these themes as plots.
>> People can also break down with lethal  side effects, but the
>> potential  damage one person can create is more easily contained by
>> other people.  The only effective way to defend against a berserk lethal
>> machine may be with another lethal machine, and Colossus-Guardian
>> suggests why that may be an even worse idea.
>> Joel C Ewing
>>
>> On 5/11/20 4:54 AM, Seymour J Metz wrote:
>>> Strangelove was twisted because the times were twisted. We're ripe for a
>> similar parody on our own times.
>>>
>>> --
>>> Shmuel (Seymour J.) Metz
>>> http://mason.gmu.edu/~smetz3
>>>
>>> 
>>> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on
>> behalf of Farley, Peter x23353 [peter.far...@broadridge.com]
>>> Sent: Sunday, May 10, 2020 11:39 PM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: Developers say Google's Go is 'most sought after'
>> programming language of 2020
>>> For relatively recent fare, I agree 100% - "Person of Interest" leads
>> the pack.  My favorite oldie -- "Let's play Global Thermonuclear War . . .
>> " (War Games), right after Dr. Strangelove of course, simply because it was
>> so twisted.
>>> Mutual Assure

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Seymour J Metz
I hated it; that level of AI on a 360/75? To say nothing of just reeking of 
sympathetic magic.

BTW, the wiki article got the origin of the name wrong; it was P-1 because it 
ran in partition (remember those?) 1. Does anybody know whether Waterloo was 
actually running MVT on their 75, as seems likely?


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of Joe 
Monk [joemon...@gmail.com]
Sent: Monday, May 11, 2020 12:57 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Colossus, Strangelove, etc. was: Developers say...

An even better story ...

https://en.wikipedia.org/wiki/The_Adolescence_of_P-1

Joe

On Mon, May 11, 2020 at 11:31 AM Bob Bridges  wrote:

> I'll cheerfully leave political partisanship aside.  But if I may
> attribute this equally to both sides (and thus avoid partisanship), I'm
> with Joel ~and~ Lionel on this.  Most folks who misuse their power start
> out, at least, in hopes of doing good.  What I'm saying is that although we
> value altruism, I don't trust even altruists in the matter of exercising
> power, especially when in pursuit of The Good of Humanity.
>
> Doesn't mean we won't keep building robots.  Doesn't even mean we
> shouldn't.  But even altruists can be villains.  Ultron and Colossus both
> wanted to save the world, after all.
>
> ---
> Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313
>
> /* The historian Macaulay famously said that the Puritans opposed
> bearbaiting not because it gave pain to the bears but because it gave
> pleasure to the spectators. The Puritans were right: Some pleasures are
> contemptible because they are coarsening. They are not merely private
> vices, they have public consequences in driving the culture's downward
> spiral.  -George Will, "The challenge of thinking lower", 2001-06-22 */
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Lionel B Dyck
> Sent: Monday, May 11, 2020 11:22
>
> Joel - can we please keep politics out of this listserv. Personally I
> wouldn't trust anyone in power to act against their own self interests and
> that applies to politicians and anyone else with power (as in money,
> influence, etc.).
>
> There are altruistic individuals in the world and when it comes to the
> development of an AI robot one prays/hopes that those are the software
> developers who implement the code for the three laws.
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf
> Of Joel C. Ewing
> Sent: Monday, May 11, 2020 10:12 AM
>
> I've greatly enjoyed Asimov's vision of future possibilities, but when I
> step back to reality it occurs to me that his perfect laws of robotics
> would have to be implemented by fallible human programmers.  Even if
> well-intentioned, how would they unambiguously convey to a robot the
> concepts of "human", "humanity", "hurt", and "injure" when there have
> always been minorities or "others" that are treated by one group of humans
> as sub-human to justify injuring them in the name of "protecting"
> them or protecting humanity?  And then there is the issue of who might
> make the decision to build sentient robots:   For example, who in our
> present White House would you trust to pay any heed to logic or scientific
> recommendations or long-term consequences, if they were given the
> opportunity to construct less-constrained AI robots that they perceived
> offered some short-term political advantage?
>
> Humanity was also fortunate that when the hardware of Asimov's Daneel
> began to fail, that he failed gracefully, rather than becoming a menace to
> humanity.
>
> --- On 5/11/20 8:43 AM, scott Ford wrote:
> > Well done, Joel. I agree, but I can't help but be curious about the
> > future of AI.
> > A bit of Isaac Asimov ...
> >
> > --- On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing 
> wrote:
> >> And of course the whole point of Colossus, Dr Strangelove, War
> >> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was
> >> to try to make it 

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread David Spiegel

Hi Joe,
You beat me to it!

Regards,
David

On 2020-05-11 12:57, Joe Monk wrote:

An even better story ...

https://en.wikipedia.org/wiki/The_Adolescence_of_P-1

Joe

On Mon, May 11, 2020 at 11:31 AM Bob Bridges  wrote:


I'll cheerfully leave political partisanship aside.  But if I may
attribute this equally to both sides (and thus avoid partisanship), I'm
with Joel ~and~ Lionel on this.  Most folks who misuse their power start
out, at least, in hopes of doing good.  What I'm saying is that although we
value altruism, I don't trust even altruists in the matter of exercising
power, especially when in pursuit of The Good of Humanity.

Doesn't mean we won't keep building robots.  Doesn't even mean we
shouldn't.  But even altruists can be villains.  Ultron and Colossus both
wanted to save the world, after all.

---
Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313

/* The historian Macaulay famously said that the Puritans opposed
bearbaiting not because it gave pain to the bears but because it gave
pleasure to the spectators. The Puritans were right: Some pleasures are
contemptible because they are coarsening. They are not merely private
vices, they have public consequences in driving the culture's downward
spiral.  -George Will, "The challenge of thinking lower", 2001-06-22 */

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Lionel B Dyck
Sent: Monday, May 11, 2020 11:22

Joel - can we please keep politics out of this listserv. Personally I
wouldn't trust anyone in power to act against their own self interests and
that applies to politicians and anyone else with power (as in money,
influence, etc.).

There are altruistic individuals in the world and when it comes to the
development of an AI robot one prays/hopes that those are the software
developers who implement the code for the three laws.

-Original Message-
From: IBM Mainframe Discussion List  On Behalf
Of Joel C. Ewing
Sent: Monday, May 11, 2020 10:12 AM

I've greatly enjoyed Asimov's vision of future possibilities, but when I
step back to reality it occurs to me that his perfect laws of robotics
would have to be implemented by fallible human programmers.  Even if
well-intentioned, how would they unambiguously convey to a robot the
concepts of "human", "humanity", "hurt", and "injure" when there have
always been minorities or "others" that are treated by one group of humans
as sub-human to justify injuring them in the name of "protecting"
them or protecting humanity?  And then there is the issue of who might
make the decision to build sentient robots:   For example, who in our
present White House would you trust to pay any heed to logic or scientific
recommendations or long-term consequences, if they were given the
opportunity to construct less-constrained AI robots that they perceived
offered some short-term political advantage?

Humanity was also fortunate that when the hardware of Asimov's Daneel
began to fail, that he failed gracefully, rather than becoming a menace to
humanity.

--- On 5/11/20 8:43 AM, scott Ford wrote:

Well done, Joel. I agree, but I can't help but be curious about the
future of AI.
A bit of Isaac Asimov ...

--- On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing 

wrote:

 And of course the whole point of Colossus, Dr Strangelove, War
Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was
to try to make it clear to all the non-engineers and non-programmers
(all of whom greatly outnumber us) why putting lethal force in the
hands of any autonomous or even semi-autonomous machine is something
with incredible potential to go wrong.  We all know that even if the
hardware doesn't fail, which it inevitably will, that all software
above a certain level of complexity is guaranteed to have bugs with
unknown consequences.
 There is another equally cautionary genre in sci-fi about society
becoming so dependent on machines as to lose the knowledge to
understand and maintain the machines, resulting in total collapse
when the machines inevitably fail.  I still remember my oldest sister
reading E.M. Forster, "The Machine Stops" (1909), to me when I was very young.
 Various Star Trek episodes used both of these themes as plots.
 People can also break down with lethal side effects, but the
potential damage one person can create is more easily contained by
other people.  The only effective way to defend against a berserk lethal
machine may be with another lethal machine, and Colossus-Guardian
suggests why that may be an even worse idea.


Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread scott Ford
Joe,

Yeah, I read it. It’s a great book, along with “The Martian”; I couldn’t
put “The Martian” down.

Scott

On Mon, May 11, 2020 at 12:58 PM Joe Monk  wrote:

> An even better story ...
>
> https://en.wikipedia.org/wiki/The_Adolescence_of_P-1
>
> Joe
>

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Joe Monk
An even better story ...

https://en.wikipedia.org/wiki/The_Adolescence_of_P-1

Joe


Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Bob Bridges
I'll cheerfully leave political partisanship aside.  But if I may attribute 
this equally to both sides (and thus avoid partisanship), I'm with Joel ~and~ 
Lionel on this.  Most folks who misuse their power start out, at least, in 
hopes of doing good.  What I'm saying is that although we value altruism, I 
don't trust even altruists in the matter of exercising power, especially when 
in pursuit of The Good of Humanity.

Doesn't mean we won't keep building robots.  Doesn't even mean we shouldn't.  
But even altruists can be villains.  Ultron and Colossus both wanted to save 
the world, after all.

---
Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313

/* The historian Macaulay famously said that the Puritans opposed bearbaiting 
not because it gave pain to the bears but because it gave pleasure to the 
spectators. The Puritans were right: Some pleasures are contemptible because 
they are coarsening. They are not merely private vices, they have public 
consequences in driving the culture's downward spiral.  -George Will, "The 
challenge of thinking lower", 2001-06-22 */

>>>
>>> -Original Message-
>>> From: Bob Bridges
>>> Sent: Sunday, May 10, 2020 10:21 PM
>>>
>>> I've always loved "Colossus: The Forbin Project".  Not many people 
>>> have seen it, as far as I can tell.  The only problem I have with
>>> that movie - well, the main problem - is that no programmer in the
>>> world would make such a system and then throw away the Stop button.
>>> No engineer would do that with a machine he built, either.  Too many
>>> things can go wrong.  But a fun movie, if you can ignore that.

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread scott Ford
Allan,

Yes sir, ‘I, Robot’ is a great story.

Scott

On Mon, May 11, 2020 at 12:10 PM Allan Staller 
wrote:

> Look up the story "I, Robot". From memory, I believe it is also an Isaac
> Asimov story
>

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Allan Staller
Look up the story "I, Robot". From memory, I believe it is also an Isaac Asimov 
story

>>
>> On 5/11/20 4:54 AM, Seymour J Metz wrote:
>>> Strangelove was twisted because the times were twisted. We're ripe
>>> for a
>> similar parody on our own times.
>>>
>>> --
>>> Shmuel (Seymour J.) Metz
>>> http://mason.gmu.edu/~smetz3
>>>
>>> 
>>> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on
>> behalf of Farley, Peter x23353 [peter.far...@broadridge.com]
>>> Sent: Sunday, May 10, 2020 11:39 PM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: Developers say Google's Go is 'most sought after'
>> programming language of 2020
>>> For relatively recent fare, I agree 100% - "Person of Interest"
>>> leads
>> the pack.  My favorite oldie -- "Let's play Global Thermonuclear War . . .
>> " (War Games), right after Dr. Strangelove of course, simply because
>> it was so twisted.
>>> Mutual Assured Destruction indeed.  Is SkyNet far away?
>>>
>>> Peter
>>>
>>

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread scott Ford
Lionel,

Out of respect for all: oh yes, me too... I have made mistakes like
everyone.


On Mon, May 11, 2020 at 11:54 AM Lionel B Dyck  wrote:

> For me, my blood type is B+, and I tend to look on the positive side of
> things - including giving most the benefit of the doubt and hoping for the
> best. Sadly my short/long term memory failures have not erased lessons
> learned from granting trust when it shouldn't have been granted.
>
> Enough said - may y'all be safe, healthy, and blessed. During challenging
> times we need each other in many different ways and that includes looking
> out to prevent others from being taken advantage of if we have the ability
> to do so.
>
> Lionel B. Dyck <
> Website: https://www.lbdsoftware.com
>
> "Worry more about your character than your reputation.  Character is what
> you are, reputation merely what others think you are." - John Wooden
>

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Lionel B Dyck
For me, my blood type is B+, and I tend to look on the positive side of things 
- including giving most the benefit of the doubt and hoping for the best. Sadly 
my short/long term memory failures have not erased lessons learned from 
granting trust when it shouldn't have been granted.

Enough said - may y'all be safe, healthy, and blessed. During challenging times 
we need each other in many different ways and that includes looking out to 
prevent others from being taken advantage of if we have the ability to do so.

Lionel B. Dyck <
Website: https://www.lbdsoftware.com

"Worry more about your character than your reputation.  Character is what you 
are, reputation merely what others think you are." - John Wooden

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
scott Ford
Sent: Monday, May 11, 2020 10:51 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Colossus, Strangelove, etc. was: Developers say...

Joel,

I agree I am a huge sci-fi fan and believe in the sciences over utter stupidity.
Lionel your point is well taken. I am guilty too, but when you have strong 
feelings , which sometimes part of ADHD , it’s called RSD ( Reject Sensitive 
Dysphoria ).
I have both ...

Scott

On Mon, May 11, 2020 at 11:22 AM Lionel B Dyck  wrote:

> Joel - can we please keep politics out of this listserv. Personally I 
> wouldn't trust anyone in power to act against their own self interests 
> and that applies to politicians and anyone else with power (as in 
> money, influence, etc.).
>
> There are altruistic individuals in the world and when it comes to the 
> development of an AI robot one prays/hopes that those are the software 
> developers who implement the code for the three laws.
>
>
> Lionel B. Dyck <
> Website: https://www.lbdsoftware.com
>
> "Worry more about your character than your reputation.  Character is 
> what you are, reputation merely what others think you are." - John 
> Wooden
>
> -Original Message-
> From: IBM Mainframe Discussion List  On 
> Behalf Of Joel C. Ewing
> Sent: Monday, May 11, 2020 10:12 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Colossus, Strangelove, etc. was: Developers say...
>
> I've greatly enjoyed Asimov's vision of future possibilities, but when 
> I step back to reality it occurs to me that his perfect laws of 
> robotics would have to be implemented by fallible human programmers.  
> Even if well-intentioned, how would they unambiguously convey to a 
> robot the concepts of "human", "humanity", "hurt", and "injure" when 
> there have always been minorities or "others" that are treated by one 
> group of humans as sub-human to justify injuring them in the name of 
> "protecting"
> them or protecting humanity?  And then there is the issue of who might
> make the decision to build sentient robots:   For example, who in our
> present White House would you trust to pay any heed to logic or 
> scientific recommendations or long-term consequences, if they were 
> given the opportunity to construct less-constrained AI robots that 
> they perceived offered some short-term political advantage?
>
> Humanity was also fortunate that when the hardware of Asimov's Daneel 
> began to fail, that he failed gracefully, rather than becoming a 
> menace to humanity.
> Joel C Ewing
>
> On 5/11/20 8:43 AM, scott Ford wrote:
> > Well done Joel. I agree, but I can't help but be curious about
> > the future of AI.
> > A bit of Isaac Asimov ...
> >
> > Scott
> >
> > On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:
> >
> >> And of course the whole point of Colossus, Dr Strangelove, War 
> >> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. 
> >> was to try to make it clear to all the non-engineers and 
> >> non-programmers (all of whom greatly outnumber us) why putting 
> >> lethal force in the hands of any autonomous or even semi-autonomous 
> >> machine is something with incredible potential to go wrong.  We all 
> >> know that even if the hardware doesn't fail, which it inevitably 
> >> will, that all software above a certain level of complexity is 
> >> guaranteed to have bugs with unknown consequences.
> >> There is another equally cautionary genre in sci-fi about 
> >> society becoming so dependent on machines as to lose the knowledge 
> >> to understand and maintain the machines, resulting in total 
> >> collapse when the machines inevitably fail.  I still remember my 
> >> oldest sister
> reading E.M.
> >> Forster, "The Machine Stops" (1909), to me  when I was very young.
>

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread scott Ford
Joel,

I agree. I am a huge sci-fi fan and believe in the sciences over utter
stupidity.
Lionel, your point is well taken. I am guilty too, but when you have strong
feelings, that’s sometimes part of ADHD; it’s called RSD (Rejection
Sensitive Dysphoria).
I have both ...

Scott

On Mon, May 11, 2020 at 11:22 AM Lionel B Dyck  wrote:

> Joel - can we please keep politics out of this listserv. Personally I
> wouldn't trust anyone in power to act against their own self interests and
> that applies to politicians and anyone else with power (as in money,
> influence, etc.).
>
> There are altruistic individuals in the world and when it comes to the
> development of an AI robot one prays/hopes that those are the software
> developers who implement the code for the three laws.
>
>
> Lionel B. Dyck <
> Website: https://www.lbdsoftware.com
>
> "Worry more about your character than your reputation.  Character is what
> you are, reputation merely what others think you are." - John Wooden
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf
> Of Joel C. Ewing
> Sent: Monday, May 11, 2020 10:12 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Colossus, Strangelove, etc. was: Developers say...
>
> I've greatly enjoyed Asimov's vision of future possibilities, but when I
> step back to reality it occurs to me that his perfect laws of robotics
> would have to be implemented by fallible human programmers.  Even if
> well-intentioned, how would they unambiguously convey to a robot the
> concepts of "human", "humanity", "hurt", and "injure" when there have
> always been minorities or "others" that are treated by one group of humans
> as sub-human to justify injuring them in the name of "protecting"
> them or protecting humanity?  And then there is the issue of who might
> make the decision to build sentient robots:   For example, who in our
> present White House would you trust to pay any heed to logic or scientific
> recommendations or long-term consequences, if they were given the
> opportunity to construct less-constrained AI robots that they perceived
> offered some short-term political advantage?
>
> Humanity was also fortunate that when the hardware of Asimov's Daneel
> began to fail, that he failed gracefully, rather than becoming a menace to
> humanity.
> Joel C Ewing
>
> On 5/11/20 8:43 AM, scott Ford wrote:
> > Well done Joel. I agree, but I can't help but be curious about the
> > future of AI.
> > A bit of Isaac Asimov ...
> >
> > Scott
> >
> > On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:
> >
> >> And of course the whole point of Colossus, Dr Strangelove, War
> >> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was
> >> to try to make it clear to all the non-engineers and non-programmers
> >> (all of whom greatly outnumber us) why putting lethal force in the
> >> hands of any autonomous or even semi-autonomous machine is something
> >> with incredible potential to go wrong.  We all know that even if the
> >> hardware doesn't fail, which it inevitably will, that all software
> >> above a certain level of complexity is guaranteed to have bugs with
> >> unknown consequences.
> >> There is another equally cautionary genre in sci-fi about society
> >> becoming so dependent on machines as to lose the knowledge to
> >> understand and maintain the machines, resulting in total collapse
> >> when the machines inevitably fail.  I still remember my oldest sister
> reading E.M.
> >> Forster, "The Machine Stops" (1909), to me  when I was very young.
> >> Various Star Trek episodes used both of these themes as plots.
> >> People can also break down with lethal  side effects, but the
> >> potential  damage one person can create is more easily contained by
> >> other people.  The only effective way to defend against a berserk lethal
> >> machine may be with another lethal machine, and Colossus-Guardian
> >> suggests why that may be an even worse idea.
> >> Joel C Ewing
> >>
> >> On 5/11/20 4:54 AM, Seymour J Metz wrote:
> >>> Strangelove was twisted because the times were twisted. We're ripe
> >>> for a
> >> similar parody on our own times.
> >>>
> >>> --
> >>> Shmuel (Seymour J.) Metz
> >>> http://mason.gmu.edu/~smetz3
> >>>
> >>> 
> >>> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU]

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Lionel B Dyck
Joel - can we please keep politics out of this listserv. Personally I wouldn't 
trust anyone in power to act against their own self interests and that applies 
to politicians and anyone else with power (as in money, influence, etc.).

There are altruistic individuals in the world and when it comes to the 
development of an AI robot one prays/hopes that those are the software 
developers who implement the code for the three laws.


Lionel B. Dyck <
Website: https://www.lbdsoftware.com

"Worry more about your character than your reputation.  Character is what you 
are, reputation merely what others think you are." - John Wooden

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Joel C. Ewing
Sent: Monday, May 11, 2020 10:12 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Colossus, Strangelove, etc. was: Developers say...

I've greatly enjoyed Asimov's vision of future possibilities, but when I step 
back to reality it occurs to me that his perfect laws of robotics would have to 
be implemented by fallible human programmers.  Even if well-intentioned, how 
would they unambiguously convey to a robot the concepts of "human", "humanity", 
"hurt", and "injure" when there have always been minorities or "others" that 
are treated by one group of humans as sub-human to justify injuring them in the 
name of "protecting"
them or protecting humanity?  And then there is the issue of who might make the 
decision to build sentient robots:   For example, who in our present White 
House would you trust to pay any heed to logic or scientific recommendations or 
long-term consequences, if they were given the opportunity to construct 
less-constrained AI robots that they perceived offered some short-term 
political advantage?

Humanity was also fortunate that when the hardware of Asimov's Daneel began to 
fail, that he failed gracefully, rather than becoming a menace to humanity.
Joel C Ewing

On 5/11/20 8:43 AM, scott Ford wrote:
> Well done Joel. I agree, but I can't help but be curious about the
> future of AI.
> A bit of Isaac Asimov ...
>
> Scott
>
> On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:
>
>> And of course the whole point of Colossus, Dr Strangelove, War 
>> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was 
>> to try to make it clear to all the non-engineers and non-programmers 
>> (all of whom greatly outnumber us) why putting lethal force in the 
>> hands of any autonomous or even semi-autonomous machine is something 
>> with incredible potential to go wrong.  We all know that even if the 
>> hardware doesn't fail, which it inevitably will, that all software 
>> above a certain level of complexity is guaranteed to have bugs with 
>> unknown consequences.
>> There is another equally cautionary genre in sci-fi about society 
>> becoming so dependent on machines as to lose the knowledge to 
>> understand and maintain the machines, resulting in total collapse 
>> when the machines inevitably fail.  I still remember my oldest sister 
>> reading E.M.
>> Forster, "The Machine Stops" (1909), to me  when I was very young.
>> Various Star Trek episodes used both of these themes as plots.
>> People can also break down with lethal  side effects, but the 
>> potential  damage one person can create is more easily contained by
>> other people.  The only effective way to defend against a berserk lethal
>> machine may be with another lethal machine, and Colossus-Guardian 
>> suggests why that may be an even worse idea.
>> Joel C Ewing
>>
>> On 5/11/20 4:54 AM, Seymour J Metz wrote:
>>> Strangelove was twisted because the times were twisted. We're ripe 
>>> for a
>> similar parody on our own times.
>>>
>>> --
>>> Shmuel (Seymour J.) Metz
>>> http://mason.gmu.edu/~smetz3
>>>
>>> 
>>> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on
>> behalf of Farley, Peter x23353 [peter.far...@broadridge.com]
>>> Sent: Sunday, May 10, 2020 11:39 PM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: Developers say Google's Go is 'most sought after'
>> programming language of 2020
>>> For relatively recent fare, I agree 100% - "Person of Interest" 
>>> leads
>> the pack.  My favorite oldie -- "Let's play Global Thermonuclear War . . .
>> " (War Games), right after Dr. Strangelove of course, simply because 
>> it was so twisted.
>>> Mutual Assured Destruction indeed.  Is SkyNet far away?
>>>
>>> Peter
>&

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Joel C. Ewing
I've greatly enjoyed Asimov's vision of future possibilities, but when I
step back to reality it occurs to me that his perfect laws of robotics
would have to be implemented by fallible human programmers.  Even if
well-intentioned, how would they unambiguously convey to a robot the
concepts of "human", "humanity", "hurt", and "injure" when there have
always been minorities or "others" that are treated by one group of
humans as sub-human to justify injuring them in the name of "protecting"
them or protecting humanity?  And then there is the issue of who might
make the decision to build sentient robots:   For example, who in our
present White House would you trust to pay any heed to logic or
scientific recommendations or long-term consequences, if they were given
the opportunity to construct less-constrained AI robots that they
perceived offered some short-term political advantage?

Humanity was also fortunate that when the hardware of Asimov's Daneel
began to fail, that he failed gracefully, rather than becoming a menace
to humanity.
    Joel C Ewing

On 5/11/20 8:43 AM, scott Ford wrote:
> Well done Joel. I agree, but I can't help but be curious about the
> future of AI.
> A bit of Isaac Asimov ...
>
> Scott
>
> On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:
>
>> And of course the whole point of Colossus, Dr Strangelove, War
>> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was to
>> try to make it clear to all the non-engineers and non-programmers (all
>> of whom greatly outnumber us) why putting lethal force in the hands of
>> any autonomous or even semi-autonomous machine is something with
>> incredible potential to go wrong.  We all know that even if the hardware
>> doesn't fail, which it inevitably will, that all software above a
>> certain level of complexity is guaranteed to have bugs with unknown
>> consequences.
>> There is another equally cautionary genre in sci-fi about society
>> becoming so dependent on machines as to lose the knowledge to understand
>> and maintain the machines, resulting in total collapse when the machines
>> inevitably fail.  I still remember my oldest sister reading E.M.
>> Forster, "The Machine Stops" (1909), to me  when I was very young.
>> Various Star Trek episodes used both of these themes as plots.
>> People can also break down with lethal  side effects, but the
>> potential  damage one person can create is more easily contained by
>> other people.  The only effective way to defend against a berserk lethal
>> machine may be with another lethal machine, and Colossus-Guardian
>> suggests why that may be an even worse idea.
>> Joel C Ewing
>>
>> On 5/11/20 4:54 AM, Seymour J Metz wrote:
>>> Strangelove was twisted because the times were twisted. We're ripe for a
>> similar parody on our own times.
>>>
>>> --
>>> Shmuel (Seymour J.) Metz
>>> http://mason.gmu.edu/~smetz3
>>>
>>> 
>>> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on
>> behalf of Farley, Peter x23353 [peter.far...@broadridge.com]
>>> Sent: Sunday, May 10, 2020 11:39 PM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: Developers say Google's Go is 'most sought after'
>> programming language of 2020
>>> For relatively recent fare, I agree 100% - "Person of Interest" leads
>> the pack.  My favorite oldie -- "Let's play Global Thermonuclear War . . .
>> " (War Games), right after Dr. Strangelove of course, simply because it was
>> so twisted.
>>> Mutual Assured Destruction indeed.  Is SkyNet far away?
>>>
>>> Peter
>>>
>>> -Original Message-
>>> From: IBM Mainframe Discussion List  On
>> Behalf Of Bob Bridges
>>> Sent: Sunday, May 10, 2020 10:21 PM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: Developers say Google's Go is 'most sought after'
>> programming language of 2020
>>> I've always loved "Colossus: The Forbin Project".  Not many people have
>> seen it, as far as I can tell.
>>> The only problem I have with that movie - well, the main problem - is
>> that no programmer in the world would make such a system and then throw
>> away the Stop button.  No engineer would do that with a machine he built,
>> either.  Too many things can go wrong.
>>> But a fun movie, if you can ignore that.
>>>
>>> ---
>>> Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313
>>>
>>> /* The only thing UFO aliens deserve is to be ignored...and when we
>> finally develop the right missiles, to have their smug, silvery little
>> butts shot down.  Not a single reported UFO sighting -- if true! --
>> describes the behavior of decent, polite, honorable visitors to our world.
>> -David Brin in a 1998 on-line interview */
>>>
>>> -Original Message-
>>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
>> On Behalf Of scott Ford
>>> Sent: Sunday, May 10, 2020 11:38
>>>
>>> Like the 1970s flick ‘Colossus: The Forbin Project’,
>>>
>>> Colossus, an American computer, and Guardian, a Russian co

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread scott Ford
Well done Joel. I agree, but I can't help but be curious about the
future of AI.
A bit of Isaac Asimov ...

Scott

On Mon, May 11, 2020 at 9:25 AM Joel C. Ewing  wrote:

> And of course the whole point of Colossus, Dr Strangelove, War
> Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was to
> try to make it clear to all the non-engineers and non-programmers (all
> of whom greatly outnumber us) why putting lethal force in the hands of
> any autonomous or even semi-autonomous machine is something with
> incredible potential to go wrong.  We all know that even if the hardware
> doesn't fail, which it inevitably will, that all software above a
> certain level of complexity is guaranteed to have bugs with unknown
> consequences.
> There is another equally cautionary genre in sci-fi about society
> becoming so dependent on machines as to lose the knowledge to understand
> and maintain the machines, resulting in total collapse when the machines
> inevitably fail.  I still remember my oldest sister reading E.M.
> Forster, "The Machine Stops" (1909), to me  when I was very young.
> Various Star Trek episodes used both of these themes as plots.
> People can also break down with lethal  side effects, but the
> potential  damage one person can create is more easily contained by
> other people.  The only effective way to defend against a berserk lethal
> machine may be with another lethal machine, and Colossus-Guardian
> suggests why that may be an even worse idea.
> Joel C Ewing
>
> On 5/11/20 4:54 AM, Seymour J Metz wrote:
> > Strangelove was twisted because the times were twisted. We're ripe for a
> similar parody on our own times.
> >
> >
> > --
> > Shmuel (Seymour J.) Metz
> > http://mason.gmu.edu/~smetz3
> >
> > 
> > From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on
> behalf of Farley, Peter x23353 [peter.far...@broadridge.com]
> > Sent: Sunday, May 10, 2020 11:39 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: Developers say Google's Go is 'most sought after'
> programming language of 2020
> >
> > For relatively recent fare, I agree 100% - "Person of Interest" leads
> the pack.  My favorite oldie -- "Let's play Global Thermonuclear War . . .
> " (War Games), right after Dr. Strangelove of course, simply because it was
> so twisted.
> >
> > Mutual Assured Destruction indeed.  Is SkyNet far away?
> >
> > Peter
> >
> > -Original Message-
> > From: IBM Mainframe Discussion List  On
> Behalf Of Bob Bridges
> > Sent: Sunday, May 10, 2020 10:21 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: Developers say Google's Go is 'most sought after'
> programming language of 2020
> >
> > I've always loved "Colossus: The Forbin Project".  Not many people have
> seen it, as far as I can tell.
> >
> > The only problem I have with that movie - well, the main problem - is
> that no programmer in the world would make such a system and then throw
> away the Stop button.  No engineer would do that with a machine he built,
> either.  Too many things can go wrong.
> >
> > But a fun movie, if you can ignore that.
> >
> > ---
> > Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313
> >
> > /* The only thing UFO aliens deserve is to be ignored...and when we
> finally develop the right missiles, to have their smug, silvery little
> butts shot down.  Not a single reported UFO sighting -- if true! --
> describes the behavior of decent, polite, honorable visitors to our world.
> -David Brin in a 1998 on-line interview */
> >
> >
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of scott Ford
> > Sent: Sunday, May 10, 2020 11:38
> >
> > Like the 1970s flick ‘Colossus: The Forbin Project’,
> >
> > Colossus, an American computer, and Guardian, a Russian computer, take over,
> saying ‘Colossus and Guardian, we are one’. Or better yet, my favorite show,
> ‘Person of Interest’.
> > --
> >
> > This message and any attachments are intended only for the use of the
> addressee and may contain information that is privileged and confidential.
> If the reader of the message is not the intended recipient or an authorized
> representative of the intended recipient, you are hereby notified that any
> dissemination of this communication is strictly prohibited. If you have
> received this communication in error, please notify us immediately by
> e-mail and delete the message and any attachments from your system.
> >
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
>
> --
> Joel C. Ewing

Re: Colossus, Strangelove, etc. was: Developers say...

2020-05-11 Thread Joel C. Ewing
    And of course the whole point of Colossus, Dr Strangelove, War
Games, Terminator,  Forbidden Planet, Battlestar Galactica, etc. was to
try to make it clear to all the non-engineers and non-programmers (all
of whom greatly outnumber us) why putting lethal force in the hands of
any autonomous or even semi-autonomous machine is something with
incredible potential to go wrong.  We all know that even if the hardware
doesn't fail, which it inevitably will, that all software above a
certain level of complexity is guaranteed to have bugs with unknown
consequences. 
    There is another equally cautionary genre in sci-fi about society
becoming so dependent on machines as to lose the knowledge to understand
and maintain the machines, resulting in total collapse when the machines
inevitably fail.  I still remember my oldest sister reading E.M.
Forster, "The Machine Stops" (1909), to me  when I was very young. 
    Various Star Trek episodes used both of these themes as plots.
    People can also break down with lethal  side effects, but the
potential  damage one person can create is more easily contained by
other people.  The only effective way to defend against a berserk lethal
machine may be with another lethal machine, and Colossus-Guardian
suggests why that may be an even worse idea.
        Joel C Ewing

On 5/11/20 4:54 AM, Seymour J Metz wrote:
> Strangelove was twisted because the times were twisted. We're ripe for a 
> similar parody on our own times.
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>
> 
> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
> Farley, Peter x23353 [peter.far...@broadridge.com]
> Sent: Sunday, May 10, 2020 11:39 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Developers say Google's Go is 'most sought after' programming 
> language of 2020
>
> For relatively recent fare, I agree 100% - "Person of Interest" leads the 
> pack.  My favorite oldie -- "Let's play Global Thermonuclear War . . . " (War 
> Games), right after Dr. Strangelove of course, simply because it was so 
> twisted.
>
> Mutual Assured Destruction indeed.  Is SkyNet far away?
>
> Peter
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf Of 
> Bob Bridges
> Sent: Sunday, May 10, 2020 10:21 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Developers say Google's Go is 'most sought after' programming 
> language of 2020
>
> I've always loved "Colossus: The Forbin Project".  Not many people have seen 
> it, as far as I can tell.
>
> The only problem I have with that movie - well, the main problem - is that no 
> programmer in the world would make such a system and then throw away the Stop 
> button.  No engineer would do that with a machine he built, either.  Too many 
> things can go wrong.
>
> But a fun movie, if you can ignore that.
>
> ---
> Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313
>
> /* The only thing UFO aliens deserve is to be ignored...and when we finally 
> develop the right missiles, to have their smug, silvery little butts shot 
> down.  Not a single reported UFO sighting -- if true! -- describes the 
> behavior of decent, polite, honorable visitors to our world.  -David Brin in 
> a 1998 on-line interview */
>
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> Behalf Of scott Ford
> Sent: Sunday, May 10, 2020 11:38
>
> Like the 1970s flick ‘Colossus: The Forbin Project’,
>
> Colossus, an American computer, and Guardian, a Russian computer, take over,
> saying ‘Colossus and Guardian, we are one’. Or better yet, my favorite show,
> ‘Person of Interest’.
>
>


-- 
Joel C. Ewing
