Except when it failed. IIRC, an aberration known as "the Mule" appeared 
along the way in the trilogy, and the robot (R. Daneel Olivaw), working 
through the humans of the Second Foundation, had to find a work-around to 
keep the future from slipping into a dark age.

On Tuesday, March 20, 2018 at 12:21:27 PM UTC-5, Tim Butterfield wrote:
>
> Patrick,
>
> There was some of this exact thinking in some of Asimov's Foundation 
> series.  I don't recall if it was an Asimov book or another added to the 
> series, though I suspect the latter.  On a galactic level, the robots were 
> unable to determine what was best for humanity.  So, they found a human who 
> always made the 'right' decisions and put that person in charge of 
> determining the right decision to make.
>
> Tim
>
> On Tue, Mar 20, 2018 at 9:52 AM, Deacon Patrick <lamon...@mac.com 
> <javascript:>> wrote:
>
>> Asimov’s laws are pretty, and a logical foundation from which to start, 
>> but are far from being the answer they appear to be. Take the first “law” 
>> of medicine, for example. Modern doctors deleted the notion of “first, do 
>> no harm” from the modern Hippocratic Oath (which isn’t required), and 
>> added some stunning language which supports euthanasia for patients who 
>> are burdens on family/society. Might not robots do the same? Define 
>> “harm”. Are humans smart enough to know “harm” when they see it? Or only 
>> robots?
>> http://mindyourheadcoop.org/the-hippocratic-oath-that-isnt
>>
>> With abandon,
>> Patrick
>>
>>

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To post to this group, send email to rbw-owners-bunch@googlegroups.com.
Visit this group at https://groups.google.com/group/rbw-owners-bunch.
For more options, visit https://groups.google.com/d/optout.