The Mule aberration was considered a threat to humanity in that it could
prolong the dark age after the collapse of the Empire by tracking down and
eliminating the influence of the Second Foundation, which was helping the
First Foundation from the shadows. The Mule was therefore
Oh, I know Asimov saw at least some of the flaws in the laws. I'm bringing
them up so we could too. Grin. In a fallen world, the answers can’t come from
within the fallen world. The robots discovered their limits and had humility
(whether they found the correct answer is another discussion).
Except when it failed. IIRC, an aberration known as "the Mule" appeared
along the way in the trilogy, and they (the robot, R. Daneel Olivaw, working
through the humans of the Second Foundation) had to find a work-around to
keep the future from slipping into a dark age.
On Tuesday, March 20, 2018 at
Patrick,
There was some of this exact thinking in some of Asimov's Foundation
series. I don't recall if it was an Asimov book or another added to the
series, though I suspect the latter. On a galactic level, the robots were
unable to determine what was best for humanity. So, they found a human
Asimov’s laws are pretty, and a logical foundation from which to start, but they
are far from being the answer they appear to be. Take the first “law” of medicine,
for example. Modern doctors deleted the notion of “first, do no harm” from the
modern Hippocratic Oath (which isn’t required), and added some
Good point Tim. If I knew an automated car was programmed to stop for a
cyclist regardless of the overall traffic situation, it would be more
tempting to seize the initiative to cross the intersection before it did.
Not that I would, but anyway.
On Tue, Mar 20, 2018 at 11:15 AM, Tim Butterfield
It was stated in the media that neither the car nor the human nanny
attempted to slow or stop the vehicle. Not sure who the source of that
info was.
On Tue, Mar 20, 2018 at 9:05 AM, Joe Bernard wrote:
> We don't know that the driving attendant or the computer failed. I have
>
I expect that Arizona's and Tucson's laws would only allow local police to cite
the human operator with a traffic violation. If Uber has any liability in this
instance, it must be established via a civil process. Police and prosecutors
don't make that call. This is an area of law that will come
On Tue, Mar 20, 2018 at 8:40 AM, Joe Bernard wrote:
> There it is. https://twitter.com/FortuneMagazine/status/976099801669521409?s=19
Hmmm. It may not be in this case, but I have a sad premonition that people
playing with/around self-driving cars may become an extreme
There it is. https://twitter.com/FortuneMagazine/status/976099801669521409?s=19
--
You received this message because you are subscribed to the Google Groups "RBW
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
Curtis,
There is also the Zeroth law, which takes precedence over the other three.
0. A robot may not harm humanity, or, by inaction, allow humanity to come
to harm.
Of course, these are built into the positronic brain, which does not yet
exist.
Tim
On Tue, Mar 20, 2018 at 6:05 AM, Curtis
Basically, there is a whole lot we do not know, and we really would do
better waiting for the results of the on-going crash investigation
rather than speculating as we have done.
On 03/20/2018 10:05 AM, Joe Bernard wrote:
We don't know that the driving attendant or the computer failed. I
We don't know that the driving attendant or the computer failed. I have
questions concerning how well the pedestrian stepping into the street outside a
crosswalk late at night judged the distance of the oncoming vehicle.
Are autonomous cars built to obey Asimov's Three Laws? I doubt it very
much.
On 03/20/2018 09:05 AM, Curtis McKenzie wrote:
Hello Everyone,
Just a quick note about the "self-driving" cars.
Isaac Asimov's "Three Laws of Robotics"
1. A robot may not injure a human being or, through
The driving attendant had the accident and violated the trust of oversight,
which was the promise predicating this experiment's permission to use
public roads as its laboratory.
I'm (was) coming to grips with AI Ubers and have felt safer around them
than transit buses whose operators continue to
Hello Everyone,
Just a quick note about the "self-driving" cars.
Isaac Asimov's "Three Laws of Robotics"
1. A robot may not injure a human being or, through inaction, allow a
human being to come to harm.
2. A robot must obey orders given it by human beings except where such
orders
I would trust Google with this technology more than Uber. Uber has shown
quite frequently that they push the envelope on what is legal and what is
moral. Google's stated philosophy is "first, do no harm". Pretty big
difference there.
I don't know if perception is reality, but it seems to me
Uber has been picking people up at the Pittsburgh airport using autonomous
vehicles for over a year. They've managed the sometimes bumper-to-bumper,
sometimes 70 mph Parkway West into the city with no incidents and have
delivered folks accident-free. Try driving in a city laid out
Still basically a sandbox, not the real world at large. Life is
different out in the real world, where the maps are full of roads that
exist only on paper, the white lines don't get maintained, and random
chaos rules. You can go 2 million miles around a closed track and still
not have ten
On Monday, March 19, 2018 at 3:58:21 PM UTC-7, Steve Palincsar wrote:
>
> Since computer driven cars are basically still in the laboratory
>
I'm not going to argue that autonomous cars are ready for prime time, but
they're still in the lab only if you consider the streets of Mountain View,
the
Today I delivered a bike in a very tippy and strange U-Haul van after not
sleeping well last night. I like cars and driving, but I'm quite sure a
computer would have been a more competent operator of that contraption than I
was.
Good point Steve.
>> "computer-driven cars are safer than people-driven"
Not sure if this domain can be generalized like that. There are so many
independent projects underway. The statement might be true for the leading
one or two, but the majority of them are still buggy, not close to being
Well, they’re out in the world, so they’ve escaped the lab.
Note that I referred to a *future* in which safer cars drive themselves. We may
not be there yet.
--Eric Norris
campyonly...@me.com
@CampyOnlyguy (Twitter/Instagram)
> On Mar 19, 2018, at 3:58 PM, Steve Palincsar
On 03/19/2018 06:39 PM, Eric Norris wrote:
By comparison, 5,376 pedestrians were killed by people-driven cars in
2015, which is one pedestrian killed every 1.6 hours.
It’s not helpful to focus on a single accident; this is a “man bites dog”
story that triggers all sorts of worries about
By comparison, 5,376 pedestrians were killed by people-driven cars in 2015,
which is one pedestrian killed every 1.6 hours.
It’s not helpful to focus on a single accident; this is a “man bites dog” story
that triggers all sorts of worries about technology gone wrong. The facts,
however, might
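(For anyone who wants to verify the rate quoted above, the arithmetic is simple; here is a quick sketch, taking only the 5,376 figure from the post as given:)

```python
# Sanity check: 5,376 pedestrian deaths over one year works out to
# roughly one death every 1.6 hours.
deaths_2015 = 5376
hours_in_year = 365 * 24  # 8,760 hours

hours_per_death = hours_in_year / deaths_2015
print(f"One pedestrian death every {hours_per_death:.1f} hours")  # → 1.6
```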
This is not a solution, just a description of another day and another
place; a galaxy long, long ago, and far, far away. Dave Moulton on cars,
bikes, and everyday transportation in the 1950s. Rationing in its entirety
ended only in 1954.