Stephen & Mike raise an excellent question (below).  Only a human can 
assume moral or legal responsibility, so who would be responsible for a death 
caused by the actions of a vehicle computer?

Candidates include the pollies who mandated use of the technology, the 
executives of the company that developed it, and even the individuals who drive 
such vehicles.  Although it would be strictly a legal workaround, perhaps we'd 
see a situation where driverless technology was legally quarantined from 
liability, if such a thing is constitutionally possible.

Where are you, Link lawyers and moral philosophers?


On 2016-06-08 22:12 Stephen Loosley wrote:

> For mine, it'll be a question of the definition of the term "work reliably."
> 
> It's a problem, but I'd prefer that my driverless car were not programmed to 
> automatically kill my family in preference to multiple others. I suspect they 
> will be.
> 
> For an example, think of an impending accident on a mountain road between 
> your family-filled sedan and a busload of 20 kids on a school excursion. 
> Under political correctness for safety purposes, which vehicle goes over the 
> side?
> 
> If a human driver could see a one-in-a-hundred chance of both vehicles 
> staying on the road, they'd take it. But could our "roadworthy" computer?


On 2016-06-09 07:37 Mike wrote:

> My concern is that the driving software etc. will become so good that its use 
> is made mandatory at all times, no matter what.  That there will be no option 
> available to the humans involved.  And heavy penalties for disabling or 
> overriding the system - assuming that, in a non-human-controlled vehicle, it 
> will even be possible to disable the system and take over.

David L
_______________________________________________
Link mailing list
[email protected]
http://mailman.anu.edu.au/mailman/listinfo/link