2017-04-28 16:17 GMT+02:00 Luke Kenneth Casson Leighton <l...@lkcl.net>:

>
>  the rest of the article makes a really good point, which has me
> deeply concerned now that there are fuckwits out there making
> "driverless" cars, toying with people's lives in the process.  you
> have *no idea* what unexpected decisions are being made, what has been
> "optimised out".
>

That's no different from regular "human" programming. If you employ AI
programming, you can still validate the code just as you would validate
code written by a human.
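
For instance, validation can target behaviour rather than authorship. A
minimal sketch in Python: plan_braking() below is a hypothetical stand-in
for any machine-generated controller, and the tests check safety
invariants that hold no matter who (or what) wrote the code.

    # Hypothetical controller: deceleration (m/s^2) needed to stop
    # within distance_m, capped at a physical limit of 10 m/s^2.
    def plan_braking(speed_mps, distance_m):
        if distance_m <= 0:
            return 10.0                      # emergency maximum
        needed = speed_mps ** 2 / (2 * distance_m)
        return min(needed, 10.0)

    # Invariant 1: never command physically impossible braking.
    def test_never_exceeds_physical_limit():
        for speed in range(0, 60, 5):
            for dist in range(1, 200, 10):
                assert plan_braking(speed, dist) <= 10.0

    # Invariant 2: when stopping is feasible, the plan actually stops
    # within the available distance (v^2 / (2*a) <= d).
    def test_stops_in_time_when_feasible():
        speed, dist = 20.0, 30.0
        decel = plan_braking(speed, dist)
        assert speed ** 2 / (2 * decel) <= dist + 1e-9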

Or build a second, independent AI for the four-eyes principle.
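
In software-safety terms that is N-version programming: two independently
written implementations of the same specification, plus a comparator that
only acts on agreement. A toy sketch (decide_a/decide_b are invented
names, not from any real system):

    # Version A: brake when the obstacle is inside a 2-second gap.
    def decide_a(obstacle_distance_m, speed_mps):
        return "brake" if obstacle_distance_m < speed_mps * 2.0 else "cruise"

    # Version B, independently derived: brake when time-to-contact < 2 s.
    def decide_b(obstacle_distance_m, speed_mps):
        ttc = obstacle_distance_m / speed_mps if speed_mps > 0 else float("inf")
        return "brake" if ttc < 2.0 else "cruise"

    def voted_decision(distance, speed):
        a, b = decide_a(distance, speed), decide_b(distance, speed)
        if a == b:
            return a
        return "brake"    # disagreement: degrade to the safest action

Note that the comparator has to know a safe fallback state; deciding what
that state is remains a human judgement call.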


>  with aircraft it's a different matter: the skies are clear, it's a
> matter of physics and engineering, and the job of taking off, landing
> and changing direction is, if extremely complex, actually just a
> matter of programming.  also, the PILOT IS ULTIMATELY IN CHARGE.
>
>  cars - where you could get thrown unexpected completely unanticipated
> scenarios involving life-and-death decisions - are a totally different
> matter.
>
>  the only truly ethical way to create "driverless" cars is to create
> an actual *conscious* machine intelligence with which you can have a
> conversation, and *TEACH* it - through a rational conversation - what
> the actual parameters are for (a) the laws of the road (b) moral
> decisions regarding life-and-death situations.
>

The problem is nuance. Suppose a cyclist crosses your path, and the only
way to avoid the collision is to drive into a group of people waiting to
cross once you have passed. The choice seems logical: hit the cyclist.
Many are saved by killing/injuring/bumping one.

Humans are notoriously bad at making those decisions themselves. We only
consider the cyclist; that's our focus. The group becomes a secondary
concern.

Many people are killed or injured trying to avoid hitting animals. You
swerve to avoid the collision, only to find your vehicle becoming
uncontrollable or a new obstacle on your new trajectory, usually a tree.
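
To make the nuance concrete, here is a toy expected-harm comparison over
candidate maneuvers. Every probability and weight is invented for
illustration; a real system would need vastly richer models of outcome
and uncertainty:

    # maneuver: (probability of collision, people at risk if it happens)
    candidates = {
        "brake_straight": (0.90, 1),   # likely hits the cyclist
        "swerve_left":    (0.60, 5),   # may plow into the waiting group
        "swerve_right":   (0.95, 1),   # likely loses control / hits a tree
    }

    def expected_harm(p_collision, people_at_risk):
        return p_collision * people_at_risk

    for m, (p, n) in candidates.items():
        print(f"{m}: expected harm {expected_harm(p, n):.2f}")
    print("chosen:", min(candidates, key=lambda m: expected_harm(*candidates[m])))

With these invented numbers the minimizer picks hitting the cyclist,
matching the intuition above; the real point is that every moral choice
ends up hidden inside the weights.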

The real crisis comes from outside control. The car can be hacked and
weaponized. That works with humans as well, but it is harder and slower:
programming humans takes time.
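
One concrete reason the hacking risk is real: classic in-vehicle buses
such as CAN carry no sender authentication, so any compromised node can
inject commands. A minimal sketch of the commonly proposed mitigation,
authenticating each command with an HMAC (the key handling here is
deliberately naive, for illustration only):

    import hmac, hashlib

    SHARED_KEY = b"demo-key-not-for-real-use"

    def sign_command(payload: bytes) -> bytes:
        tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
        return payload + tag

    def accept_command(frame: bytes):
        payload, tag = frame[:-32], frame[-32:]
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
        # constant-time comparison to avoid timing side channels
        return payload if hmac.compare_digest(tag, expected) else None

    assert accept_command(sign_command(b"brake:0.3")) == b"brake:0.3"
    assert accept_command(b"brake:1.0" + b"\x00" * 32) is None   # forged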

Or some other Asimov-related issue ;-)


>
>  applying genetic algorithms to driving of vehicles is a stupid,
> stupid idea because you cannot tell what has been "optimised out" -
> just as the guy from this article says.
>
> l.
>