In your hypothetical scenario, if the car can give you as much debugging
information as you suggest (100% sure the tree is there, 95% sure the child
is there), you can actually figure out what's happening. The only other
piece of information you need is the configured utility value for each
possible outcome.

Say the utility of hitting a tree is -1000, the utility of hitting a child
is -5000 and the utility of not hitting anything is 0. A rational agent
maximizes the expected value of the utility function. So:
 - Option A: Hit the tree. Expected utility = -1000.
 - Option B: Avoid the tree, hitting the child if there really is a child
there. Expected utility = 0.95 * (-5000) + 0.05 * 0 = -4750.

So the car should pick Option A. If instead the configured utility function
gave hitting a tree and hitting a child the same value, Option B's expected
utility would be 0.95 * (-1000) = -950, which beats -1000, so the car would
indeed "aim for the lesser probability"; in that case the lawyers would be
correct that the programmers are endangering the public with their bad
programming.

Álvaro.



On Mon, Oct 30, 2017 at 2:22 PM, Pierce T. Wetter III <
pie...@alumni.caltech.edu> wrote:

> Unlike humans, who have these pesky things called rights, we can abuse our
> computer programs to deduce why they made decisions. I can see a future
> where that has to happen. From my experience in trying to best the stock
> market with an algorithm, I can tell you that you have to be able to explain
> why something happened, or the CEO will wrest control away from the
> engineers.
>
> Picture a court case where the engineers for an electric car are called
> upon to testify about why a child was killed by their self-driving car. The
> fact that the introduction of the self-driving car has reduced the accident
> rate by 99% doesn’t matter, because the court case is about *this* car
> and *this* child. The 99% argument is for the closing case, or for the
> legislature, but it’s early yet.
>
> The manufacturer throws up their hands and says “we dunno, sorry”.
>
> Meanwhile, the plaintiff has hired someone who has manipulated the inputs
> to the neural net, and they’ve figured out that the car struck the child,
> because the car was 100% sure the tree was there, but it could only be 95%
> sure the child was there. So it ruthlessly aimed for the lesser
> probability.
>
> The plaintiff’s lawyer argues that a human would have rather hit a tree
> than a child.
>
> Jury awards $100M in damages to the plaintiffs.
>
> I would think it would be possible to do “differential” analysis on AGZ
> positions to see why AGZ made certain moves. Add an eye to a weak group,
> etc. Essentially that’s what we’re doing with MCTS, right?
>
> It seems like a fun research project to try to build a system that can
> reverse engineer AGZ, and not only would it be fun, but it's a moral
> imperative.
>
> Pierce
>
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
