Unlike humans, who have these pesky things called rights, we can abuse our 
computer programs to deduce why they made decisions. I can see a future where 
that has to happen. From my experience in trying to best the stock market with 
an algorithm, I can tell you that you have to be able to explain why something 
happened, or the CEO will wrest control away from the engineers.

Picture a court case where the engineers for an electric car are called upon to 
testify about why a child was killed by their self-driving car. The fact that 
the introduction of the self-driving car has reduced the accident rate by 99% 
doesn’t matter, because the court case is about this car and this child. The 
99% statistic belongs in the closing argument, or before the legislature, but 
it’s early yet.

The manufacturer throws up their hands and says, “we dunno, sorry.”

Meanwhile, the plaintiff has hired someone who has manipulated the inputs to 
the neural net, and they’ve figured out that the car struck the child because 
the car was 100% sure the tree was there but could only be 95% sure the child 
was there. So it ruthlessly aimed for the lesser probability.
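
For anyone who wants to try this at home, the expert’s technique is basically 
occlusion/perturbation analysis. Here’s a minimal sketch in Python; the 
model(image) callable and its image-in, class-probabilities-out shape are 
assumptions for illustration, not any real car’s API:

import numpy as np

def occlusion_map(model, image, patch=16, baseline=0.0):
    # Perturbation analysis: slide a blank patch over the image and
    # record how much the model's confidence in its original decision
    # drops. Regions with large drops are what the net relied on.
    probs = model(image)             # assumed: vector of class probabilities
    decided = int(np.argmax(probs))  # the class the net acted on
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = (
                probs[decided] - model(occluded)[decided])
    return decided, heat  # large values = evidence the net leaned on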

The plaintiff’s lawyer argues that a human would rather have hit a tree than a 
child.

Jury awards $100M in damages to the plaintiffs.

I would think it would be possible to do “differential” analysis on AGZ 
positions to see why AGZ made certain moves: add an eye to a weak group and 
see how the evaluation shifts, etc. Essentially that’s what we’re already 
doing with MCTS, right?
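
Here’s a minimal sketch of what that differential analysis could look like, 
assuming a hypothetical value_net(position) that returns AGZ’s win estimate 
for the side to move, and a position object with copy() and place_stone(). 
None of these names are the real AGZ interface:

def differential(value_net, position, edits):
    # Score a list of single-stone edits by how much each one moves the
    # value net's evaluation. A large swing from, say, the eye-making
    # point of a weak group tells you the net "cares" about that group.
    base = value_net(position)
    deltas = []
    for point, color in edits:
        variant = position.copy()
        variant.place_stone(point, color)  # e.g. add an eye to a weak group
        deltas.append((point, color, value_net(variant) - base))
    deltas.sort(key=lambda d: -abs(d[2]))  # biggest swings first
    return base, deltas

Ranking edits by the size of the swing gives you a crude explanation: the 
features whose presence or absence most changes the net’s opinion.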

It seems like a fun research project to try to build a system that can reverse-
engineer AGZ, and not only would it be fun, but it’s a moral imperative.

Pierce
