On 02.02.2016 17:29, Jim O'Flaherty wrote:
AI Software Engineers: Robert, please stop asking our AI for explanations.
We don't want to distract it with limited human understanding. And we don't
want the Herculean task of coding up that extremely frail and error prone
bridge.

Currently I do not ask a specific AI engine for explanations. If an AI program's only goal is to play strongly, then - while it is playing or preparing play - it should not be disturbed with extra tasks.

Explanations can come from AI programs, their programmers, researchers providing the theory applied in those programs, or researchers analysing the programs' code, data structures or outputs.

I do not expect everybody to be interested in explanations, but I ask those who are. It must be possible to study the theory behind playing programs, their data structures or outputs and find connections to explanatory theory - just as it must be possible to use explanatory theory to improve "brute force" programs.

A Herculean task? Likely. The research in explanatory theory is, too.

Error-prone? I disagree. Errors are not created by the volume of a task but by carelessness or a failure to study semantic conflicts.

--
robert jasiek
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go