Hi Claus,

> In sum, there doesn't seem to be a good basis for understanding playouts
> and their optimisation, other than by trial and error. Those who've been
> through some cycles of trial-and-error probably have at least a vague
> intuition of what works and what doesn't (or didn't when they last tried),
> but there is depressingly little theory (and many of the references are
> infuriatingly vague - in the style of "there was a lot of discussion about
> that"; great! not! where, when, what keywords or references, please?-).
It's a new area and the systems are very complicated. What kind of theory
are you after, and what would you like it to tell you?

> I'm trying to build some of that intuition in advance, both to be able to
> focus my trial-and-error phase and to have some theory to complement the
> experiments.
>
> At the moment, the most puzzling aspect is that the uncertainties aren't
> bounded in any direction: the pseudo-eyes variations aren't subsets of
> each other (that can probably be fixed, and would result in a single best
> variation, at least for the three I've mentioned);
>
> the playout evaluation, while somewhat biased toward not seeing more
> complex properties of board positions, cannot even be said to give a
> lower bound on the score, because it will underestimate both the strength
> of positions and the strength of possible attacks against them (though
> knowing that much will be of some help at least).
>
> Currently, it seems as if even many of the tricks that seem to work
> wonderfully well don't really have a solid basis, so any tournament could
> throw up a game that puts the whole thing into doubt, by showing a hole
> in the test coverage big enough to drive a truck through (once pros and
> others know what to look for). Or not.

Is testing based on a large number of games not solid enough? I don't see
any alternative with such complicated systems.

Another point: we deliberately restrict the complexity of the generative
model (the playout function) by keeping it simple, and we show that it
works on a large, representative set of positions. Because the generative
model is so simple, we can expect the performance we see in private
testing to be realized in real games. We need not live in constant fear,
at least not to the degree I think you are implying.

Joel

_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/
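[Editor's note: one way to make "testing based on a large number of games" concrete is the usual binomial error bar on a measured win rate, which shrinks as 1/sqrt(games). This is a sketch, not code from the thread; the function name and the 550/1000 figures are illustrative, and it uses the plain normal approximation rather than anything fancier.]

```python
import math

def winrate_confidence_interval(wins, games, z=1.96):
    """Normal-approximation confidence interval (95% for z=1.96) on a
    win rate estimated from `games` independent test games."""
    p = wins / games
    half_width = z * math.sqrt(p * (1 - p) / games)
    # Clamp to [0, 1] since a win rate cannot leave that range.
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Illustrative: 550 wins in 1000 games gives an interval of roughly
# +/- 3 percentage points around 55% -- often enough to tell a real
# improvement from noise, and four times as many games halve the width.
lo, hi = winrate_confidence_interval(550, 1000)
print(f"win rate in [{lo:.3f}, {hi:.3f}]")
```

Whether a few thousand games is "solid enough" then becomes a quantitative question about how small an effect one needs to detect.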
