Terry wrote:
> Regressions are not run often ...

On the contrary, the regressions are run frequently to try out new
patches. They may not be run often, or at all, by ordinary users, but
for developers the regression run time has a big impact on
productivity. This does not mean that it is automatically wrong to
slow down the regressions, but we need to be careful about it.
> it would be more pertinent to ask a) whether gnugo picks better
> moves with a higher node limit,

In general it can be expected not to do worse on average, at least.
But as long as we are doing selective search with heuristic evaluation
of the leaf nodes, there will always be anomalies where a deeper
search gives a worse result.

More important here is the fact that the semeai reading tends to have
a high branching factor. My feeling is that for the simpler semeais
the node limit is not much of a problem, while for complex semeais the
node limit can be increased a lot and still be insufficient. There is
certainly a class of semeais where changes in the 500-2000 range are
significant, but the question is how often we encounter those (see the
P.S. below for a rough calculation).

> and b) whether the cost time-wise is acceptable for ordinary usage.

I'm more interested in the question of which ways of slowing down the
engine have the most impact on playing strength. Maybe large-scale
(ticket #29), a new dragon amalgamation algorithm (#97), or Alain's
twin option (#104) are better places to spend time. Or maybe just
increasing some depth parameter (e.g. ko_depth).

/Gunnar
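
P.S. To put a rough number on the branching factor argument: if we
model the semeai reader as a tree search with a uniform branching
factor b, reaching depth d costs on the order of b^d nodes, so a node
limit N buys a depth of roughly log(N)/log(b). The small C sketch
below works this out under that (admittedly simplistic) model; the
branching factors are illustrative guesses, not measurements of the
actual semeai code.

#include <math.h>
#include <stdio.h>

/* Depth reachable under a node limit, assuming a uniform branching
 * factor b: about log(N) / log(b).  The factors below are guesses
 * for illustration only, not measured from GNU Go.
 * Build with e.g.: gcc depth.c -lm */
int
main(void)
{
  static const double factors[] = { 2.0, 4.0, 8.0 };
  unsigned int i;

  for (i = 0; i < sizeof(factors) / sizeof(factors[0]); i++) {
    double d500  = log(500.0)  / log(factors[i]);
    double d2000 = log(2000.0) / log(factors[i]);
    printf("b = %.0f: ~%.1f plies at 500 nodes, ~%.1f at 2000 "
           "(gain %.2f plies)\n",
           factors[i], d500, d2000, d2000 - d500);
  }
  return 0;
}

With b = 8 the jump from 500 to 2000 nodes gains only about two thirds
of a ply (log(4)/log(8)), which matches the feeling above that the
complex semeais stay out of reach while the simple ones were already
fine at the lower limit.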