On Tue, Feb 11, 2014 at 1:08 PM, Petr Baudis <[email protected]> wrote:
> Hi!
>
> On Tue, Feb 11, 2014 at 11:42:24AM -0800, Peter Drake wrote:
> > A naive question:
> >
> > In what situations is it better to use Coulom's Elo method vs his CLOP
> > method for setting parameters? It seems they are both techniques for
> > optimizing a high-dimensional, noisy function.
>
> Do you mean minorization-maximization?
Yes.

> I'm not sure if it could be adapted for optimizing a black-box function
> sensibly.

I'm still fuzzy on this. Is it limited to boolean inputs (e.g., "include
features 3, 7, and 22")?

> Moreover, it might not deal well with noisy observations.

Aren't the recorded-game data (used to find feature weights) noisy?

> But most importantly, it can optimize a function on presampled data,
> while CLOP will perform the sampling itself in order to enhance the fit
> of the quadratic model.

Ahhhh, that makes sense. So CLOP is good for guiding experiments, but when
you're learning from either recorded games or live playouts, it doesn't
apply.

> P.S.: I don't understand the details of minorization-maximization so
> maybe I'm wildly off in something.

You're not the only one. My math is much weaker than I would wish.

Thanks!

-- 
Peter Drake
https://sites.google.com/a/lclark.edu/drake/
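[For anyone else following the thread: the "optimize on presampled data"
property Petr mentions is easy to see in the plain Bradley-Terry case. The
sketch below is a simplified illustration of a minorization-maximization
update on a fixed batch of game records, not Coulom's generalized
team/pattern model; the player indices and toy results are made up.]

```python
# Minorization-maximization (MM) for plain Bradley-Terry strengths.
# It runs entirely on a fixed batch of presampled game records -- no new
# games are requested, unlike CLOP, which chooses where to sample next.

def mm_bradley_terry(n_players, games, iters=100):
    """games: list of (winner, loser) index pairs."""
    gamma = [1.0] * n_players
    wins = [0] * n_players
    for w, _ in games:
        wins[w] += 1

    # Count how many games each unordered pair of players played.
    pair_counts = {}
    for w, l in games:
        key = (min(w, l), max(w, l))
        pair_counts[key] = pair_counts.get(key, 0) + 1

    for _ in range(iters):
        new_gamma = []
        for i in range(n_players):
            # MM update: gamma_i <- W_i / sum_j N_ij / (gamma_i + gamma_j)
            denom = 0.0
            for (a, b), n in pair_counts.items():
                if i in (a, b):
                    j = b if i == a else a
                    denom += n / (gamma[i] + gamma[j])
            new_gamma.append(wins[i] / denom if denom > 0 else gamma[i])
        gamma = new_gamma
        # Rescale so strengths stay comparable across iterations.
        s = sum(gamma)
        gamma = [g * n_players / s for g in gamma]
    return gamma

# Toy presampled data: player 0 usually beats 1, who usually beats 2.
games = [(0, 1)] * 8 + [(1, 0)] * 2 + [(1, 2)] * 7 + [(2, 1)] * 3
print(mm_bradley_terry(3, games))
```

The noisy observations are handled in aggregate (win counts over many
games), which is why this works fine on recorded games but offers no
guidance about which experiment to run next.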
_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
