Jean-loup Gailly wrote:
> Thanks Rémi for these slides, looking forward to seeing the paper.
>
>> For this application it is safe to assume that the function to be
>> optimized (the winning rate) is a smooth function of the parameters,
>> and that it has no tricky local optima.
>
> Why is it safe to assume there are no local optima? This is only a
> guess, but in high dimension (many parameters to tune) I would expect
> local optima. Does your algorithm scale to many dimensions?
I have never observed a local optimum in my long experience of tuning
parameters. My feeling is that the higher the dimension, the lower the
risk of local optima: each additional dimension is an additional option
for escaping from a local optimum. The Rosenbrock function is a typical
example:
http://en.wikipedia.org/wiki/Rosenbrock_function
It has local optima along the x direction, but the function as a whole
has no local optimum besides the global one.
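To make this concrete, here is a small Python sketch (mine, not part of
the original post) that scans the 2-D Rosenbrock function along a slice
with y fixed: the slice has two local minima in x, while the full
function has a single optimum at (1, 1), so the extra dimension is
exactly the escape route.

import numpy as np

def rosenbrock(x, y):
    # Classic 2-D Rosenbrock function; single global minimum at (1, 1).
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Fix y = 1 and scan along x: the resulting quartic has two local minima.
xs = np.linspace(-2, 2, 4001)
vals = rosenbrock(xs, 1.0)
inner = vals[1:-1]
n_minima = np.sum((inner < vals[:-2]) & (inner < vals[2:]))
print("local minima along the x slice:", n_minima)   # -> 2

# An optimizer restricted to x alone can get trapped near x = -1,
# but one free to move in y as well follows the valley to (1, 1).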
This does not mean that local optima cannot occur. It is easy to create
an artificial situation with a local optimum. But for most parameters I
usually have a strong intuitive feeling that there is no local optimum.
If anybody knows a real example where local optima are a problem, I
would be glad to hear about it.
Note also that local quadratic regression will handle many local optima
correctly, as long as they are not too tricky.
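As an illustration (again mine, not from the thread), here is a toy
Python sketch of the quadratic-regression idea: play noisy win/loss
games at sampled parameter values, fit a quadratic to the outcomes by
least squares, and read the optimum off the vertex of the fit. A real
implementation would rather use logistic regression on the outcomes; the
optimum at p = 0.3 and the win-rate model are made up for the example.

import numpy as np

rng = np.random.default_rng(0)

def play_game(p):
    # Made-up smooth winning rate, maximal at p = 0.3; each call
    # returns a single noisy win/loss outcome.
    true_rate = 0.7 - 0.8 * (p - 0.3) ** 2
    return float(rng.random() < true_rate)

# Sample parameter values, play one game at each, and fit
# winrate(p) ~ a*p^2 + b*p + c by least squares.
ps = rng.uniform(0.0, 1.0, size=10000)
wins = np.array([play_game(p) for p in ps])
A = np.column_stack([ps ** 2, ps, np.ones_like(ps)])
a, b, c = np.linalg.lstsq(A, wins, rcond=None)[0]

# If the fit is concave (a < 0), its vertex estimates the optimum.
if a < 0:
    print("estimated optimum:", -b / (2 * a))   # close to 0.3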
I have not run many experiments in high dimension yet, but I am very
confident that my algorithm will scale much better than UCT. The cost of
UCT grows exponentially with dimension: discretizing each of d
parameters into k candidate values already gives k^d arms to explore.
There are very few good algorithms to compare to. The Cross-Entropy
method works better than UCT in high dimensions:
http://www.personeel.unimaas.nl/m-winands/documents/crossmc.pdf
But I am confident it is easy to do better than Cross-Entropy with
quadratic regression.
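For reference, the continuous Cross-Entropy method is only a few lines.
This Python sketch (my own, with a made-up deterministic objective
standing in for noisy game results) samples a population from a
Gaussian, keeps the elite fraction, and refits the Gaussian to the
elites:

import numpy as np

rng = np.random.default_rng(1)

def objective(p):
    # Made-up smooth stand-in for a winning rate, maximal at (0.3, 0.7).
    return -np.sum((p - np.array([0.3, 0.7])) ** 2)

# Cross-Entropy method: sample a population from a Gaussian, keep the
# best 10%, refit the Gaussian to them, and repeat.
mu, sigma = np.full(2, 0.5), np.full(2, 1.0)
for _ in range(30):
    pop = rng.normal(mu, sigma, size=(100, 2))
    scores = np.array([objective(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]
    mu = elite.mean(axis=0)
    sigma = elite.std(axis=0) + 1e-3   # floor avoids premature collapse
print("CEM estimate:", mu)   # converges near (0.3, 0.7)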
If anybody has an alternative algorithm to suggest, I would be glad to
implement it in my optimization framework.
Rémi