> What is the motivation in this? I cannot conceive of any good reason for
> running an experiment this way, so I would be interested in opinions. It
> seems to me that making algorithms heavier and then demonstrating that
> they are stronger with the same number of playouts misses the point -
> why would one not run an experiment under the same time conditions instead?
* The heavier algorithm might be unoptimized.

* The heavier algorithm might be easier to parallelize (or to put into
  hardware).

* The scaling behaviour might be different. E.g. if Fuego and Valkyria are
  both run with 10 times more playouts, the win rate might change.
  Dismissing an algorithm just because it loses at time limits that happen
  to suit rapid testing on today's hardware could mean we miss out on the
  ideal algorithm for tomorrow's hardware. (*)

* Analyzing why the super-heavy algorithm's win rate is better might give
  other people ideas for lighter but still effective playouts.

Darren

*: As an example, Monte Carlo itself was ignored for the first 10 years of
its life because traditional programs were stronger on the same hardware.

-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/gobet/  (Shodan Go Bet - who will win?)
http://dcook.org/mlsn/   (Multilingual open source semantic network)
http://dcook.org/work/   (About me and my work)
http://dcook.org/blogs.html  (My blogs and articles)
_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/
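P.S. The equal-playouts vs. equal-time question can be sketched with a toy
noise model (all numbers below are invented for illustration, not measured
from any engine): treat each playout result as a noisy sample of the
position's true value, where a "heavy" playout is 100x slower but 5x less
noisy per sample.

```python
import math

# Hypothetical cost (time units per playout) and per-playout noise.
light = {"cost": 1.0,   "stddev": 0.50}
heavy = {"cost": 100.0, "stddev": 0.10}

def stderr_for_playouts(p, n):
    # Standard error of the mean over n i.i.d. playout results.
    return p["stddev"] / math.sqrt(n)

def stderr_for_time(p, budget):
    # Playouts affordable within a fixed time budget, then the same error.
    n = max(1, int(budget / p["cost"]))
    return stderr_for_playouts(p, n)

# Equal playouts (10,000 each): the heavy policy looks clearly stronger.
print(stderr_for_playouts(light, 10_000))   # 0.0050
print(stderr_for_playouts(heavy, 10_000))   # 0.0010

# Equal time (10,000 units): light fits 100x more playouts and now wins.
print(stderr_for_time(light, 10_000))       # 10,000 playouts -> 0.0050
print(stderr_for_time(heavy, 10_000))       # 100 playouts    -> 0.0100
```

Note that if the heavy playout's cost drops by 10x (optimization, or
tomorrow's hardware), the equal-time comparison flips in its favour in this
model - which is exactly why an equal-playouts result can still be worth
reporting.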
