Orego version 3.03, described in the paper and available at the Orego web site, is in C++.

However, we are finding C++ an exceedingly frustrating language to work in. I won't go into the details here -- we don't need another language war -- but suffice it to say that it seems like we're spending a lot of time messing with details that aren't of interest for the research. Now that I've found that Java can have speed within a factor of 2 of C++, I'm thinking of going back to Java.

A student and I are challenging each other to make the program multithreaded -- he in C++, me in Java. If I can rewrite the program in Java AND multithread it before he can multithread the C++ version, it will be (weak) evidence that it might be worth trading some runtime speed for development time.
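Orego's internals aren't shown in this message, so the following is only a minimal sketch of the kind of root parallelism such a rewrite might use: each thread runs independent playouts and accumulates results in shared atomic counters. The `playout` method here is a placeholder coin flip, not a real random game.

```java
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of root-parallel Monte Carlo playouts in Java.
// Threads run independent playouts and share only two atomic counters,
// so no locking is needed on the hot path.
public class ParallelPlayouts {

    static final AtomicLong wins = new AtomicLong();
    static final AtomicLong runs = new AtomicLong();

    // Placeholder for a real playout from the current position;
    // pretends the candidate move wins 55% of random games.
    static boolean playout(Random random) {
        return random.nextDouble() < 0.55;
    }

    public static void main(String[] args) throws InterruptedException {
        int threadCount = Runtime.getRuntime().availableProcessors();
        int playoutsPerThread = 10000;
        Thread[] threads = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++) {
            final long seed = i; // distinct seed per thread
            threads[i] = new Thread(() -> {
                Random random = new Random(seed);
                for (int j = 0; j < playoutsPerThread; j++) {
                    if (playout(random)) {
                        wins.incrementAndGet();
                    }
                    runs.incrementAndGet();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for all playout threads to finish
        }
        System.out.printf("%d playouts, win rate %.3f%n",
                runs.get(), wins.get() / (double) runs.get());
    }
}
```

Because each playout is independent, this style of parallelism scales almost linearly with cores, which is exactly what makes Monte Carlo Go an attractive target for the challenge.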

Your point (also raised by others) about testing against other opponents is, of course, valid. I would have done so in this paper were the submission deadline not pressing, and plan to do so for future papers. In defense of this paper, I can only say that (according to the self-play data in the paper) adding the proximity heuristic offered not just an improvement but a very large improvement -- 95% statistical confidence.
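For readers unfamiliar with how such a confidence figure is computed from self-play games: a standard approach is a one-sided test of whether the observed win rate exceeds 50%, using the normal approximation to the binomial. The game counts below are hypothetical, not the paper's actual data; a z-score above 1.645 corresponds to 95% confidence.

```java
// One-sided significance test for a self-play win rate above 50%,
// using the normal approximation to the binomial distribution.
public class WinRateSignificance {

    // z-score of the observed win rate under H0: true win rate = 0.5.
    static double zScore(int wins, int games) {
        double p = wins / (double) games;
        double se = Math.sqrt(0.25 / games); // standard error under H0
        return (p - 0.5) / se;
    }

    public static void main(String[] args) {
        // Hypothetical numbers, not the paper's actual results:
        int wins = 340, games = 600;
        double z = zScore(wins, games);
        System.out.printf("win rate %.3f, z = %.2f, significant at 95%%: %b%n",
                wins / (double) games, z, z > 1.645);
        // prints: win rate 0.567, z = 3.27, significant at 95%: true
    }
}
```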

Peter Drake
Assistant Professor of Computer Science
Lewis & Clark College
http://www.lclark.edu/~drake/




On Nov 29, 2006, at 3:22 AM, Chrilly wrote:


I am confused. In your paper you write "Orego is a Monte Carlo Go program written in C++". Is Orego now in C++, in Java, or both?

The paper mentions the relative comparison of two versions. This is common practice in the scientific literature, but it is a very poor choice if one wants to measure the effect of a new method. The effect of a change is much more pronounced in self-play than against another opponent. A method which is good against the twin brother need not be good against other opponents at all. Even against other opponents it happens frequently that a method works quite well against opponent A but fails against B. It's relatively easy to make a version which crushes e.g. Rybka, but this version is poor against Fritz and Shredder. The really difficult task is to find a combination which plays well against all of them. It is of course a good method for papers where the authors want to prove how good their idea is, but it demonstrates the lack of competence of the academic world in game programming; otherwise such experiments would not be accepted as proof of a concept. There is also Vincent Diepeveen's law: in a weak program, every change is an improvement. I do not know how good Orego is, but playing e.g. against the top-3 programs would be a much better experiment.

This remark is not directed against the Orego team, which does a fine job of explaining its work; I like to read the Orego project "news". It is against a very common and bad practice in papers. One can even win the award for best paper of the year and become a standard reference with this poor methodology. In my "null-move" paper I did the same: the null-move version of Nimzo was running on a 386, the non-null-move version on a 486, and the result was about equal. My conclusion was: the null move is worth one hardware generation. At the time I was not really aware of the problem, and the bad comparison was not on purpose. But the reviewer/editor of the ICCA Journal should have insisted on better experiments. The only "improvement" he made was changing the original title from "Null move and deep search: selective search heuristics for stupid chess programs" to "... obtuse chess programs". I know what a stupid program is, but to this day I do not know the meaning of "obtuse".

Chrilly

_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/
