That is interesting; I did not realize that gnubg misplays race positions that much. 
What are some examples?
________________________________
From: Øystein Schønning-Johansen <[email protected]>
Sent: October 19, 2020 3:20 AM
To: Joseph Heled <[email protected]>
Cc: Philippe Michel <[email protected]>; Aaron Tikuisis 
<[email protected]>; [email protected] <[email protected]>
Subject: Re: The status of gnubg?

On Mon, Oct 19, 2020 at 12:02 AM Joseph Heled <[email protected]> wrote:
But someone starting work in that area can take the old frame for another spin. 
They would learn a lot, even if they don't improve anything.

Yes! I can confirm that.

They can start with the "relatively" low-hanging fruit of the race net (even 
though it is not as low-hanging as some may think).
Oystein can add more on that.

Oh, yes! For many years I had the impression that the race neural network was 
close to perfect, and didn't care much to look into it. I now realize that 
there are many positions that are actually misplayed. So I'm currently trying 
to find better methods for evaluating race positions. It is possible to 
calculate some positions exactly, to the bitter end, with simple dynamic 
programming, but that is far too slow. It is also possible to estimate 
winning probabilities using the Central Limit Theorem for renewal processes; 
that is extremely fast, but I suspect it is worse than the current neural 
network. The problem is to find the sweet spot between what is feasible in 
terms of time (and memory) and the precision of the evaluation.
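
For concreteness, here is a minimal Python sketch (not gnubg code, and not the 
method I'm actually developing) of the renewal-process idea: treat each side's 
remaining pip count as the distance a renewal process must cover, use the CLT 
for renewal processes to approximate the number of rolls needed as a normal 
variable, and compare the two sides. The function names, the continuity 
correction, and the pure pip-count model (which ignores checker distribution 
and bearoff wastage) are all illustrative assumptions.

from math import erf, sqrt

def roll_pip_stats():
    # Mean and variance of pips gained per roll of two dice (doubles count
    # four times the die), computed by enumerating all 36 outcomes.
    pips = []
    for a in range(1, 7):
        for b in range(1, 7):
            pips.append(4 * a if a == b else a + b)
    mu = sum(pips) / len(pips)
    sigma2 = sum((p - mu) ** 2 for p in pips) / len(pips)
    return mu, sigma2

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def race_win_probability(pips_on_roll, pips_opponent):
    # Approximate P(player on roll wins) from pip counts alone.
    # CLT for renewal processes: the number of rolls needed to cover d pips
    # is approximately Normal(d/mu, d*sigma2/mu**3).
    mu, sigma2 = roll_pip_stats()
    m1, v1 = pips_on_roll / mu, pips_on_roll * sigma2 / mu ** 3
    m2, v2 = pips_opponent / mu, pips_opponent * sigma2 / mu ** 3
    # The player on roll wins ties, since they bear off first when both need
    # the same number of rolls; 0.5 is a continuity correction.
    return normal_cdf((m2 - m1 + 0.5) / sqrt(v1 + v2))

if __name__ == "__main__":
    print(race_win_probability(70, 75))  # a modest lead for the player on roll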

I'm now setting up a more detailed simulation of these different methods, so 
that I have some concrete numbers for comparison.
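
As a starting point for such comparison numbers, one could use a brute-force 
Monte Carlo reference on the same toy pip-count model as in the sketch above. 
The harness below is again purely illustrative: it rolls dice against raw pip 
counts with no checker play or bearoff wastage, so it only validates the normal 
approximation, not a real race evaluator. Its estimates can be set against the 
output of race_win_probability from the previous sketch.

import random

def roll_pips(rng):
    # Pips gained on one roll; doubles move four times the die.
    a, b = rng.randint(1, 6), rng.randint(1, 6)
    return 4 * a if a == b else a + b

def simulate_race(pips_on_roll, pips_opponent, games=100000, seed=1):
    # Monte Carlo estimate of the on-roll player's winning probability
    # in a pure pip-count race with alternating rolls.
    rng = random.Random(seed)
    wins = 0
    for _ in range(games):
        p1, p2 = pips_on_roll, pips_opponent
        while True:
            p1 -= roll_pips(rng)
            if p1 <= 0:
                wins += 1
                break
            p2 -= roll_pips(rng)
            if p2 <= 0:
                break
    return wins / games

if __name__ == "__main__":
    for p1, p2 in [(70, 75), (100, 100), (60, 72)]:
        print(p1, p2, simulate_race(p1, p2))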

-Øystein
