As I have spent a lot of time trying to work out what Quasi-Monte-Carlo
or other standard Monte-Carlo improvements could do for computer go,
I give below my (humble and pessimistic :-) ) opinion about that.
Let's formalize Monte-Carlo.
Consider a probability distribution P.
I could see a case where it is possible to reduce the variance of a single
variable even in the 0-1 case. Let us say that black has about a 5% chance of
winning. If we could (exactly) double black's chance of winning by
changing the nonuniform sampling somehow (say, enforce bad moves by
Upon continuing to learn about the general Monte Carlo field, I've found
that there seems to be a general consensus in this community about a
distinction between Monte Carlo (MC) and what is commonly called
Quasi Monte Carlo (QMC). MC is defined as using random/pseudo-random
distributions.
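For contrast, QMC typically replaces the pseudo-random draws with deterministic low-discrepancy sequences. A toy sketch of the difference (the integrand and the choice of the van der Corput sequence are my own illustration, not from the thread):

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, r = divmod(n, base)
        q += r / denom
    return q

def estimate(points):
    """Estimate the integral of f(x) = x^2 on [0, 1] (true value: 1/3)."""
    return sum(x * x for x in points) / len(points)

N = 10_000
random.seed(0)
mc_est = estimate([random.random() for _ in range(N)])          # MC: pseudo-random
qmc_est = estimate([van_der_corput(i) for i in range(1, N + 1)])  # QMC: low-discrepancy
print(mc_est, qmc_est)  # the QMC error is typically much smaller than the MC error
```

With the same number of points, the QMC estimate converges roughly like log(N)/N instead of the 1/sqrt(N) of plain MC, which is why the distinction matters.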
It seems that there are at least three cases:
1: Choosing a random move from a uniform distribution
2: Choosing a random move from a nonuniform distribution (patterns etc.)
3: Choosing a move taking into account what has been chosen before
The consensus seems to be that numbers 1 and 2 are MC and number 3 is QMC.
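The three cases could be sketched roughly as follows (the move representation and the pattern-weight function are hypothetical placeholders, not anything from an actual go engine):

```python
import random

def uniform_move(legal_moves):
    # Case 1: uniform random choice among legal moves.
    return random.choice(legal_moves)

def weighted_move(legal_moves, weight):
    # Case 2: nonuniform choice, e.g. pattern-based weights.
    weights = [weight(m) for m in legal_moves]
    return random.choices(legal_moves, weights=weights, k=1)[0]

def history_aware_move(legal_moves, history):
    # Case 3: the choice depends on what has been chosen before,
    # e.g. prefer moves not yet tried in earlier simulations.
    fresh = [m for m in legal_moves if m not in history]
    return random.choice(fresh or legal_moves)

random.seed(1)
moves = ["a1", "b2", "c3"]
m1 = uniform_move(moves)
m2 = weighted_move(moves, lambda m: 2.0 if m == "b2" else 1.0)
m3 = history_aware_move(moves, history={"a1", "b2"})
print(m1, m2, m3)
```

Case 3 is the QMC-flavoured one: the samples are no longer independent draws, which is exactly what distinguishes it from cases 1 and 2.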
ivan dubois wrote:
> I don't understand how you can reduce the variance of Monte-Carlo sampling,
> given a simulation can return either 0 (loss) or 1 (win).
> Maybe it means trying to have mean values that are closer to 0 or 1?
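One way to see the question: a single 0/1 simulation with win probability p has variance p(1-p), which is fixed once p is fixed; what an estimator can change is the variance of the *mean* over n simulations, p(1-p)/n. A small check of that (parameter values are arbitrary):

```python
import random

def bernoulli_mean_variance(p, n, trials, rng):
    """Empirical variance of the mean of n win/loss (0/1) simulations."""
    means = [sum(rng.random() < p for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return sum((m - mu) ** 2 for m in means) / trials

rng = random.Random(0)
p, n = 0.05, 200
emp = bernoulli_mean_variance(p, n, trials=5000, rng=rng)
theory = p * (1 - p) / n  # a single 0/1 draw has variance p(1-p)
print(emp, theory)  # the two values should agree closely
```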
Well, strictly speaking, I agree the standard models don't fit that well.
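Still, the "double black's chance of winning" idea above is essentially importance sampling: simulate under a biased policy where wins occur twice as often, and down-weight each win by 1/2 to keep the estimate unbiased. A sketch with the game abstracted to a single Bernoulli draw (a deliberate simplification; a real engine would weight by the likelihood ratio of the whole simulated game):

```python
import random

def plain_mc(p, n, rng):
    # Standard MC: each simulation returns 0 (loss) or 1 (win).
    return [1.0 if rng.random() < p else 0.0 for _ in range(n)]

def importance_mc(p, n, rng):
    # Biased simulations: wins happen twice as often (probability 2p),
    # so each win counts only 1/2 to keep the estimator unbiased.
    return [0.5 if rng.random() < 2 * p else 0.0 for _ in range(n)]

def mean_var(xs):
    mu = sum(xs) / len(xs)
    return mu, sum((x - mu) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
p, n = 0.05, 100_000
mu1, v1 = mean_var(plain_mc(p, n, rng))
mu2, v2 = mean_var(importance_mc(p, n, rng))
print(mu1, v1)  # mean ≈ 0.05, variance ≈ p(1-p) ≈ 0.0475
print(mu2, v2)  # mean ≈ 0.05, variance ≈ p/2 - p² ≈ 0.0225
```

Both estimators have mean p, but the weighted one has per-sample variance p/2 - p², roughly half of p(1-p) when p is small. So variance reduction is possible even though each individual simulation still only returns a loss or a (weighted) win.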