Jacques Basaldúa wrote:
Hello,
Just an explanation of something I may have explained badly. I see we
agree on the fundamentals.
Correcting bias in that estimate should lead to better sampling.
This is usually called continuity correction:
http://en.wikipedia.org/wiki/Continuity_correction.
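To make the continuity correction concrete, here is a small sketch (my own example, not from the original mail): when a binomial tail probability P(X <= k) is approximated by a normal CDF, evaluating at k + 0.5 instead of k usually gives a noticeably better approximation.

```python
import math

def normal_cdf(x, mu, sigma):
    # Standard normal CDF, shifted and scaled, via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def binomial_cdf(k, n, p):
    # Exact tail sum P(X <= k), for comparison.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

# Illustrative numbers (mine): 40 Bernoulli trials with p = 0.5.
n, p, k = 40, 0.5, 17
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

exact = binomial_cdf(k, n, p)
plain = normal_cdf(k, mu, sigma)            # no correction
corrected = normal_cdf(k + 0.5, mu, sigma)  # continuity correction
```

With these numbers the corrected value lands much closer to the exact tail sum than the uncorrected one.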
Well, the assumption that p is estimated from the binomial, because we
are counting Bernoulli experiments of constant p, is a mathematically
sound method used universally. It does not require go knowledge; that's
what I meant. When n is big enough, the binomial converges to the
normal.
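The normal approximation described above is what makes the usual confidence interval for p work. A minimal sketch (illustrative numbers are mine, not from the thread): treat p_hat = w / n as approximately normal and form a Wald interval.

```python
import math

def wald_interval(w, n, z=1.96):
    # z = 1.96 gives roughly 95% coverage under the normal approximation.
    p_hat = w / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    # Clamp to [0, 1] since p is a probability.
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

# Example: 530 wins in 1000 simulations.
lo, hi = wald_interval(w=530, n=1000)
# Interval is roughly (0.499, 0.561) around p_hat = 0.53.
```

Note this interval is only trustworthy when n is large and p_hat is not too close to 0 or 1, which is exactly the convergence condition the paragraph mentions.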
Hello Jason
I think what you are trying to do can be done more easily.
A. You have a Bernoulli random variable whose result is 0 or 1
following an unknown probability p. (Excuse me for explaining
obvious things, this is for anyone who reads it.) You want to
estimate p from a random sample. The
I respond to various items below. Sections of the original e-mail that
I'm not responding to were completely deleted.
Jacques Basaldúa wrote:
Hello Jason
I think what you are trying to do can be done more easily.
I guess the key question is: what am I trying to do?
In UCT, the next move
Based on my analysis, estimating a move's probability of winning by
taking the number of winning simulations (w) and dividing it by the
total number of simulations (n) is actually biased. I tried to break
this e-mail up into sections for easy digestion by the various people
who might read it.
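Whether w / n misbehaves in this setting is Jason's claim above; a common small-sample alternative, sketched here as my own illustration, is the Laplace "rule of succession" estimate (w + 1) / (n + 2), which pulls estimates toward 0.5 and never returns exactly 0 or 1.

```python
def raw_estimate(w, n):
    # Plain frequency estimate w / n; undefined when n == 0.
    return w / n

def laplace_estimate(w, n):
    # Rule of succession: add one phantom win and one phantom loss.
    return (w + 1) / (n + 2)

# With very few simulations the two disagree sharply:
#   raw_estimate(2, 2)     -> 1.0
#   laplace_estimate(2, 2) -> 0.75
# and the Laplace form is even defined before any simulation has run:
#   laplace_estimate(0, 0) -> 0.5
```

For large n the two estimates converge, so the correction only matters for moves with few simulations, which is where UCT-style selection is most sensitive.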
Maybe other simple solutions exist, but you might want to check out
those distributions that magically have nice properties with respect
to the Bayesian integral. They're called conjugate priors, and lots of
distributions have nice, easy-to-calculate conjugate priors.
There's a table here:
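For the Bernoulli case at hand, the relevant conjugate prior is the Beta distribution. A small sketch of the standard result (function and variable names are mine): with a Beta(a, b) prior on p and w wins in n trials, the posterior is Beta(a + w, b + n - w), so the "Bayesian integral" reduces to closed-form arithmetic.

```python
def posterior_params(a, b, w, n):
    # Beta(a, b) prior + w wins, n - w losses -> Beta(a + w, b + n - w).
    return a + w, b + (n - w)

def posterior_mean(a, b, w, n):
    # Mean of a Beta(a2, b2) distribution is a2 / (a2 + b2).
    a2, b2 = posterior_params(a, b, w, n)
    return a2 / (a2 + b2)

# With the uniform prior Beta(1, 1), the posterior mean is
# (w + 1) / (n + 2) -- the Laplace rule of succession.
```

The choice of (a, b) encodes how strongly the estimate is pulled toward the prior mean a / (a + b) before any simulations arrive.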