On 1 Nov 2013, at 13:32, Darren Cook <dar...@dcook.org> wrote:

>> MCMC has little to do with what we do in computer Go. In MCTS we have
>> a Markov Chain and we take Monte-Carlo samples from it, but the
>> purpose is really not the same at all as what MCMC algorithms do. I
>> recommend the wikipedia articles. It is difficult to really get an
>> idea of MCMC by reading a general description. It is probably best
>> that you get a feeling of what it is by studying the details of a
>> real MCMC algorithm. The most basic MCMC algorithm is the
>> Metropolis-Hastings algorithm: 
>> https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm
> 
> Thanks Remi.
> Quoting from that article:
> "...[a] method for obtaining a sequence of random samples from a
> probability distribution for which direct sampling is difficult. This
> sequence can be used to approximate the distribution (i.e., to generate
> a histogram)"
> 
> This sounds exactly like using N monte-carlo simulations at a node in an
> MCTS tree, generating a histogram of possible scores. The highest point
> on the histogram is used as the best-guess estimate of the score. When
> you have two peaks it implies some unstable situation, like a semeai. Etc.
> 
> You mentioned the "purpose" is not the same. Can you elaborate?

In MCMC the distribution is given to you by some kind of mathematical 
definition, and the challenge is to construct a Markov chain whose samples 
approximate that distribution well. In MCTS it is the other way around: the 
Markov chain (the game rules plus the playout policy) is already given, and 
the Monte-Carlo samples are used to estimate the values of positions.
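
To make that concrete, here is a minimal, purely illustrative Python sketch
of random-walk Metropolis-Hastings (the names and parameters are just
examples, not taken from any real program). The target density is the
input; the chain is the thing being designed:

    import math
    import random

    def metropolis_hastings(log_target, x0, n_samples, step=1.0):
        # Random-walk Metropolis-Hastings: the target density is given
        # (via its log, up to an additive constant); the algorithm builds
        # a Markov chain whose stationary distribution is that target.
        x = x0
        samples = []
        for _ in range(n_samples):
            proposal = x + random.gauss(0.0, step)  # symmetric proposal
            # Accept with probability min(1, target(proposal) / target(x)).
            accept_log_prob = min(0.0, log_target(proposal) - log_target(x))
            if random.random() < math.exp(accept_log_prob):
                x = proposal
            samples.append(x)
        return samples

    # Example: sample a standard normal given only -x^2/2 as log-density.
    draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=10000)

Note how the chain itself is what gets constructed here; in MCTS the playouts
already define the chain, and sampling only serves to estimate expected
outcomes of moves.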

> 
> (If "Markov-Chain" is a nice clean synonym for "rules of the game",
> whether the game is go or weather systems, I feel I am on home ground!)
> 
> Darren

_______________________________________________
Computer-go mailing list
Computer-go@dvandva.org
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go