-----Original Message-----
From: [EMAIL PROTECTED]
...
>> The earliest MC engines were extremely simple and easily described. It
>> seems inevitable that someone new to the field will seize on this
>> description, and then combine it with the success of current Monte-Carlo
>> engines, leading to unnecessary confusion.
> I am not sure what you mean by that. Do you mean that different people will
> use the term MC in different ways and cause confusion in the minds of third
> parties? In other words, some people are using the simplest pure random
> playout (no consideration of distribution at all) and calling that MC,
> while others are trying hard to keep the moves searched "Go-like" and thus
> get different results.
> Or do you mean that naive programmers will try pure random playout and
> wonder why the MoGo folks are doing so very much better, not realizing the
> importance of getting a decent distribution for MC to be effective?
I mean both, but IMHO the most serious distinction is whether the MC is
combined with tree search or not. I'm not embarking on a nomenclature
crusade, just remarking that when people say Monte-Carlo does this or that,
it's often ambiguous which algorithm they are actually talking about.
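To make the tree-search side of that distinction concrete, here is a minimal
sketch of UCB1 move selection as used in UCT. The node representation
((wins, visits) pairs) and the exploration constant are illustrative
assumptions, not any particular engine's implementation:

```python
# Sketch of UCB1 child selection, the core of UCT-style MC tree search.
# Children are (wins, visits) pairs; this is a toy layout, not a real
# engine's node structure.
import math

def ucb1_select(children, total_visits, c=1.4):
    """Pick the index of the child maximizing mean reward plus an
    exploration bonus that shrinks as the child is visited more."""
    def score(child):
        wins, visits = child
        if visits == 0:
            return float("inf")  # always try unvisited moves first
        return wins / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(range(len(children)), key=lambda i: score(children[i]))

# Parent visited 30 times; the engine descends via the selected child,
# runs a playout from the leaf, and backs the result up the tree.
children = [(7, 10), (12, 15), (1, 5)]
print(ucb1_select(children, 30))  # -> 0
```

Pure-playout MC skips this step entirely and just averages playout results
per root move, which is exactly why the two deserve different names.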
>> On a tangential note: MC/UCT with light playouts and MC/UCT with heavy
>> playouts are different beasts. If SlugGo's mixture of experts work
>> expands to include MC/UCT, you might want to consider adding one of each.
> I am quite open-minded about what kind of experts we add to SlugGo. We are
> limited more by the time required to implement and integrate than anything
> else. Integration is a problem that can be quite severe for experts (or
> move suggesters) with completely different evaluation functions. How does
> one arbitrate between suggestions that come to you with evaluations of
> move quality that are on scales that are not commensurable? This point has
> kept SlugGo a pure GNU Go-dependent engine for longer than we had
> originally expected.
Well, this is a common problem when combining heterogeneous experts, and
there is no shortage of approaches; just, as you point out, a shortage of
time and energy to experiment with them. One very simple approach that
sometimes works in some domains is to gather a number of experts, have them
rank the choices, and combine at that level. In that case, it's usually good
to have experts that are qualitatively different.
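A minimal sketch of that rank-level combination (essentially a Borda count),
assuming each expert returns its candidate moves ordered best-first; the
expert labels and moves below are made up for illustration:

```python
# Borda-style rank combination: only orderings matter, so the experts'
# raw evaluation scales never need to be reconciled.
from collections import defaultdict

def combine_by_rank(rankings):
    """Each move earns (n - position) points per expert's best-first
    ranking; the move with the most total points wins."""
    points = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, move in enumerate(ranking):
            points[move] += n - position
    return max(points, key=points.get)

# Three hypothetical experts ranking the same candidate moves:
rankings = [
    ["D4", "Q16", "C3"],   # expert A (e.g. pattern-based)
    ["Q16", "D4", "C3"],   # expert B (e.g. MC/UCT)
    ["D4", "C3", "Q16"],   # expert C (e.g. GNU Go)
]
print(combine_by_rank(rankings))  # -> D4
```

The appeal is exactly the arbitration problem above: by discarding the raw
evaluations and keeping only the orderings, non-commensurable scales stop
mattering.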
> But I am not sure what the value is in what you are calling "light"
> playouts. As per the above, it seems to me that light playout is simply
> ignoring any proper distribution, and thus is just a much more inefficient
> way to sample.
AntIgo with heavy playouts is about 300 Elo points stronger on CGOS than
AntIgo with light playouts. If you're going to choose just one, I don't
think it's a close call.
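For what it's worth, the distinction boils down to the playout move policy.
A toy illustration (not AntIgo's or MoGo's actual policies; the board is
reduced to a move list plus a hypothetical heuristic weight function):

```python
# Toy contrast between light and heavy playout policies. A real engine's
# heuristic_weight would come from patterns, capture checks, etc.; here
# it is just an assumed lookup table.
import random

def light_playout_move(legal_moves, rng):
    # Light: uniform random over legal moves, no distribution shaping.
    return rng.choice(legal_moves)

def heavy_playout_move(legal_moves, heuristic_weight, rng):
    # Heavy: sample in proportion to a domain heuristic, keeping the
    # playout "Go-like" while remaining stochastic.
    weights = [heuristic_weight(m) for m in legal_moves]
    return rng.choices(legal_moves, weights=weights, k=1)[0]

rng = random.Random(0)
moves = ["D4", "Q16", "C3", "T1"]
# Hypothetical weights: sensible moves far above the self-atari-ish T1.
weight = {"D4": 10.0, "Q16": 10.0, "C3": 5.0, "T1": 0.1}.get
print(light_playout_move(moves, rng))
print(heavy_playout_move(moves, weight, rng))
```

Both sample, but the heavy policy concentrates playouts on plausible moves,
which is the "decent distribution" point from earlier in the thread.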
- Dave Hillis
_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/