Darren Cook
> Sent: Tuesday, October 06, 2009 12:06 AM
> To: computer-go
> Subject: Re: [computer-go] Progressive widening vs unpruning
>
Olivier Teytaud wrote:
What we call "progressive unpruning" is termed "progressive bias" by Rémi
Coulom.
"progressive unpruning" and "progressive bias" were not coined by me, but
by Guillaume:
http://www.cs.unimaas.nl/g.chaslot/papers/newMath.pdf
I only suggested that, in order to avoid confusion, it might be best to
keep using those terms with the meaning he gave them two years ago.
David Fotland wrote:
> I tried this yesterday with K=10 and it seemed to make Many Faces weaker
> (84.2% +- 2.3 vs 81.6% +- 1.7): not 95% confidence, but likely weaker. This
> is 19x19 vs gnugo, with Many Faces using 8K playouts per move, 1000 games
> without the change and 2000 games with it. I have the UCT exploration t...
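Those error bars are consistent with two-sigma binomial confidence intervals on the win rates; a quick sanity check (my sketch, assuming each game is an independent Bernoulli trial):

```python
import math

def two_sigma(p, n):
    """95% (two-sigma) half-width of a binomial win-rate estimate
    from p = observed win rate over n independent games."""
    return 2.0 * math.sqrt(p * (1.0 - p) / n)

# 1000 games without the change, 2000 games with it
print(round(100 * two_sigma(0.842, 1000), 1))  # 2.3 (percentage points)
print(round(100 * two_sigma(0.816, 2000), 1))  # 1.7
```

The intervals overlap, which is why the result is suggestive ("likely weaker") rather than significant at 95%.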
>> 4) regularized success rate (nbWins + K) / (nbSims + 2K)
>> (the original "progressive bias" is simpler than that)
>
> I'm not sure what you mean here. Can you explain a bit more?

Sorry for being unclear; I hope I'll do better below.
Instead of just "number of wins" divided by "number of simulations", we use
(nbWins + K) / (nbSims + 2K) ...
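The formula amounts to adding K virtual wins and K virtual losses, pulling small-sample estimates toward 0.5. A minimal sketch (K=10 as in Fotland's experiment):

```python
def raw_rate(nb_wins, nb_sims):
    """Plain success rate; 0.5 by convention when there is no data."""
    return nb_wins / nb_sims if nb_sims else 0.5

def regularized_rate(nb_wins, nb_sims, k=10):
    """(nbWins + K) / (nbSims + 2K): K virtual wins plus K virtual
    losses, so low-visit moves start near 0.5 instead of 0 or 1."""
    return (nb_wins + k) / (nb_sims + 2 * k)

print(regularized_rate(0, 0))   # 0.5  (unvisited move)
print(raw_rate(9, 10))          # 0.9  (noisy small-sample estimate)
print(regularized_rate(9, 10))  # ~0.633 (shrunk toward 0.5)
```

As nbSims grows, the regularized value converges to the raw rate, so the smoothing only matters early.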
On Fri, Oct 02, 2009 at 06:49:46PM -0400, Jason House wrote:
> On Oct 2, 2009, at 2:24 PM, Olivier Teytaud wrote:
>
> > 4) regularized success rate (nbWins + K) / (nbSims + 2K)
> > (the original "progressive bias" is simpler than that)
>
> I'm not sure what you mean here. Can you explain a bit more?
On Fri, Oct 02, 2009 at 08:27:23PM +0200, Olivier Teytaud wrote:
> > What's your general approach? My understanding from your previous posts
> > is that it's something like:
>
> Your understanding is right.
>
> By the way, all the current strong programs are really very similar...
> Perhaps Fuego has something different in 19x19 (no big database of
> patterns?).
> Your implementation must be very different from mine. Actually I don't
> use progressive widening (or unpruning) at all. It's a mystery to me why
> others say it does work.

Hi Yamato. I want to clarify what we use in MoGo.
What we call "progressive unpruning" is termed "progressive bias" by Rémi
Coulom.
David Fotland wrote:
> What's your general approach? My understanding from your previous posts is
> that it's something like:

Your understanding is right.

> I don't know if you bias the initial rave values, or bias the initial win
> rate, or add a third heuristic term to the UCT formula.

I don't bias ...
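The three options in the quoted question can be sketched like this (illustrative Python with made-up constants; not MoGo's or Many Faces' actual code):

```python
# Three common ways to feed a move-quality heuristic h in [0, 1]
# into an MCTS node. Names and constants are hypothetical.

class Child:
    def __init__(self, heuristic):
        self.wins = 0.0        # wins from real playouts
        self.sims = 0          # real playout count
        self.rave_wins = 0.0   # AMAF/RAVE wins
        self.rave_sims = 0     # AMAF/RAVE count
        self.h = heuristic

def init_rave_prior(child, n_virtual=50):
    """(a) Bias the initial RAVE values with virtual simulations."""
    child.rave_wins += child.h * n_virtual
    child.rave_sims += n_virtual

def init_winrate_prior(child, n_virtual=10):
    """(b) Bias the initial win rate with virtual real simulations."""
    child.wins += child.h * n_virtual
    child.sims += n_virtual

def heuristic_term(child, weight=1.0):
    """(c) A third, decaying term added to the selection score
    ("progressive bias": influence shrinks as real data accumulates)."""
    return weight * child.h / (child.sims + 1)
```

All three inject the same knowledge; they differ in how quickly real playout results wash the heuristic out.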
Sent: Thu, Oct 1, 2009 4:50 pm
Subject: Re: [computer-go] Progressive widening vs unpruning
For 9x9 games, when I added progressive widening to AntiGo (before I added
RAVE), it was low-hanging fruit. I used my old program Antbot9x9 for the
move ranking and got a very nice strength increase for very little effort.
Then, with a bit of tweaking, I got more improvement. RAVE, on the other
hand, ...
I was not at all surprised by this result.
My thinking goes like this. On 9x9 the global situation is everything
that matters, and precomputed information is not as important as
searching effectively is. Good 9x9 games are often very sharp fights
where the next move often violates good shape.
What's your general approach? My understanding from your previous posts is
that it's something like:
UCT search using Silver's beta formula and UCB1 with win rate and RAVE for
choosing a child (I use basic UCT with win rate and RAVE, and the original
MoGo beta formula).
UCT search is biased with ...
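The two RAVE-mixing schedules mentioned above are, in their commonly cited forms, as follows. This is a sketch: the k and b values are illustrative, not any engine's tuned settings.

```python
import math

def beta_mogo(n, k=1000):
    """Original MoGo schedule: beta = sqrt(k / (3n + k)), where n is the
    real simulation count and k the "equivalence parameter"
    (beta falls to 0.5 when n == k). k=1000 is illustrative."""
    return math.sqrt(k / (3.0 * n + k))

def beta_silver(n, n_rave, b=0.025):
    """Silver's schedule: beta = n_rave / (n + n_rave + 4*b*b*n*n_rave),
    where b is an empirically tuned RAVE bias constant."""
    return n_rave / (n + n_rave + 4.0 * b * b * n * n_rave)

def mixed_value(win_rate, rave_rate, beta):
    # Value used for child selection: blend of RAVE and real win rate,
    # sliding from pure RAVE (beta=1) to pure win rate (beta=0).
    return beta * rave_rate + (1.0 - beta) * win_rate
```

Both start at beta = 1 (all RAVE) for an unvisited child and decay toward 0 as real simulations accumulate; they differ only in the decay shape.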
I'm trying an experiment. I took the Many Faces code completely out of the
engine, and put it on 9x9 CGOS as mfgo-none-1c. It's faster without Many
Faces, but it's just a basic UCT engine with medium playouts. This should
tell us how much benefit I get from the Many Faces knowledge.
David
Well I have no idea how much I gained from this. It might be weaker than
what everyone else is doing, since it seems I didn't implement this as it's
been described recently. My progressive widening only uses RAVE values.
It's very simple. Others seem to have much more complex schemes. But I
don't ...
David Fotland wrote:
>> To be sure that I understand, MF orders the moves using static analysis,
>> and then the ordering is further modified by RAVE observations?
>>
>> So when Many Faces accumulates Schedule(N) trials, it will restrict its
>> attention to the N highest-ranked moves according to that ordering?
Look for the graph I posted a few weeks ago. Most things tried make it
worse. Some make it a little better, and every now and then there is a big
jump.
David
> I'm wondering, are these tunings about squeezing single-percent
> increases with very narrow confidence bounds, or something that gives ...
On Tue, Sep 29, 2009 at 10:25:40PM +0200, Olivier Teytaud wrote:
> I think someone pointed out a long time ago on this mailing list that
> initializing the prior in terms of Rave simulations was far less efficient
> than initializing the prior in terms of "real" simulations, at least if you
> have ...
> This sounds like progressive widening, but it could still be progressive
> unpruning, depending on implementation choices.

I do both. I have a small pool of moves that are active, and I also bias
the initial RAVE values.

> > My current schedule looks like:
>
> To be sure that I understand ...
David

From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Olivier Teytaud
Sent: Tuesday, September 29, 2009 1:26 PM
To: computer-go
Subject: Re: [SPAM] Re: [computer-go] Progressive widening vs unpruning
> I guess I'm not really appreciating the difference between node value
> prior and progressive bias: adding a fixed small number of wins or a
> diminishing heuristic value seems very similar to me in practice. Is the
> difference noticeable?

It just means that the weight of the prior does not necessarily ...
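The distinction can be sketched like this (illustrative Python; the constants and the w/(sims+1) decay schedule are hypothetical choices, not anyone's actual formula):

```python
def value_with_prior(wins, sims, h, m=20):
    """Heuristic h in [0,1] as m virtual wins/simulations: folded into
    the win-rate average, so its weight decays exactly as m/(sims + m)."""
    return (wins + h * m) / (sims + m)

def value_with_progressive_bias(wins, sims, h, w=5.0):
    """Progressive bias: a separate additive term whose decay schedule
    (here w/(sims + 1)) can be chosen independently of the averaging."""
    rate = wins / sims if sims else 0.5
    return rate + w * h / (sims + 1)
```

With a virtual-win prior the heuristic's influence is tied to the averaging; with a separate term you are free to pick (and tune) how fast it fades, which is one concrete way the two schemes can behave differently.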
On Tue, Sep 29, 2009 at 08:25:56AM -0700, David Fotland wrote:
>
dhillism...@netscape.net wrote:
I'm not sure whether they meant different things when they were first
coined, but maybe that doesn't matter; either way, there are two different
approaches that should be distinguished somehow.
When a node has been visited the required number of times:
1) Use patterns, heuristics, ownership maps f...
I start with one move, and slowly add moves to the pool of moves to be
considered, peaking at considering 30 moves.
My current schedule looks like:

  visits:  0  2  5  9  15  24  38  59  90  100 ... 2142
  moves:   1  2  3  4   5  ...
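Read as "one more candidate move each time the visit count reaches the next threshold", the schedule can be implemented as a simple lookup. Only the thresholds quoted above are included here; the elided ones up to 2142 (where the pool peaks at 30 moves) are left out.

```python
import bisect

# First visit thresholds from the quoted schedule; the full list is
# truncated in the original message.
THRESHOLDS = [0, 2, 5, 9, 15, 24, 38, 59, 90]
MAX_MOVES = 30

def active_moves(visits):
    """Number of candidate moves considered at a given visit count:
    one move per threshold reached, capped at MAX_MOVES."""
    return min(bisect.bisect_right(THRESHOLDS, visits), MAX_MOVES)

print(active_moves(0))    # 1
print(active_moves(10))   # 4  (thresholds 0, 2, 5, 9 reached)
print(active_moves(100))  # 9  (with only the listed thresholds)
```

The roughly geometric growth of the thresholds means the pool widens quickly at first and then very slowly, which is the usual shape for progressive widening schedules.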