On Mon, May 22, 2017 at 11:27 AM, Erik van der Werf <erikvanderw...@gmail.com> wrote:

> On Mon, May 22, 2017 at 10:08 AM, Gian-Carlo Pascutto <g...@sjeng.org> wrote:
>>
>> ... This heavy pruning
>> by the policy network OTOH seems to be an issue for me. My program has
>> big tactical holes.
>
>
> Do you do any hard pruning? My engines (Steenvreter, Magog) always had a
> move predictor (a.k.a. policy net), but I never saw the need to do hard
> pruning. Steenvreter uses the predictions to set priors, and it is very
> selective, but with infinite simulations eventually all potentially
> relevant moves will get sampled.
>
>
Oh, haha, after reading Brian's post I guess I misunderstood :-)
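
For what it's worth, what I meant by using the predictions as priors
instead of hard pruning is roughly the following PUCT-style selection
(just a generic sketch, not Steenvreter's actual code; the exploration
constant and the toy priors are made-up numbers):

import math

C_PUCT = 1.5  # exploration constant, arbitrary for this sketch

def select_child(children, parent_visits):
    # Every legal move stays in the tree with a (possibly tiny) prior,
    # so nothing is pruned outright; a low-prior move just needs more
    # parent visits before its exploration term wins.
    def puct(c):
        q = c['value_sum'] / c['visits'] if c['visits'] else 0.0
        u = C_PUCT * c['prior'] * math.sqrt(parent_visits) / (1 + c['visits'])
        return q + u
    return max(children, key=puct)

# Toy demo: a "tactical" move with a tiny prior still gets sampled
# once the popular moves have been visited often enough.
children = [
    {'move': 'D4',  'prior': 0.60, 'visits': 0, 'value_sum': 0.0},
    {'move': 'Q16', 'prior': 0.39, 'visits': 0, 'value_sum': 0.0},
    {'move': 'H8',  'prior': 0.01, 'visits': 0, 'value_sum': 0.0},
]
for n in range(1, 2000):
    best = select_child(children, n)
    best['visits'] += 1
    best['value_sum'] += 0.5  # pretend every playout returns 0.5
print({c['move']: c['visits'] for c in children})  # H8 ends up visited too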

Anyway, LMR (late move reductions) seems like a good idea, but the last
time I tried it (in Migos) it did not help. In Magog I had some good
results with fractional depth reductions (as in Realization Probability
Search), but that was a long time ago and the engines were much weaker
then...
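
By fractional depth reductions I mean something in this spirit (again
just a rough sketch, not Magog's implementation; the cost constants and
the gen_moves/play/evaluate helpers are assumptions for illustration):

import math

MIN_COST = 0.5   # even the top predicted move consumes some depth
MAX_COST = 3.0   # cap, so unlikely moves are reduced rather than discarded

def move_cost(prior):
    # Depth consumed by a move: cheap for moves the predictor likes,
    # expensive (but never infinite) for moves it considers unlikely.
    return min(MAX_COST, max(MIN_COST, 0.5 * -math.log2(max(prior, 1e-9))))

def search(pos, depth, alpha, beta, evaluate, gen_moves, play):
    # Plain negamax over a fractional depth budget, in the spirit of
    # Realization Probability Search. gen_moves(pos) is assumed to yield
    # (move, prior) pairs sorted by prior; play(pos, move) returns the
    # successor position.
    if depth <= 0.0:
        return evaluate(pos)
    best = -math.inf
    for move, prior in gen_moves(pos):
        score = -search(play(pos, move), depth - move_cost(prior),
                        -beta, -alpha, evaluate, gen_moves, play)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best

LMR is then roughly the same idea with the reduction keyed to the move's
index in the ordering rather than its predicted probability (plus a
re-search when a reduced move unexpectedly raises alpha).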