On 23-08-16 08:57, Detlef Schmicker wrote:

> So, if somebody is sure it is measured against GoGoD, I think a
> number of other Go programmers have to think again. I heard of them
> reaching 51% (e.g. posts by Hiroshi on this list).

I trained a 128 x 14 network for Leela 0.7.0 and this gets 51.1% on GoGoD.

Something I noticed from the papers is that the prediction percentage
keeps going up with more epochs; slowly, but still clearly up.

In my experience my networks converge rather quickly (like >0.5% per
epoch after the first), get stuck, pick up one more 0.5% gain if I
lower the learning rate (by a factor of 5 or 10), and don't gain any
more regardless of what I do thereafter.

I do use momentum. IIRC I tested without momentum once and it was
worse, and much slower.
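
To make that concrete, here is a minimal sketch of that kind of
schedule. This is PyTorch with a toy network and random data purely
for illustration; it is not the actual Leela training setup, just SGD
with momentum plus a learning-rate drop by a factor of 5-10 once the
prediction accuracy stalls:

import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

net = nn.Sequential(                   # stand-in for a 128 x 12 policy net
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 19 * 19, 362),      # 361 board points + pass
)

opt = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
# Drop the LR by a factor of 10 (use factor=0.2 for a factor of 5)
# once the tracked accuracy stops improving.
sched = optim.lr_scheduler.ReduceLROnPlateau(opt, mode='max', factor=0.1,
                                             patience=1)
loss_fn = nn.CrossEntropyLoss()

# Fake stand-ins for (position feature planes, expert move) pairs.
x = torch.randn(256, 4, 19, 19)
y = torch.randint(0, 362, (256,))

for epoch in range(10):
    opt.zero_grad()
    logits = net(x)
    loss_fn(logits, y).backward()
    opt.step()
    acc = (logits.argmax(1) == y).float().mean().item()
    sched.step(acc)                    # triggers the LR drop when stuck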

I did not find any improvement in playing strength from doing
Facebook's 3 move prediction. Perhaps it needs much bigger networks
than 128 x 12.
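
For what it's worth, the way I understand the multi-move prediction is
roughly this: one shared trunk, one softmax head per future move, and
the per-move losses summed. The head layout and equal loss weighting
below are my assumptions, not necessarily Facebook's exact recipe:

import torch
import torch.nn as nn

class ThreeMovePolicy(nn.Module):
    def __init__(self, in_planes=4, filters=32, moves_ahead=3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_planes, filters, 3, padding=1), nn.ReLU(),
            nn.Conv2d(filters, filters, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        # One classification head per future move (361 points + pass).
        self.heads = nn.ModuleList(
            [nn.Linear(filters * 19 * 19, 362) for _ in range(moves_ahead)]
        )

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]

net = ThreeMovePolicy()
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 4, 19, 19)              # fake positions
targets = torch.randint(0, 362, (3, 8))    # fake labels for the next 3 moves

logits = net(x)
loss = sum(loss_fn(l, t) for l, t in zip(logits, targets))
loss.backward()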

Adding ladder features also isn't good enough to (consistently) keep
the network from playing into them. (And once it's played the first
move, you're totally SOL, because the resulting positions aren't in
the training set and you'll get 99% confidence for continuing the
losing ladder moves.)
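
For clarity, by "ladder features" I mean extra binary input planes,
stacked on top of the normal ones, that mark where a ladder works or
fails. A rough sketch, with the ladder reader stubbed out (a real
implementation needs the actual ladder search):

import numpy as np

BOARD = 19

def ladder_status(board):
    """Placeholder: return two 19x19 binary planes marking points where
    a ladder capture works and where a ladder escape fails. The real
    feature comes from running the ladder search, not from this stub."""
    return np.zeros((2, BOARD, BOARD), dtype=np.float32)

def build_input_planes(board, base_planes):
    """Stack the ladder planes onto the existing feature planes."""
    return np.concatenate([base_planes, ladder_status(board)], axis=0)

base = np.zeros((4, BOARD, BOARD), dtype=np.float32)  # e.g. stones, liberties
planes = build_input_planes(board=None, base_planes=base)
print(planes.shape)   # (6, 19, 19): original planes plus the 2 ladder planes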

I'm currently doing a more systematic comparison of all methods (and
GoGoD vs KGS+GoGoD) on 128 x 12, and testing the resulting strength
(rather than looking at prediction %). I'll post the results here, if
anything definite comes out of it.

-- 
GCP
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
