On 16 January 2013 11:11, Philippe Michel <[email protected]> wrote:
> On Mon, 14 Jan 2013, Mark Higgins wrote:
>
>> What training approach have you been using, if you don't mind elaborating?
>
> Supervised training. I used the same training tools that were used years
> ago to create the current nets.
>
> The main difference is that I rolled out the training database, while it
> previously used (as far as I know) 2-ply evaluations from the preceding
> generation of nets.
>
> This obviously took some time, but with current processors, what was out
> of the question in the early to mid-2000s, when the current nets were
> trained, is now doable.
>
> I don't know if Joseph Heled did many iterations (reevaluate database /
> train nets / maybe add mishandled positions), but with rollouts, each of
> them takes a long time (I did it twice for the crashed database and once
> for the contact one). This is therefore mostly a one-shot effort, at least
> until something important changes in the training database.

Oh yes. Many iterations :) but at 2 ply, no rollouts.

I am willing to believe the new nets are better, but I have not seen the
results of a long enough, statistically significant run of matches between
the old and new nets.

-Joseph

> Another thing that must have been helpful is that I added to the training
> databases their own positions with the other player on roll. I think this
> helped a little for general playing strength and significantly diminished
> the odd/even-ply discrepancies.
>
> I used slightly larger pruning nets, with sizes adapted to SSE or AVX
> instructions, but I don't think it makes much of a difference.
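As a rough, back-of-the-envelope aside (not part of the original thread) on the sample-size point above, the Python sketch below estimates how many money games are needed before a given edge in points per game stands clear of sampling noise at a 95% level. The per-game standard deviation is an assumed figure; real variance depends heavily on cube handling and on whether single games or full matches are compared.

import math

def games_needed(edge_ppg, sd_ppg=1.3, z=1.96):
    """Games needed for an observed mean edge to exceed z standard errors.

    edge_ppg -- assumed true difference in points per game between the nets
    sd_ppg   -- assumed standard deviation of one game's result (a guess;
                real money-game variance depends on cube handling)
    z        -- normal quantile, 1.96 for a two-sided 95% level
    """
    # Require z * sd_ppg / sqrt(n) < edge_ppg, i.e. n > (z * sd_ppg / edge_ppg)^2
    return math.ceil((z * sd_ppg / edge_ppg) ** 2)

for edge in (0.05, 0.02, 0.01):
    print(f"an edge of {edge:.2f} ppg needs roughly {games_needed(edge):,} games")

Under these assumptions, even a fairly large 0.05 ppg edge takes a few thousand games to show up reliably, which is why a short informal session says little about which nets are stronger.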
_______________________________________________
Bug-gnubg mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/bug-gnubg
