Hey, so, after some struggling, here is an incarnation of my ideas on the matter, still a work in progress. The toolkit currently has two main functions:

1) A general GTP wrapper that turns a bot generating move distributions into a full-featured player. It can pass by using GnuGo as a pass oracle. Hugh Perkins's DeepCL convolutional network is currently supported.

2) Dataset generation - extract feature planes (e.g. those of Clark & Storkey 2014) and labels from a list of games and save them to an HDF5 file; this is highly customizable. After some research I concluded that HDF5 is a very good format for this purpose (large amounts of binary data): it is a fairly standard format with transparent compression and is supported by toolkits like pylearn2. I think this is a better choice than home-brew binary formats.
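To make the dataset-generation idea concrete, here is a minimal sketch of the kind of HDF5 layout I have in mind: one resizable dataset for the feature planes and one for the move labels, with transparent gzip compression. Note that extract_planes() and iter_positions() are hypothetical placeholders for whatever feature extractor and SGF iterator you plug in, not the actual deep-go-wrap API:

    # Sketch: one (planes, label) pair per position, appended to an HDF5 file.
    import numpy as np
    import h5py

    N_PLANES, BOARD = 8, 19   # e.g. Clark & Storkey-style planes on a 19x19 board

    def write_dataset(games, out_path="dataset.hdf5"):
        with h5py.File(out_path, "w") as f:
            xs = f.create_dataset("xs", shape=(0, N_PLANES, BOARD, BOARD),
                                  maxshape=(None, N_PLANES, BOARD, BOARD),
                                  dtype="uint8", compression="gzip", chunks=True)
            ys = f.create_dataset("ys", shape=(0,), maxshape=(None,),
                                  dtype="uint16", compression="gzip", chunks=True)
            for game in games:
                for board, move in iter_positions(game):   # hypothetical iterator
                    planes = extract_planes(board)          # hypothetical extractor
                    label = move[0] * BOARD + move[1]       # move encoded as a point index
                    n = xs.shape[0]
                    xs.resize(n + 1, axis=0)
                    ys.resize(n + 1, axis=0)
                    xs[n], ys[n] = planes, label

In practice you would append in chunks rather than one position at a time, but the point is that the resulting file can be read directly by anything that speaks HDF5 (pylearn2, plain h5py, etc.).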
Code can be found here: https://github.com/jmoudrik/deep-go-wrap

Regards,
Josef

On Thu, Apr 30, 2015 at 12:25 PM Josef Moudrik <[email protected]> wrote:
> > I would love to have something like this.
> Great!
>
> > I would appreciate some way to configure depth levels and
> > variable branching factors for move generation as well as scoring
> > playouts using the NN.
> Hmm, I am not talking about MCTS integration or DNN training. Rather, I
> have a small wrapper in mind that will make it possible to use the DNN as
> a standalone player - something like a convenient unified toolkit into
> which you would plug your DNN model of choice (used as a black box). The
> toolkit would handle i/o (i.e. transform the board into input planes), GTP
> communication, ensure move correctness, and e.g. enable passing.
>
> Regards,
> Josef
>
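For the record, a rough sketch of what the genmove path of such a wrapper could look like: take the network's move distribution, mask out illegal points, and fall back to passing. All names here (net.distribution, board.is_legal, oracle.should_pass, board.point_to_gtp) are hypothetical, not the actual deep-go-wrap API:

    # Sketch of the wrapper's genmove logic under the assumptions above.
    import numpy as np

    def genmove(board, color, net, oracle, min_prob=0.01):
        if oracle.should_pass(board, color):        # e.g. delegate the pass decision to GnuGo
            return "pass"
        probs = net.distribution(board, color)      # flat array, one probability per point
        # walk the points from most to least likely, skipping illegal moves
        for pt in np.argsort(probs)[::-1]:
            if probs[pt] < min_prob:                # distribution too flat -> give up and pass
                return "pass"
            if board.is_legal(pt, color):
                return board.point_to_gtp(pt)       # e.g. "D4"
        return "pass"

The network stays a black box; everything around it (legality checks, GTP plumbing, passing) is the wrapper's job.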
