C# is a nice language, but for anything open-source, the fact that it
was created by Microsoft kind of 'taints' it. Which is not to say
that Java is particularly untainted these days, being owned by Oracle...
From a practical point of view, Java does most things that C# does,
with a few obvious
Well, what I did was connect my bot to KGS a few times, and watch as
it got repeatedly beaten by anything much better than 25k :-)
Normally, there are a few 'randombots' there to start with. Once you can
beat those (which is harder than it sounds, or at least it is if your
program has bugs, which mine did
Still, it would be nice if the computer could learn the heuristics
itself, by self-play.
Which is why my bot is still stuck at a 25k rating :-D At least, that's
my excuse :-P
On Sat, Mar 28, 2015 at 7:18 PM, Urban Hafner cont...@urbanhafner.com wrote:
But my guess based on ad hoc tests during
But then, the komi won't really
participate in the hierarchical representation we are hoping the
network will build, which I suppose we are hoping is the key to
obtaining human-comparable results?
Well... it seems that Hinton, in his dropout paper
http://arxiv.org/pdf/1207.0580.pdf , gets
Perhaps what we want is a compromise between convnets and fcs though?
i.e. either take an fc and make it a bit more sparse, and/or take an
fc and randomly link sets of weights together?
Maybe something like: each filter consists of e.g. 16 weights, which are
assigned randomly over all
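Here is a minimal numpy sketch of that randomly-tied-weights idea, with all sizes (361 units, 128 'filters' of 16 shared weights each) being my own assumptions; the two random index tables do the tying:

import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: one unit per board point; 128 "filters" of 16 shared
# weights each, so only n_filters * filter_size parameters are free.
n_in, n_out = 361, 361
n_filters, filter_size = 128, 16

params = 0.01 * rng.standard_normal((n_filters, filter_size))
# Each connection in the big fc matrix is randomly assigned to one of
# the shared weights.
filter_idx = rng.integers(0, n_filters, size=(n_in, n_out))
weight_idx = rng.integers(0, filter_size, size=(n_in, n_out))

def forward(x):
    # Materialize the tied weight matrix, then do an ordinary fc pass.
    W = params[filter_idx, weight_idx]   # (n_in, n_out), heavily tied
    return x @ W

print(forward(rng.standard_normal(n_in)).shape)   # (361,)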
On 1/12/15, Álvaro Begué alvaro.be...@gmail.com wrote:
A CNN that starts with a board and returns a single number will typically
have a few fully-connected layers at the end. You could make the komi an
extra input in the first one of those layers, or perhaps in each of them.
That's an
On Sat, Mar 21, 2015 at 11:41 AM, Álvaro Begué alvaro.be...@gmail.com wrote:
I don't see why komi needs to participate in the hierarchical representation
at all.
Yes, fair point. I guess I was taking 'komi' as an example of any
additional natural number that one might wish to feed into a net.
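To make Álvaro's suggestion concrete, here is a minimal numpy sketch of a value head that takes komi (or any extra scalar) as one more input to the first fully-connected layer; every shape here is my own assumption:

import numpy as np

rng = np.random.default_rng(1)

def value_head(conv_features, komi, W1, b1, W2, b2):
    # Flatten the conv output, append komi as one extra input to the
    # first fc layer, then reduce to a single evaluation number.
    x = np.concatenate([conv_features.ravel(), [komi]])
    h = np.maximum(0.0, x @ W1 + b1)     # relu hidden layer
    return float(h @ W2 + b2)

# Toy shapes (assumed): 32 feature maps over a 19x19 board.
feat = rng.standard_normal((32, 19, 19))
n_in = 32 * 19 * 19 + 1                  # +1 for the komi input
W1 = 0.01 * rng.standard_normal((n_in, 64))
b1 = np.zeros(64)
W2 = 0.01 * rng.standard_normal(64)
b2 = 0.0

print(value_head(feat, komi=7.5, W1=W1, b1=b1, W2=W2, b2=b2))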
On Wed, Dec 31, 2014 at 9:29 PM, Hugh Perkins hughperk...@gmail.com wrote:
- finally, started to get a signal on the kgsgo data :-) Not a very strong
signal, but a signal :-) :
test accuracy: 364/10000 = 3.64%
Up to 35.1% test accuracy for next-move-prediction task now, still 9%
lower than
Detlef wrote:
The idea is, I can do the equivalent of, let's say, 1000 playouts with a
call to the CNN, for the cost of 2 playouts, some time...
That sounds like a good plan :-)
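For what it's worth, the substitution Detlef describes might look like this at the leaf-evaluation step of a Monte Carlo searcher; cnn_eval and playout are hypothetical hooks, not real APIs:

def evaluate_leaf(position, cnn_eval, playout, use_cnn=True, n_playouts=1000):
    # One CNN forward pass (costing roughly as much as ~2 playouts)
    # standing in for ~1000 random playouts.
    # cnn_eval(position) -> estimated win probability in [0, 1]
    # playout(position)  -> 0 or 1, the result of one random game
    if use_cnn:
        return cnn_eval(position)
    wins = sum(playout(position) for _ in range(n_playouts))
    return wins / n_playouts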
Thinking about datasets for CNN training, of which I currently lack one
:-P Hence I've been using MNIST, but also since MNIST results are
widely known: if I train with a couple of layers and get 12% accuracy,
obviously I know I have to fix something :-P
But now, my network consistently
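That debugging trick could be captured in a tiny helper (the threshold below is my own assumption; MNIST baselines are well enough known that anything wildly below them means a broken pipeline, not a weak model):

def mnist_sanity_check(test_accuracy, expected_min=0.95):
    # Even a small net with a couple of layers should score in the high
    # 90s on MNIST, so e.g. 12% indicates a bug somewhere.
    if test_accuracy < expected_min:
        raise RuntimeError(
            "test accuracy %.1f%% is far below known MNIST baselines; "
            "suspect the data pipeline or training loop"
            % (100 * test_accuracy))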
On 1/11/15, Hugh Perkins hughperk...@gmail.com wrote:
Thinking about datasets for CNN training, of which I currently lack one
:-P Hence I've been using MNIST, but also since MNIST results are
widely known: if I train with a couple of layers and get 12% accuracy,
obviously I know I have
Why don’t you make a dataset of the raw board positions, along with code to
convert to Clark and Storkey planes? The data will be smaller, people can
verify against Clark and Storkey, and they have the data to make their own
choices about preprocessing for network inputs.
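As an illustration of that suggestion (not the actual Clark and Storkey feature set, which also encodes liberties, ko, and rank), a converter from raw positions to input planes could be as small as:

import numpy as np

def board_to_planes(board, to_move):
    # board: 19x19 ints, 0 = empty, 1 = black, 2 = white (assumed encoding).
    # Returns the three simplest binary planes; the real Clark & Storkey
    # preprocessing produces more planes than this.
    own = (board == to_move).astype(np.float32)
    opp = (board == 3 - to_move).astype(np.float32)
    empty = (board == 0).astype(np.float32)
    return np.stack([own, opp, empty])    # shape (3, 19, 19)

board = np.zeros((19, 19), dtype=np.int8)
board[3, 3] = 1       # a black stone
board[15, 15] = 2     # a white stone
print(board_to_planes(board, to_move=1).shape)   # (3, 19, 19)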
Well, a lot of
Darren wrote:
I'm wondering if I've misunderstood, but does this mean it is the same
as just training your CNN on the 9-dan games, and ignoring all the 8-dan
and weaker games? (Surely the benefit of seeing more positions outweighs
the relatively minor difference in pro player strength??)
It's
On 1/11/15, Detlef Schmicker d...@physik.de wrote:
Today's bot tournament nicego19n (oakfoam) played with a CNN for move
prediction.
Blimey! You coded that quickly. Impressive! :-)
I would very much appreciate an open source implementation of this
- or rather, I'd prefer to spend my time using one to do interesting
things rather than building one. I do plan to open source my
implementation if I have to make one, and can bring myself to build one
from scratch...
I started
Hi Aja,
Couple of questions:
1. connectivity, number of parameters
Just to check: each filter connects to all the feature maps below it,
is that right? I tried to check that by ball-park estimating the number
of parameters in that case, and comparing to the corresponding paragraph
in your section 4.
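The ball-park itself is one line per layer: with filters fully connected across maps, a layer costs out_maps * in_maps * k * k weights. The sizes below are illustrative assumptions, not the paper's:

def conv_params(in_maps, out_maps, k, bias=True):
    # Each of the out_maps filters sees ALL in_maps feature maps below
    # it, so it owns in_maps * k * k weights (plus one bias).
    return out_maps * (in_maps * k * k + (1 if bias else 0))

# e.g. an assumed 5x5 layer mapping 128 maps to 128 maps:
print(conv_params(128, 128, 5))   # 409728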
On Fri Dec 19 23:17:23 UTC 2014, Aja Huang wrote:
We've just submitted our paper to ICLR. We made the draft available at
http://www.cs.toronto.edu/~cmaddis/pubs/deepgo.pdf
Cool... just out of curiosity, I did a back-of-the-envelope estimation of the
cost of training your and Clark and Storkey's
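My estimate was along these lines, where every number is a stand-in assumption rather than a figure from either paper: multiply-adds per position are roughly params times board points per layer, and backprop roughly triples the forward cost:

# (in_maps, out_maps, kernel) per conv layer -- assumed stand-in sizes.
layers = [(48, 64, 5)] + [(64, 64, 3)] * 6

madds_per_position = sum(o * i * k * k * 19 * 19 for i, o, k in layers)
n_positions = 16 * 10**6        # assumed number of training positions
n_epochs = 10                   # assumed
total = 3 * madds_per_position * n_positions * n_epochs   # fwd + bwd

print("~%.1e multiply-adds" % total)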
(Hiroshi Yamashita)
2. Teaching Deep Convolutional Neural Networks to Play Go (Hugh Perkins)
3. Move Evaluation in Go Using Deep Convolutional Neural Networks (Hugh Perkins)
4. Re: Move Evaluation in Go Using Deep Convolutional Neural Networks (Stefan
(sorry for forgetting to delete the digest before replying :-( )
Interesting-looking paper: On correlation and budget constraints in
model-based bandit optimization with application to automatic machine
learning, Hoffman, Shahriari, de Freitas, AISTATS 2014.
I can't say I've entirely understood it yet, but I *think* that:
- it targets a scenario where there are many
On Sun Dec 14 23:53:45 UTC 2014, Hiroshi Yamashita wrote:
Teaching Deep Convolutional Neural Networks to Play Go
http://arxiv.org/pdf/1412.3409v1.pdf
Wow, this somewhat resembles what I was hoping to do! But now I
should look for some other avenue :-) But
I'm surprised it's only published on