Hi,
Winrate of your pure CNN against pachi retsugen is:
GAMES  WINRATE  S.D.   PAIRING
  224    0.558  0.033  19-7.5-1-pachi-=1-detlef_54
  221    0.407  0.033  19-7.5-1-pachi-=2-detlef_54
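The S.D. column is consistent with the binomial standard deviation of an observed winrate, sqrt(p(1-p)/n); a quick check in plain Python, using only the two rows above:

```python
import math

def winrate_sd(winrate, games):
    """Binomial standard deviation of an observed winrate."""
    return math.sqrt(winrate * (1 - winrate) / games)

# The two rows above: 224 games at 0.558, 221 games at 0.407
print(round(winrate_sd(0.558, 224), 3))  # 0.033
print(round(winrate_sd(0.407, 221), 3))  # 0.033
```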
I used the
https://github.com/jmoudrik/deep-go-wrap
for the player.
Regards,
Josef
On Tue, Dec 29,
Hi,
I am fighting with the problem most seem to have with the strong move
predictions at the moment: MCTS is not improving the playing strength a lot :)
I wonder if somebody has measured the performance of the pure CNN54
against pachi 10k (or 100k), to get a
Thank you for the feedback, everyone.
Regarding the CPU-GPU roundtrips, I'm wondering whether it'd be
possible to recursively apply the output matrix to the prior input
matrix to update board positions within the GPU and without any
actual (possibly CPU-based) evaluation until all branches come
I doubt that the illegal moves would fall away, since every professional
would retake the ko... if it were legal
On 2015-12-09 4:59, Michael Markefka wrote:
Thank you for the feedback, everyone.
Regarding the CPU-GPU roundtrips, I'm wondering whether it'd be
possible to recursively apply the
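The update suggested above — feeding the network's output back into the input planes without leaving the device — might look roughly like this. A minimal sketch in NumPy; the two-plane layout is my assumption, and captures and ko are ignored entirely, which is exactly the hard part raised here:

```python
import numpy as np

BOARD = 19

def apply_policy_move(planes, policy):
    """Hypothetical sketch: take the network's top-scoring move that lands
    on an empty point and write it into the input planes for the next
    forward pass -- the kind of update that would have to stay on the GPU.
    planes[0]: stones of the player to move, planes[1]: opponent stones.
    Captures and ko legality are not handled here."""
    empty = (planes[0] + planes[1]) == 0
    masked = np.where(empty, policy, -np.inf)         # forbid occupied points
    move = np.unravel_index(np.argmax(masked), masked.shape)
    planes[0][move] = 1
    return planes[::-1].copy(), move                  # swap colours for next move

planes = np.zeros((2, BOARD, BOARD))
policy = np.random.rand(BOARD, BOARD)
planes, move = apply_policy_move(planes, policy)
```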
I think ko moves are taken into account in one of the input planes
for most configurations. At least I hope I remember that correctly.
Could it be achieved to create such a plane from the prior input
matrix and following output matrix by difference?
On Wed, Dec 9, 2015 at 2:08 PM, Igor Polyakov
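The difference-based ko plane idea could be sketched like this, assuming plain board arrays rather than any particular engine's input format (simple ko only; positional superko would need the whole game history):

```python
import numpy as np

def ko_plane(prev_board, cur_board):
    """Hypothetical sketch: derive a ko-forbidden plane by diffing two
    consecutive positions. If exactly one stone appeared and exactly one
    was captured, the captured point is the ko point (simple ko only)."""
    appeared = (cur_board != 0) & (prev_board == 0)
    captured = (cur_board == 0) & (prev_board != 0)
    plane = np.zeros_like(cur_board, dtype=float)
    if appeared.sum() == 1 and captured.sum() == 1:
        plane[captured] = 1.0
    return plane
```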
Hi!
In case someone is looking for a starting point to actually implement
Go rules etc. on GPU, you may find useful:
https://www.mail-archive.com/computer-go@computer-go.org/msg12485.html
I wonder if you can easily integrate caffe GPU kernels into another GPU
kernel like this? But
Regarding full CNN playouts, I think the problem is that a playout is a
long serial process, given 200-300 moves a game. You need to construct
planes and transfer them to the GPU for each move and read the result back
(at least with current CNN implementations, afaik), so my guess would be
that such
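A toy sketch of that serial structure, with a random stand-in for the network (the names and the flat-board representation are mine, not any real implementation):

```python
import random

# Each move of a playout needs its own encode -> forward -> decode
# cycle, so the cost is roughly (moves per game) x (round-trip latency).
# The "network" here just returns random scores.
def net_forward(board):
    return [random.random() for _ in board]

def cnn_playout(n_points=361, max_moves=300):
    board = [0] * n_points
    round_trips = 0
    for _ in range(max_moves):
        policy = net_forward(board)       # one CPU->GPU->CPU trip per move
        round_trips += 1
        empties = [i for i, v in enumerate(board) if v == 0]
        if not empties:
            break
        move = max(empties, key=lambda i: policy[i])
        board[move] = 1
    return round_trips

print(cnn_playout())  # 300: one round trip for every move of the playout
```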
Hello Detlef,
I've got a question regarding CNN-based Go engines I couldn't find
anything about on this list. As I've been following your posts here, I
thought you might be the right person to ask.
Have you ever tried using the CNN for complete playouts? I know that
CNNs have been tried for move
Hi!
Well, for this to be practical the entire playout would have to be
executed on the GPU, with no round-trips to the CPU. That's what my
email was aimed at.
On Tue, Dec 08, 2015 at 04:37:05PM +, Josef Moudrik wrote:
> Regarding full CNN playouts, I think the problem is that a playout
Yes, that's why I wrote "with current CNN implementations". But I still
wonder whether my estimate for the round-trip length is at least of the
correct magnitude.
Josef
On Tue, Dec 8, 2015 at 6:03 PM Petr Baudis wrote:
> Hi!
>
> Well, for this to be practical the entire playout
I don't think the CPU-GPU communication is what's going to kill this idea.
The latency in actually computing the feed-forward pass of the CNN is going
to be in the order of 0.1 seconds (I am guessing here), which means
finishing the first playout will take many seconds.
So perhaps it would be
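A back-of-the-envelope check of that estimate, taking the assumed 0.1 s forward pass and the 200-300 moves per game mentioned earlier:

```python
# Assumed forward-pass latency from the message above; multiply by the
# number of serial moves in one playout.
def playout_seconds(moves, latency_s=0.1):
    return moves * latency_s

for moves in (200, 300):
    print(f"{moves} moves -> {playout_seconds(moves):.0f} s per playout")
```

So one full playout would indeed take tens of seconds, not "many seconds" in the small sense.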
Did everyone forget the fact that stronger playouts don't necessarily lead to
a better evaluation function? (Yes, that's what playouts essentially are: a
dynamic evaluation function.) This is even under the assumption that we can
reach the same number of playouts per move.
> On 08 Dec 2015, at
Of course whether these "neuro-playouts" are any better than the heavy
playouts currently being used by strong programs is an empirical question.
But I would love to see it answered...
On Tue, Dec 8, 2015 at 1:31 PM, David Ongaro
wrote:
> Did everyone forget the fact
As NNs basically learn the frequency of each move, using the value as
its probability to be chosen in a simulation could be ok.
Hideki
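That suggestion — sampling simulation moves in proportion to the network's scores rather than always playing the maximum — might be sketched like this (the move names and scores here are made up):

```python
import random

# Treat the network's move scores as a probability distribution and
# sample simulation moves from it, instead of always taking the argmax.
def sample_move(policy, legal_moves, rng=random):
    weights = [policy[m] for m in legal_moves]
    return rng.choices(legal_moves, weights=weights, k=1)[0]

policy = {"D4": 0.5, "Q16": 0.3, "K10": 0.2}
move = sample_move(policy, ["D4", "Q16", "K10"])
print(move)  # one of D4 / Q16 / K10, with D4 chosen most often
```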
David Ongaro: <6c2ff906-2a00-45c1-b892-2b14bef35...@hamburg.de>:
>Did everyone forget the fact that stronger playouts don't necessarily lead to
>a better