The confidence in Lee Sedol is staggering.
I'd say it's quite natural. People know Lee Sedol and his strength.
Plus, he would have crushed the version that played Fan Hui. It takes
some domain knowledge to see that Aja et al. have gained four stones in
less than a year by incorporating a NEW
On 06.11.2015 10:47, Aja Huang wrote:
area scoring, in which case the score is almost always odd.
Black wins: odd score
White wins: even score
Aja means pre-komi. It's always odd (except special seki).
Jonas
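Jonas's parity point can be checked with a quick sketch (my own illustration, not from the thread; the function name is made up):

```python
# With area (Chinese) scoring and no seki, every one of the 361 points on a
# 19x19 board ends up counting for exactly one side, so
# black + white = 361, and the pre-komi margin black - white = 361 - 2*white,
# which is always odd.
def prekomi_margin(white_area):
    black_area = 361 - white_area
    return black_area - white_area

# Every possible split gives an odd margin (Python's % keeps this true
# for negative margins too).
assert all(prekomi_margin(w) % 2 == 1 for w in range(362))
```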
___
Computer-go mailing list
Of course. You always use win rate, never margin for that type of analysis.
The problem is that this is not relevant unless we have games with different
komi.
The win rate depends as much on the width of the distribution as on the
median. Hence, for weak players, it may go closer to 50%, even if the
On Tue, 3 Nov 2015, Urban Hafner wrote:
Thank you Remi!
So the 85.5% +/- 2.5 reported by GoGui would be 85.5% +/- 5 for 95%, and
85.5% +/- 7.5. Correct?
Correct.
But you do not need the intervals to be non-overlapping for significance.
You may divide those intervals by $\sqrt{2}$ before testing.
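A sketch of why the sqrt(2) factor appears (my own illustration; every number besides GoGui's 85.5 +/- 2.5 is made up): the difference of two measured rates has standard error sqrt(se1^2 + se2^2), which is se*sqrt(2) when the two errors are equal, smaller than the 2*se that the interval-overlap criterion implicitly demands.

```python
import math

def significantly_different(p1, se1, p2, se2, z=1.96):
    """Two-sided z-test at ~95% on the difference of two measured rates."""
    return abs(p1 - p2) > z * math.sqrt(se1 ** 2 + se2 ** 2)

# GoGui's 85.5 +/- 2.5 (one standard error) against a hypothetical
# 78.0 +/- 2.5: the 95% intervals [80.6, 90.4] and [73.1, 82.9] overlap,
# yet the difference 7.5 exceeds 1.96 * 2.5 * sqrt(2) ~ 6.93, so the gap
# is still significant at 95%.
assert significantly_different(85.5, 2.5, 78.0, 2.5)
```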
Why not xorshift128+ ?
#include <stdint.h>

/* The state must be seeded so that it is not everywhere zero. */
uint64_t s[2];

uint64_t xorshift128plus(void) {
	uint64_t x = s[0];
	uint64_t const y = s[1];
	s[0] = y;
	x ^= x << 23; // a
	x ^= x >> 17; // b
	x ^= y ^ (y >> 26); // c
	s[1] = x;
	return x + y;
}
I just think Go (except trivial implementation cases) should be very
insensitive to RNGs. It is not like many Monte Carlo applications where
you just call the RNG in a tight loop in a regular manner to move in the
same state space. In a Go program, you call the RNG from playouts in
all sorts of
Computer Go progress has stopped?
http://www.sankei.com/life/news/150323/lif1503230014-n1.html
Cho Chikun plays handicap games against a computer.
http://www.nikkei.com/article/DGXMZO84490320X10C15A300/
Cho Chikun: one win, one loss against the computer.
So what's the strongest program you can make with minimum effort
and code size while keeping maximum clarity? Chess programmers
have been exploring this for a long time, e.g. with Sunfish, and that
inspired me to try out something similar in Go over a few evenings recently:
Based on my
observations, the limiting factor is time - Python is slow, and
a faster language with the exact same algorithm should be able to speed
this up at least 5x, which should mean at least a two-rank level-up.
Maybe a first step would be using numpy arrays for the board and
patterns.
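A minimal sketch of what that first step might look like (my own illustration, assuming a plain 19x19 int8 array; none of this is michi's actual code):

```python
import numpy as np

EMPTY, BLACK, WHITE = 0, 1, 2
board = np.zeros((19, 19), dtype=np.int8)  # whole board in one flat buffer

def neighbour_colours(board, x, y):
    """Colours of the up-to-four orthogonal neighbours of (x, y)."""
    deltas = ((-1, 0), (1, 0), (0, -1), (0, 1))
    return [board[x + dx, y + dy]
            for dx, dy in deltas
            if 0 <= x + dx < 19 and 0 <= y + dy < 19]

# A 3x3 pattern lookup becomes a single slice instead of eight indexed reads:
board[3, 3] = BLACK
patch = board[2:5, 2:5]  # the 3x3 neighbourhood around (3, 3)
```

Slices like `patch` make pattern hashing and liberty counting vectorizable, which is where a pure-Python board loses most of its time.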
Here is a partial game record I saved during the game. I am sorry I did not
save the full game as sgf. It is on the nngs server of the UEC, so I might be
able to get it tomorrow.
Crazy Stone invaded too deeply, and died.
Thank you Rémi!
That was a very surprising game. I would have sworn
A few blurry photos I took:
http://pasky-jp.soup.io/post/556768169/UEC-Cup-2015-exhibition-game-was-between
http://pasky-jp.soup.io/post/556769031/The-prize-winners-Right-of-Remi-is
http://pasky-jp.soup.io/post/556769152/All-qualified-participants
Thank you Petr,
So
The discussion on move evaluation via CNNs got me wondering: has anyone
tried to make an evaluation function with CNNs?
I mean, it's hard to really combine a CNN move estimator with a tree
search: you still need something to tell you what the best leaf is. Given
the state of the art, the reflex is
Hi Aja
We've just submitted our paper to ICLR. We made the draft available at
http://www.cs.toronto.edu/~cmaddis/pubs/deepgo.pdf
I hope you enjoy our work. Comments and questions are welcome.
I did not look at the Go content, on which I'm no expert.
But for the network training, you might be