I have never tried Atari Go, but every single time I have seen an MCTS bot playing randomly, it was either:
- a bug in the MC policy (wrong distribution, rotated board, etc.), or
- a bug in the scoring function / branch update.
Since your bot can play 9x9, I guess it's the second. Of course, it's easy to know the winner,
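To illustrate the second bug class, here is a minimal sketch of an MCTS value backup under a negamax convention (node fields and names are hypothetical, not from any particular bot): the leaf value is from the perspective of the player to move there, so the sign must flip at every ply on the way up. Getting this flip wrong is one classic way a bot ends up playing what looks like random moves.

```python
class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.visits = 0
        self.value_sum = 0.0  # accumulated from this node's player-to-move view

    def q(self):
        # mean value of this node, 0 if never visited
        return self.value_sum / self.visits if self.visits else 0.0

def backup(leaf, value):
    """Propagate a leaf evaluation to the root, flipping sign each ply."""
    node = leaf
    while node is not None:
        node.visits += 1
        node.value_sum += value
        value = -value  # the node above belongs to the opponent
        node = node.parent
```

Dropping (or misplacing) the `value = -value` line leaves every node maximizing the same player's score, which corrupts every branch update even when the scoring function itself is correct.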
of the boundaries after the image is zero-padded. The real question is more like: is it useful to have both?
I haven't tested it, but I guess the min-max boundaries have to be somehow useful information for the network.
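A small NumPy sketch of the idea being discussed, under my own assumptions about the input encoding (the plane layout and function name are hypothetical): besides the stone plane, an explicit "on-board" plane stays 1 inside the board and becomes 0 wherever zero-padding is applied, so the network can distinguish the edge from empty points.

```python
import numpy as np

def board_planes(stones, size=9):
    """Build two input planes for a size x size board:
    one marking stones, one explicitly marking on-board points."""
    stone_plane = np.zeros((size, size), dtype=np.float32)
    for (x, y) in stones:
        stone_plane[y, x] = 1.0
    on_board = np.ones((size, size), dtype=np.float32)  # 1 inside, 0 once padded
    return np.stack([stone_plane, on_board])

planes = board_planes([(0, 0), (4, 4)])

# Zero-padding (as a convolution with padding=1 would do) makes the
# boundary visible: the on-board plane drops to 0 outside the board.
padded = np.pad(planes, ((0, 0), (1, 1), (1, 1)))
```

Whether the explicit plane helps on top of the implicit boundary from zero-padding is exactly the open question above.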
Vincent Richard
On 18-Jul-17 at 7:53 PM, Brian Lee wrote:
I've been wondering
, some pros feel it still makes some mistakes.
Vincent Richard
On 06-Aug-17 at 10:49 PM, Cai Gengyang wrote:
Is AlphaGo brute-force search?
Is it possible to solve Go for 19x19?
And what does perfect play in Go look like?
How far are current top pros from perfect play?
Hello everyone,
For my master's thesis, I have built an AI that takes a strategic approach to the game. It doesn't play itself, but simply describes the strategy behind every possible move in a given position ("enclosing this group", "making life for this group", "saving these stones", etc.). My main idea
:38, Vincent Richard wrote:
During my research, I've trained a lot of different networks, first on 9x9 and then on 19x19. As far as I remember, all the nets I've worked with learned quickly (especially during the first batches), except the value net, which has always been problematic (diverges eas
is balanced at the beginning...
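One commonly cited reason a value net diverges or overfits (an assumption about the problem above, not something stated in the thread) is that successive positions of the same game share the same outcome and are highly correlated. A hedged sketch of one standard mitigation, sampling at most one position per game (all names here are hypothetical):

```python
import random

def sample_value_batch(games, batch_size, rng=random):
    """Draw a value-net training batch with at most one position per game.

    Each game is a dict with a list of positions and a final result;
    training on every position of every game instead tends to overfit,
    since positions from one game all carry the same target.
    """
    batch = []
    for game in rng.sample(games, k=batch_size):  # distinct games
        pos = rng.choice(game["positions"])       # one random position
        batch.append((pos, game["result"]))       # target is the final result
    return batch
```

This is only one of several stabilizers people use for value nets (lower learning rates and stronger regularization are others); the sampling trick is the one most specific to Go.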
On 20-Jun-17 at 5:48 AM, Gian-Carlo Pascutto wrote:
On 19/06/2017 21:31, Vincent Richard wrote:
- The data is then analyzed by a script which extracts all kinds of features from the games. When I'm training a network, I load the features I want from this analysis.
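The pipeline described above (analyze once, then load only the features a given training run needs) could be sketched like this; the feature names, record layout, and JSON storage are my assumptions for illustration, not the author's actual format:

```python
import json

def extract_features(game):
    """One analysis pass: turn a game record into named feature lists."""
    return {
        "to_move": [m["color"] for m in game["moves"]],
        "coords":  [(m["x"], m["y"]) for m in game["moves"]],
        "n_moves": len(game["moves"]),
    }

def save_analysis(games, path):
    """Run the analysis once and store it so training never re-parses games."""
    with open(path, "w") as f:
        json.dump([extract_features(g) for g in games], f)

def load_features(path, names):
    """Load only the requested feature names from the stored analysis."""
    with open(path) as f:
        analysis = json.load(f)
    return [{k: g[k] for k in names} for g in analysis]
```

The payoff is that each training run touches only the columns it asks for, so experimenting with different feature sets doesn't require re-running the extraction script.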