Re: [Computer-go] A new ELF OpenGo bot and analysis of historical Go games

2019-02-17 Thread Stephan K
"The sudden overall increase in agreement in 2016 also reinforces the
belief that the introduction of powerful AI opponents has boosted the
skills of professional players. That apparent correlation isn't
conclusive — it's possible that humans have gotten markedly better for
some other reason — but it's an example of how a system trained to
carry out a given task can also provide wide-ranging analysis of a
larger domain, both in the present and from a historical perspective."

They are using their go AI to measure the strength of human go
players, and using its analysis to 'prove' that copying the moves of
AIs has made humans stronger (or at least to quantify this increase
in strength).

There is a huge bias there. I think this is like asking a fisherman
whether chefs who specialize in fish have better cooking skills than
chefs who specialize in meat...

2019-02-16 17:49 UTC+01:00, J. van der Steen :
>
> And most important:
>
>* Does ELF know the meaning of life?
>
> On 16/02/2019 17:29, "Ingo Althöfer" wrote:
>> Hi Remi,
>> thank you for the link.
>>
>> A few questions (to all who know something):
>>
>> * How strong is the new ELF bot in comparison with Leela-Zero?
>>
>> * How were komi values taken into account when analysing old go games with
>> the help of ELF?
>>
>> * How often does ELF propose moves played by AlphaGo (for instance in the
>> games with Fan Hui, Lee Sedol, and in the sixty Master games from December
>> 2016 and January 2017)?
>>
>> * Does ELF understand that the strength of AlphaGo increased from October
>> 2015 to May 2017?
>>
>> Cheers, Ingo.
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>

Re: [Computer-go] Paper “Complexity of Go” by Robson

2018-06-22 Thread Stephan K
Hello,

I assume that after white plays U22, black T20, white T19, black has a
choice of playing either S21, capturing one white stone and leading the
ladder to the top ko, or S20, leading the ladder to the middle ko.

However, if black plays S21, the sequence

wS20 bT21 wR20 bS22 wS23 bR22 wQ22 bR23 wR24

results in a win for white, regardless of who holds the top ko.

2018-06-22 0:27 UTC+02:00, John Tromp :
> Direct link to image: http://tromp.github.io/img/WO5lives.png
>
> Might be useful for go event organizers in need of arrow signs...
>
> regards,
> -John

Re: [Computer-go] mcts and tactics

2017-12-19 Thread Stephan K
2017-12-20 0:26 UTC+01:00, Dan :
> Hello all,
>
> It is known that MCTS's weak point is tactics. How is AlphaZero able to
> resolve Go tactics such as ladders efficiently? If I recall correctly, many
> people were asking the same question during the Lee Sedol match -- and it
> seemed it didn't have any problem with ladders and such.

Note that the input to the neural networks in the version that played
against Lee Sedol had a lot of handcrafted features, including
information about ladders. See "extended data table 2", page 11 of the
Nature article. You can imagine that as watching the go board through
goggles that put a flag on each intersection that would result in a
successful ladder capture, and another flag on each intersection that
would result in a successful ladder escape.

(It also means that you only need to read one move ahead to see
whether a move is a successful ladder breaker or not.)

Of course, your question still stands for the Zero versions.

Here is the table:

Feature               # planes  Description
Stone colour             3      Player stone / opponent stone / empty
Ones                     1      A constant plane filled with 1
Turns since              8      How many turns since a move was played
Liberties                8      Number of liberties (empty adjacent points)
Capture size             8      How many opponent stones would be captured
Self-atari size          8      How many of own stones would be captured
Liberties after move     8      Number of liberties after this move is played
Ladder capture           1      Whether a move at this point is a successful ladder capture
Ladder escape            1      Whether a move at this point is a successful ladder escape
Sensibleness             1      Whether a move is legal and does not fill its own eyes
Zeros                    1      A constant plane filled with 0

Player color             1      Whether current player is black
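For illustration, here is a rough numpy sketch of how a few of these binary planes could be assembled from a board position. The function name, the 0/1/2 stone encoding, and the exact one-hot layout of the "Turns since" planes are assumptions made for this example, not details taken from the paper.

```python
import numpy as np

BOARD = 19

def encode_planes(stones, to_play, turns_since):
    """Build a subset of AlphaGo-style binary input planes.

    stones:      19x19 array, 0 = empty, 1 = black, 2 = white
    to_play:     1 (black) or 2 (white)
    turns_since: 19x19 array, age in moves of the stone at each point
                 (0 where empty)
    """
    opponent = 3 - to_play
    planes = []

    # "Stone colour": player stone / opponent stone / empty (3 planes)
    planes.append((stones == to_play).astype(np.float32))
    planes.append((stones == opponent).astype(np.float32))
    planes.append((stones == 0).astype(np.float32))

    # "Ones": a constant plane filled with 1
    planes.append(np.ones((BOARD, BOARD), dtype=np.float32))

    # "Turns since": one-hot over 8 planes; plane k fires where a stone
    # was played k+1 moves ago, and the last plane catches ages >= 8
    for age in range(1, 8):
        planes.append((turns_since == age).astype(np.float32))
    planes.append((turns_since >= 8).astype(np.float32))

    return np.stack(planes)  # shape (12, 19, 19) for this subset
```

The ladder planes would be built the same way, except that each entry requires an actual ladder search from that intersection.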

Re: [Computer-go] Learning related stuff

2017-11-24 Thread Stephan K
2017-11-21 23:27 UTC+01:00, "Ingo Althöfer" <3-hirn-ver...@gmx.de>:
> My understanding is that the AlphaGo hardware is standing
> somewhere in London, idle and waiting for new action...
>
> Ingo.

The announcement at
https://deepmind.com/blog/applying-machine-learning-mammography/ seems
to disagree:

"Our partners in this project wanted researchers at both DeepMind and
Google involved in this research so that the project could take
advantage of the AI expertise in both teams, as well as Google’s
supercomputing infrastructure - widely regarded as one of the best in
the world, and the same global infrastructure that powered DeepMind’s
victory over the world champion at the ancient game of Go."

Re: [Computer-go] Learning related stuff

2017-11-23 Thread Stephan K
2017-11-22 15:17 UTC+01:00, "Ingo Althöfer" <3-hirn-ver...@gmx.de>:
> For instance, with respect to the 72-hour run of AlphaGo Zero
> one might start several runs for Go (with komi=5.5),
> the first one starting from fresh, the second one from the
> 72-hour process after 1 hour, the next one after 2 hours ...
>
> Ingo

Another option for your experiment might be to take the 72-hour-old
network, but keep only the first layers and randomly initialize the
last layers.
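A minimal sketch of that idea, with the network abstracted as a plain list of weight arrays; the function name and the normal(0, 0.01) initialization are assumptions made for the example:

```python
import numpy as np

def partial_reinit(weights, keep, rng=None):
    """Return a copy of `weights` (a list of layer weight arrays) where
    the first `keep` layers retain their trained values and the
    remaining layers are re-initialized with small random values."""
    rng = rng or np.random.default_rng(0)
    fresh = []
    for i, w in enumerate(weights):
        if i < keep:
            fresh.append(w.copy())                        # trained layers
        else:
            fresh.append(rng.normal(0.0, 0.01, w.shape))  # fresh layers
    return fresh
```

The intuition is that early layers learn low-level shape features that transfer, while the re-initialized late layers are forced to relearn the evaluation.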

Stephan

Re: [Computer-go] Is MCTS needed?

2017-11-16 Thread Stephan K
2017-11-16 17:37 UTC+01:00, Gian-Carlo Pascutto :
> Third, evaluating with a different rotation effectively forms an
> ensemble that improves the estimate.

Could you expand on that? I understand that rotating the board makes a
difference to a neural network, but how does that change anything for
a tree search? Or is it because the Monte Carlo tree search relies on
the policy network?
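To make the ensemble idea concrete, here is a small numpy sketch that averages a (hypothetical) value function over the eight symmetries of the board; the function names are made up for the example:

```python
import numpy as np

def dihedral_transforms(board):
    """The 8 symmetries of the square board: 4 rotations, each with an
    optional left-right flip."""
    boards = []
    for k in range(4):
        r = np.rot90(board, k)
        boards.append(r)
        boards.append(np.fliplr(r))
    return boards

def ensemble_value(value_net, board):
    """Average a value function over all 8 board symmetries.  Since go
    is invariant under these symmetries, the 8 evaluations estimate the
    same quantity, and averaging reduces the variance of the estimate."""
    return float(np.mean([value_net(b) for b in dihedral_transforms(board)]))
```

A perfectly symmetry-invariant network would gain nothing from this, but a trained network is never exactly invariant, so the eight outputs differ slightly and averaging them smooths out the noise.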

> As for a theoretical viewpoint: the value net is an estimation of the
> value of some fixed amount of Monte Carlo rollouts.

Would it be possible to train a value net using only the results of
already finished games, rather than Monte Carlo rollouts?

What about the value network from [Multi-Labelled Value Networks for
Computer Go, https://arxiv.org/abs/1705.10701 ], which can estimate
the score by assigning each intersection of the board a probability
that it will become black territory? (It also computes the usual
winrate estimate, alongside the territory estimate.)
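As a toy illustration of the territory-based estimate (a deliberate simplification of the paper's multi-labelled output, with the function name and komi handling assumed for the example):

```python
import numpy as np

def expected_score(black_ownership, komi=7.5):
    """Expected black margin from per-intersection ownership estimates.

    black_ownership: 19x19 array where entry (i, j) is the estimated
    probability that the point ends up as black territory; 1 minus that
    is treated here as the white probability."""
    black = black_ownership.sum()
    white = (1.0 - black_ownership).sum()
    return black - white - komi
```

On a board where every point is a coin flip, the expected margin reduces to minus the komi, which matches the intuition that komi decides an otherwise even game.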

>> What would you say is the current state-of-art game tree search for
>> chess?  That's a very unfamiliar world for me, to be honest all I
>> really know is MCTS...
>
> The same as it was 20 years ago: alpha-beta. Though one could certainly make
> the argument that an alpha-beta searcher using late move reductions
> (searching everything but the best moves less deeply) is searching a
> tree of a very similar shape to a UCT searcher with a small exploration
> constant.

My (extremely vague and possibly fallacious) understanding was that
Monte Carlo tree search is less effective for chess because chess
evaluations can change more suddenly. For instance, a player with an
apparently worse position might actually be a few moves away from a
checkmate (or from a big material gain), which Monte Carlo tree
search might miss because it hinges on one particular branch of the
tree.