Re: [Computer-go] Indexing and Searching Go Positions -- Literature Wanted

2019-09-17 Thread Erik van der Werf
Perhaps 'implementation' is not the right word, but IIRC the fundamental
problem was that Antti Huima used a 4-segment scheme. With Zobrist hashing
(xor update) you need at least 8 segments (or 16 with colour symmetry).
Otherwise you get trivial collisions (where positions with only a small
number of differences map to the same hash). Antoine de Maricourt claims to
have posted a proof of this in 2002 (
https://www.mail-archive.com/computer-go@computer-go.org/msg01748.html).
Nic Schraudolph designed a 6-segment scheme, but it uses a different
update (non-Zobrist) (
https://www.mail-archive.com/computer-go@computer-go.org/msg01788.html). I
have a draft, but he asked me not to distribute. Not sure if it ever got
published... Better contact him directly.
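For readers new to the topic, a minimal sketch of the straightforward
alternative that these schemes try to improve on: keep one Zobrist key per
board symmetry, update all of them with xor, and take the minimum as the
canonical key. This is not Huima's segmented scheme (the point of segments
is to avoid storing all eight keys); the board size and seed below are
arbitrary.

    import random

    SIZE = 9   # arbitrary board size for illustration
    SYMS = 8   # the 8 rotations/reflections of a square board

    def transform(x, y, s, n=SIZE):
        # apply one of the 8 dihedral symmetries to the point (x, y)
        if s & 1:
            x, y = y, x          # transpose
        if s & 2:
            x = n - 1 - x        # mirror left-right
        if s & 4:
            y = n - 1 - y        # mirror top-bottom
        return x, y

    random.seed(1)
    # one random 64-bit key per (point, colour); 0 = black, 1 = white
    ZOBRIST = [[[random.getrandbits(64) for _ in range(2)]
                for _ in range(SIZE)] for _ in range(SIZE)]

    class SymmetricHash:
        """One Zobrist key per board symmetry (8 here; 16 if a colour
        swap is added), all maintained incrementally by xor."""

        def __init__(self):
            self.keys = [0] * SYMS

        def toggle(self, x, y, colour):
            # xor a stone in; calling again with the same args removes it
            for s in range(SYMS):
                tx, ty = transform(x, y, s)
                self.keys[s] ^= ZOBRIST[tx][ty][colour]

        def canonical(self):
            # identical for all 8 orientations of the same position
            return min(self.keys)

The segmented schemes aim for the same canonical-key property without
storing and updating eight separate keys.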

Best,
Erik


On Tue, Sep 17, 2019 at 5:36 PM Stephen Martindale <
stephen.c.martind...@gmail.com> wrote:

> Thanks for the input, so far.
>
> Erik, is this the paper that you were referring to:
> http://fragrieu.free.fr/zobrist.pdf "A Group-Theoretic Zobrist Hash
> Function" Antti Huima.
>
> If so, do you have any sources on what's wrong with it? You say that his
> "implementation" was flawed -- does that mean that the theory in the paper
> is sound?
>
> I found the thread in the mailing list archives that you mentioned but
> most of the links are dead, by now, so it isn't totally helpful.
>
> I have yet to read the link that you posted a few minutes ago.
>
> Stephen Martindale
>
> +49 160 950 27545
> stephen.c.martind...@gmail.com
>
>
> On Tue, 17 Sep 2019 at 17:01, Erik van der Werf 
> wrote:
>
>> https://www.real-me.net/ddyer/go/signature-spec.html
>>
>> On Tue, Sep 17, 2019 at 4:16 PM Brian Sheppard via Computer-go <
>> computer-go@computer-go.org> wrote:
>>
>>> I remember a scheme (from Dave Dyer, IIRC) that indexed positions based
>>> on the points on which the 20th, 40th, 60th,... moves were made. IIRC it
>>> was nearly a unique key for pro positions.
>>>
>>> Best,
>>> Brian
>>>
>>> -Original Message-
>>> From: Erik van der Werf 
>>> To: computer-go 
>>> Sent: Tue, Sep 17, 2019 5:55 am
>>> Subject: Re: [Computer-go] Indexing and Searching Go Positions --
>>> Literature Wanted
>>>
>>> Hi Stephen,
>>>
>>> I'm not aware of recent published work. There is an ancient document by
>>> Antti Huima on hash schemes for easy symmetry detection/lookup.
>>> Unfortunately his implementation was broken, but other schemes have been
>>> proposed that solve the issue (I found one myself, but I think many others
>>> found the same or similar solutions). You may want to search the archives
>>> for "Zobrist hashing with easy transformation comparison". If you like math
>>> Nic Schrauolph has an interesting solution ;-)
>>>
>>> In Steenvreter I implemented a 16-segment scheme with a xor update (for
>>> rotation, mirroring and color symmetry). In GridMaster I have an
>>> experimental search feature which is somewhat similar except that I don't
>>> use hash keys (every possible point on the board simply gets its own bits),
>>> and I use 'or' instead of 'xor' (so stones that are added are never
>>> removed, which makes parsing game records extremely fast). This makes it
>>> very easy to filter positions/games that cannot match, and for the
>>> remainder (if needed, dealing with captures) it simply replays (which is
>>> fast enough because the number of remaining games is usually very small).
>>> I'm not sure what Kombilo does, but I wouldn't be surprised if it's
>>> similar. The only thing I haven't implemented yet is lookup of translated
>>> (shifted) local patterns. Still pondering what's most efficient for that,
>>> but I could simply run multiple searches with a mask.
>>>
>>> Best,
>>> Erik
>>>
>>>
>>> On Tue, Sep 17, 2019 at 10:17 AM Stephen Martindale <
>>> stephen.c.martind...@gmail.com> wrote:
>>> >
>>> > Dear Go programmers,
>>> >
>>> > I'm interested in experimenting with some new ideas for indexing and
>>> searching Goban positions and patterns and I want to stand on the shoulders
>>> of giants. Which papers, articles, blog posts or open-source code should I
>>> read to get concrete knowledge of the approaches used in the past?
>>> >
>>> > I know that Kombilo is (or used to be) the state of the art in this
>>> field. The source is available but, beyond reading the Libkombilo sources,
>>> are there a

Re: [Computer-go] Indexing and Searching Go Positions -- Literature Wanted

2019-09-17 Thread Erik van der Werf
https://www.real-me.net/ddyer/go/signature-spec.html
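The signature described at that link is, roughly, the concatenated
coordinates of a few fixed-numbered moves, used as a database key. A toy
sketch (the checkpoint move numbers and encoding here are illustrative, not
necessarily those of the actual spec):

    def dyer_signature(moves, checkpoints=(20, 40, 60, 31, 51, 71)):
        # moves: list of (col, row) tuples in playing order, or None for a pass
        letters = "abcdefghijklmnopqrs"  # SGF-style coordinates for 19x19
        parts = []
        for n in checkpoints:
            if n <= len(moves) and moves[n - 1] is not None:
                col, row = moves[n - 1]
                parts.append(letters[col] + letters[row])
            else:
                parts.append("??")  # short game, or a pass at that move number
        return "".join(parts)

Two games collide only if they agree on every checkpoint move, which in
practice is rare enough that the string works as a near-unique key for
professional game collections.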

On Tue, Sep 17, 2019 at 4:16 PM Brian Sheppard via Computer-go <
computer-go@computer-go.org> wrote:

> I remember a scheme (from Dave Dyer, IIRC) that indexed positions based on
> the points on which the 20th, 40th, 60th,... moves were made. IIRC it was
> nearly a unique key for pro positions.
>
> Best,
> Brian
>
> -Original Message-
> From: Erik van der Werf 
> To: computer-go 
> Sent: Tue, Sep 17, 2019 5:55 am
> Subject: Re: [Computer-go] Indexing and Searching Go Positions --
> Literature Wanted
>
> Hi Stephen,
>
> I'm not aware of recent published work. There is an ancient document by
> Antti Huima on hash schemes for easy symmetry detection/lookup.
> Unfortunately his implementation was broken, but other schemes have been
> proposed that solve the issue (I found one myself, but I think many others
> found the same or similar solutions). You may want to search the archives
> for "Zobrist hashing with easy transformation comparison". If you like math
> Nic Schrauolph has an interesting solution ;-)
>
> In Steenvreter I implemented a 16-segment scheme with a xor update (for
> rotation, mirroring and color symmetry). In GridMaster I have an
> experimental search feature which is somewhat similar except that I don't
> use hash keys (every possible point on the board simply gets its own bits),
> and I use 'or' instead of 'xor' (so stones that are added are never
> removed, which makes parsing game records extremely fast). This makes it
> very easy to filter positions/games that cannot match, and for the
> remainder (if needed, dealing with captures) it simply replays (which is
> fast enough because the number of remaining games is usually very small).
> I'm not sure what Kombilo does, but I wouldn't be surprised if it's
> similar. The only thing I haven't implemented yet is lookup of translated
> (shifted) local patterns. Still pondering what's most efficient for that,
> but I could simply run multiple searches with a mask.
>
> Best,
> Erik
>
>
> On Tue, Sep 17, 2019 at 10:17 AM Stephen Martindale <
> stephen.c.martind...@gmail.com> wrote:
> >
> > Dear Go programmers,
> >
> > I'm interested in experimenting with some new ideas for indexing and
> searching Goban positions and patterns and I want to stand on the shoulders
> of giants. Which papers, articles, blog posts or open-source code should I
> read to get concrete knowledge of the approaches used in the past?
> >
> > I know that Kombilo is (or used to be) the state of the art in this
> field. The source is available but, beyond reading the Libkombilo sources,
> are there any other, more human friendly resources out there?
> >
> > My new ideas are currently insubstantial and vague but I have done some
> work, in the past, with natural language embeddings and large-database
> image indexing and searching and concepts from those two domains keep
> bouncing around in my mind -- I can't help but feel that there must be
> something there that can be the "next big thing" in Go position indexing.
> >
> > Any leads would be appreciated.
> >
> > Stephen Martindale
> >
> > +49 160 950 27545
> > stephen.c.martind...@gmail.com
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


Re: [Computer-go] ML web site was deleted?

2019-09-17 Thread Erik van der Werf
Apparently it's not so easy to keep a mailing list running smoothly... For
now at least we can still see archives at:
https://www.mail-archive.com/computer-go@computer-go.org/


On Thu, Aug 29, 2019 at 4:14 PM Adrian Petrescu  wrote:

> Indeed, I think a lot of aspects of the mailing list software have
> been broken for a while - I registered with a new e-mail address (not
> this one) about a month ago, successfully received the confirmation
> email, and then never got another delivery again.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


Re: [Computer-go] Indexing and Searching Go Positions -- Literature Wanted

2019-09-17 Thread Erik van der Werf
Hi Stephen,

I'm not aware of recent published work. There is an ancient document by
Antti Huima on hash schemes for easy symmetry detection/lookup.
Unfortunately his implementation was broken, but other schemes have been
proposed that solve the issue (I found one myself, but I think many others
found the same or similar solutions). You may want to search the archives
for "Zobrist hashing with easy transformation comparison". If you like math
Nic Schrauolph has an interesting solution ;-)

In Steenvreter I implemented a 16-segment scheme with a xor update (for
rotation, mirroring and color symmetry). In GridMaster I have an
experimental search feature which is somewhat similar except that I don't
use hash keys (every possible point on the board simply gets its own bits),
and I use 'or' instead of 'xor' (so stones that are added are never
removed, which makes parsing game records extremely fast). This makes it
very easy to filter positions/games that cannot match, and for the
remainder (if needed, dealing with captures) it simply replays (which is
fast enough because the number of remaining games is usually very small).
I'm not sure what Kombilo does, but I wouldn't be surprised if it's
similar. The only thing I haven't implemented yet is lookup of translated
(shifted) local patterns. Still pondering what's most efficient for that,
but I could simply run multiple searches with a mask.
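As a rough sketch of that kind of 'or'-based filter, using Python integers
as bitboards (names and details are illustrative, not GridMaster's actual
code):

    SIZE = 19

    def bit(x, y, colour):
        # two bits per point: one for a black stone, one for a white stone
        return 1 << (2 * (y * SIZE + x) + colour)

    def game_fingerprint(moves):
        # or together every stone ever played; captures are ignored,
        # so this is a single fast pass over the move list
        fp = 0
        for colour, x, y in moves:
            fp |= bit(x, y, colour)
        return fp

    def may_match(game_fp, pattern_fp):
        # necessary (not sufficient) condition: every pattern stone must
        # have appeared at some point during the game; games failing this
        # test are skipped, the few that pass are replayed to confirm
        return game_fp & pattern_fp == pattern_fp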

Best,
Erik


On Tue, Sep 17, 2019 at 10:17 AM Stephen Martindale <
stephen.c.martind...@gmail.com> wrote:
>
> Dear Go programmers,
>
> I'm interested in experimenting with some new ideas for indexing and
searching Goban positions and patterns and I want to stand on the shoulders
of giants. Which papers, articles, blog posts or open-source code should I
read to get concrete knowledge of the approaches used in the past?
>
> I know that Kombilo is (or used to be) the state of the art in this
field. The source is available but, beyond reading the Libkombilo sources,
are there any other, more human friendly resources out there?
>
> My new ideas are currently insubstantial and vague but I have done some
work, in the past, with natural language embeddings and large-database
image indexing and searching and concepts from those two domains keep
bouncing around in my mind -- I can't help but feel that there must be
something there that can be the "next big thing" in Go position indexing.
>
> Any leads would be appreciated.
>
> Stephen Martindale
>
> +49 160 950 27545
> stephen.c.martind...@gmail.com
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


[Computer-go] CompGo list / gmail broken again? (was Re: A new ELF OpenGo bot and analysis of historical Go games)

2019-02-17 Thread Erik van der Werf
It looks like gmail is broken again for this list. I never got Remi's
original post (not even in my spam folder). I can only see it in the
archive.

Erik


On Sat, Feb 16, 2019 at 5:50 PM J. van der Steen 
wrote:

>
> And most important:
>
>* Does ELF know the meaning of life?
>
> On 16/02/2019 17:29, "Ingo Althöfer" wrote:
> > Hi Remi,
> > thanks you for the link.
> >
> > A few questions (to all who know something):
> >
> > * How strong is the new ELF bot in comparison with Leela-Zero?
> >
> > * How were komi values taken into account when analysing old go games
> with help of ELF?
> >
> > * How often does ELF propose moves played by AlphaGo (for instance in
> the games
> > with Fan Hui, Lee Sedol, and in the sixty games from December 2017)?
> >
> > * Does ELF understand that the strength of AlphaGo increased from
> October 2015 to May 2017?
> >
> > Cheers, Ingo.
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> >
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] GoGui 1.5.0

2019-01-13 Thread Erik van der Werf
Sure, I posted a reply.

Best,
Erik


On Sun, Jan 13, 2019 at 11:20 AM Rémi Coulom  wrote:

> Hi Erik,
>
> We had this discussion on github about the bug you reported:
> https://github.com/Remi-Coulom/gogui/issues/9
> Can you clarify the bug you observed?
>
> Thanks
>
> ----- Original Message -----
> From: "Erik van der Werf" 
> To: "computer-go" 
> Sent: Tuesday, 1 January 2019 18:24:40
> Subject: Re: [Computer-go] GoGui 1.5.0
>
>
>
>
> Thanks Remi! Nice to see that GoGui is still alive :-)
>
>
> FYI the included version of gogui-twogtp has a bug (which has been around
> for many years) where the '-alternate' option causes incorrect results in
> the game records.
>
>
> Happy New Year to all!
>
> Erik
>
>
>
>
> On Sat, Nov 17, 2018 at 6:50 PM Hiroshi Yamashita < y...@bd.mbn.or.jp >
> wrote:
>
>
> Hi Remi,
>
> Thank you for gogui update! after 10 years?
> Gomoku and renju support sounds good.
>
> Thanks,
> Hiroshi Yamashita
>
> On 2018/11/17 5:57, Rémi Coulom wrote:
> > Hi,
> >
> > In case anybody is interested, we have released a new version of GoGui:
> > https://github.com/Remi-Coulom/gogui/releases
> >
> > The main purpose of this version is to add support of other games via an
> extension of the GTP protocol:
> > https://www.kayufu.com/gogui/rules.html
> >
> > It also has minor improvements that may be useful for Go programmers:
> > - high-resolution icons for high-dpi screens
> > - wait a little for program output before displaying a popup dialog
> >
> > It also incorporates improvements by lemonsqueeze. In particular
> > - handicap support in gogui-twogtp
> > - correct scoring of handicap games
> >
> > The release page has a Windows installer. For other platforms, you will
> have to compile from source. The root of the repository has an
> "ubuntu_setup.sh" that should compile everything provided you have a jdk
> and ant installed. The contents of that file should give you indications of
> what to do on other platforms:
> > https://github.com/Remi-Coulom/gogui/blob/master/ubuntu_setup.sh
> >
> > I use this version for my gomoku, renju, and Othello programs. We
> decided to distribute it, as it might be useful to others.
> >
> > Rémi
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> >
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] GoGui 1.5.0

2019-01-01 Thread Erik van der Werf
Thanks Remi! Nice to see that GoGui is still alive :-)

FYI the included version of gogui-twogtp has a bug (which has been around
for many years) where the '-alternate' option causes incorrect results in
the game records.

Happy New Year to all!
Erik


On Sat, Nov 17, 2018 at 6:50 PM Hiroshi Yamashita  wrote:

> Hi Remi,
>
> Thank you for gogui update! after 10 years?
> Gomoku and renju support sounds good.
>
> Thanks,
> Hiroshi Yamashita
>
> On 2018/11/17 5:57, Rémi Coulom wrote:
> > Hi,
> >
> > In case anybody is interested, we have released a new version of GoGui:
> > https://github.com/Remi-Coulom/gogui/releases
> >
> > The main purpose of this version is to add support of other games via an
> extension of the GTP protocol:
> > https://www.kayufu.com/gogui/rules.html
> >
> > It also has minor improvements that may be useful for Go programmers:
> >   - high-resolution icons for high-dpi screens
> >   - wait a little for program output before displaying a popup dialog
> >
> > It also incorporates improvements by lemonsqueeze. In particular
> >   - handicap support in gogui-twogtp
> >   - correct scoring of handicap games
> >
> > The release page has a Windows installer. For other platforms, you will
> have to compile from source. The root of the repository has an
> "ubuntu_setup.sh" that should compile everything provided you have a jdk
> and ant installed. The contents of that file should give you indications of
> what to do on other platforms:
> > https://github.com/Remi-Coulom/gogui/blob/master/ubuntu_setup.sh
> >
> > I use this version for my gomoku, renju, and Othello programs. We
> decided to distribute it, as it might be useful to others.
> >
> > Rémi
> > ___
> > Computer-go mailing list
> > Computer-go@computer-go.org
> > http://computer-go.org/mailman/listinfo/computer-go
> >
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] New paper by DeepMind

2018-12-10 Thread Erik van der Werf
n the heart 
>>>>>> of
>>>>>> the issue, which is not well understood by the general public.
>>>>>>
>>>>>> In other words, thousands of patent applications are filed in the
>>>>>> world without any hope of the patent eventually being granted, to 
>>>>>> establish
>>>>>> "prior art" thereby protecting what's described in it from being patented
>>>>>> by somebody else.
>>>>>>
>>>>>> Or, am I responding to a troll?
>>>>>>
>>>>>> Tokumoto
>>>>>>
>>>>>>
>>>>>> On Fri, Dec 7, 2018 at 10:01 AM uurtamo  wrote:
>>>>>>
>>>>>>> You're insane.
>>>>>>>
>>>>>>> On Thu, Dec 6, 2018, 4:13 PM Jim O'Flaherty <
>>>>>>> jim.oflaherty...@gmail.com wrote:
>>>>>>>
>>>>>>>> Remember, patents are a STRATEGIC mechanism as well as a legal
>>>>>>>> mechanism. As soon as a patent is publically filed (for example, as
>>>>>>>> utility, and following provisional), the text and claims in the patent
>>>>>>>> immediately become prior art globally as of the original filing date
>>>>>>>> REGARDLESS of whether the patent is eventually approved or rejected. 
>>>>>>>> IOW, a
>>>>>>>> patent filing is a mechanism to ensure no one else can make a similar 
>>>>>>>> claim
>>>>>>>> without risking this filing being used as a possible prior art 
>>>>>>>> refutation.
>>>>>>>>
>>>>>>>> I know this only because it is a strategy option my company is
>>>>>>>> using in an entirely different unrelated domain. The patent filing is
>>>>>>>> defensive such that someone else cannot make a claim and take
>>>>>>>> our inventions away from us just because they coincidentally hit near
>>>>>>>> our
>>>>>>>> inventions.
>>>>>>>>
>>>>>>>> So considering Google's past and their participation in the OIN, it
>>>>>>>> is very likely Google's patent is ensuring the ground all around this 
>>>>>>>> area
>>>>>>>> is sufficiently salted to stop anyone from attempting to exploit nearby
>>>>>>>> patent claims.
>>>>>>>>
>>>>>>>>
>>>>>>>> Respectfully,
>>>>>>>>
>>>>>>>> Jim O'Flaherty
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Dec 6, 2018 at 5:44 PM Erik van der Werf <
>>>>>>>> erikvanderw...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> On Thu, Dec 6, 2018 at 11:28 PM Rémi Coulom 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Also, the AlphaZero algorithm is patented:
>>>>>>>>>>
>>>>>>>>>> https://patentscope2.wipo.int/search/en/detail.jsf?docId=WO2018215665
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> So far it just looks like an application (and I don't think it
>>>>>>>>> will be difficult to oppose, if you care about this)
>>>>>>>>>
>>>>>>>>> Erik
>>>>>>>>>
>>>>>>>>> ___
>>>>>>>>> Computer-go mailing list
>>>>>>>>> Computer-go@computer-go.org
>>>>>>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>>>>>>
>>>>>>>> ___
>>>>>>>> Computer-go mailing list
>>>>>>>> Computer-go@computer-go.org
>>>>>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>>>>>
>>>>>>> ___
>>>>>>> Computer-go mailing list
>>>>>>> Computer-go@computer-go.org
>>>>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>>>>
>>>>>> ___
>>>>>> Computer-go mailing list
>>>>>> Computer-go@computer-go.org
>>>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>>>
>>>>> ___
>>>>> Computer-go mailing list
>>>>> Computer-go@computer-go.org
>>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>>
>>>> ___
>>>> Computer-go mailing list
>>>> Computer-go@computer-go.org
>>>> http://computer-go.org/mailman/listinfo/computer-go
>>>
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] New paper by DeepMind

2018-12-06 Thread Erik van der Werf
On Thu, Dec 6, 2018 at 11:28 PM Rémi Coulom  wrote:

> Also, the AlphaZero algorithm is patented:
> https://patentscope2.wipo.int/search/en/detail.jsf?docId=WO2018215665
>

So far it just looks like an application (and I don't think it will be
difficult to oppose, if you care about this)

Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Using 9x9 policy on 13x13 and 19x19

2018-02-23 Thread Erik van der Werf
In the old days I trained separate move predictors on 9x9 games and on
19x19 games. In my case, the ones trained on 19x19 games beat the ones
trained on 9x9 games also on the 9x9 board. Perhaps it was just because of
was having better data from 19x19, but I thought it was interesting to see
that the 19x19 predictor generalized well to smaller boards.

I suppose the result you see can easily be explained; the big board policy
learns about large scale and small scale fights, while the small board
policy doesn't know anything about large scale fights.

BR,
Erik


On Fri, Feb 23, 2018 at 5:11 PM, Hiroshi Yamashita  wrote:

> Hi,
>
> Using 19x19 policy on 9x9 and 13x13 is effective.
> But opposite is?
> I made 9x9 policy from Aya's 10k playout/move selfplay.
>
> Using 9x9 policy on 13x13 and 19x19
> 19x19  DCNNAyaF128from9x9   1799
> 13x13  DCNNAyaF128from9x9   1900
> 9x9    DCNN_AyaF128a558x1   2290
>
> Using 19x19 policy on 9x9 and 13x13
> 19x19  DCNN_AyaF128a523x1   2345
> 13x13  DCNNAya795F128a523   2354
> 9x9    DCNN_AyaF128a523x1   2179
>
> 19x19 policy is similar strength on 13x13 and 166 Elo weaker on 9x9.
> 9x9 policy is 390 Elo weaker on 13x13, and 491 Elo weaker on 19x19.
> It seems smaller board is more useless than bigger board...
>
> Note:
> All programs select maximum policy without search.
> All programs use opening book.
> 19x19 policy is Filter128, Layer 12, without Batch Normalization.
> 9x9 policy is Filter128, Layer 11, without Batch Normalization.
> 19x19 policy is made from pro 78000 games, GoGoD.
> 9x9 policy is made from 10k/move. It is CGOS 2892(Aya797c_p1v1_10k).
> Ratings are BayesElo.
>
> Thanks,
> Hiroshi Yamashita
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Zen giving handicap against pro players

2018-01-24 Thread Erik van der Werf
Normal handicap games with 0.5 komi favor Black by only half a stone/grade
compensation (so if there is a full grade difference in strength White
still has an advantage). Two handicap stones with normal komi just correct
for one stone/grade strength difference (just like one handicap stone with
reverse komi). So, if this is true, these results are not as impressive as
what most people would consider for 2-stone handicap games...

On Wed, Jan 24, 2018 at 2:06 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:

> Dear Hideki,
>
> > >So, were the handicap games played at "normal" komi values
> > >(6,5 or 7,5 points) ?
> >
> > No, but "standard" 0.5 pts.  Zen supports very wide range of
> > komi; about -20 to +30 (mainly for the users of
> > Tencho-no-Igo).
>
> that is very interesting information.
> In particular, because from the sgf of FineArt's handicap
> games I read that they had komi=7.5 (opr 6.5) in "all"
> their handicap games.
>
> My impression was they needed this exact komi
> value to "please" their neural net.
>
> Ingo.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AI ryusei 2017 first day result

2018-01-02 Thread Erik van der Werf
I didn't see the games, but I suppose they simply made the rookie mistake
of playing (too many) stones inside their own territory while the opponent was
passing...


Op 2 jan. 2018 22:09 schreef "Adrian Petrescu" :

I'm not sure I understand this rule. Why should a player forfeit because
they did not pass after their opponent passed? What if they disagreed that
the game was over?

Or do you mean that *three* passes are required to end the game, and the
faulty engine only passes once? But then wouldn't they lose every game in
which they were the first to pass?

Thanks for the updates! :)
-Adrian


On 12/09/2017 07:20 PM, Hiroshi Yamashita wrote:

> Hi Remi,
>
> FineArt lost against Maru by Japanese rule. FineArt did not pass after
> Maru's pass.
> Tianrang also lost one game for this.
>
> Thanks,
> Hiroshi Yamashita
>
> - Original Message - From: "Rémi Coulom" 
> To: 
> Sent: Sunday, December 10, 2017 4:35 AM
> Subject: Re: [Computer-go] AI ryusei 2017 first day result
>
>
> Thanks Hiroshi.
>>
>> Did anything special happen in the game between Maru and FineArt?
>>
>> I wish you good games for the second day.
>>
>> Rémi
>>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Learning related stuff

2017-11-21 Thread Erik van der Werf
No need for AlphaGo hardware to find out; any toy problem will suffice to
explore different initialization schemes... The main benefit of starting
random is to break symmetries (otherwise individual neurons cannot
specialize), but there are other approaches that can work even better.
Further you typically want to start with small weights so that the initial
mapping is relatively smooth.
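As a toy illustration of those two points (the layer sizes and scale below
are arbitrary):

    import random

    def init_layer(n_in, n_out, scale=0.1):
        # small random weights: identical weights (e.g. all zeros) would give
        # every hidden unit the same gradient, so none could specialize;
        # large weights would make the initial mapping far from smooth
        return [[random.gauss(0.0, scale) for _ in range(n_in)]
                for _ in range(n_out)]

    weights = init_layer(n_in=361, n_out=128)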

E.

On Tue, Nov 21, 2017 at 2:24 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:

> AlphaGo Zero started with random values in
> its neural net - and reached top level
> within 72 hours.
>
> Would it typically help or disrupt to start
> instead with values that are non-random?
> What I have in mind concretely:
>
> Look at 19x19 Go with komi=5.5
> In run A you start with random values in the net.
> In another run B you start with the values that had
> emerged in the 7.5-NN after 72 hours.
>
> Would typically A or B learn better?
> Would there be a danger that B would not be able
> to leave the 7.5-"solution"?
>
> It is a pity that I/we do not have the hardware of
> AlphaGo Zero at hand for such experiments.
>
> Ingo.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-26 Thread Erik van der Werf
Good point, Roel. Perhaps in the final layers one could make it predict a
model of the expected score distribution (before combining with the komi
and other rules-specific adjustments for handicap stones, pass stones,
last-play parity, etc.). Should be easy enough to back-propagate win/loss
information (and perhaps even more) through such a model.
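A rough sketch of what such a model could look like: the net outputs a
distribution over raw board scores (Black minus White, before komi), and
the win probability for any komi is a partial sum over it. The
discretisation and names below are mine, not from any particular engine.

    def win_probability(score_probs, komi, min_score=-361):
        # score_probs[i] = P(raw score == min_score + i), the raw score being
        # Black's area minus White's area before komi; Black wins when
        # raw_score - komi > 0.  The mapping is linear in score_probs, so a
        # win/loss training signal can be back-propagated through it.
        total = 0.0
        for i, p in enumerate(score_probs):
            if (min_score + i) - komi > 0:
                total += p
        return total

    # toy example on a 9x9 board: mass on raw scores 6, 7 and 8
    probs = {6: 0.2, 7: 0.5, 8: 0.3}
    dense = [probs.get(s, 0.0) for s in range(-81, 82)]
    print(win_probability(dense, komi=7.5, min_score=-81))   # 0.3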


On Thu, Oct 26, 2017 at 3:55 PM, Roel van Engelen 
wrote:

> @Gian-Carlo Pascutto
>
> Since training uses a ridiculous amount of computing power i wonder if it
> would
> be useful to make certain changes for future research, like training the
> value head
> with multiple komi values 
>
> On 26 October 2017 at 03:02, Brian Sheppard via Computer-go <
> computer-go@computer-go.org> wrote:
>
>> I think it uses the champion network. That is, the training periodically
>> generates a candidate, and there is a playoff against the current champion.
>> If the candidate wins by more than 55% then a new champion is declared.
>>
>>
>>
>> Keeping a champion is an important mechanism, I believe. That creates the
>> competitive coevolution dynamic, where the network is evolving to learn how
>> to beat the best, and not just most recent. Without that dynamic, the
>> training process can go up and down.
>>
>>
>>
>> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
>> Behalf Of uurtamo .
>> Sent: Wednesday, October 25, 2017 6:07 PM
>> To: computer-go 
>> Subject: Re: [Computer-go] Source code (Was: Reducing network size?
>> (Was: AlphaGo Zero))
>>
>>
>>
>> Does the self-play step use the most recent network for each move?
>>
>>
>>
>> On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto"  wrote:
>>
>> On 25-10-17 17:57, Xavier Combelle wrote:
>> > Is there some way to distribute learning of a neural network ?
>>
>> Learning as in training the DCNN, not really unless there are high
>> bandwidth links between the machines (AFAIK - unless the state of the
>> art changed?).
>>
>> Learning as in generating self-play games: yes. Especially if you update
>> the network only every 25 000 games.
>>
>> My understanding is that this task is much more bottlenecked on game
>> generation than on DCNN training, until you get quite a bit of machines
>> that generate games.
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Alphago and solving Go

2017-08-09 Thread Erik van der Werf
361! seems like an attempt to estimate an upper bound on the number of
games where nothing is captured.
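For the curious, the magnitudes are easy to check; the 2.08 * 10^170 figure
quoted below is Tromp's exact count of legal positions.

    import math

    log10_fact = math.lgamma(362) / math.log(10)   # log10(361!)
    log10_pow = 361 * math.log10(3)                # log10(3^361)

    print(round(log10_fact))   # ~768, so 361! is about 10^768
    print(round(log10_pow))    # ~172, so 3^361 is about 10^172
    # Tromp's count of legal positions, ~2.08 * 10^170, is roughly 1.2%
    # of the 3^361 colourings.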

On Wed, Aug 9, 2017 at 2:34 PM, Gunnar Farnebäck 
wrote:

> Except 361! (~10^768) couldn't plausibly be an estimate of the number of
> legal positions, since ignoring the rules in that case gives the trivial
> upper bound of 3^361 (~10^172).
>
> More likely it is a very, very bad attempt at estimating the number of
> games. Even with the extremely unsharp bound given in
> https://tromp.github.io/go/gostate.pdf
>
> 10^(10^48) < number of games < 10^(10^171)
>
> the 361! estimate comes nowhere close to that interval.
>
> /Gunnar
>
> On 08/07/2017 04:14 AM, David Doshay wrote:
>
>> Yes, that zeroth order number (the one you get to without any thinking
>> about how the game’s rules affect the calculation) is outdated since early
>> last year when this result gave us the exact number of legal board
>> positions:
>>
>> https://tromp.github.io/go/legal.html
>>
>> So, a complete game tree for 19x19 Go would contain about 2.08 * 10^170
>> unique nodes (see the paper for all 171 digits) but some number of
>> duplicates of those nodes for the different paths to each legal position.
>>
>> In an unfortunate bit of timing, it seems that many people missed this
>> result because of the Alpha Go news.
>>
>> Cheers,
>> David G Doshay
>>
>> ddos...@mac.com 
>>
>>
>>
>>
>>
>> On 6, Aug 2017, at 3:17 PM, Gunnar Farnebäck >> > wrote:
>>>
>>> On 08/06/2017 04:39 PM, Vincent Richard wrote:
>>>
 No, simply because there are way to many possibilities in the game,
 roughly (19x19)!

>>>
>>> Can we lay this particular number to rest? Not that "possibilities in
>>> the game" is very well defined (what does it even mean?) but the number of
>>> permutations of 19x19 points has no meaningful connection to the game of go
>>> at all, not even "roughly".
>>>
>>> /Gunnar
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org 
>>> http://computer-go.org/mailman/listinfo/computer-go
>>>
>>
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Alphago and solving Go

2017-08-07 Thread Erik van der Werf
On Mon, Aug 7, 2017 at 12:52 PM, Darren Cook  wrote:

> > https://en.wikipedia.org/wiki/Brute-force_search explains it as
> > "systematically enumerating all possible candidates for the
> > solution".
> >
> > There is nothing systematic about the pseudo random variation
> > selection in MCTS;
>
> More semantics, but as it is pseudo-random, isn't that systematic? It
> only looks like it is jumping around because we are looking at it from
> the wrong angle.
>
> (Systematic pseudo-random generation gets very hard over a cluster, of
> course...)
>
>
The selection should be quite deterministic. Randomness is in the playouts,
so it only comes in indirectly. With a value net there will be even less
variance.



> > it may not even have sufficient entropy to guarantee full
> > enumeration...
>
> That is the most interesting idea in this thread. Is there any way to
> prove it one way or the other? I'm looking at you here, John - sounds
> right up your street :-)
>

Full enumeration may occur with infinite time & memory, and a growing
exploration term for unexplored nodes. Randomness has little to do with it.

Anyway, IMO the whole argument is silly and even a bit disrespectful. I
don't consider AlphaGo a brute force solution. However, if some
hard-pruning would turn AlphaGo from brute force into non-brute force then
just implement some provably correct hard pruning rules and you're done
(e.g., don't play in unconditional territory, stop the playouts when the
position is statically solved, etc.). I have things like that in
Steenvreter, but it doesn't feel like that changes the nature of the beast.

E.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mailing list working?

2017-06-07 Thread Erik van der Werf
Yup, looks like something broke. Here everything that was sent after the
23rd only arrived today (June 7)... Ah well, it's game-over anyway :-)

On Mon, May 29, 2017 at 7:51 AM, J. van der Steen <
j.van.der.st...@gobase.org> wrote:

>
> Hi all,
>
> Is there something wrong with the mailing list? I didn't see any messages
> since the 23rd of May.
>
> best regards,
> Jan van der Steen
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] mini-max with Policy and Value network

2017-05-23 Thread Erik van der Werf
On Mon, May 22, 2017 at 4:54 PM, Gian-Carlo Pascutto <g...@sjeng.org> wrote:

> On 22-05-17 15:46, Erik van der Werf wrote:
> > Anyway, LMR seems like a good idea, but last time I tried it (in Migos)
> > it did not help. In Magog I had some good results with fractional depth
> > reductions (like in Realization Probability Search), but it's a long
> > time ago and the engines were much weaker then...
>
> What was generating your probabilities, though? A strong policy DCNN or
> something weaker?
>

Nothing deep. Back then for the move predictor I don't think I ever tried
more than two hidden layers (and it was only used near the root of the
search tree). RPS was even simpler (so it could be used with fast deep
searches). In hindsight I traded way too much accuracy for speed, but
coming from a standard AlphaBeta framework it still was a big improvement.


ERPS (LMR with fractional reductions based on move probabilities) with
> alpha-beta seems very similar to having MCTS with the policy prior being
> a factor in the UCT formula.


In the sense of the shape of the tree, possibly yes, but I have the
impression that AlphaBeta and similar search algorithms are more brittle
when working with high-variance (noisy) evaluations. In chess-like games it
may be less of an issue due to the implicit mobility feature that it adds,
but for Go mobility seems to be mostly irrelevant. The MCTS backup
(averaging evaluations) seems to reduce the variance much better than a
minimax backup.

Using a value net instead of raw Monte Carlo evaluation also reduces
variance (a lot), so trying out AlphaBeta with DCNN evaluations definitely
seems like an interesting experiment.



> This is what AlphaGo did according to their
> 2015 paper, so it can't be terrible, but it does mean that you are 100%
> blind to something the policy network doesn't see, which seems
> worrisome. I think I asked Aja once about what they do with first play
> urgency given that the paper doesn't address it - he politely ignored
> the question :-)
>

I don't think anyone has had good results with high FPU; it seems in Go we
simply cannot afford a very wide search (except perhaps near the root or on
the PV). I'm not sure if they still use an UCB term (which would ensure
some exploration of unlikely candidates). I think at some point David and
others argued against it, but in my own experiments it was always helpful,
and I think Aja may have found the same in Erica. Nevertheless, even
without it I think an argument can be made that the minimax result can
eventually be found.
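For reference, a minimal sketch of the kind of prior-weighted selection
rule under discussion, in the PUCT style of the 2015 AlphaGo paper; the
exploration constant and the first-play-urgency value for unvisited
children are exactly the knobs in question (values here are illustrative):

    import math

    def select_child(children, c_puct=1.5, fpu_value=0.0):
        # children: list of dicts with 'prior' P, 'visits' N, 'value_sum' W
        total = sum(ch["visits"] for ch in children)
        sqrt_total = math.sqrt(max(1, total))

        def score(ch):
            if ch["visits"] > 0:
                q = ch["value_sum"] / ch["visits"]    # mean value (MCTS backup)
            else:
                q = fpu_value                         # first-play urgency
            u = c_puct * ch["prior"] * sqrt_total / (1 + ch["visits"])
            return q + u

        return max(children, key=score)

With a low fpu_value and no extra exploration term, a move given a
near-zero prior is effectively never searched in finite time, which is the
blindness being discussed.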

I have an idea on what's causing the problems in Leela (and how you could
fix it), but I'll hold off on further commenting until I have some more time
to look at the examples.

Best,
Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] What was the final score after the counting of AlphaGo-vs-Ke Jie Game #1?

2017-05-23 Thread Erik van der Werf
The Chinese counting looked so confusing :-)

On Tue, May 23, 2017 at 9:02 AM, Jim O'Flaherty 
wrote:

> I have now heard that AlphaGo won by 0.5 points.
>
>
> On Tue, May 23, 2017 at 2:00 AM, Jim O'Flaherty <
> jim.oflaherty...@gmail.com> wrote:
>
>> The announcer didn't have her mic on, so I couldn't hear the final score
>> announced...
>>
>> So, what was the final score after the counting of AlphaGo-vs-Ke Jie Game
>> #1?
>>
>>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] mini-max with Policy and Value network

2017-05-22 Thread Erik van der Werf
On Mon, May 22, 2017 at 3:56 PM, Gian-Carlo Pascutto <g...@sjeng.org> wrote:

> On 22-05-17 11:27, Erik van der Werf wrote:
> > On Mon, May 22, 2017 at 10:08 AM, Gian-Carlo Pascutto <g...@sjeng.org
> > <mailto:g...@sjeng.org>> wrote:
> >
> > ... This heavy pruning
> > by the policy network OTOH seems to be an issue for me. My program
> has
> > big tactical holes.
> >
> >
> > Do you do any hard pruning? My engines (Steenvreter,Magog) always had a
> > move predictor (a.k.a. policy net), but I never saw the need to do hard
> > pruning. Steenvreter uses the predictions to set priors, and it is very
> > selective, but with infinite simulations eventually all potentially
> > relevant moves will get sampled.
>
> With infinite simulations everything is easy :-)
>
> In practice moves with, say, a prior below 0.1% aren't going to get
> searched, and I still regularly see positions where they're the winning
> move, especially with tactics on the board.
>
> Enforcing the search to be wider without losing playing strength appears
> to be hard.
>
>
Well, I think that's fundamental; you can't be wide and deep at the same
time, but at least you can chose an algorithm that (eventually) explores
all directions.

BTW I'm a bit surprised that you are still able to find 'big tactical
holes' with Leela now playing as 8d KGS.

Best,
Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] mini-max with Policy and Value network

2017-05-22 Thread Erik van der Werf
On Mon, May 22, 2017 at 11:27 AM, Erik van der Werf <
erikvanderw...@gmail.com> wrote:

> On Mon, May 22, 2017 at 10:08 AM, Gian-Carlo Pascutto <g...@sjeng.org>
> wrote:
>>
>> ... This heavy pruning
>> by the policy network OTOH seems to be an issue for me. My program has
>> big tactical holes.
>
>
> Do you do any hard pruning? My engines (Steenvreter,Magog) always had a
> move predictor (a.k.a. policy net), but I never saw the need to do hard
> pruning. Steenvreter uses the predictions to set priors, and it is very
> selective, but with infinite simulations eventually all potentially
> relevant moves will get sampled.
>
>
Oh, haha, after reading Brian's post I guess I misunderstood :-)

Anyway, LMR seems like a good idea, but last time I tried it (in Migos) it
did not help. In Magog I had some good results with fractional depth
reductions (like in Realization Probability Search), but it's a long time
ago and the engines were much weaker then...
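For readers unfamiliar with it: Realization Probability Search charges each
move a fractional depth cost derived from its predicted probability, so the
search is automatically deep along likely lines and shallow elsewhere. A
minimal sketch (the scaling and caps are illustrative, not Magog's actual
parameters):

    import math

    def depth_cost(move_prob, min_cost=0.2, max_cost=3.0):
        # an unlikely move burns a lot of the remaining depth, a likely
        # move very little; a branch is cut once remaining depth <= 0
        cost = 0.5 * -math.log2(max(move_prob, 1e-6))
        return min(max(cost, min_cost), max_cost)

    # remaining depth after following moves with priors 0.6, 0.3 and 0.02
    depth = 6.0
    for p in (0.6, 0.3, 0.02):
        depth -= depth_cost(p)
    print(round(depth, 2))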
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] mini-max with Policy and Value network

2017-05-22 Thread Erik van der Werf
On Mon, May 22, 2017 at 10:08 AM, Gian-Carlo Pascutto  wrote:
>
> ... This heavy pruning
> by the policy network OTOH seems to be an issue for me. My program has
> big tactical holes.


Do you do any hard pruning? My engines (Steenvreter,Magog) always had a
move predictor (a.k.a. policy net), but I never saw the need to do hard
pruning. Steenvreter uses the predictions to set priors, and it is very
selective, but with infinite simulations eventually all potentially
relevant moves will get sampled.

Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] dealing with multiple local optima

2017-02-27 Thread Erik van der Werf
On Mon, Feb 27, 2017 at 4:30 PM, Darren Cook  wrote:

> > But those video games have a very simple optimal policy. Consider Super
> Mario:
> > if you see an enemy, step on it; if you see a whole, jump over it; if
> you see a
> > pipe sticking up, also jump over it; etc.
>
> A bit like go? If you see an unsettled group, make it live. If you have
> a ko, play a ko threat. If you see have two 1-eye groups near each
> other, join them together. :-)
>
> Okay, those could be considered higher-level concepts, but I still
> thought it was impressive to learn to play arcade games with no hints at
> all.
>


The impressive part is hidden in what most humans consider trivial: to make
the programs 'see'.

Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] dealing with multiple local optima

2017-02-25 Thread Erik van der Werf
On Sat, Feb 25, 2017 at 12:30 AM, Brian Sheppard via Computer-go <
computer-go@computer-go.org> wrote:

> In retrospect, I view Schraudolph's paper as evidence that neural networks
> have always been surprisingly successful at Go. Like Brugmann’s paper about
> Monte Carlo, which was underestimated for a long time. Sigh.
>

Hear hear :-)
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Some experiences with CNN trained on moves by the winning player

2016-12-11 Thread Erik van der Werf
On Sun, Dec 11, 2016 at 8:44 PM, Detlef Schmicker  wrote:
> Hi Erik,
>
> as far as I understood it, it was 250ELO in policy network alone ...

Two problems: (1) it is a self-play result, (2) the policy was tested
as a stand-alone player.

A policy trained to win games will beat a policy trained to predict
moves, so what? That's just confirming the expected result.

BTW if you read a bit further it says that the SL policy performed
better in AG. This is consistent with earlier reported work. E.g., as
a student David used RL to train lots of strong stand-alone policies,
but they never worked well when combined with MCTS. As far as I can
tell, this one was no different, except that they were able to find
some indirect use for it in the form of generating training data for
the value network.

Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Some experiences with CNN trained on moves by the winning player

2016-12-11 Thread Erik van der Werf
Detlef, I think your result makes sense. For games between
near-equally strong players the winning player's moves will not be
much better than the losing player's moves. The game is typically
decided by subtle mistakes. Even if nearly all my moves are perfect,
just one blunder can throw the game. Of course it depends on how you
implement the details, but in principle reinforcement learning should
be able to deal with such cases (i.e., prevent propagating irrelevant
information all the way back to the starting position).

W.r.t. AG's reinforcement learning results, as far as I know,
reinforcement learning was only indirectly helpful. The RL policy net
performed worse then the SL policy net in the over-all system. Only by
training the value net to predict expected outcomes from the
(over-fitted?) RL policy net they got some improvement (or so they
claim). In essence this just means that RL may have been effective in
creating a better training set for SL. Don't get me wrong, I love RL,
but the reason why the RL part was hyped so much is in my opinion more
related to marketing, politics and personal ego.

Erik


On Sun, Dec 11, 2016 at 11:38 AM, Detlef Schmicker  wrote:
> I want to share some experience training my policy cnn:
>
> As I wondered, why reinforcement learning was so helpful. I trained
> from the Godod database with only using the moves by the winner of
> each game.
>
> Interestingly the prediction rate of this moves was slightly higher
> (without training, just taking the previously trained network) than
> taking into account the moves by both players (53% against 52%)
>
> Training on winning player moves did not help a lot, I got a
> statistical significant improvement of about 20-30ELO.
>
> So I still don't understand, why reinforcement should do around
> 100-200ELO :)
>
> Detlef
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Go Tournament with hinteresting rules

2016-12-08 Thread Erik van der Werf
On Thu, Dec 8, 2016 at 10:58 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de> wrote:
> Playing under such conditions might be a challenge for the bots

Why? Do you think the humans will collude?  ;-)

Erik.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Deep Zen vs Cho Chikun -- Round 1

2016-11-19 Thread Erik van der Werf
Hi Ingo, The SGF file you sent is malformed (in this case it's only a
minor issue for the date field, but some sgf viewers reject it).

Do you know which program was used to create it? (the AP property
suggests Many Faces, but it also contains the non-standard MULTIGOGM
property suggesting it came from MultiGo).

Best,
Erik


On Sat, Nov 19, 2016 at 8:55 AM, "Ingo Althöfer" <3-hirn-ver...@gmx.de> wrote:
> Hi,
>
> round 1 between Deep Zen and Cho Chikun is over. It was an interesting
> game, in a rather broad sense. Team Deep Zen resigned after move 223.
>
> sgf is appended.
> Round 2 is to start on Sunday, 05:00 a.m. Central Europian time.
>
> Ingo (pressing thumbs for Zen).
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Bug with resigned games in gogui-twogtp

2016-09-07 Thread Erik van der Werf
I've seen the same thing some years ago; it did not happen for all versions
of GoGui...


On Wed, Sep 7, 2016 at 6:42 PM,  wrote:

> Hi,
>
> I just think I found a bug in twogtp 1.4.8 (windows), using the -alternate
> flag and two programs that always resign if losing.
>
> Basically the winner is wrong in the SGF files, and also in the logs.
> The bug seems to happen only for odd games. It is easy to see because when
> a program resigns the other program that played the last move should be the
> winner.
>
> So by loading sgf's and go to the end of the game, the winner of the game
> should always be the player who played last.
>
> I also inspected the GTP logs with -verbose and verfied that the program
> resigned correctly.
>
>
> I post this here because this may affect a lot of go programmers who
> might have random noise in test results that might be hard to notice.
> (Also was not able to find out quickly where to post on GoGui website).
>
> Best
> Magnus Persson
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to Zen!

2016-05-09 Thread Erik van der Werf
Oh that's silly! IIRC if your bot is not ranked then users can do all kinds
of cheating in the scoring phase (e.g., mark all your living stones dead).

On Tue, May 10, 2016 at 12:07 AM, Gian-Carlo Pascutto <g...@sjeng.org> wrote:

> On 10/05/2016 0:01, Erik van der Werf wrote:
> > Well then why not make that a criterion for entering the tournament? For
> > any half-decent bot it shouldn't be hard to get a rating.
>
> FWIW I requested ranked status for LeelaBot 3 weeks ago and this was not
> granted.
>
> Technically I'm not sure if this is needed as the rating can just be
> calculated from the unranked games. I've been tempted to write a script
> to do exactly this.
>
> --
> GCP
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to Zen!

2016-05-09 Thread Erik van der Werf
Well then why not make that a criterion for entering the tournament? For
any half-decent bot it shouldn't be hard to get a rating.

Any idea what happens for unrated bots? Do they end up somewhere at the
bottom, or are they rejected?

Erik

On Mon, May 9, 2016 at 11:40 PM, Nick Wedd <mapr...@gmail.com> wrote:

> A problem with McMahon is that the bots would all need KGS ratings.  I
> can't assign ratings myself, the scheduler uses the ratings assigned by
> KGS.  In the five tournaments held so far this year, there are sixteen bots
> that have competed at least once: eight have been rated, eight not.
>
> Nick
>
> On 9 May 2016 at 22:16, Erik van der Werf <erikvanderw...@gmail.com>
> wrote:
>
>> Why not McMahon? (possibly with reduced handicap).  It works fine in
>> human Go tournaments.
>>
>> IMO KGS Swiss is pretty boring for most of the time, and the scheduler
>> often seems to have a lot of undesired influence on the final ranking. Also
>> at this point I'm really not that interested any more to see some top
>> engine win yet another bot tournament without serious competition; I'd be
>> more interested to see how many stones they could give to the rest.
>> Wouldn't it be fun to see how many stones AlphaGo could give to CS?
>>
>> E.
>>
>>
>> On Mon, May 9, 2016 at 10:29 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
>> wrote:
>>
>>> Hi Gian-Carlo,
>>>
>>> I have thought carefully about your question on
>>> determinning handicaps properly.
>>> It seems you are very right with your doubts
>>>
>>> > The first obvious question is then: how will you determine the
>>> handicaps?
>>>
>>> A naive approach would be to take the KGS ranks of the bots.
>>> But even for those who really have this may be a problem. Namely,
>>> the program may use other/stronger hardware in the tournament,
>>> or may have made a jump in performance without playing openly
>>> on KGS.
>>>
>>> > As to the "large gaps in strength": the actual rating of Zen is
>>> > 1 stone above abakus, which is 1 stone above HiraBot. That seems
>>> > to conflict with your classification.
>>>
>>> Yes, but only according to KGS ranks. My impression yesterday was
>>> that Zen has made another jump in performance and is now more
>>> an 8-dan than a 7-dan. But this is indeed only a personal opinion
>>> and can not be taken for "serious" handicapping.
>>>
>>> Concerning abakus and Hirabot, it is indeed my opinion that they
>>> are at most 1 stone apart of each other.
>>>
>>> In total: my handicap idea seems not to be practicable.
>>>
>>> Ingo.
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go
>>>
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>
>
>
> --
> Nick Wedd  mapr...@gmail.com
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to Zen!

2016-05-09 Thread Erik van der Werf
Why not McMahon? (possibly with reduced handicap).  It works fine in human
Go tournaments.

IMO KGS Swiss is pretty boring for most of the time, and the scheduler
often seems to have a lot of undesired influence on the final ranking. Also
at this point I'm really not that interested any more to see some top
engine win yet another bot tournament without serious competition; I'd be
more interested to see how many stones they could give to the rest.
Wouldn't it be fun to see how many stones AlphaGo could give to CS?

E.


On Mon, May 9, 2016 at 10:29 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:

> Hi Gian-Carlo,
>
> I have thought carefully about your question on
> determinning handicaps properly.
> It seems you are very right with your doubts
>
> > The first obvious question is then: how will you determine the handicaps?
>
> A naive approach would be to take the KGS ranks of the bots.
> But even for those who really have this may be a problem. Namely,
> the program may use other/stronger hardware in the tournament,
> or may have made a jump in performance without playing openly
> on KGS.
>
> > As to the "large gaps in strength": the actual rating of Zen is
> > 1 stone above abakus, which is 1 stone above HiraBot. That seems
> > to conflict with your classification.
>
> Yes, but only according to KGS ranks. My impression yesterday was
> that Zen has made another jump in performance and is now more of
> an 8-dan than a 7-dan. But this is indeed only a personal opinion
> and cannot be used for "serious" handicapping.
>
> Concerning abakus and Hirabot, it is indeed my opinion that they
> are at most 1 stone apart from each other.
>
> In total: my handicap idea seems not to be practicable.
>
> Ingo.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] May KGS bot tournament

2016-05-04 Thread Erik van der Werf
Or switch to McMahon / Handicaps

On Wed, May 4, 2016 at 4:18 PM, Sebastian Scheib 
wrote:

> That would be good, something like in other sports where you have a first,
> second and so... categories.
>
> 2016-05-04 11:00 GMT-03:00 Jim O'Flaherty :
>
>> Hmmm...if bots weaker than GnuGo are actively discouraged, perhaps there
>> could be a separate tournament level for that grouping of "aspiring
>> computer Go" entrants (if it isn't too much extra work). Having bots earn
>> the right to move into the higher level (i.e. have met the entry
>> requirement of "consistently beats GnuGo version X.Y") might be a nice
>> filter as the number of those desiring to participate (with weaker bots)
>> rises.
>>
>> On Wed, May 4, 2016 at 4:01 AM, Urban Hafner 
>> wrote:
>>
>>> I’m considering entering with my bot but it’s rather weak (a lot weaker
>>> than GnuGo on 19x19) so I don’t know if it makes sense. Unless of course
>>> other weaker bots were willing to enter as well. If no one is interested in
>>> this (or if it’s even discouraged by Nick) then I would refrain from
>>> entering tournaments where I have no chance of beating GnuGo.
>>>
>>> Urban
>>>
>>> On Mon, May 2, 2016 at 6:51 PM, Nick Wedd  wrote:
>>>
 The May KGS bot tournament will start on Sunday, May 8th at 16:00 UTC,
 and finish by 22:00 UTC.  It will use 19x19 boards, with time limits
 of 14 minutes each plus very fast Canadian overtime, and komi of 7.5.
 See http://www.gokgs.com/tournEntrants.jsp?id=1030

 Please register by emailing me, with the words "KGS Tournament 
 Registration"
 in the email title, at mapr...@gmail.com .

 Nick
 --
 Nick Wedd  mapr...@gmail.com


>>>
>>>
>>>
>>> --
>>> Blog: http://bettong.net/
>>> Twitter: https://twitter.com/ujh
>>> Homepage: http://www.urbanhafner.com/
>>>
>>
>>
>
>
>
> --
> Dracux
> *http://www.dracux.com *
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Is Go group status recognition by CNN possible?

2016-04-21 Thread Erik van der Werf
On Thu, Apr 21, 2016 at 1:20 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:

> Likely it is almost impossible for neural nets of "moderate" size
> to identify the life/death status of groups.
>

No. Neural nets (even shallow ones like we used over a decade ago) are
quite capable of identifying life/death. Sure you can construct pathological
examples that in theory require some form of recursion, but in practice
this now seems to be mostly a non-issue.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Congratulations to AlphaGo

2016-03-12 Thread Erik van der Werf
Congratulations Aja & Deepmind team!

Now that the victory is clear, perhaps you can say a bit more on the latest
developments? Any major scientific breakthroughs beyond what we already
know from the Nature paper?

Enjoy the moments!

Erik


On Sat, Mar 12, 2016 at 9:53 AM, Aja Huang  wrote:

> Thanks all. AlphaGo has won the match against Lee Sedol. But there are
> still 2 games to play.
>
> Aja
>
> On Sat, Mar 12, 2016 at 5:49 PM, Jim O'Flaherty <
> jim.oflaherty...@gmail.com> wrote:
>
>> It was exhilarating to witness history being made! Awesome!
>>
>> On Sat, Mar 12, 2016 at 2:17 AM, David Fotland 
>> wrote:
>>
>>> Tremendous games by AlphaGo.  Congratulations!
>>>
>>>
>>>
>>> *From:* Computer-go [mailto:computer-go-boun...@computer-go.org] *On
>>> Behalf Of *Lukas van de Wiel
>>> *Sent:* Saturday, March 12, 2016 12:14 AM
>>> *To:* computer-go@computer-go.org
>>> *Subject:* [Computer-go] Congratulations to AlphaGo
>>>
>>>
>>>
>>> Whoa, what a fight! Well fought, and well won!
>>>
>>> Lukas
>>>
>>
>>
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AlphaGo won the second game!

2016-03-10 Thread Erik van der Werf
Very impressive results so far!

If it's going to be a clean sweep, I hope we will get to see some handicap
games :-)

Erik


On Thu, Mar 10, 2016 at 12:04 PM, Petr Baudis  wrote:

> In the press conference (https://youtu.be/l-GsfyVCBu0?t=5h40m00s), Lee
> Sedol said that while he saw some questionable moves by AlphaGo in the
> first game, he feels that the second game was a near-perfect play by
> AlphaGo and he did not feel ahead at any point of the game.
>
> On Thu, Mar 10, 2016 at 12:44:23PM +0200, Petri Pitkanen wrote:
> > This time I think the game was tougher, though I'm too weak to judge. The
> > sacrifice of a fistful of stones at the end does puzzle me, but again I'm way
> > too weak to analyze it.
> >
> > It seems Lee Sedol will be lucky if he wins a game
> >
> > 2016-03-10 12:39 GMT+02:00 Petr Baudis :
> >
> > > On Wed, Mar 09, 2016 at 09:05:48PM -0800, David Fotland wrote:
> > > > I predicted Sedol would be shocked.  I'm still rooting for Sedol. From
> > > > Scientific American interview...
> > > >
> > > > Schaeffer and Fotland still predict Sedol will win the match. “I think
> > > > the pro will win,” Fotland says, “But I think the pro will be shocked at
> > > > how strong the program is.”
> > >
> > > In that case it's time for Lee Sedol to start working hard on turning
> > > this match around, because AlphaGo won the second game too! :)
> > >
> > > Petr Baudis
>
> --
> Petr Baudis
> If you have good ideas, good data and fast computers,
> you can do almost anything. -- Geoffrey Hinton
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Move evalution by expected value, as product of expected winrate and expected points?

2016-02-23 Thread Erik van der Werf
On Tue, Feb 23, 2016 at 4:41 PM, Justin .Gilmer  wrote:

> I made a similar attempt as Alvaro to predict final ownership. You can
> find the code here: https://github.com/jmgilmer/GoCNN/. It's trained to
> predict final ownership for about 15000 professional games which were
> played until the end (didn't end in resignation). It gets about 80.5%
> accuracy on a held out test set, although the accuracy greatly varies based
> on how far through the game you are. Can't say how well it would work in a
> go player.
>

At the risk of sounding like a broken record; that result (~80%) seems
similar to what I got many years ago when excluding life knowledge.
When life & death knowledge is included (which can also be learned from
examples and/or self-play) then the accuracy should approach 100% for final
positions. For more information see chapter 10 of my thesis (
http://erikvanderwerf.tengen.nl/pubdown/thesis_erikvanderwerf.pdf).

Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Frisbee Go

2016-02-22 Thread Erik van der Werf
The most important skill in this game might be in how accurately you can
throw your frisbee. Why take that out? Build real robots!

;-)
Erik

On Mon, Feb 22, 2016 at 4:42 PM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:

> Dear John, Dear Nick, Dear all,
>
> > > ...
> > > Suppose I want to play on either of two adjacent points, and I don't care
> > > which. If I aim for one of them, I will land on one of them with probability
> > > (3p+1)/4, or whatever the formula says. I feel that I ought to be able to do
> > > better by aiming midway between them.
> >
> > But then why stop there? You may also want to aim in between 4 points.
> > Or perhaps just epsilon more toward the right of there.
> >
> > There's no accounting for all possibilities of real life frisbee Go,
> > so we settle for the simplest rule that captures the essence...
>
> John is arguing exactly in my direction.
> Keep the rule set as simple as possible.
>
> Once a stable Frisbee Go simulation scene is established, people
> may build subscenes if they want. And of course, once Frisbee robot Go
> is played for real, programmers will look at all possible tricks.
>
> Regards, Ingo.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Everybody should participate!

2016-02-20 Thread Erik van der Werf
Don't think so, for most people it was already 'over' years ago, but Go has
a great handicap system :-)
On 20 Feb 2016 17:53, Ingo Althöfer <3-hirn-ver...@gmx.de> wrote:

> Possibly the last opportunity before "game over".
>
> Ingo.
>
>
> *Sent:* Saturday, 20 February 2016 at 15:38
> *From:* "Nick Wedd"
> *To:* computer-go@computer-go.org
> *Subject:* Re: [Computer-go] February KGS bot tournament
> Reminder - it's tomorrow
>
> Nick
>
> On 11 February 2016 at 11:38, Nick Wedd  wrote:
>>
>> The February KGS bot tournament will be on Sunday, February  21st,
>> starting at 16:00 UTC and ending at 22:40 UTC.  It will use 9x9 boards,
>> with time limits of 4 minutes each plus fast Canadian overtime, and komi
>> of 7.  See http://www.gokgs.com/tournEntrants.jsp?sort=n=1012
>>
>> Please register by emailing me, with the words "KGS Tournament Registration"
>> in the email title, at mapr...@gmail.com .
>> Nick
>> --
>> Nick Wedd  mapr...@gmail.com
>>
>
>
> --
> Nick Wedd  mapr...@gmail.com
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mastering the Game of Go with Deep Neural Networks and Tree Search

2016-01-27 Thread Erik van der Werf
Wow, excellent results, congratulations Aja & team!

I'm surprised to see nothing explicitly on decomposing into subgames (e.g.
for semeai). I always thought some kind of adaptive decomposition would be
needed to reach pro-strength... I guess you must have looked into this;
does this mean that the networks have learnt to do it by themselves? Or
perhaps they play in a way that simply avoids their weaknesses?

Would be interesting to see a demonstration that the networks have learned
the semeai rules through reinforcement learning / self-play :-)

Best,
Erik


On Wed, Jan 27, 2016 at 7:46 PM, Aja Huang  wrote:

> Hi all,
>
> We are very excited to announce that our Go program, AlphaGo, has beaten a
> professional player for the first time. AlphaGo beat the European champion
> Fan Hui by 5 games to 0. We hope you enjoy our paper, published in Nature
> today. The paper and all the games can be found here:
>
> http://www.deepmind.com/alpha-go.html
>
> AlphaGo will be competing in a match against Lee Sedol in Seoul, this
> March, to see whether we finally have a Go program that is stronger than
> any human!
>
> Aja
>
> PS I am very busy preparing AlphaGo for the match, so apologies in advance
> if I cannot respond to all questions about AlphaGo.
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Number of Go positions computed at last

2016-01-22 Thread Erik van der Werf
Congratulations John!

Does the number include symmetrical positions (rotations / mirroring /
color reversal)?

Best,
Erik


On Fri, Jan 22, 2016 at 5:18 AM, John Tromp  wrote:

> It's been a long journey, and now it's finally complete!
>
> http://tromp.github.io/go/legal.html
>
> has all the juicy details...
>
> regards,
> -John
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] 7x7 Go is weakly solved

2015-11-30 Thread Erik van der Werf
Hi Aja,

This result seems consistent with earlier claimed human solutions for 7x7
dating back to 1989. So what exactly is new? Did he write a program that
actually calculates the value?

Best,
Erik


On Mon, Nov 30, 2015 at 2:10 AM, Aja Huang  wrote:

> It's the work by Chinese pro Li Zhe 7p.
> http://blog.sina.com.cn/s/blog_53a2e03d0102vyt5.html
>
> His conclusions on 7x7 Go board:
> 1. Optimal komi is 9.0.
> 2. Optimal solution is not unique. But the first 3 moves are unique, and
> the first 7 moves generate 5 major optimal solutions.
> 3. There are many variations not affecting final score.
>
> Aja
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] 7x7 Go is weakly solved

2015-11-30 Thread Erik van der Werf
On Mon, Nov 30, 2015 at 12:52 PM, Aja Huang <ajahu...@google.com> wrote:

> Hi Erik,
>
> On Mon, Nov 30, 2015 at 10:37 AM, Erik van der Werf <
> erikvanderw...@gmail.com> wrote:
>
>> Hi Aja,
>>
>> This result seems consistent with earlier claimed human solutions for 7x7
>> dating back to 1989. So what exactly is new? Did he write a program that
>> actually calculates the value?
>>
>
> Did you mean 7x7 Go was weakly solved before?
>

It depends on what you mean by 'weakly solved'. If we take the definition
from https://en.wikipedia.org/wiki/Solved_game:

*'Provide an algorithm that secures a win for one player, or a draw for
either, against any possible moves by the opponent, from the beginning of
the game.'*

then no, I did not mean that, and that's why I asked you if he actually
wrote a program that does this for 7x7.

Strong human players including some pros claimed to have solved 7x7
already back in 1989 (see my PhD thesis for a reference), but AFAIK they
did not implement an algorithm, so just like most of the other small board
results by humans these were never really proofs in a strict sense.

Best,
Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Strong engine that maximizes score

2015-11-17 Thread Erik van der Werf
Unless you can solve the position, maximizing the score involves risk.
Strong players tend to avoid unnecessary risk.

Erik
On 17 Nov 2015 21:06, "Álvaro Begué"  wrote:

> I wouldn't say they are "not compatible", since the move that maximizes
> score is always in the top class (win>draw>loss) for any setting of komi.
> You probably mean it in a practical sense, in that MCTS engines are
> stronger when maximizing win probability.
>
> I am more interested in attempting to maximize score, even if the engine
> is significantly weaker. Of course this is not what most people want, so I
> understand I am looking for something unusual.
>
>
> Álvaro.
>
>
> On Tue, Nov 17, 2015 at 2:58 PM, David Fotland 
> wrote:
>
>> Attempting to maximize the score is not compatible with being a strong
>> engine.  If you want a dan level engine it is maximizing win-probability.
>>
>> David
>>
>> > -Original Message-
>> > From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
>> Behalf Of
>> > Darren Cook
>> > Sent: Tuesday, November 17, 2015 6:49 AM
>> > To: computer-go@computer-go.org
>> > Subject: Re: [Computer-go] Strong engine that maximizes score
>> >
>> > > I am trying to create a database of games to do some machine-learning
>> > > experiments. My requirements are:
>> > >  * that all games be played by the same strong engine on both sides,
>> > >  * that all games be played to the bitter end (so everything on the
>> > > board is alive at the end), and
>> > >  * that both sides play trying to maximize score, not winning probability.
>> >
>> > GnuGo might fit the bill, for some definition of strong. Or Many Faces, on
>> > the level that does not use MCTS.
>> >
>> > Sticking with MCTS, you'd have to use komi adjustments: first find two
>> > extreme values that give each side a win, then use a binary-search-like
>> > algorithm to narrow it down until you find the correct value for komi for
>> > that position. This will take approx 10 times longer than normal MCTS, for
>> > the same strength level.
>> >
>> > (I'm not sure if this is what Pachi is doing?)
>> >
>> > Darren
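
A minimal sketch of the komi-bisection idea quoted above (the engine interface
here is hypothetical; winrate_at_komi stands for whatever runs a fixed number of
simulations from the position at the given komi and returns Black's winrate):

    def estimate_fair_komi(winrate_at_komi, lo=-50.0, hi=50.0, iters=10):
        """Bisect on komi until Black's MC winrate crosses 50%.
        Assumes winrate_at_komi(komi) is (noisily) decreasing in komi
        and that [lo, hi] brackets the fair value."""
        for _ in range(iters):
            mid = (lo + hi) / 2.0
            if winrate_at_komi(mid) > 0.5:
                lo = mid  # Black still ahead: fair komi is higher
            else:
                hi = mid  # White ahead: fair komi is lower
        return (lo + hi) / 2.0

With ten bisection steps this needs roughly ten full searches per position,
which matches the "approx 10 times longer than normal MCTS" estimate above.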
>> >
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Standard Computer Go Datasets - Proposal

2015-11-13 Thread Erik van der Werf
On Fri, Nov 13, 2015 at 10:46 AM, Darren Cook  wrote:
>
> The advantages of storing games:
>   * accountability/traceability
>   * for programs who want to learn sequences of moves.
>

Another advantage of storing games is that it is much more efficient; you
only have to encode one move per position.

Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Komi 6.5/7.5

2015-11-05 Thread Erik van der Werf
We know the true values for some small boards that were solved, and what
some strong human players believed those values should be before they were
solved. I think that for all cases the humans were either correct, or
under-estimating. I don't remember any over-estimations.

Here are some cases where humans underestimated:

size   human   migos
2x11     4       6
3x7      5      21
4x6      1       8
4x7      4      28
5x6      2       4

For more results see: http://erikvanderwerf.tengen.nl/mxngo.html

Perhaps this can be considered an indication that weaker players tend to
benefit less from the first player advantage.

Best,
Erik


On Thu, Nov 5, 2015 at 3:35 PM, Justin Blank  wrote:

> I have repeatedly seen people assert that komi must be different for
> players of different skill levels, and have repeatedly questioned it, but I
> have never seen anyone try to substantiate the claim. People who believe it
> find it obvious, but I don't. There are two pieces of evidence that I can
> think of:
>
> 1) that I believe someone played near-random engines against each other,
> and the correct komi was different (I cannot find where that was done). But
> that's so far from even DDK play that it's pretty useless.
> 2) I believe the old OGS (DGS?) forums included an analysis of their games
> and what the correct komi was. I cannot confidently quote the results. If
> those are the old OGS forums, I don't know if they even exist anymore.
>
> The data from go servers are freely available. Does white have a greater
> advantage for weaker players? It doesn't seem so--anecdotally when players
> posted their KGS stats, they varied a bit, but didn't seem to have a bias
> for White.
>
> Of course, that's anecdata...anyone is welcome to prove or disprove this
> old claim by analyzing the stats on KGS, or Tygem or wherever else.
>
> On Thu, Nov 5, 2015 at 2:03 AM, Petri Pitkanen  > wrote:
>
>> Let alone we do not have even sufficient understanding of perfect play to
>> say what the correct komi is in an absolute sense. Nor is it even a meaningful
>> concept. Correct komi is a komi that produces about a 50/50 result. Obviously a
>> komi that will result in 50/50 for professionals will probably favour white
>> in your average weekend tournaments. Just like in chess, the first-move advantage
>> is clearly less meaningful for weaker players than for top professionals.
>>
>> So setting komi is not a theoretical but a statistical issue.
>>
>> 2015-11-05 0:04 GMT+02:00 Hideki Kato :
>>
>>> The correct komi value assuming both players are perfect.  Or, black
>>> utilizes his advantage (maybe in an early stage) perfectly.  Actual
>>> players, even strong pros, are not perfect and cannot fully utilize
>>> their advantages.  In conclusion, white is favored.
>>>
>>> Hideki
>>>
>>> Aja Huang: 

Re: [Computer-go] Komi 6.5/7.5

2015-11-04 Thread Erik van der Werf
I think he's right. I'm fairly sure 7.5 is a second-player win on 9x9,
and for larger boards intuitively it makes sense that the komi should
be the same or lower. Also, we know that perfect komi is an integer,
for area scoring the likely candidates are 5 and 7, and for territory
scoring (and some unlikely area scoring cases) we also have 6. The
fraction is just deciding who shall have the advantage in breaking
ties.

Erik


On Wed, Nov 4, 2015 at 1:59 PM, Aja Huang  wrote:
> Hi all,
>
> As you might know the Chinese professional player Ke Jie is like an erupting
> volcano, triumphant in many domestic and international Go competitions.
>
> In the interview at
>
> http://sports.sina.com.cn/go/2015-11-04/doc-ifxkhqea3033663.shtml
>
> Ke Jie said that in his opinion, on 19x19, komi 6.5 or 7.5 favors White. That seems
> consistent with MCTS's behavior? I.e., on the empty board, with komi 7.5,
> Black's win rate is usually between 46% and 48% meaning White is ahead. As
> the current top pro, Ke Jie's viewpoint is very interesting. :)
>
> Aja
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Playout speed... again

2015-10-14 Thread Erik van der Werf
You should be able to do at least 50 times faster.

Erik

On Thu, Oct 15, 2015 at 12:27 AM, Gonçalo Mendes Ferreira  wrote:
> Hi, I've been searching the mailing list archive but can't find an answer to
> this.
>
> What is currently the number of playouts per thread per second that the best
> programs can do, without using the GPU?
>
> I'm getting 2075 in light playouts and just 55 in heavy playouts. My heavy
> playouts use MoGo like patterns and are probability distributed, with
> liberty/capture counts/etc only updated when needed, so it should be pretty
> efficient.
>
> What would be a good ballpark for this?
>
> Thank you,
> Gonçalo F.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Playout speed... again

2015-10-14 Thread Erik van der Werf
I don't know, what language are you using? Did you do any
optimizations? How many clock cycles does it take your program on
average to make and undo a move (just counting the core board update)?

BTW you didn't specify board size and hardware, so I assumed 19x19 and
a modern PC.
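
A rough way to turn playouts per second into cycles per move (all numbers are
illustrative; a full 19x19 playout is assumed to take on the order of 400 moves):

    def cycles_per_move(playouts_per_sec, clock_hz=3.0e9, moves_per_playout=400):
        """Very rough estimate of CPU cycles spent per move, assuming a
        single thread that is fully busy at the given clock frequency."""
        moves_per_sec = playouts_per_sec * moves_per_playout
        return clock_hz / moves_per_sec

    # 2075 light playouts/s at 3 GHz -> roughly 3600 cycles per move;
    # 55 heavy playouts/s -> roughly 136000 cycles per move.
    print(cycles_per_move(2075), cycles_per_move(55))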

Erik

On Thu, Oct 15, 2015 at 12:56 AM, Gonçalo Mendes Ferreira <go...@sapo.pt> wrote:
> Really? I didn't mention it but it uses MCTS-UCT with RAVE, progressive
> pruning, mercy thresholds and max playout depth, etc. What could I be
> missing that is that much of a boost in the playouts, in your experience?
>
> Gonçalo
>
>
> On 14/10/2015 23:40, Erik van der Werf wrote:
>>
>> You should be able to do at least 50 times faster.
>>
>> Erik
>>
>> On Thu, Oct 15, 2015 at 12:27 AM, Gonçalo Mendes Ferreira <go...@sapo.pt>
>> wrote:
>>>
>>> Hi, I've been searching the mailing list archive but can't find an answer
>>> to
>>> this.
>>>
>>> What is currently the number of playouts per thread per second that the
>>> best
>>> programs can do, without using the GPU?
>>>
>>> I'm getting 2075 in light playouts and just 55 in heavy playouts. My
>>> heavy
>>> playouts use MoGo like patterns and are probability distributed, with
>>> liberty/capture counts/etc only updated when needed, so it should be
>>> pretty
>>> efficient.
>>>
>>> What would be a good ballpark for this?
>>>
>>> Thank you,
>>> Gonçalo F.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] KGS bot tournaments - what are your opinions?

2015-10-08 Thread Erik van der Werf
On Thu, Oct 8, 2015 at 1:10 PM, Tobias Graf  wrote:
> 1. "Reducing computing power." Just let me quote the standings of the last
> 9x9 tournament.
> 1) 18 Cores
> 2) 80 Cores
> 3) 12 Cores
> 4) 288 Cores
> 5) 8 Cores

Counting 'cores' is a bad idea; 'core' is mostly just a marketing term.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] KGS bot tournaments - what are your opinions?

2015-10-07 Thread Erik van der Werf
Hi Nick,

Some kind of limit on processing power would be interesting. To me it
seems clear that a program like Zen benefits a lot by using more
processing power than its close competitors.

A measure that I find reasonable is a limit on number of threads x
clock frequency. E.g., a program running on 64 intel cores, with 2
threads per core, at 3 Ghz would be counted as using 64x2x3 = 384 GHz,
and a program running on 24 amd cores, with 1 thread per core, at 2.6
Ghz would be counted as 62.4 GHz. As long as the top just uses x86
type processors this should work reasonably well. For GPU's or ARM
processors there probably needs to be a scaling factor.
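
A minimal sketch of that bookkeeping (the architecture factor is only a
placeholder, not a concrete proposal):

    def effective_ghz(cores, threads_per_core, ghz, arch_factor=1.0):
        """Threads x clock frequency, optionally scaled per architecture
        (e.g. for GPUs or ARM); the factor values would have to be agreed on."""
        return cores * threads_per_core * ghz * arch_factor

    print(effective_ghz(64, 2, 3.0))  # 384.0, the Intel example above
    print(effective_ghz(24, 1, 2.6))  # 62.4, the AMD example above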

Another option could be to limit the power supply, e.g., you may not
use more than say 400 Watt.

BR,
Erik


PS As someone that participates only occasionally I like the zeros,
but perhaps there is a friendlier way to indicate
participation/absence.

PS2 Crazy idea?: "Machine A plays black, machine B plays white. First
player proposes the komi, the second player chooses the
color"


On Wed, Oct 7, 2015 at 12:52 PM, Detlef Schmicker  wrote:
>
> Hi Nick,
>
> 3. would be great, I am very often going through round results to see
> where CS or abakus has lost its games:)
>
> 1. I do not see a way to do this other than running on the same hardware (e.g.
> Amazon EC2 with a graphics card). Even this is unfair, as programs might
> be optimized for other configurations (clusters).
>
> Detlef
>
> Am 07.10.2015 um 12:27 schrieb Nick Wedd:
>> I am thinking of making some small changes to the way I run bot
>> tournaments on KGS.  If you have ever taken part in a KGS bot
>> tournament, I would like to hear your opinions on three things.
>>
>>
>> 1.  Limit on processor power?
>>
>> This is the main point on which I want your opinions.  The other
>> two are trivial.
>>
>> Several people have suggested to me that these events would be
>> fairer if there were a limit on the computing power of the
>> entrants. I would like to do this, but I don't know how. I have
>> little understanding of the terminology, I don't know how *e.g.*
>> multiple cores in one computer compare with multiple computers on
>> one network, and I don't know how to count a graphics card.  *If*
>> someone can find a way to specify an upper limit to permitted power
>> which is clear and easy to understand, and *if* most entrants would
>> favor imposing such a limit, I will discuss what it should be, and
>> apply it.  I am not able to check what entrants are really running
>> on, but I will trust people.
>>
>>
>> 2. Zeroes in the "Annual Championship" table.
>>
>> The table at http://www.weddslist.com/kgs/annual/index.html has a 0
>> in a cell where a program competed but did not score, and a blank
>> where it did not compete (at least it should do, I sometimes get it
>> wrong). I would prefer to omit these zeroes, they seem a bit rude.
>> Also there is no clear distinction between competing and not
>> competing - how should I treat a program which crashes and
>> disappears after two rounds, or one (like AyaMC last Sunday) which
>> plays in every round but is broken and has no chance of winning?  I
>> realise that the zeroes do convey some information that may be of
>> interest.  Should I continue to use them, or just leave those cells
>> blank?
>>
>>
>> 3. Live crosstable
>>
>> When I write up my reports, I include a crosstable, like the one
>> near the top of http://www.weddslist.com/kgs/past/116/index.html .
>> This is easy for me, I run a script which reads the data from the
>> KGS page ( http://www.gokgs.com/tournEntrants.jsp?sort=s=990 in
>> this case) and builds the crosstable in html, which I copy into the
>> tournament report. It only works for Swiss (and maybe Round Robin)
>> tournaments. It works while the tournament is still running, though
>> only between rounds. I could build a current crosstable each round
>> during a tournament if there is any demand for it.
>>
>>
>>

Re: [Computer-go] KGS bot tournaments - what are your opinions?

2015-10-07 Thread Erik van der Werf
On Wed, Oct 7, 2015 at 5:02 PM, Hideki Kato <hideki_ka...@ybb.ne.jp> wrote:
> Erik,
>
> Erik van der Werf: 
> 

Re: [Computer-go] KGS bot tournaments - what are your opinions?

2015-10-07 Thread Erik van der Werf
Although I agree on the research argument (setting no limits
encourages work on massively parallel distributed architectures), I do
find it a bit funny to see this argument coming from team Zen. As far
as I know team Zen does not publish their research findings (or did I
miss some papers?).

Erik


On Wed, Oct 7, 2015 at 4:20 PM, Hideki Kato  wrote:
> Nick & all,
>
> 1. Although introducing some limitation of cpu power is an intersting
> idea (actually my GPW Cup does), I think it's too early for KGS bot
> tournaments.
>
> How to utilize computer clusters' power for planning tasks is a common
> and important research theme now.  As communication over a network is less
> effective than in-box communication, playing strength per (total) cpu power gets
> smaller when using computer clusters.  Zen's root parallelization
> improves up-to 1 or 2 ranks but TDS based algorithms (used for Gommora
> and MP-Fuego) are expected to give more improvements.  Preventing such
> effort must be a bad idea, I strongly believe.  So, at least, a simple
> sum of cpu power of all node computers is not acceptable.  (Some
> discounting could be?)
>
> Cpu power of each SMP or NUMA box can be computed by
> number-of-(physical-)cores times clock-frequency (although Erik used
> logical-cores).  Using number-of-threads instead might be better.  For
> more fairness, some factors can be defined for processor architectures
> or manufacturers, because some participants have to use non-Intel
> processors due to their environments.
>
> For GPUs, I have no concrete idea now.  Simple flops is not enough and
> more discussion is necessary, I believe.
>
> Another idea, you (or we?) can define some benchmark program(s).
>
> 2. I don't understand this at all.  It's just a record of fact.
> Intentionally omitting information must be a bad idea.
>
> 3. Watching the crosstable in real-time should be fun for all
> observers.
>
> Hideki
>
> Nick Wedd: 
> :
>>I am thinking of making some small changes to the way I run bot tournaments
>>on KGS.  If you have ever taken part in a KGS bot tournament, I would like
>>to hear your opinions on three things.
>>
>>
>>1.  Limit on processor power?
>>
>>This is the main point on which I want your opinions.  The other two are
>>trivial.
>>
>>Several people have suggested to me that these events would be fairer if
>>there were a limit on the computing power of the entrants. I would like to
>>do this, but I don't know how. I have little understanding of the
>>terminology, I don't know how *e.g.* multiple cores in one computer compare
>>with multiple computers on one network, and I don't know how to count a
>>graphics card.  *If* someone can find a way to specify an upper limit to
>>permitted power which is clear and easy to understand, and *if* most
>>entrants would favor imposing such a limit, I will discuss what it should
>>be, and apply it.  I am not able to check what entrants are really running
>>on, but I will trust people.
>>
>>
>>2. Zeroes in the "Annual Championship" table.
>>
>>The table at http://www.weddslist.com/kgs/annual/index.html has a 0 in a
>>cell where a program competed but did not score, and a blank where it did
>>not compete (at least it should do, I sometimes get it wrong). I would
>>prefer to omit these zeroes, they seem a bit rude. Also there is no clear
>>distinction between competing and not competing - how should I treat a
>>program which crashes and disappears after two rounds, or one (like AyaMC
>>last Sunday) which plays in every round but is broken and has no chance of
>>winning?  I realise that the zeroes do convey some information that may be of
>>interest.  Should I continue to use them, or just leave those cells blank?
>>
>>
>>3. Live crosstable
>>
>>When I write up my reports, I include a crosstable, like the one near the
>>top of http://www.weddslist.com/kgs/past/116/index.html .  This is easy for
>>me, I run a script which reads the data from the KGS page (
>>http://www.gokgs.com/tournEntrants.jsp?sort=s=990 in this case) and
>>builds the crosstable in html, which I copy into the tournament report. It
>>only works for Swiss (and maybe Round Robin) tournaments. It works while
>>the tournament is still running, though only between rounds. I could build a
>>current crosstable each round during a tournament if there is any demand
>>for it.
> --
> Hideki Kato 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Report KGS Slow Tournament

2015-09-27 Thread Erik van der Werf
On Sun, Sep 27, 2015 at 3:10 AM, Hiroshi Yamashita  wrote:

> His paper is also interesting.
> Abakus got +130 Elo by online learning.
>
> Adaptive Playouts in Monte Carlo Tree Search with Policy Gradient
> Reinforcement Learning
>
> https://www.conftool.net/acg2015/index.php?page=browseSessions=adminSessions=export=list=show
>
> Regards,
> Hiroshi Yamashita
>

You can also watch the presentations:
https://acg2015.wordpress.com/videos-of-presentations/

Best,
Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] re good article on playout ending

2015-09-09 Thread Erik van der Werf
Steenvreter stops its playouts when it detects a proven win or loss. The
evaluation function it uses is an improved version of what I made to solve
the small boards. I once tried adding the mercy rule, but it did not
improve the program.
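
For reference, a minimal sketch of such a mercy-rule cutoff (the Board interface
and the threshold are hypothetical/illustrative):

    MERCY_MARGIN = 40  # illustrative; usually scaled with board size

    def playout_result(board, max_moves=500):
        """Light playout that stops early when the incrementally kept
        stone-count difference exceeds the mercy margin."""
        for _ in range(max_moves):
            if board.game_over():
                break
            board.play(board.random_legal_move())
            diff = board.black_stones - board.white_stones
            if abs(diff) > MERCY_MARGIN:
                return 1.0 if diff > 0 else 0.0  # call it for the leader
        return board.score_winner()  # 1.0 for a Black win, 0.0 for White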

Erik


On Wed, Sep 9, 2015 at 5:46 PM, Peter Drake  wrote:

> I don't know of an article, but unless your ending detection is VERY fast,
> it's better to just finish the playout.
>
> One possibility is a "mercy threshold": if one player's stone count (which
> you update incrementally) far exceeds the other, declare the player with
> more stones the winner. The relevant class from Orego is attached.
>
>
> On Wed, Sep 9, 2015 at 7:53 AM, Gonçalo Mendes Ferreira 
> wrote:
>
>> Does anyone know of a good article on ending a MCTS playout early,
>> outcome estimation, the quality of interrupted outcomes, and so on?
>
>
>
>
> --
> Peter Drake
> https://sites.google.com/a/lclark.edu/drake/
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] open bank

2015-09-06 Thread Erik van der Werf
http://www.citeulike.org/group/5884/library
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] [ANN] yet another go engine : michi-c release 1.4 on GitHub

2015-08-29 Thread Erik van der Werf
Nice!

FYI: I tried the portable option and compiled for Android, but that seems
buggy. The code runs on my phone, and the program does make moves, but many
moves are bad (e.g., too many are on the second line). On my PC it seems
OK.

BR,
Erik


On Fri, Aug 28, 2015 at 11:00 PM, Denis Blumstein denis.blumst...@orange.fr
 wrote:

 Hi,

 michi-c is a port in C of the michi program by Petr Baudis with the same
 goals (see https://github.com/pasky/michi).

 It has many of the extensions that Petr has hoped:
 - early passing,
 - graphics in gogui,
 - parameters modifications by gtp commands,
 - speed improvement by tracking liberties and blocks,
 - preliminary time management and dynamic komi
 - read simple SGF files
 - small user manual

 Currently (version 1.4), it runs exactly the same algorithms as the michi
 python version.

 The michi goal for brevity has been relaxed in favor of speed and
 functionalities.

 Michi-c is relatively fast even if there is still much room for
 improvements. It runs 3200 playouts/s from an empty 19x19 board on an
 i7-4790K (single threaded and using large patterns). With this setting, it
 plays about even with gnugo on 19x19 (winrate 57 % +/- 2.5% measured on 400
 games) at an average speed of 400 seconds per game (about 3.3 sec/move).
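
For what it's worth, the +/- 2.5% above is about one binomial standard error
over 400 games, e.g.:

    p, n = 0.57, 400
    se = (p * (1 - p) / n) ** 0.5   # standard error of a winrate estimate
    print(round(100 * se, 1))       # ~2.5 percentage points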

 The development is done on Linux but the goal is to keep the code portable.
 Michi-c comes with everything included. The only requirements are :
 - a C compiler with the standard C library to build the gtp engine,
 - gogui (http://gogui.sourceforge.net) to use the engine comfortably if
 gogui is supported on your system.

 The code for the MCTS tree search and the playout policy is about 1000
 lines of C (20 % of the total).

 Michi-c can be downloaded at https://github.com/db3108/michi-c2. It is
 distributed under the MIT license.

 Thanks to Horace Ho, Andreas Pearson, Eric Steinmetz and J.Kartz who have
 provided feedback and/or corrections about portability issues of earlier
 versions with IOS (iphone 6), Windows 32 bits system with Microsoft Visual
 Studio and MAC OS X.

 And of course, many thanks to Petr Baudis for having published the
 michi.py code and setting up the goals for this project.

 Denis
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Mental Imagery in Go - playlist

2015-08-05 Thread Erik van der Werf
On Wed, Aug 5, 2015 at 10:56 AM, Darren Cook dar...@dcook.org wrote:

 P.S. Isn't brute force the term used to mean that you can see
 measurable improvements in playing strength just by doubling the CPU
 speed (and/or memory or other hardware restraint). Alpha-beta with all
 the trimmings, and MCTS with a good pattern library, both seem to qualify.


No, that just means that the solution scales (and brute force solutions
tend to scale up quite poorly).

https://en.wikipedia.org/wiki/Brute-force_search
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] CGOS

2015-05-02 Thread Erik van der Werf
Baseline for worst play? Why?

Paris Hilton??
On 2 May 2015 11:42, folkert folk...@vanheusden.com wrote:

 I'm running parishilton now so that you have a baseline for the
 worst play.

 On Sat, May 02, 2015 at 07:21:05AM +0200, Detlef Schmicker wrote:
  Hi,
 
  I set up a CGOS server at home. It is connected via dyndns, which is not
  optimal of course :(
 
  physik.selfhost.eu
 
  Ports:
  8080 (webinterface)
  8083 (19x19, GnuGo 3.8 set to ELO 1800 as anchor)
 
  This is mainly for testing whether I got CGOS up correctly; what to do to have
  it permanently running is still to be seen.
  I am not able to test the connection from the outside, hopefully I set up
  everything correctly.
  I might stop the server for the tournament on sunday, as it is the same
  machine
 
  future plan is:
  8081 for 9x9 8082 for 13x13 and 8084 for 25x25.
  (you will see on the web interface, as soon as the other boardsizes are
  switched on.)
 
 


 Folkert van Heusden

 --
 MultiTail is a flexible tool for following log files and command output.
 Filtering, colorizing, merging, 'diff-view', etc.
 http://www.vanheusden.com/multitail/
 --
 Phone: +31-6-41278122, PGP-key: 1F28D8AE, www.vanheusden.com
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] How about a 39x39 tournament?

2015-04-26 Thread Erik van der Werf
Personally I think 39x39 is too big. Also, there is a problem with GTP; the
protocol does not support boards over 25x25.
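
The 25x25 limit comes from GTP's one-letter column coordinates (A-Z with 'I'
traditionally skipped), for example:

    import string

    # GTP vertices use a single column letter and skip 'I',
    # which leaves at most 25 usable columns.
    columns = [c for c in string.ascii_uppercase if c != 'I']
    print(len(columns))  # 25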

Erik


On Sun, Apr 26, 2015 at 11:26 PM, Petr Baudis pa...@ucw.cz wrote:

 On Sun, Apr 26, 2015 at 12:17:01PM +0200, remi.cou...@free.fr wrote:
  Hi,
 
  I thought it might be fun to have a tournament on a very large board. It
 might also motivate research into more clever adaptive playouts. Maybe a
 KGS tournament? What do you think?

 That's a cool idea - even though I wonder if 39x39 is maybe too extreme
 (I guess the motivation is maximum size KGS allows).

 I think that actually, GNUGo could become stronger than even the top
 MCTS programs at some point when expanding the board size, but it's hard
 for me to estimate exactly when - if at 25x25 or at 49x49...

 Petr Baudis

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] running amigogtp using twogtp

2015-04-20 Thread Erik van der Werf
Just use GnuGo as referee.
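
Something along these lines should work (paths and option values are only an
example; check the gogui-twogtp documentation for your version):

    gogui-twogtp -black "./mybot" -white "amigogtp" \
        -referee "gnugo --mode gtp" -size 13 -komi 7.5 \
        -games 100 -sgffile out -auto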

On Mon, Apr 20, 2015 at 1:25 PM, folkert folk...@vanheusden.com wrote:

 Hi,

 I'm trying to run amigogtp using twogtp. This fails because it doesn't
 know final_score.
 Now I've read that it should be possible (with gogui-twogtp at least)
 to use a referee application. Is this true? Where can I find such a
 program (for linux)?
 Or is there an other way?
 I want to use amigogtp because its strength is not too far off compared
 to my own program (stop_0.9-2b: 627 and amigogtp-1.8: 811 on 13x13
 cgos).
 myCtest-10k would be even better but there's no source of that
 available?


 Folkert van Heusden

 --
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] UCB-1 tuned policy

2015-04-16 Thread Erik van der Werf
Many observed that, but not everyone.
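
For reference, a minimal sketch of the two selection rules being compared
(the usual UCB1 and UCB1-tuned formulas; variable names are illustrative):

    import math

    def ucb1(mean, n, n_parent):
        # classic UCB1: mean + sqrt(2 ln N / n)
        return mean + math.sqrt(2.0 * math.log(n_parent) / n)

    def ucb1_tuned(mean, sq_sum, n, n_parent):
        # UCB1-tuned caps the exploration scale by an upper bound on the
        # payoff variance; sq_sum is the sum of squared rewards (equal to
        # the win count for 0/1 playout results).
        var_bound = sq_sum / n - mean * mean + math.sqrt(2.0 * math.log(n_parent) / n)
        return mean + math.sqrt(math.log(n_parent) / n * min(0.25, var_bound))
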
On 16 Apr 2015 07:38, David Fotland fotl...@smart-games.com wrote:

 I didn’t notice a difference.  Like everyone else, once I had RAVE
 implemented and added biases to the tree move selection, I found the UCT
 term made the program weaker, so I removed it.

 David

  -Original Message-
  From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
 Behalf Of
  Igor Polyakov
  Sent: Tuesday, April 14, 2015 3:37 AM
  To: computer-go@computer-go.org
  Subject: [Computer-go] UCB-1 tuned policy
 
  I implemented UCB1-tuned in my basic UCB-1 go player, but it doesn't seem
  like it makes a difference in self-play.
 
  It seems like it's able to run 5-25% more simulations, which means it's
  probably exploiting deeper (and has fewer steps until it runs out of room to
  play legal moves), but I have yet to see any strength improvements on
  9x9 boards.
 
  As far as I understand, the only thing that's different is the formula.
  Has anyone actually seen any difference between the two algorithms?
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] evaluating number of wins versus looses

2015-03-30 Thread Erik van der Werf
On Mon, Mar 30, 2015 at 4:09 PM, Petr Baudis pa...@ucw.cz wrote:

 The strongest programs often use RAVE or LGRF or something like that,
 with or without the UCB for tree exploration.


Huh, are there any strong programs that got LGRF to work?

Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Weak bots to run on CGOS

2015-03-09 Thread Erik van der Werf
Perhaps AmiGo

http://amigogtp.sourceforge.net/

On Mon, Mar 9, 2015 at 10:08 AM, Urban Hafner cont...@urbanhafner.com wrote:
 Hey everyone,

 I'm currently running Brown (random bot) and GnuGo on CGOS 13x13. Mainly to
 get a feel for the strength of my own bot. And my bot is really bad. ;) So
 bad that it loses all games against GnuGo, but wins all games against
 Brown. So, the rating is a bit useless I assume as there are no bots that
 are in strength between GnuGo and the random player. Are there any bots in
 that range out there? I'd be willing to run them myself on CGOS.

 Urban
 --
 Blog: http://bettong.net/
 Twitter: https://twitter.com/ujh
 Homepage: http://www.urbanhafner.com/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Debugging Orego/KGS: Unknown message type -105

2015-02-26 Thread Erik van der Werf
I have no idea about that message, but one thing I do before every
tournament is make sure that I have the latest version of kgsGtp.

On Thu, Feb 26, 2015 at 9:45 PM, Peter Drake dr...@lclark.edu wrote:
 I've finally gotten around to trying to address the issue that Orego faced
 in the December tournament. As you may recall, every time Orego played black
 (except the first time), it immediately passed.

 Looking at the kgsGTP log, I see some strange things. At the beginning of
 every game, this happens:


 FINEST: Command queued for sending to engine: time_settings 540 30 10

 Dec 07, 2014 8:39:00 AM com.gokgs.client.gtp.a l

 WARNING: Opponent has left game. Will give them 5 minutes to return.

 Unknown message type -105 for channel org.igoweb.igoweb.client.gtp.c@cd3509c

 Dec 07, 2014 8:39:00 AM com.gokgs.client.gtp.GtpClient d

 FINEST: Got successful response to command boardsize 13: =

 Dec 07, 2014 8:39:00 AM com.gokgs.client.gtp.GtpClient d


 Does anyone know what this means? Why does the opponent leave? What is the
 unknown message type -105?


 --
 Peter Drake
 https://sites.google.com/a/lclark.edu/drake/

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Go AI

2015-02-25 Thread Erik van der Werf
Well, at least Zen is on some slides in the presentation. Steenvreter
is not mentioned at all even though on 9x9 it won the Olympiad ahead of
Mogo and CrazyStone, and beat various Dutch top players (6d & 7d).
Until 2013 CrazyStone never won a single game against Steenvreter...

Erik


On Wed, Feb 25, 2015 at 12:06 PM, Hideki Kato hideki_ka...@ybb.ne.jp wrote:
 Hm, there is no 'Zen' in the article, although Zen beat Takemiya 9p
 with 4 stones handicap in 2012, one year earlier than CrazyStone.

 Hideki

 Michael Alford: 54ed093a.5090...@aracnet.com:
Apology, forgot the link:

http://blog.fogcreek.com/go-and-artificial-intelligence-tech-talk/


On 2/24/15 3:26 PM, Michael Alford wrote:
 This link appeared in today's AGA E-journal. It mentions MoGo, Zen, and
 Crazy Stone.

 Michael

 ---

 http://en.wikipedia.org/wiki/Pale_Blue_Dot



--

http://en.wikipedia.org/wiki/Pale_Blue_Dot

 --
 Hideki Kato mailto:hideki_ka...@ybb.ne.jp
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Building a database for training CNNs

2014-12-20 Thread Erik van der Werf
Hi Álvaro,

I've done things like that, except I didn't use games by strong
computer opponents (none existed at the time), so just human amateur
games. In my experience the critical part is in learning about life &
death. Once you have that, estimating ownership is fairly easy,
asymptotically reaching near 100% prediction accuracy toward the end
of the game. See the following papers for more details:

http://erikvanderwerf.tengen.nl/pubdown/learning_to_score_extended.pdf
http://erikvanderwerf.tengen.nl/pubdown/learningLifeandDeath.pdf
http://erikvanderwerf.tengen.nl/pubdown/predicting_territory.pdf

or just read the second half of my PhD thesis :)

http://erikvanderwerf.tengen.nl/pubdown/thesis_erikvanderwerf.pdf
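
As a rough sketch of the kind of per-point target Álvaro describes below (the
array convention is an assumption, not taken from any particular codebase):

    import numpy as np

    def ownership_target(final_owner):
        """final_owner[y][x] = +1 if Black ends up scoring the point,
        -1 if White does (dame/seki could be mapped to 0). This is the
        per-point label a network can regress on."""
        t = np.asarray(final_owner, dtype=np.float32)
        assert t.shape == (19, 19) and np.all(np.abs(t) <= 1)
        return t

    def predicted_score(ownership_estimates, komi=7.5):
        """Turn per-point ownership estimates in [-1, 1] into an expected
        area-score margin for Black, usable as an evaluation function."""
        return float(np.sum(ownership_estimates)) - komi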


BTW it's great to see this list come back from the dead. Thanks Petr & Michael!

Best,
Erik


On Sat, Dec 20, 2014 at 4:43 PM, Álvaro Begué alvaro.be...@gmail.com wrote:

 Hi,

 There are things a CNN could probably do well, if only we had the right
 database to train it. I have in mind these two possibilities:
  * using a CNN as an evaluation function,
  * using a CNN to estimate ownership for each point (i.e., a number
 between -1 and 1 that is an estimate of who is going to end up scoring it).

 So we need a large set of positions labelled with a final score for the game
 and who ended up scoring each point.

 I believe the right database to use for this purpose would consist of
 positions from games played by strong computer opponents which play to
 maximize score and which play to the bitter end, passing only when the
 opponent has no dead stones left on the board.

 I would like to know if you think this would be an interesting resource to
 have, if you have any recommendations on what engine(s) to use and if you
 would be willing to collaborate in creating it. Any other comments are
 welcome too, of course.

 Cheers,
 Álvaro.




___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Building a database for training CNNs

2014-12-20 Thread Erik van der Werf
On Sat, Dec 20, 2014 at 9:35 PM, Robert Jasiek jas...@snafu.de wrote:
 On 20.12.2014 17:04, Erik van der Werf wrote:

 the critical part is in learning about life &
 death. Once you have that, estimating ownership is fairly easy

 [...] See the following papers for more details: [...]

 http://erikvanderwerf.tengen.nl/pubdown/predicting_territory.pdf


 Estimating ownership or evaluation functions to predict final scores of
 already played games are other things than estimating potential territory.
 Therefore I dislike the title of your paper. Apart from lots of simplistic
 heuristics without relation to human understanding of territorial positional
 judgement, one thing has become clear to me from your paper:

 There are two fundamentally different ways of assessing potential territory:

 1) So far mainly human go: count territory, do not equate influence as
 additional territory.

 2) So far mainly computer go: count territory, equate influence as
 additional territory.

 Human players might think as follows: The player leads by T points.
 Therefore the opponent has to use his superior influence to make T more new
 points than the player. Computers think like this: One value is simpler
 than two values, therefore I combine territory and influence in just one
 number, the predicted score.

 Both methods have their advantages and disadvantages, but it does not mean
 that computers would always have to use (2); they can as well learn to use
 (1). (1) has the advantage that counting territory (or intersections that
 are almost territory) is easy for quiet positions.

 Minor note on your paper: influence and thickness are defined now (see
 Joseki 2 - Strategy) and influence stone difference and mobility are
 related concepts if one wants simpler tools. aji has approached a
 mathematical definition a bit but still some definition work remains.


Sure, I tried lots of simple heuristics to ease the learning task for
the networks. One might hope that 'deep' networks would be able to
learn advanced concepts more easily, perhaps more on par with human
understanding, but for the near future that might still just be
wishful thinking.

At the time I didn't really care much for a fundamental distinction
between territory and influence; I just wanted to have a function to
predict the outcome of the game for every intersection as well as
possible (because it seemed useful as an evaluation function).
Intersections colored with high probability for one side tend to
coincide with what human players call territory, while mediocre
probabilities tend to coincide more with influence. I know there are
non-probabilistic ways to define the two, but I'm not sure it really
matters. Perhaps the more effective approach is to just go directly
for the probability of winning (like MC does).

Best,
Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Teaching Deep Convolutional Neural Networks toPlay Go

2014-12-20 Thread Erik van der Werf
On Sat, Dec 20, 2014 at 6:16 AM, Hiroshi Yamashita y...@bd.mbn.or.jp wrote:
 I put two commented games on

 http://webdocs.cs.ualberta.ca/~mmueller/fuego/Convolutional-Neural-Network.html

 Thank you for the report. It was fun.
 I'm also surprised CNN can play move 185 in Game 1.
 The CNN only sees liberty counts as 1, 2, or 3-or-more. Black's liberties
 changed from 4 to 3, and White's liberties were 3. It looks like the CNN
 should not be able to see this difference, but it could.

Indeed surprising, but maybe it just got lucky. With some sense of
move history this one would have been more easily explained...

Erik
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2014-12-19 Thread Erik van der Werf
On Sat, Dec 20, 2014 at 12:17 AM, Aja Huang ajahu...@google.com wrote:
 We've just submitted our paper to ICLR. We made the draft available at
 http://www.cs.toronto.edu/~cmaddis/pubs/deepgo.pdf

Hi Aja,

Wow, very impressive. In fact so impressive, it seems a bit
suspicious(*)... If this is real then one might wonder what it means
about the game. Can it really be that simple? I always thought that
even with perfect knowledge there should usually still be a fairly
broad set of equal-valued moves. Are we perhaps seeing that most of
the time players just reproduce the same patterns over and over again?

Do I understand correctly that your representation encodes the
location of the last 5 moves? If so, do you have an idea how much
extra performance that provides compared to only the last or not using
it at all?

Thanks for sharing the paper!

Best,
Erik


* I'd really like to see some test results on pro games that are newer
than any of your training data.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Erik van der Werf
Thanks for posting this Hiroshi!

Nice to see this neural network revival. It is mostly old ideas, and it is
not really surprising to me, but with modern compute power everyone can now
see that it works really well. BTW for some related work (not cited),
people might be interested to read up on the 90s work of Stoutamire,
Enderton, Schraudolph and Enzenberger.

Comparing results to old publications is a bit tricky. For example, the
things I did in 2001/2002 are reported to achieve around 25% prediction
accuracy, which at the time seemed good but is now considered unimpressive.
However, in hindsight, an important reason for that number was time
pressure and lack of compute power, which is not really related to anything
fundamental. Nowadays using nearly the same training mechanism, but with
more data and more capacity to learn (i.e., a bigger network), I also get
around 40% prediction accuracy on pro games. In case you're interested, this paper
http://arxiv.org/pdf/1108.4220.pdf by Thomas Wolf has a figure with more
recent results (the latest version of Steenvreter is still a little bit
better though).

Another problem with comparing results is the difficulty of obtaining
independent test data. I don't think that was done optimally in this case.
The problem is that, especially in amateur games, a lot of people memorize
and repeat the popular sequences. Also, if you're not careful, it is quite
easy to get duplicate games in your dataset (I've had cases where one game
was annotated in Chinese and its duplicate in English, or where the board
was simply rotated). My way around this was to always test on games from
the most recent pro tournaments, for which I was certain they could not yet
be in the training database. However, even that may not be perfect, because
pros also play popular joseki, which means there will at least be lots of
duplicate opening positions.

I'm not surprised these systems now work very well as stand-alone players
against weak opponents. Some years ago David and Thore's move predictors
managed to beat me once in a 9-stone handicap game, which indicates that
their system, too, was already stronger than GNU Go. Further, the version of
Steenvreter in my Android app at its lowest level is mostly just a move
predictor, yet it still wins well over 80% of its games.

In my experience, when the strength difference is big, and the game is
even, it is usually enough for the strong player to only play good shape
moves. The move predictors only break down in complex tactical situations
where some form of look-ahead is critical, and the typical shape-related
proverbs provide wrong answers.

Erik

On Mon, Dec 15, 2014 at 12:53 AM, Hiroshi Yamashita y...@bd.mbn.or.jp
wrote:

 Hi,

 This paper looks very cool.

 Teaching Deep Convolutional Neural Networks to Play Go
 http://arxiv.org/pdf/1412.3409v1.pdf

 Their move prediction got a 91% winrate against GNU Go and 14%
 against Fuego in 19x19.

 Regards,
 Hiroshi Yamashita

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [computer-go] Dynamic Komi's basics

2010-02-11 Thread Erik van der Werf
try: (handicap - 0.5) x 14
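
For concreteness, a minimal sketch of how such a rule of thumb could be
wired into an engine (the function and argument names are purely
illustrative, not taken from any particular program):

def initial_dynamic_komi(handicap, value_of_move=14.0):
    # Rule of thumb from above: roughly (handicap - 0.5) moves' worth of
    # compensation, with one early move valued at about 14 points.
    if handicap < 2:
        return 0.0
    return (handicap - 0.5) * value_of_move

# Example: for a 9-stone handicap game this suggests a compensation of
# (9 - 0.5) * 14 = 119 points.
print(initial_dynamic_komi(9))   # 119.0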

Erik



2010/2/11 Le Hir Matthieu mate...@hotmail.fr

Hi,


 I have a few questions concerning dynamic komi, I am not a programmer
 though and will try my best to be understandable.

 First, I'm wondering how komi is determined when a dynamic system is used:

 * According to this page: http://senseis.xmp.net/?Komi%2FvalueOfFirstMove the
 value of komi at the first move is half the points the first black move is
 supposed to be worth (14 points), i.e. 7.5 komi in even games.
 How does this apply to high handicap games?

 From what I think I understood, dynamic komi is supposed to try to keep the
 game more even.
 If the computer is black, playing at 9 handicap, will the "burden" komi
 (negative) be 9 x -7.5, i.e. -67.5? It sounds highly improbable.

 On the other hand, 9 handicap stones supposedly give an advantage of 90 to
 120 points, so my natural thought would be that the bot would give itself at
 least a negative komi of that many points?

 I can't figure out well how komi is determined at first move.

 In relation to the previous question, I'm wondering how komi is
 determined and what its value is for every handicap game (as black and as
 white). Is there a specific value for each (before the first move is played)
 or is it only determined by the way it was programmed and the programmer's
 preferences?

 I am going to play a series of high handicap games (as white) against
 pachIV on KGS, which explains why I'm curious to know precisely how this
 komi stuff works.

 Thanks for helping,

 Matthieu, aka CGBSpender.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Dynamic Komi's basics

2010-02-11 Thread Erik van der Werf
2010/2/11 Jean-loup Gailly jl...@gailly.net

 A move early in the game is worth about 14 points, not 7.5.


While this may be true for professional-level play, the value of the first
move for balancing Monte-Carlo playouts towards a 50% win rate should be
expected to be lower.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] scalability analysis with pachi

2010-01-17 Thread Erik van der Werf
Oh, ok. I was a bit surprised. Last time I checked my program scaled
quite nicely against GnuGo, at least for low numbers of simulations up
to about 97% winning rate. I suppose there could be some kind of
plateau when nearing 100% due to some missing knowledge/skills that
only GnuGo has.

Erik


2010/1/16  dhillism...@netscape.net:
 Well, I thought "there seems to be a picture emerging" was sufficiently
 hedged that it would be construed as a conjecture, not a conclusion. :)
 I am thinking, in particular, of the scalability studies with Zen that
 Hideki reported to this list in Oct. 2009.
 BTW, recently I've measured the strength (win rate) vs. time-per-move
 curves with Zen vs GNU Go and Zen vs Zen (self-play) on the 19x19 board.
 Without an opening book, it saturates between +400 and +500 Elo against
 GNU Go but doesn't saturate up to +800 Elo in self-play.  That's somewhat
 interesting (details will be published soon at GPW-2009).
 Hideki
  There was a bit more information provided in a sequence of posts to this
 list during that month. I wonder if the paper is out now.
 - Dave Hillis

 -Original Message-
 From: Erik van der Werf erikvanderw...@gmail.com
 To: computer-go computer-go@computer-go.org
 Sent: Sat, Jan 16, 2010 12:55 pm
 Subject: Re: [computer-go] scalability analysis with pachi

 2010/1/15 dhillism...@netscape.net

 Thank you for posting these interesting results. There seems to be
 a picture emerging that MCTS engines scale very well in self-play, and
 apparently against other MCTS engines, but not so well against the non-MCTS
 version of Gnugo.
 - Dave Hillis


 Do you have any data to back that conclusion?

 Erik


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] scalability analysis with pachi

2010-01-16 Thread Erik van der Werf
2010/1/15 dhillism...@netscape.net

 Thank you for posting these interesting results. There seems to be a picture
 emerging that MCTS engines scale very well in self-play, and apparently
 against other MCTS engines, but not so well against the non-MCTS version of
 Gnugo.

 - Dave Hillis



Do you have any data to back that conclusion?

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-29 Thread Erik van der Werf
2009/10/26 Don Dailey dailey@gmail.com:
 ... On the one hand we hear that MCTS has reached a dead end and there is no
 benefit from extra CPU power...

Just curious, who actually claimed that and what was it based on?

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Neural networks

2009-10-14 Thread Erik van der Werf
In my opinion NeuroGo was quite successful with neural networks.
Magog's main strength came from neural networks. Steenvreter uses
'neural networks' to set priors in the Monte Carlo tree.

Erik


On Wed, Oct 14, 2009 at 2:26 PM, Petr Baudis pa...@ucw.cz wrote:
  Hi!

  Is there some high-level reason hypothesised about why there are
 no successful programs using neural networks in Go?

  I'd also like to ask if someone has a research tip for some
 interesting Go sub-problem that could make for a nice beginner neural
 networks project.

  Thanks,

 --
                                Petr Pasky Baudis
 A lot of people have my books on their bookshelves.
 That's the problem, they need to read them. -- Don Knuth

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: stv is Steenvreter

2009-06-01 Thread Erik van der Werf
On Mon, Jun 1, 2009 at 7:59 AM, Ingo Althöfer 3-hirn-ver...@gmx.de wrote:
 Nick Wedd explained:
 stv is Steenvreter.  Its creator is indeed Erik van der Werf,
 whose KGS account is evdw.  Its name is Dutch for stone eater...

 Congratulations to Erik van der Warf for the Win!

Thanks!

 By the way, Steenvreter is such a nice name. You should
 call your baby by full name on KGS.

 Ingo.

 PS: van der Warf is also nice, but I understand when you want to keep
 that short.

When I registered the KGS account for Steenvreter the name was too
long, so I had to shorten it :-(
I've updated stv's profile to show Steenvreter under 'Real Name'.

BTW My last name is Werf (not Warf, and even if you wanted to
translate that it would become Wharf).

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Congratulations to Steenvreter!

2009-06-01 Thread Erik van der Werf
On Mon, Jun 1, 2009 at 10:39 PM, Nick Wedd n...@maproom.co.uk wrote:
 Congratulations to Steenvreter, winner of yesterday's KGS bot tournament,
 with three more wins than its nearest rival!

 The results are now at http://www.weddslist.com/kgs/past/47/index.html

Thanks!


 As usual, I look forward to your reports of the errors on that page.

Regarding the game between Fuego and Steenvreter in round 7: as far as
I can see, White 42 (marked in your diagram) was a blunder by Fuego (it
has to capture at d1 to win the capturing race). Unfortunately
Steenvreter did not understand the position either (it should have
taken a liberty at e4 instead of capturing the marked stone).


 I will also welcome opinions and preferences about the format of such events
 in future.  Attendances got low towards the end of last year, so I gave them
 up for a few months.  The last two, in April and May, have each had six
 players, which I consider just about enough to make them worth running.  But
 I would prefer more, and would like to know what I might do to attract more
 entrants.

I like events with many (fast) rounds such as the one yesterday.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 7x7 komi

2009-05-23 Thread Erik van der Werf
2009/5/22 Andrés Domínguez andres...@gmail.com:
 2009/5/22 Robert Jasiek jas...@snafu.de:
 Don Dailey wrote:
 Is the 5x5 claim the one you are skeptical about?

 IIRC, I am sceptical about both 5x5 (esp. first move not at tengen) and 6x6.

 AFAIK the claimed solution has tengen as the first move. Maybe you are
 remembering some interesting lines that start with (3,2) and (2,2):

 Subject: computer-go: 5x5 Go is solved
 Date: Sun, 20 Oct 2002 15:27:04 -0100
 From: Erik van der Werf
 To: COMPUTER GO MAILING LIST

 Yesterday my program solved 5x5 Go starting with the first move in the
 centre. As was expected it is a win for the first player with 25 points
 (the whole board belongs to black).

My PhD thesis also reports the values for other opening moves: (3,2):
B+3, (2,2): W+1, (edge): W+25.

For more info see:
http://erikvanderwerf.tengen.nl/pubdown/thesis_erikvanderwerf.pdf

If anyone believes he can refute these results I'd be interested to
hear about it (especially if you are high dan level). In my experience
it is often quite interesting to see where strong humans go astray.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Transpositions in Monte-carlo tree search

2009-04-01 Thread Erik van der Werf
On Wed, Apr 1, 2009 at 9:03 PM, Matthew Woodcraft
matt...@woodcraft.me.uk wrote:
 Erik van der Werf wrote:
  Jonas Kahn wrote:
 No there is no danger. That's the whole point of weighting with N_{s,a}.

 N_{s,a} = number of times the node s has been visited, starting with parent
 a.

 You can write Value of a node a = (\sum_{s \in sons} N_{s,a} V_s) / (\sum
 N_{s,a})

 where V_s is ideally the «true» value of node s.
 In UCT2, they use V_s = Q_{s,a} the win average of simulations going
 through a, and then through s.
 In UCT3, they use V_s = Q_s the win average of all simulations through
 s.

 There is a danger. The problem is that the selection policy also
 implements the soft-max-like behavior that ensures convergence to the
 minimax result. If you back up to all possible parents, including
 those for which the child would have been an inferior choice, you may
 get into trouble.

 That's what I was worried about.

 But I think it's ok the way Jonas describes above: you don't add
 anything to the false-parent node's simulation count, and you don't
 change the weight of the false-child in its value; you just change the
 evaluation of the false-child.

 (This means that the effect of backing up to alternate parents will be
 smaller than the effect of backing up to the 'true' parent, which is
 presumably part of the reason why this variant is less attractive.)


Ok, but I would not call that a back up; nothing goes up to the
alternative parents. Unless I missed something, with this you only
make adjustments to the statistics representing transposed occurrences
of the same position.  I don't see that this is how we should
interpret UCT3.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] static evaluators for tree search

2009-02-17 Thread Erik van der Werf
On Tue, Feb 17, 2009 at 8:23 PM, George Dahl george.d...@gmail.com wrote:
 It is very hard for me to figure out how good a given evaluator is (if
 anyone has suggestions for this please let me know) without seeing it
 incorporated into a bot and looking at the bot's performance.  There
 is a complicated trade-off between the accuracy of the evaluator and
 how fast it is.  We plan on looking at how well our evaluators
 predict the winner or territory outcome or something for pro games,
 but in the end, what does that really tell us?  There is no way we are
 going to ever be able to make a fast evaluator using our methods that
 perfectly predicts these things.

You are optimizing two things: quality and speed. One can be exchanged
for the other, so together they span a frontier of solutions.
Until you fix one, e.g., by setting a time constraint, you can look
at the entire frontier of (Pareto-optimal) solutions. Any evaluator
that falls behind the frontier is bad.
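
A minimal sketch of that frontier idea, assuming each candidate evaluator
is summarized by a (speed, accuracy) pair where higher is better for both
(the names and numbers below are invented for illustration):

def pareto_frontier(evaluators):
    # 'evaluators' is a list of (name, speed, accuracy), higher = better.
    # An evaluator is dominated if some other one is at least as good on
    # both criteria and strictly better on at least one of them.
    frontier = []
    for name, speed, acc in evaluators:
        dominated = any(
            s >= speed and a >= acc and (s > speed or a > acc)
            for _, s, a in evaluators
        )
        if not dominated:
            frontier.append((name, speed, acc))
    return frontier

candidates = [('tiny-net', 9000.0, 0.31), ('mid-net', 1200.0, 0.38),
              ('big-net', 150.0, 0.41), ('slow-and-weak', 100.0, 0.30)]
print(pareto_frontier(candidates))   # 'slow-and-weak' falls behind the frontier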

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Rules for remote play at the Computer Olympiad

2009-02-02 Thread Erik van der Werf
On Mon, Feb 2, 2009 at 11:25 AM, Nick Wedd n...@maproom.co.uk wrote:
 1.)  A neural net cannot explain its thinking process because it does not
 have any.

I have used artificial neural nets a lot in my go programs; it is
trivial to display predictions, but understanding them is of course
not always easy. Still, I probably would not have a hard time
explaining to the Tournament Director how it arrives at those predictions. I
do not agree with your statement that a neural net has no thinking
process.


 2.)  It would still be too easy to cheat.  The cheater could run a program
 which looks at the position and generates a plausible display of its
 thinking process, while a professional player thinks and then tells it
 where to play.  Then the program generates more display of thinking
 process tending to support the recommended move, before playing it.

True, but at least it requires some programming effort. I don't
believe we can rule out all possible forms of cheating (this can even
be done when playing locally using a simple wireless link) but we can
at least try to make it a bit of a challenge. BTW, when there is a
clear suspicion the author can already be forced to show his code to
the TD or some trusted independent party.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Rules for remote play at the Computer Olympiad

2009-02-01 Thread Erik van der Werf
On Sun, Feb 1, 2009 at 2:54 PM, Rémi Coulom remi.cou...@univ-lille3.fr wrote:
 Erik van der Werf wrote:

 Hi Remi,

 There is a simpler solution: do not allow remote play at all.


 I would be in favor of this solution. But this has no chance of reaching
 unanimity. Even with a strong majority in favor of that rule, Jaap would
 probably not accept it anyway.

Well, we could at least try to convince him.

With a strong majority in favor and a list of all the things that went
wrong in China we at least have a good case.



 As for stricter time controls; in principle I'm in favor. However, if
 you really want to enforce it we should have a real clock on the table
 like they have in the WCCC games. This would of course constitute a
 significant change from the usual relaxed (sloppy?) playing
 conditions...


 I believe we can still trust participants to count time correctly. Having to
 use a real clock is too annoying.

The problem is that the time info may simply be inaccessible when the
connection breaks.


 The best solution regarding time control is probably what is done in the UEC
 Cup and EGC: connect programs to a server and let the server do time
 control.

That is indeed a nice solution. What software was used for the UEC
cup? How did they deal with programs that could not connect to the
server; did some play manually?


 For a 3-round playoff I would propose that the third game uses komi
 bidding (one operator is given the right to choose the komi, and the
 other then chooses whether to play Black or White). An alternative is
 to play 4 rounds and use board-points as a tie-breaker.


 I am strongly against board-points as a tie-breaker. Most MC programs only
 optimize probability of winning.

I don't like it much either; any tie breaker is bad in some sense, but
I still prefer board-points over a coin-toss.


 In any case, I think the 9x9-komi should go back to 6.5.


 I think it was moved to 7.5 to allow automated play on KGS. I believe
 allowing automated play on KGS with a strange komi is better than having no
 KGS play and a normal komi.

No, I originally proposed it because the official Chinese rules had
switched to 7.5 komi. However, this was for 19x19 games.

Anyway, I don't think the KGS restrictions are a good argument.
Ideally we could persuade wms to allow a free komi setting under
KGS Chinese rules, but otherwise it is still easy enough to let your
program ignore the GTP komi setting from KGS.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Rules for remote play at the Computer Olympiad

2009-02-01 Thread Erik van der Werf
Hi Remi,

There is a simpler solution: do not allow remote play at all.


Something else for the discussion: I would like to have a rule that makes
displaying the program's thinking process mandatory, so that both
operators have an idea of what is happening. Especially for remote
play I think this is needed, because right now it is just too trivial to
cheat.

As for stricter time controls; in principle I'm in favor. However, if
you really want to enforce it we should have a real clock on the table
like they have in the WCCC games. This would of course constitute a
significant change from the usual relaxed (sloppy?) playing
conditions...

For a 3-round playoff I would propose that the third game uses komi
bidding (one operator is given the right to choose the komi, and the
other then chooses whether to play Black or White). An alternative is
to play 4 rounds and use board-points as a tie-breaker.

In any case, I think the 9x9-komi should go back to 6.5.

Erik


On Sun, Feb 1, 2009 at 11:18 AM, Rémi Coulom remi.cou...@univ-lille3.fr wrote:
 Hi,

 During the Computer Olympiad in Beijing, some remote participants had
 problem connecting to their remote machines, which created many unpleasant
 incidents. In order to avoid these problems in the next Olympiad, I believe
 we need better rules for remote play. Here is what I suggest:

 - The start of a round must not be delayed until remote participants connect
 to their remote machine. In case of any technical problem with the
 connection, remote participants must either play locally or forfeit the
 game. If they take a lot of time to connect, that time must be subtracted
 from their thinking time.

 - If, for any reason, we do not have time to play all the scheduled rounds,
 playing less rounds is better than delaying the last round to a date when
 some participants have to forfeit their game because they cannot attend.

 - It is less important, but I would also like to suggest that a 7-round
 playoff is much too long. 3 games are enough for 9x9. And the 9x9 playoff
 must be scheduled right at the end of the 9x9 tournament, so that
 participants in the 9x9 tournament do not have to wait for the end of the
 19x19 tournament.

 These rules would avoid most of the incidents of the previous Olympiad. We
 could propose them to the tournament director if everybody agrees.

 Rémi

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Rules for remote play at the Computer Olympiad

2009-02-01 Thread Erik van der Werf
On Sun, Feb 1, 2009 at 3:03 PM, Mark Boon tesujisoftw...@gmail.com wrote:
 On Feb 1, 2009, at 11:29 AM, Erik van der Werf wrote:
 Something else for the discussion. I would like to have a rule about
 mandatory displaying the thinking process of the program so that both
 operators have an idea of what is happening. Especially for remote
 play I think this is needed because now it is just too trivial to
 cheat.

 Do you want this just for 'remote' programs, or any program?

Preferably any, but I'm naturally more suspicious of programs that
play remotely :-)

Currently the rule is that logs must be made available to the TD on
request when there is a suspicion. However, it is hard to be precise
when no information is displayed during the game.


 What if the 'thinking process' is nothing intelligible for anyone else? Do
 we want to restrict programs made according to certain specifications which
 include that the thinking process is understandable?

Well, most programs can in principle display the move they are
currently considering best, and usually also a principal variation,
winning probability, etc.

When a program is radically different from anything else, cannot show
any intermediate results, and a conflict arises, then the author will
probably have to try to convince the TD, for example by showing the
source code.


 I don't know what the situation currently is in computer-Go, but I don't
 think the stakes are high enough to go to the trouble of cheating through
 a remote program (it's quite a lot of work). I have been accused of cheating
 once, but it was a rare thing to happen.

With programs playing on KGS cheating is easy.

Also, I think the stakes are increasing because we are now getting into
the low amateur dan levels.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: WMSG - Scoring

2008-12-06 Thread Erik van der Werf
  When White is the first player to pass, then komi is changed
  from 6.5 to 7.5.

On Sat, Dec 6, 2008 at 11:02 PM, David Fotland [EMAIL PROTECTED] wrote:
 It should make almost no difference, since on odd sized boards with area
 counting the game result will be the same unless there is a seki with an odd
 number of shared liberties.  This kind of seki is rare.  I'd guess less than
 one in a hundred games ends with such a seki on the board.  AGA rules also
 have the effect of changing the komi depending on which side makes the last
 pass.

Is that right?

Area scoring & no neutral intersections & odd board surface -> odd
komi for jigo (typically 5, 7 or 9).

So, what you're saying applies for 5.5 against 6.5, or 7.5 against 8.5.
6.5 versus 7.5 should frequently make a difference.


It seems this rule makes the game slightly more challenging by
increasing the granularity in frequent scores to the level normally
observed only under territory scoring.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] log base e vs. log base 10

2008-12-01 Thread Erik van der Werf
When unspecified always assume the natural logarithm.

For UCT this does not really matter; it just amounts to a different tuning constant.

log10(x) == ln(x) / ln(10)
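
A tiny sketch of why the base only rescales the exploration constant in a
UCT-style formula (the constant and function name here are illustrative):

import math

def uct_score(wins, visits, parent_visits, c=1.4, log=math.log):
    # Standard UCT term. Swapping math.log for math.log10 is the same as
    # dividing the effective exploration constant by sqrt(ln(10)) ~ 1.52.
    return wins / visits + c * math.sqrt(log(parent_visits) / visits)

a = uct_score(30, 100, 1000, c=1.4, log=math.log)
b = uct_score(30, 100, 1000, c=1.4 * math.sqrt(math.log(10)), log=math.log10)
print(abs(a - b) < 1e-9)   # True: same ordering, just a rescaled constant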

Erik


On Mon, Dec 1, 2008 at 3:22 PM, Mark Boon [EMAIL PROTECTED] wrote:
 Just now I realized that I'm using the standard Java Math.log() function in
 places where it computes the log(visits). In Java, this log() function is
 actually the logarithm of base e, which I suppose is normally actually
 written as ln(). When I read articles about UCT and it says log(), does that
 actually mean log base e, or log base 10?

 I figured it probably won't make an awful lot of difference. But there
 should be some difference. Just to make sure I replaced Math.log() by
 Math.log10(). Now I'm seeing a slight degradation of play, so I suppose that
 should answer the question. That doesn't surprise me an awful lot; somehow,
 intuitively, it seems to make more sense to use log base e. But maybe
 adjusting the exploration-factor a little would bring them closer still. I
 just wanted to make sure...

 Another thing I tried was replacing log(virtual-parent-visits) by
 log(parent-visits) in the RAVE calculation. I see no effect on the level of
 play, so apparently it's a wash. But using the latter saves a little memory
 and / or (depending on your implementation) a little performance since the
 log() function is expensive.

 Mark




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Monte-Carlo and Japanese rules

2008-11-06 Thread Erik van der Werf
IIRC under official Japanese rules at the end of the game all groups
with liberties shared between opposing colours are by definition in
seki. Therefore eventually (before counting) all dame have to be
filled.

Further, playing dame points is almost equally bad under Chinese rules
as it is under Japanese rules. So, if you have a strong 'Chinese'
engine no special tricks are needed at least until you reach the
endgame. The only thing that is severely penalized under Japanese
rules is playing needless defensive moves inside your own territory
while the opponent is passing.

Erik



On Thu, Nov 6, 2008 at 4:44 PM, Jason House [EMAIL PROTECTED] wrote:
 I think simplistic handling of Japanese rules should play dame points that
 connect chains. This avoids some problems that can arise where ownership
 probability drops after the opponent plays the dame, and a point of
 territory must get filled.

 Even if not technically required, I can imagine bots acting like beginners
 and get nervous over imagined vulnerabilites.

 Sent from my iPhone

 On Nov 6, 2008, at 9:12 AM, Don Dailey [EMAIL PROTECTED] wrote:

 On Thu, 2008-11-06 at 09:19 +0100, Ingo Althöfer wrote:

 Hello all, two questions.

 (i) Do there exist strong 9x9-go programs on Monte-Carlo base
 for Japanese rules?

 (ii) Having available only programs for Chinese rules, but playing
 in a tournament with Japanese rules, which special tricks and
 settings should be used to maximise winning chances? (This is meant
 especially in the light of MC's tendency to win games by 0.5
 points according to the rules implemented.)

 I've thought about those questions myself from time to time.  Let me
  think out loud concerning this.   I am by no means an expert in
  Japanese scoring or even Go in general, so I'm just giving some thoughts
 here and a plan for building a Japanese simple bot that you can be
 free to criticize:

 It seems to me the primary difference between the two is knowing when to
 stop playing and of course scoring dead groups.   The Chinese style bots
 do not technically need to know about scoring.

 You can look at the combined statistics at the end of the games for a
 given point to get a sense of whether that point is still in play or
  whether it's a foregone conclusion.  You can do the same to determine
 dead groups.   I don't know how well that works in all cases, but I have
 used it and it works pretty well.

 But we also want to recognize dame,  and not play to dame points early
 in the game even if it doesn't affect the final Chinese outcome.   So
 here is my idea:

  1. If ALL the stones of a particular group belong to the opponent with
 high certainty,  they are dead.

  2. If there are open spaces that belong to you or the opponent with
 high certainty don't move to them.

  3. If an uncertain point is touching stones of both colors and both
 colors have high certainty for the color they belong to, it is probably
 dame and you shouldn't move to them.

   example:   White has a stone on d4 that is clearly alive.
  Black has a stone on f4 that is clearly alive.
  An empty point on e4 is highly uncertain.
  Do not play to e4 - it is probably dame.

  question:  Is that a reasonably good rule or does it need some work?


  4. If you have no moves other than these cases, you should pass.

 You can test this idea by playing a bot on KGS under Japanese rules.
 You may have to tweak what you consider your uncertainty margin.   Also,
 I'm not considering seki here but we would want to find a way to cope
 with that.

 - Don



 Ingo.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Transpostions in UCT search

2008-10-27 Thread Erik van der Werf
When a child has been sampled often through some other path a naive
implementation may initially explore other less frequently visited
children first. The new path leading to the transposition may
therefore suffer from some initial bias. Using state-action values
appears to solve the problem.

Erik


On Mon, Oct 27, 2008 at 2:21 PM, Mark Boon [EMAIL PROTECTED] wrote:
 A while ago I implemented what I thought was a fairly straightforward way to
 deal with transpositions. But to my surprise it made the program weaker
 instead of stronger. Since I couldn't figure out immediately what was wrong
 with it, I decided to leave it alone for the time being.

 Just now I decided to do a search on transpostions and UCT in this mailing
 list and it seems to have been discussed several times in the past. But from
 what I found it's not entirely clear to me what was the conclusion of those
 discussions.

 Let me first describe what I did (or attempted to do): all nodes are stored
 in a hash-table using a checksum. Whenever I create a new node in the tree I
 add it in the hash-table as well. If two nodes have the same checksum, they
 are stored at the same slot in the hashtable in a small list.

 When I add a node to a slot that already contains something, then I use the
 playout statistics of the node(s) already there and propagate that up the
 tree. When I have done a playout I propagate the result of the single
 playout up the tree, but at each step I check in the hashtable to see if
 there are multiple paths to update.

 I've seen some posts that expressed concerns about using the existing
 statistics of another path. This may be the reason I'm not seeing any
 improvement. So I was wondering if there's any consensus about how to deal
 with transpositions in a UCT tree. Or if someone could point me to other
 sources of information on the subject.

 Mark


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Transpostions in UCT search

2008-10-27 Thread Erik van der Werf
Reinforcement Learning terminology :-)

In Go the state is the board situation (stones, player to move, ko
info, etc.), the action is simply the move. Together they form
state-action pairs.

A standard transposition table typically only has state values; action
values can then be inferred from a one-ply search. In a full graph
representation the state-action values are the values of the edges.
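
A minimal sketch of the distinction, keeping the statistics on
(state, action) edges keyed by a hash of the position (the class and
field names are made up for this example, not from Steenvreter):

import math
from collections import defaultdict

class EdgeTable:
    # Statistics live on (state, move) edges, so a transposed position
    # reached through a new path does not start out biased by visit
    # counts accumulated for moves that were only tried elsewhere.
    def __init__(self):
        self.visits = defaultdict(int)    # (state_hash, move) -> visits
        self.wins = defaultdict(float)    # (state_hash, move) -> wins

    def select(self, state_hash, legal_moves, c=1.4):
        total = sum(self.visits[(state_hash, m)] for m in legal_moves) or 1
        def score(m):
            n = self.visits[(state_hash, m)]
            if n == 0:
                return float('inf')        # try unvisited moves first
            return (self.wins[(state_hash, m)] / n
                    + c * math.sqrt(math.log(total) / n))
        return max(legal_moves, key=score)

    def update(self, state_hash, move, outcome):
        self.visits[(state_hash, move)] += 1
        self.wins[(state_hash, move)] += outcome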

Erik


On Mon, Oct 27, 2008 at 4:03 PM, Mark Boon [EMAIL PROTECTED] wrote:

 On 27-okt-08, at 12:45, Erik van der Werf wrote:

 Using state-action values

 appears to solve the problem.

 What are 'state-action values'?
 Mark


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Ending games by two passes

2008-10-24 Thread Erik van der Werf
Hi Dave,

This is a well-known problem with overly simplified rulesets.
TT-advocates don't care about the rare anomalies.

Did you notice that under positional superko you cannot take back the
ko after *any* number of consecutive passes? This is yet another
reason why in some cases filling an eye or playing in sure territory
may be the best move...

In your engine you don't want to use 3 passes unless absolutely
necessary because of horizon effects. In my experience it is best to
use 3 passes only if there is exactly one basic ko, and in all other
cases use 2 passes to end the game.
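
A minimal sketch of that termination rule (the position representation
and the ko-counting helper are placeholders, not from any real engine):

def count_basic_kos(position):
    # Placeholder: a real engine would detect single-point ko shapes here.
    return position.get('basic_kos', 0)

def passes_needed_to_end(position):
    # Require three consecutive passes only when there is exactly one
    # basic ko on the board; otherwise two passes end the game.
    return 3 if count_basic_kos(position) == 1 else 2

def game_over(position, consecutive_passes):
    return consecutive_passes >= passes_needed_to_end(position)

print(game_over({'basic_kos': 1}, 2))   # False: a third pass is required
print(game_over({'basic_kos': 0}, 2))   # True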

Erik


On Fri, Oct 24, 2008 at 10:00 AM,  [EMAIL PROTECTED] wrote:
 Is it correct to end games by 2 consecutive passes?

 When I learned go 20 years ago I was taught that 3 consecutive passes are
 required to end a game of go.
 In practice 2 passes are sufficient in nearly all cases, but sometimes 2
 passes is not enough.
 Suppose we have this position in a 5x5 game with area scoring and 2.5 komi:

 (0 = white, # = black)

   ABCDE
 5 00###
 4 00#+#
 3 +0###
 2 00##+
 1 0#+##

 Black has just played C4.

 The controller is very simple. It only prohibits simple ko (superko is not
 checked) and all stones left on the board when the game ends are considered
 alive.
 White now plays at C1. Black has no choice but to pass, and then white quickly
 passes too. What happens now?

 If 2 passes end the game, the controller will award a win to white by the
 komi.
 If 3 passes are required to end the game, black captures at B1, white has no
 choice but to pass, then black captures at A3 and will (probably) win the game.

 One could argue that controllers are smarter than the controller in my
 example, so 2 passes are usually sufficient in practice, because the
 controller will query the engines for dead stones.
 But in my example, wouldn't both engines be justified in declaring the white
 stones alive because of the 2-pass rule?

 Also, if I am correct, (light) playouts are usually controlled by an
 internal controller that is very similar to the controller in my example.
 Wouldn't they be vulnerable to this type of situation?

 Why not avoid this issue simply by requiring 3 consecutive passes to end the
 game?

 Am I missing something here?

 Dave









___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] komi study with CGOS data

2008-10-13 Thread Erik van der Werf
Don, thanks for providing these statistics!

Overall it suggests that on CGOS White only has a small advantage. I
still don't like this, but it is not nearly as bad as I initially
suspected.

The initially decreasing percentages are somewhat puzzling. One might
speculate that up to a certain level Black's strategy to dominate the
center is easier, and White needs significant resources to learn how
to build two sufficiently large living groups.

My guess is that for 5-minute CGOS games the average level of play is
still weak enough to keep White's winning percentage relatively close to 50%.
I'm not sure how many programs actually play at peak strength on CGOS,
but in the future we may expect to see an increasing group suffering
from the large komi.

Maybe it would be interesting to compare recent games to older games
to see if there is a trend in the top group?

Erik



On Thu, Oct 9, 2008 at 6:08 AM, Don Dailey [EMAIL PROTECTED] wrote:
 Ok, I'm doing the komi study.   I hope this data formats properly on
 your email clients.

 I am not including the first day or two of games because I remember
 that I started out with 6.5 komi but I think that only lasted a few
 hours.

 I'm including ALL games unless they ended with an illegal move.

 I'm using bayeselo ratings.  So each bot has only one rating over its
 entire lifetime of games.

 I do not include games where either player played less than 20 games.
 20 games does not give a very accurate rating, but I had to draw the
 line somewhere.  However it's probably within 100 ELO of being
 correct.

 I require opponents to be within 100 ELO of each other.  I ran this
 many times using different minimum ELO values.

 The data seems to indicate that white's winning percentage at 7.5 komi
 DECREASES with the strength of the players in general.

 HOWEVER, the 2500 ELO entry is intriguing.  It shows a sudden jump
 with a sample of 3400 games.  Does anyone have an explanation for
 that?  Is this just sample error?  Or are the programs finally strong
 enough to start seeing that white wins at 7.5 komi?

 Another interesting fact is that whites win percentage drops below 50%
 with some of these entries (stronger players.)


   DIFF  MIN ELO    WHITE    TOTAL  PERCENT
  -----  -------  -------  -------  -------
    100        0    74513   141579   52.630
    100      100    74513   141579   52.630
    100      200    74511   141577   52.629
    100      300    74191   141054   52.598
    100      400    73723   140308   52.544
    100      500    73524   140009   52.514
    100      600    73427   139874   52.495
    100      700    72763   138921   52.377
    100      800    72335   138212   52.336
    100      900    71227   136490   52.185
    100     1000    71192   136432   52.181
    100     1100    71084   136231   52.179
    100     1200    68862   132356   52.028
    100     1300    67765   130428   51.956
    100     1400    66562   128193   51.923
    100     1500    64143   123672   51.865
    100     1600    56458   108767   51.907
    100     1700    53943   103828   51.954
    100     1800    40790    78999   51.634
    100     1900    18403    36247   50.771
    100     2000    16859    33340   50.567
    100     2100    13793    27399   50.341
    100     2200     8986    18072   49.723
    100     2300     8555    17266   49.548
    100     2400     7686    15569   49.367
    100     2500     1801     3400   52.971
    100     2600       12       28   42.857

 When I use a window of 200 ELO the data looks very similar.

 Here is the data when I require the difference to be 50 ELO or less:

   DIFF  MIN ELO    WHITE    TOTAL  PERCENT
  -----  -------  -------  -------  -------
     50        0    46184    87556   52.748
     50      100    46184    87556   52.748
     50      200    46183    87555   52.747
     50      300    46031    87326   52.712
     50      400    45890    87100   52.687
     50      500    45815    86996   52.663
     50      600    45813    86993   52.663
     50      700    45376    86377   52.533
     50      800    45108    85932   52.493
     50      900    44050    84302   52.253
     50     1000    44032    84277   52.247
     50     1100    43969    84163   52.243
     50     1200    42840    82193   52.121
     50     1300    42361    81354   52.070
     50     1400    41503    79735   52.051
     50     1500    41003    78853   51.999
     50     1600    37959    72933   52.046
     50     1700    36280    69569   52.150
     50     1800    28478    54845   51.925
     50     1900     8535    16668   51.206
     50     2000     7896    15433   51.163
     50     2100     6823    13349   51.112
     50     2200     4145     8120   51.047
     50     2300     3917     7704   50.844
     50     2400     3627     7166   50.614
     50     2500     1497     2871   52.142
     50     2600       12       28   42.857




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] (early) termination of random playouts?

2008-10-09 Thread Erik van der Werf
Sure, some long cycles have multi-stone captures.

Erik


On Thu, Oct 9, 2008 at 4:39 PM, Don Dailey [EMAIL PROTECTED] wrote:
 You might be right.   I have a liberal game length limit on my play-outs
 so I didn't notice this.

 Another game limiting rule could be something based on counting the
 number of consecutive 1 stone captures and terminating once this goes
 beyond some reasonable limit such as 10.Would infinite games still
 be possible with this rule?

 - Don



 On Thu, 2008-10-09 at 10:25 -0400, Jason House wrote:
 You are incorrect that the following heuristics in random games lead
 to finite game length:
 * no eye filling
 * no suicide
 * no simple ko violations

 Consider two eyeless chains with 3 ko's connecting them... Two taken
 by black and it's white's move. Filling the one ko it has is suicide.
 It must take a ko. On black's turn, it can't fill a ko due to suicide
 and must take a ko. The cycle repeats infinitely.

 Yes, this is a real bug found in a real CGOS game... I opted for the
 game length limit so my bot could remain online 24/7

 Sent from my iPhone

 On Oct 9, 2008, at 9:15 AM, Don Dailey [EMAIL PROTECTED] wrote:

  Claus,
 
  I think you have summarized things better than I am going to,  but
  here
  goes anyway:
 
   If the play-outs are uniformly random and you have the eye rule, it is
   guaranteed to terminate as long as you use simple ko.  It might even be
   guaranteed to terminate if you don't, I don't know.  Although it's
   guaranteed to terminate, it could be arbitrarily long (millions or
   billions of moves) but that is not a practical consideration.   The odds
   of it not terminating within a couple of hundred moves at 9x9 are
   probably extremely low as long as you have the eye rule and simple ko
   and do not allow suicide.
 
  If it's not uniformly random, all bets are off.
 
  If you want to be paranoid, you can limit the number of moves but it's
  not necessary.   Something like number of empty points times some
  constant such as 2 or 3 might be good here.
 
  There are several eye rules used, but I believe the vast majority of
  us
  use the same one.   It is stated like this:
 
1. An empty point surrounded by friendly stones.
2. If edge,  no diagonal enemies allowed.
3. If NOT edge, only 1 diagonal enemy allowed.
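
  A minimal sketch of that eye definition (the board is represented as a
  dict from (x, y) to 'B' or 'W', missing keys meaning empty; the names
  are illustrative only):

  def is_eye_for(board, point, color, size=9):
      x, y = point
      if point in board:
          return False
      nbrs = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
      nbrs = [p for p in nbrs if 0 <= p[0] < size and 0 <= p[1] < size]
      # 1. All orthogonal neighbours must be friendly stones.
      if any(board.get(p) != color for p in nbrs):
          return False
      diags = [(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)]
      diags = [p for p in diags if 0 <= p[0] < size and 0 <= p[1] < size]
      enemies = sum(1 for p in diags if board.get(p) not in (None, color))
      # 2. On the edge or in a corner: no diagonal enemy allowed.
      # 3. Elsewhere: at most one diagonal enemy allowed.
      return enemies == 0 if len(diags) < 4 else enemies <= 1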
 
  Some programs do these things in addition to simple ko, no suicide
  allowed and eye rule:
 
   1.  Limit the number of moves. (not needed)
   2.  Stop game if one side has many more stones than the other.
   3.  Detect superko  (a waste of time)
   4.  Let pass move be one of the random moves tried. (bad)
 
 
  You asked if people compared the eye rules.  This has been discussed
  on
  this group before and the short answer seems to be that it probably
  does
  not make too much of a difference if it's reasonably sound.   You
  might
  get different opinions on this.
 
  - Don
 
 
 
 
 
  On Thu, 2008-10-09 at 13:31 +0100, Claus Reinke wrote:
  Hi again, with yet another question:-)
 
  Could someone please summarize the state-of-the art wrt the various
  ways
  of limiting random playouts, and the their consequences?
 
  There are several variations on the don't fill your own
  eyes (dfyoe) approach,
  but the way this approach is often described tends to confuse me
  (eg, when
  someone seems to claim that there is exactly one correct way of
  implementing
  this, or when someone seems to claim that this guarantees
  termination, or when
  the resulting biases are not even mentioned, let alone discussed or
  compared
  against biases in alternative implementations, etc.).
 
  I'll try to summarize my current understanding, in the hope that
  others will
  fill in the missing pieces or point out misunderstandings:
 
  - the only thing that guarantees termination of random playouts
  (with or
 without dfyoe) is the positional superko rule: no whole-board
  repetitions
 allowed. Waiting for this to kick in without taking other
  measures is not
 an option: it takes too long and the results don't tell us much
  about the
 value of the initial position.
 
  - dfyoe variations share some motivations:
 - reduce the likelihood of unnaturally long playouts (but not to
  zero)
 - try to avoid disastrous endgame-situation mistakes that random
  playouts
 are otherwise prone to make (and that eliminate the value of
  playouts)
 - try to be simple (efficiently implementable), with low (but
  non-zero)
 probability of introducing errors (filling in when not
  filling in would be
 better, or vice versa)
 - aim to steer playouts to easily counted terminal positions
  that bear a
 quantifiable relationship to the estimated value of the
  initial position
 (perhaps a more suggestive name would be random legal fill-
  ins?)
 
  - some variations of dfyoe have been reported (are there others?):
 
 [Bruegmann; Monte Carlo Go] (Gobble):
 The computer only passes if either no 

Re: [computer-go] (early) termination of random playouts?

2008-10-09 Thread Erik van der Werf
On Thu, Oct 9, 2008 at 5:03 PM, Jason House [EMAIL PROTECTED] wrote:
 On Oct 9, 2008, at 10:41 AM, Erik van der Werf [EMAIL PROTECTED]
 wrote:
 Sure, some long cycles have multi-stone captures.

 Can you provide an example?

http://www.cs.cmu.edu/~wjh/go/rules/bestiary.html
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] (early) termination of random playouts?

2008-10-09 Thread Erik van der Werf
Anything can exist in a random game :-) Sent-two-return-one may be the
biggest practical concern, but I would not be surprised if some day a
molasses ko popped up as well, especially if your playouts are not
too stupid.

Erik



On Thu, Oct 9, 2008 at 5:24 PM, Jason House [EMAIL PROTECTED] wrote:
 Which multi stone capture case still exists under random games?

 Sent from my iPhone

 On Oct 9, 2008, at 11:12 AM, Erik van der Werf [EMAIL PROTECTED]
 wrote:

 On Thu, Oct 9, 2008 at 5:03 PM, Jason House [EMAIL PROTECTED]
 wrote:

 On Oct 9, 2008, at 10:41 AM, Erik van der Werf
 [EMAIL PROTECTED]
 wrote:

 Sure, some long cycles have multi-stone captures.

 Can you provide an example?

 http://www.cs.cmu.edu/~wjh/go/rules/bestiary.html

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 7.5-komi for 9x9 in Beijing

2008-10-08 Thread Erik van der Werf
On Fri, Oct 3, 2008 at 2:33 PM, Don Dailey [EMAIL PROTECTED] wrote:
 I had heard somewhere that there are some who believe 8.0 is the right
 komi for 9x9 Chinese.   I personally believed for a long time it was 7.0
 based on statistical data of games.However that can be misleading.

Do you understand why even numbers are very unlikely?

It's rather trivial, but somehow many people seem to miss it...

On the 9x9 board we know that all intersections are either Black (B),
White (W), or Neutral (N):

B + W + N = 81

Without seki:

W = 81 - B  (no neutral intersections in the final position, so N = 0)

Score = B - (W + komi) = 2B - (81+komi)

Consequently:
with 5.5 komi Black needs 44 points to win,
with 6.0 komi Black needs 44 points to win,
with 6.5 komi Black needs 44 points to win,
with 7.0 komi Black needs 44 points to tie,
with 7.5 komi Black needs 45 points to win,
with 8.0 komi Black needs 45 points to win,
with 8.5 komi Black needs 45 points to win,
with 9.0 komi Black needs 45 points to tie

At high levels on 9x9 it appears to be extremely difficult for Black
to get 45 points.
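
To make the arithmetic above concrete, a small sketch of the seki-free
case (the function name is just for illustration):

def result_9x9(black_points, komi):
    # Seki-free area scoring on 9x9: White gets the remaining points,
    # so the signed margin is 2*B - (81 + komi).
    margin = 2 * black_points - (81 + komi)
    return 'B' if margin > 0 else 'W' if margin < 0 else 'jigo'

# Black wins with 44 points at komi 5.5, 6.0 or 6.5, but needs 45 points
# at komi 7.5, 8.0 or 8.5:
for komi in (5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0):
    print(komi, result_9x9(44, komi), result_9x9(45, komi))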

With seki:

N = even

For the common type of seki, where the stones living in seki share two
liberties, the perfect komi is again an odd number; results are
therefore consistent with the numbers above (without seki).

N = odd

Only if optimal play inevitably leads to a seki with an odd number of
neutral intersections does the perfect komi become an even number (in
which case 7.5 might make sense). However, given what is known from
professional-level 9x9 games this seems unlikely.

If there really are people who believe that the perfect komi on 9x9
should be 8.0, then I would very much like to see a game record of such
a game... I'm sure some top 9x9 programs will have fun trying to tear
it apart :-)

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Another 6x6 analysis.

2008-10-08 Thread Erik van der Werf
On Fri, Oct 3, 2008 at 1:23 AM, Gunnar Farnebäck [EMAIL PROTECTED] wrote:
 At http://trac.gnugo.org/6x6.sgf you can find an ongoing analysis of
 6x6.

Nice! The main line looks correct.

It even has an interesting 59-ply deep variation which I don't
remember seeing before.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 7.5-komi for 9x9 in Beijing

2008-10-08 Thread Erik van der Werf
On Wed, Oct 8, 2008 at 9:46 PM, Don Dailey [EMAIL PROTECTED] wrote:
 On Wed, 2008-10-08 at 11:47 -0700, Christoph Birk wrote:
 On Wed, 8 Oct 2008, Don Dailey wrote:
  much more common.There were just a few games that used 6.5 komi
  because when I first started CGOS I had set 6.5 by mistake but I think
  that was just for a few hours at most.   The vast majority of these are
  7.5 komi games:

 After all this discussion about komi for 9x9 games, wouldn't you
 think that using 7.5 was a mistake and go back to 6.5 ?

 Why?

1) In my opinion the first player should have a chance to win even
against the strongest possible opponents.
2) Currently at high levels 7.5 komi appears to give a huge advantage to White.
3) The playable openings for 6.5 komi appear to be more diverse (thus
making the game more interesting).
4) Professionals use 6.5


 5.5 gives black a huge advantage.

Actually it should be almost the same as for 6.5 (see my previous email).


 The only reason I would favor one over the other
 is if it turned out that in practical play the games ended up closer.
 For instance, if black won 53% at 6.5 komi and white won 51% at 7.5
 komi, I would favor 7.5 because it kept the scores closer.

Last week I was told that, with 7.5 komi against itself, Mogo wins
over 60% as White. Also against my own program I have much better
chances when playing White.

As programs become stronger the advantage for one side under fractional
komi will inevitably become totally unbalanced. At some point we will
approach 100%, and then I'd rather have that go to the first player. The
only fair alternative is to use integer komi.


 I believe 6.5 would give black a bigger advantage than 7.5 gives white in 
 practical play.

This may be true for your CGOS games.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] On ranks 2 and 3 of 9x9 in Beijing

2008-10-02 Thread Erik van der Werf
On Fri, Oct 3, 2008 at 12:13 AM, Gian-Carlo Pascutto [EMAIL PROTECTED] wrote:
 I'd have some preference for playing the decisive game with komi = 6.5,
 but apparently that's not possible on KGS. I think with komi = 7.5 white
 is scoring very high (too high?) in the top games.

Last year (when the komi was still 6.5) some participants played
through kgs with Japanese rules. Of course internally their programs
still used Chinese rules, so some final scores had to be corrected
manually afterwards.

Anyway, I think it would be much nicer if, at least for unrated games,
one could simply set the komi to any value.

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

