Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Brian Sheppard via Computer-go
I think it uses the champion network. That is, the training periodically 
generates a candidate, and there is a playoff against the current champion. If 
the candidate wins more than 55% of the games, a new champion is declared.

 

Keeping a champion is an important mechanism, I believe. It creates a 
competitive coevolution dynamic, where the network is evolving to learn how to 
beat the best network so far, and not just the most recent one. Without that 
dynamic, the training process can go up and down.
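A rough sketch of that gating step in Python (the 55% threshold is the figure mentioned above; the 400-game playoff length and all function names are my own illustration, not anything published):

```python
def gate_candidate(candidate, champion, play_game, num_games=400, threshold=0.55):
    """Promote the candidate only if it beats the current champion decisively.

    `play_game(p1, p2)` is an assumed helper returning 1 if p1 wins, else 0.
    Colors alternate so neither network always plays the same side.
    """
    wins = 0
    for i in range(num_games):
        if i % 2 == 0:
            wins += play_game(candidate, champion)    # candidate moves first
        else:
            wins += 1 - play_game(champion, candidate)
    return wins / num_games >= threshold

# A candidate that wins every game clearly clears the 55% bar:
promoted = gate_candidate("cand", "champ", lambda a, b: int(a == "cand"))
```

The point of the threshold is exactly the stability Brian describes: a candidate that is merely even with the champion is not promoted, so the target of training does not drift randomly.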

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
uurtamo .
Sent: Wednesday, October 25, 2017 6:07 PM
To: computer-go 
Subject: Re: [Computer-go] Source code (Was: Reducing network size? (Was: 
AlphaGo Zero))

 

Does the self-play step use the most recent network for each move?

 

On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto"  wrote:

On 25-10-17 17:57, Xavier Combelle wrote:
> Is there some way to distribute learning of a neural network ?

Learning as in training the DCNN, not really unless there are high
bandwidth links between the machines (AFAIK - unless the state of the
art changed?).

Learning as in generating self-play games: yes. Especially if you update
the network only every 25 000 games.

My understanding is that this task is much more bottlenecked on game
generation than on DCNN training, until you have quite a few machines
generating games.
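For what it's worth, the split being described (cheap, embarrassingly parallel game generation; occasional heavy training) can be sketched like this. The queue layout and helper names are made up for illustration; only the 25,000-game update interval comes from the text above:

```python
from queue import Queue

def training_loop(game_queue, train_step, total_games, games_per_update=25_000):
    """Consume self-play games from (possibly many) workers and retrain the
    network only once per `games_per_update` games, as described above.

    `train_step(version, games)` is an assumed stand-in for the heavy DCNN
    training job; it returns the new network version number.
    """
    version, buffered = 0, []
    for _ in range(total_games):
        buffered.append(game_queue.get())   # distributed workers feed this queue
        if len(buffered) == games_per_update:
            version = train_step(version, buffered)
            buffered.clear()
    return version
```

Because workers only need the latest weights every 25,000 games, the links between machines can be slow; only the training box needs fast access to the data.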

--
GCP
___
Computer-go mailing list
Computer-go@computer-go.org  
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread uurtamo .
I ask because there are (nearly) bus-speed networks that could make
multiple evaluations quick, especially if the various versions didn't differ
by more than a fixed fraction of their nodes.

s.

On Oct 25, 2017 3:03 PM, uurt...@gmail.com wrote:

Does the self-play step use the most recent network for each move?

On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto"  wrote:

On 25-10-17 17:57, Xavier Combelle wrote:
> Is there some way to distribute learning of a neural network ?

Learning as in training the DCNN, not really unless there are high
bandwidth links between the machines (AFAIK - unless the state of the
art changed?).

Learning as in generating self-play games: yes. Especially if you update
the network only every 25 000 games.

My understanding is that this task is much more bottlenecked on game
generation than on DCNN training, until you have quite a few machines
generating games.

--
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread uurtamo .
Does the self-play step use the most recent network for each move?

On Oct 25, 2017 2:23 PM, "Gian-Carlo Pascutto"  wrote:

> On 25-10-17 17:57, Xavier Combelle wrote:
> > Is there some way to distribute learning of a neural network ?
>
> Learning as in training the DCNN, not really unless there are high
> bandwidth links between the machines (AFAIK - unless the state of the
> art changed?).
>
> Learning as in generating self-play games: yes. Especially if you update
> the network only every 25 000 games.
>
> My understanding is that this task is much more bottlenecked on game
> generation than on DCNN training, until you have quite a few machines
> generating games.
>
> --
> GCP
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Xavier Combelle
Nice to know. I wrongly believed that training such a big neural network
would require considerable hardware.

Le 25/10/2017 à 19:54, Álvaro Begué a écrit :
> There are ways to do it, but it might be messy. However, the vast
> majority of the computational effort will be in playing games to
> generate a training database, and that part is trivial to distribute.
> Testing if the new version is better than the old version is also very
> easy to distribute.
>
> Álvaro.
>
>
> On Wed, Oct 25, 2017 at 11:57 AM, Xavier Combelle wrote:
>
> Is there some way to distribute learning of a neural network ?
>
>
> Le 25/10/2017 à 05:43, Andy a écrit :
>> Gian-Carlo, I didn't realize at first that you were planning to
>> create a crowd-sourced project. I hope this project can get off
>> the ground and running!
>>
>> I'll look into installing this but I always find it hard to get
>> all the tool chain stuff going.
>>
>>
>>
>> 2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto :
>>
>> On 23-10-17 10:39, Darren Cook wrote:
>> >> The source of AlphaGo Zero is really of zero interest (pun
>> intended).
>> >
>> > The source code is the first-hand account of how it works,
>> whereas an
>> > academic paper is a second-hand account. So, definitely not
>> zero use.
>>
>> This should be fairly accurate:
>>
>> https://github.com/gcp/leela-zero
>> 
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
>> 
>>
>>
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org 
>> http://computer-go.org/mailman/listinfo/computer-go
>> 
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AlphaGo Zero SGF - Free Use or Copyright?

2017-10-25 Thread Darren Cook
> What do you want to evaluate the software for? Corner cases which never
> happen in a real game?

If the purpose of this mailing list is a community to work out how to
make a 19x19 go program that can beat any human, then AlphaGo has
finished the job, and we can shut it down.

But this list has always been for anything related to computers and the
game of go. Right from John Tromp counting the number of games through
to tips and hints on the best compiler flags to use.

BTW, I noticed the paper showed 3 games AlphaGo Zero lost to
AlphaGo Master: in game 11 Zero had white, in games 14 and 16 Zero had
black. An opponent that can only win 11% of its games against Zero was still
able to win on both sides of the komi, suggesting there is still quite a bit
of room for improvement.

Darren
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Shawn Ligocki
My guess is that they want to distribute playing millions of self-play
games. Then the learning would be comparatively much faster. Is that right?

On Wed, Oct 25, 2017 at 11:57 AM, Xavier Combelle  wrote:

> Is there some way to distribute learning of a neural network ?
>
> Le 25/10/2017 à 05:43, Andy a écrit :
>
> Gian-Carlo, I didn't realize at first that you were planning to create a
> crowd-sourced project. I hope this project can get off the ground and
> running!
>
> I'll look into installing this but I always find it hard to get all the
> tool chain stuff going.
>
>
>
> 2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto :
>
>> On 23-10-17 10:39, Darren Cook wrote:
>> >> The source of AlphaGo Zero is really of zero interest (pun intended).
>> >
>> > The source code is the first-hand account of how it works, whereas an
>> > academic paper is a second-hand account. So, definitely not zero use.
>>
>> This should be fairly accurate:
>>
>> https://github.com/gcp/leela-zero
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Álvaro Begué
There are ways to do it, but it might be messy. However, the vast majority
of the computational effort will be in playing games to generate a training
database, and that part is trivial to distribute. Testing if the new
version is better than the old version is also very easy to distribute.

Álvaro.


On Wed, Oct 25, 2017 at 11:57 AM, Xavier Combelle  wrote:

> Is there some way to distribute learning of a neural network ?
>
> Le 25/10/2017 à 05:43, Andy a écrit :
>
> Gian-Carlo, I didn't realize at first that you were planning to create a
> crowd-sourced project. I hope this project can get off the ground and
> running!
>
> I'll look into installing this but I always find it hard to get all the
> tool chain stuff going.
>
>
>
> 2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto :
>
>> On 23-10-17 10:39, Darren Cook wrote:
>> >> The source of AlphaGo Zero is really of zero interest (pun intended).
>> >
>> > The source code is the first-hand account of how it works, whereas an
>> > academic paper is a second-hand account. So, definitely not zero use.
>>
>> This should be fairly accurate:
>>
>> https://github.com/gcp/leela-zero
>>
>> --
>> GCP
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Zero is weaker than Master!?

2017-10-25 Thread Xavier Combelle
As I understand the paper, they directly created AlphaGo Zero with a 40-block
setup.

They only made a reduced 20-block setup to compare on kifu prediction
(as far as I searched in the paper, that is the only
place where they mention the 20-block setup).

They specifically mention comparing several versions of their software
with various parameters.

If the number of blocks were an important parameter, I would expect them
to mention it.

Of course there are a lot of things that they tried and that failed which we
will never know about.

But I have a hard time believing that AlphaGo Zero with 20 blocks is one
of them.

As for the paper, there is no mention of the number of blocks of Master:

"AlphaGo Master is the program that defeated top human players by 60–0
in January, 2017. It was previously unpublished but uses the same neural
network architecture, reinforcement learning algorithm, and MCTS algorithm
as described in this paper. However, it uses the same handcrafted features
and rollouts as AlphaGo Lee and training was initialised by supervised
learning from human data."

From what I understand, the same network architecture implies the same
number of blocks.

Le 25/10/2017 à 17:58, Xavier Combelle a écrit :
> I understand better
>
>
> Le 25/10/2017 à 04:28, Hideki Kato a écrit :
>> Are you thinking the 1st instance could reach Master level
>> if given more training days?
>>
>> I don't think so.  The performance would stop
>> improving at 3 days.  If not, why would they build the 2nd
>> instance?
>>
>> Best,
>> Hideki
>>
>> Xavier Combelle: <05c04de1-59c4-8fcd-2dd1-094faabf3...@gmail.com>:
>>> How is it a fair comparison if there are only 3 days of training for Zero?
>>> Master had longer training, no? Moreover, Zero has a bootstrap problem:
>>> unlike Master, it doesn't learn from expert games,
>>> which means it is likely to be weaker with little training.
>>> Le 24/10/2017 à 20:20, Hideki Kato a écrit :
>>>> David Silver said in May that Master used a 40-layer network.
>>>> According to the new paper, Master used the same architecture
>>>> as Zero.  So, Master used a 20-block ResNet.
>>>> The first instance of Zero, the 20-block ResNet version, is
>>>> weaker than Master (after 3 days of training).  So, with the
>>>> same layers (a fair comparison) Zero is weaker than
>>>> Master.
>>>> Hideki
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AlphaGo Zero

2017-10-25 Thread Gian-Carlo Pascutto
On 25-10-17 16:00, Petr Baudis wrote:

>> The original paper has the value they used. But this likely needs tuning. I
>> would tune with a supervised network to get started, but you need games for
>> that. Does it even matter much early on? The network is random :)
> 
>   The network actually adapts quite rapidly initially, in my experience.
> (Doesn't mean it improves - it adapts within local optima of the few
> games it played so far.)

Yes, but once there's structure, you can tune the parameter with CLOP or
whatever.
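For readers following along: c_puct is the exploration constant in the PUCT selection rule from the AlphaGo papers. A minimal sketch (the data layout and the c_puct default here are mine, not a published value; this is exactly the constant one would tune with CLOP or similar):

```python
import math

def puct_select(children, c_puct=1.5):
    """Return the move maximizing Q(s,a) + U(s,a), where
    U(s,a) = c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a)).

    `children` maps move -> {"P": prior, "N": visit count, "W": total value}.
    """
    sqrt_total = math.sqrt(max(1, sum(c["N"] for c in children.values())))

    def score(c):
        q = c["W"] / c["N"] if c["N"] else 0.0           # mean value so far
        u = c_puct * c["P"] * sqrt_total / (1 + c["N"])  # exploration bonus
        return q + u

    return max(children, key=lambda m: score(children[m]))
```

A larger c_puct weights the prior-driven exploration term more heavily relative to the observed win rate, which is why the right ballpark depends on the network's quality.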

>   Yes, but why wouldn't you want that randomness in the second or third
> move?

You only need to play a different move at the root in order for the game
to deviate.
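For context, the root-noise step under discussion is, per the Zero paper, P'(a) = (1 - eps) * p_a + eps * eta_a with eta ~ Dir(0.03) and eps = 0.25, applied at the root only. A stdlib-only sketch (a Dirichlet draw is built by normalizing independent Gamma samples):

```python
import random

def add_root_noise(priors, epsilon=0.25, alpha=0.03, rng=random):
    """Mix Dirichlet noise into the root move priors only.

    epsilon and alpha are the values reported in the AlphaGo Zero paper;
    everything else (list-based priors, rng argument) is illustrative.
    """
    gammas = [rng.gammavariate(alpha, 1.0) for _ in priors]
    total = sum(gammas) or 1.0            # guard against degenerate draws
    noise = [g / total for g in gammas]
    return [(1 - epsilon) * p + epsilon * n for p, n in zip(priors, noise)]
```

As GCP notes, perturbing only the root is enough to make the whole game deviate, since any different root move already produces a distinct position for all subsequent moves.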

-- 
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AlphaGo Zero SGF - Free Use or Copyright?

2017-10-25 Thread Xavier Combelle
Le 24/10/2017 à 22:41, Robert Jasiek a écrit :

> On 24.10.2017 20:19, Xavier Combelle wrote:
>> totally unrelated
>
> No, because a) software must also be evaluated and can by go theory and
What do you want to evaluate the software for? Corner cases which never
happen in a real game?

The current testing approach that DeepMind used,
that is: software-vs-software and software-vs-human tournaments,
plus guessing pro moves and guessing pro game results,
was amply sufficient to make the best go software.

> b) software can be built on exact go theory. That currently (b) is
> unpopular does not mean unrelated.
>
That is just a wild guess. Exact go theory is full of holes.
Actually, to my knowledge a human can't play a decent game by applying
only exact go theory.
If a human can't do that, how will he teach a computer to do it magically?

If you want, we can set up a game where you apply only exact go theory
against me (I'm only 2 kyu).
The rules are the following: you have to apply the go theory mechanically,
as a computer would,
at each move, such that I could do exactly the same,
and show in a detailed way how you applied it. If you win, I will
recognize the fact that exact go theory is not full of holes.

The reason why (b) became unpopular is that there is no go theory
precise enough to implement as an algorithm,
while MCTS and neural networks were a way to use little or no go
theory and still make a decent player.

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Zero is weaker than Master!?

2017-10-25 Thread Xavier Combelle
I understand better


Le 25/10/2017 à 04:28, Hideki Kato a écrit :
> Are you thinking the 1st instance could reach Master level
> if given more training days?
>
> I don't think so.  The performance would stop
> improving at 3 days.  If not, why would they build the 2nd
> instance?
>
> Best,
> Hideki
>
> Xavier Combelle: <05c04de1-59c4-8fcd-2dd1-094faabf3...@gmail.com>:
>> How is it a fair comparison if there are only 3 days of training for Zero?
>> Master had longer training, no? Moreover, Zero has a bootstrap problem:
>> unlike Master, it doesn't learn from expert games,
>> which means it is likely to be weaker with little training.
>> Le 24/10/2017 à 20:20, Hideki Kato a écrit :
>>> David Silver said in May that Master used a 40-layer network.
>>> According to the new paper, Master used the same architecture
>>> as Zero.  So, Master used a 20-block ResNet.
>>> The first instance of Zero, the 20-block ResNet version, is
>>> weaker than Master (after 3 days of training).  So, with the
>>> same layers (a fair comparison) Zero is weaker than
>>> Master.
>>> Hideki
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Xavier Combelle
Is there some way to distribute learning of a neural network ?


Le 25/10/2017 à 05:43, Andy a écrit :
> Gian-Carlo, I didn't realize at first that you were planning to create
> a crowd-sourced project. I hope this project can get off the ground
> and running!
>
> I'll look into installing this but I always find it hard to get all
> the tool chain stuff going.
>
>
>
> 2017-10-24 15:02 GMT-05:00 Gian-Carlo Pascutto :
>
> On 23-10-17 10:39, Darren Cook wrote:
> >> The source of AlphaGo Zero is really of zero interest (pun
> intended).
> >
> > The source code is the first-hand account of how it works,
> whereas an
> > academic paper is a second-hand account. So, definitely not zero
> use.
>
> This should be fairly accurate:
>
> https://github.com/gcp/leela-zero 
>
> --
> GCP
> ___
> Computer-go mailing list
> Computer-go@computer-go.org 
> http://computer-go.org/mailman/listinfo/computer-go
> 
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] AlphaGo Zero

2017-10-25 Thread Petr Baudis
On Fri, Oct 20, 2017 at 08:02:02PM +, Gian-Carlo Pascutto wrote:
> On Fri, Oct 20, 2017, 21:48 Petr Baudis  wrote:
> 
> >   Few open questions I currently have, comments welcome:
> >
> >   - there is no input representing the number of captures; is this
> > information somehow implicit or can the learned winrate predictor
> > never truly approximate the true values because of this?
> >
> 
> They are using Chinese rules, so prisoners don't matter. There are simply
> fewer stones of one color on the board.

  Right!  No idea what was I thinking.

> >   - what ballpark values for c_{puct} are reasonable?
> >
> 
> The original paper has the value they used. But this likely needs tuning. I
> would tune with a supervised network to get started, but you need games for
> that. Does it even matter much early on? The network is random :)

  The network actually adapts quite rapidly initially, in my experience.
(Doesn't mean it improves - it adapts within local optima of the few
games it played so far.)

> >   - why is the dirichlet noise applied only at the root node, if it's
> > useful?
> >
> 
> It's only used to get some randomness in the move selection, no ? It's not
> actually useful for anything besides that.

  Yes, but why wouldn't you want that randomness in the second or third
move?

> >   - the training process is quite lazy - it's not like the network sees
> > each game immediately and adjusts, it looks at last 500k games and
> > samples 1000*2048 positions, meaning about 4 positions per game (if
> > I understood this right) - I wonder what would happen if we trained
> > it more aggressively, and what AlphaGo does during the initial 500k
> > games; currently, I'm training on all positions immediately, I guess
> > I should at least shuffle them ;)
> >
> 
> I think the laziness may be related to the concern that reinforcement
> methods can easily "forget" things they had learned before. The value
> network training also likes positions from distinct games.

  That makes sense.  I still hope that with a much more aggressive
training schedule we could train a reasonable Go player, perhaps at the
expense of worse scaling at very high elos...  (At least I feel
optimistic after discovering a stupid bug in my code.)
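The lazy scheme being discussed can be sketched as a sliding window over recent self-play games, with each minibatch position drawn from an independently chosen game. The 500k-game window and 2048 batch size are the figures quoted above; the class itself is just an illustration:

```python
import random
from collections import deque

class ReplayWindow:
    """Sliding window of recent self-play games for lazy training."""

    def __init__(self, max_games=500_000):
        self.games = deque(maxlen=max_games)  # oldest games fall out

    def add_game(self, positions):
        self.games.append(positions)

    def sample_batch(self, batch_size=2048, rng=random):
        """Draw each position from an independently chosen game, which
        spreads only a handful of sampled positions over any one game."""
        batch = []
        for _ in range(batch_size):
            game = rng.choice(self.games)    # pick a game...
            batch.append(rng.choice(game))   # ...then one position in it
        return batch
```

Sampling across distinct games both decorrelates the minibatch (important for the value head) and keeps older games in play, which is one way to mitigate the "forgetting" concern mentioned above.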

-- 
Petr Baudis, Rossum
Run before you walk! Fly before you crawl! Keep moving forward!
If we fail, I'd rather fail really hugely.  -- Moist von Lipwig
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] Collision between e-manners and shoelaces...

2017-10-25 Thread patrick.bardou via Computer-go
Hi Pierce from Caltech,
Would an Aspergers typically try to lift himself up by pulling on his shoelaces?
I think you just mistook me for my fellow countryman Xavier Combelle and were not
replying to my post:
http://computer-go.org/pipermail/computer-go/2017-October/010338.html
I posted it just before the storm started. Gone with the wind ;-)
BTW, I would love to read a reply. Even from the sharp-minded Robert*! ;-)
Regards,
Patrick
* whose posts I appreciate too, even if they are sometimes beyond me.

--
Message: 2
Date: Tue, 24 Oct 2017 17:07:19 -0700
From: pie...@alumni.caltech.edu
To: Patrick Bardou via Computer-go 
Subject: [Computer-go] Digression about e-manners and the spectrum
Message-ID: <7d02c36b-a3e7-45af-8bdc-d41b932f453d@Spark>
Content-Type: text/plain; charset="utf-8"

Here’s a quick test:

1. Are you a software engineer?

2. If you answered “Yes” to question 1, congratulations, you have Aspergers. If 
you answered no to question 1, but you’re participating in an electronic 
discussion, you might as well have Aspergers, because e-media cannot convey 
tone or emotion. If you answered no to question 1 but you’re a college 
professor, be aware that Asperger called people on the spectrum “little 
professors” so you have double Aspergers. :-)

For those of you who are immediately offended, I should warn you that Aspergers 
is classified as a “Spectrum”, which means that everyone has Aspergers. It's a 
range with Software Engineers on one end and Actors on the other.

Robert hasn’t actually said anything that terrible that I can see reviewing his 
emails.

However, I suspect his tone is offending you rather than his actual words. 
That’s typical of conversing with someone with Aspergers over an electronic 
medium which strips all of the additional communication side bands. On a 
mailing list, we all have Aspergers.

Robert believes what he believes about the importance of edge cases. Telling 
him that you don’t care isn’t going to convince him. Getting pedantic about the 
purpose of this list just makes me suspect you of having Aspergers yourself.

If you don’t want to discuss double-ko, or the theoretical limits of Neural 
Network based AI, and whether the system actually “knows” anything, you don’t 
have to. But I kind of agree with Robert that I hope whoever designed whatever 
self-driving car I might end up with in the future has considered those edge 
cases, that their AI system isn’t just an elaborate “Chinese Room” of rules it 
doesn’t understand.

Pierce

P.S.

I freely admit to having Aspergers, and some days I admit to being a Chinese 
Room.



___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread Gian-Carlo Pascutto
On 25-10-17 05:43, Andy wrote:
> Gian-Carlo, I didn't realize at first that you were planning to create a
> crowd-sourced project. I hope this project can get off the ground and
> running!
> 
> I'll look into installing this but I always find it hard to get all the
> tool chain stuff going.

I will provide pre-made packages for common operating systems. Right now
we (Jonathan Roy is helping with the server) are exploring what's
possible for such a crowd-sourced effort, and testing the server. I'll
provide an update here when there's something to play with.

-- 
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo Zero))

2017-10-25 Thread fotland
Sadly, this is GPL v3, so it's not safe for me to look at it.

David

PS even though Robert's posts are slightly off topic for the AlphaGo 
discussion, I respect that he has thought far more deeply than I have about go, 
and I support his inclusion in the list.

-Original Message-
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Gian-Carlo Pascutto
Sent: Tuesday, October 24, 2017 1:02 PM
To: computer-go@computer-go.org
Subject: [Computer-go] Source code (Was: Reducing network size? (Was: AlphaGo 
Zero))

On 23-10-17 10:39, Darren Cook wrote:
>> The source of AlphaGo Zero is really of zero interest (pun intended).
> 
> The source code is the first-hand account of how it works, whereas an 
> academic paper is a second-hand account. So, definitely not zero use.

This should be fairly accurate:

https://github.com/gcp/leela-zero

--
GCP
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go