[computer-go] Call For Participation: 12th Computer Olympiad

2007-04-06 Thread Winands M (MICC)
The 15th World Computer-Chess Championship and the 12th Computer Olympiad will 
be held in Amsterdam, the Netherlands in conjunction with the Computer Games 
Workshop 2007 (CGW2007).

IBM, SARA (Academic Computer Centre Amsterdam) and NCF (Foundation of National 
Computing Facilities) are enabling the organization of the Computer Games 
Workshop 2007 (CGW2007) (15-17 June 2007), the 15th World Computer-Chess 
Championship (WCCC) (11-18 June) and the 12th Computer Olympiad (CO) (11-18 
June) to be held in Amsterdam, The Netherlands. Location: The Turing hall, 
Science Park Amsterdam, Kruislaan 419, 1098 VA Amsterdam.

Below we mention 27 different games for which a program can be submitted to the 
Olympiad.

Abalone
Amazons
Arimaa
Backgammon
Bao
Bridge
Chinese Chess
8x8 Checkers
Clobber
Computational Pool
Diplomacy
Dominoes
Dots and Boxes
10x10 Draughts
Gipf
Go
9x9 Go
Hex
Kriegspiel
Lines of Action
6x7 OCTI
9x9 OCTI
Othello
Poker
Renju
Scrabble
Shogi

The game of Renju, claimed to be solved, is still welcome, since we have not 
seen a fully operational program on the internet that plays perfectly. Moreover, we 
are willing to host more games, such as Ataxx, Dvonn, Mediocrity, Onyx, Tamsk, 
TwixT and Zèrtz, but we do not know of adequately playing programs for them. We 
await suggestions and proposals from programmers before including such games in 
the official list given above.

For each game a tournament will take place provided that at least two programs 
enter the tournament for that particular game. Gold, Silver and Bronze medals 
will be awarded to the leading programs in each tournament.

The Tournament Director of the WCCC and Computer Olympiad will be: H.J. van den 
Herik.

The rules of the WCCC and Computer Olympiad will be published soon.

The entry fees for the WCCC are as follows:

Amateur: € 25
Semi-professional: € 250
Professional: € 500

The entry fees for the other tournaments are as follows:

Amateur: € 25
Semi-professional: € 100
Professional: € 250

A participant is expected to be an ICGA member (€ 40).

Deadline for early registration: May 21, 2007.
Registration after May 21, 2007 doubles the entry fee.

Amateur: programmers who have no commercial interest in their program and 
are not professional game programmers. Applications for amateur classification 
must include information justifying the claim.

Semi-professional: Any program submitted by an employee or associate from a 
games-programming company. The program's name must not be derived from or 
similar to a commercial product.

Professional: A program whose name is the same as or derived from a 
commercial product.

www.icga.org 

Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Rémi Coulom

Chrilly wrote:
I think on 9x9 the superiority of search-based programs is now 
clearly demonstrated. The only question is whether UCT or Alpha-Beta is 
superior.

Hi Chrilly,

Thanks for your report.

The question of UCT versus Alpha-Beta is not open any more, in my 
opinion. The current state of the art of Monte Carlo tree search is 
about 500 Elo points stronger than the version of Crazy Stone you tested 
against. Do you believe you can easily catch up with those 500 Elo 
points? Also, I am convinced that UCT has tremendous potential for 
further improvement. I have improved Crazy Stone by about 50 Elo points 
per day over the past 10 days (on 9x9; the improvement on 13x13 and 19x19 
is much larger). I am very confident that I can improve it much further.


Rémi


Re: [computer-go] MoGo

2007-04-06 Thread Jacques Basaldúa

Hello Sylvain

Sylvain Gelly wrote:

 If the program blundered as you said and still wins, it means
 that the program already won much earlier in the game.

You are totally right. I said that in my post already. But what the user
thinks is: 1. He was behind, but not desperately behind. 2. The engine
made a mistake and he should have been ahead, because he counts
Japanese-style.

 It is not a matter of Chinese or Japanese rules.

It is. With Chinese rules, playing in your own territory
is worth 0; with Japanese rules it is worth -1. The difference
is not important while there is territory in dispute,
because the move is bad in either case, but at the
end it matters. That's the good news for us: it makes
no difference as long as the program plays
a friendly endgame.
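
As a toy illustration (a hypothetical sketch with made-up numbers, not anyone's
engine code): under area (Chinese) counting, a stone played in your own
territory trades one territory point for one stone point, while under territory
(Japanese) counting it simply removes a territory point.

    def area_score(stones, territory):
        return stones + territory            # Chinese-style: stones + territory

    def territory_score(territory, captures):
        return territory + captures          # Japanese-style: territory + captures

    # Filling one point of your own territory turns a territory point into a stone:
    print(area_score(31, 9) - area_score(30, 10))            # 0  (Chinese)
    print(territory_score(9, 3) - territory_score(10, 3))    # -1 (Japanese)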

this behavior has been explained many times by developers

I know, and so has general relativity. They still don't get it. Our
point is: they are all wrong (which can be proved). Their point
is that this UCT behavior is unpleasant.

Jacques.


Re: [computer-go] MoGo

2007-04-06 Thread Jacques Basaldúa

Hello Don

Don Dailey wrote:

 Many people DO play chess after the game is over. They
 continue to play a game long after they could have
 resigned.

My example wasn't very good, but I meant "over" literally,
i.e. the king is captured (changing the rules a little).

How does Japanese make any difference?

I just answered this: it is the impression the user gets,
because an unnecessary self-protecting move changes
the score from -0.5 to +0.5 in his mind. I mostly agree
with both of you. It is not a priority to change this
behavior. But what surprises me is that you claim
that it need not be done at all.

For a commercial program to be friendly, it does need to be done.

Jacques.


Re: [computer-go] MoGo

2007-04-06 Thread Heikki Levanto
On Thu, Apr 05, 2007 at 10:49:05AM +0100, Jacques Basaldúa wrote:
 
 Many users feel robbed by UCT programs. I have read that 
 in the KGS chatrooms. Normal users do not count with 
 +/- 0.5 point precision. They have the impression the 
 program blundered and they caught up. But when the 
 program counts, surprise!, it wins by 0.5 points Chinese. 
 The users were thinking Japanese even if they accepted
 Chinese rules. In fact, they did not have the choice.
 They get the impression the game was stolen by 
 technicalities after they saw the engine blunder many 
 times.

As I have posted here before, this is easy to avoid by counting the
result of the game not as +/- 1, but as +/- (1+epsilon), where epsilon
is so small that it can not change the order of the averaged-out
results. If you first normalize the margin of victory to 0..1 by
dividing by the board size, and then divide by the number of playouts you
do, you get a suitably small epsilon that only affects the ordering of
moves that score the same number of wins. Among those, the program will
prefer the moves that win by a larger margin. This should not have any
effect on the strength, because even a one-playout difference is greater
than all the epsilons summed up, but in the endgame, when all moves lead
to a certain victory, the program prefers a win by a larger margin.

My Halgo does that (in an otherwise regular MC program), and the feel of the
endgame is clearly more reasonable than without it.
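
A minimal sketch of that scheme (hypothetical names; Halgo's actual code is not
shown here):

    # Score a playout as +/-(1 + epsilon), where epsilon carries the margin of
    # victory but all epsilons summed over every playout stay at or below the
    # weight of a single extra win.
    def playout_value(margin, board_points, num_playouts):
        """margin > 0: win by that many points; margin < 0: loss by that many."""
        epsilon = abs(margin) / float(board_points * num_playouts)
        return (1.0 + epsilon) if margin > 0 else -(1.0 + epsilon)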

-H


-- 
Heikki Levanto   In Murphy We Trust heikki (at) lsd (dot) dk



Re: [computer-go] MoGo

2007-04-06 Thread Don Dailey
On Fri, 2007-04-06 at 12:46 +0100, Jacques Basaldúa wrote:
 But what surprises me is that you claim
 that it need not be done at all.

Why does it have to be done?  Doesn't any player have a right
to play any way he wishes?

I agree that some people don't like it playing that way - 
but that's their problem.  I don't like the way some people
play the game but I guess that's my problem.

For a commercial program, you may prefer to cater to them if it
will get you more sales.   Even in computer chess some
people like some programs - the way they play - better 
than others, regardless of strength.

- Don







Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Tom Cooper
My guess is that the complexity of achieving a fixed standard of play 
(e.g. 1 dan) using a global alpha-beta or MC search is an exponential 
function of the board size.  For this guess, I exclude algorithms 
that have a tactical or local component.  If this guess is correct, 
then, even if Moore's law remains in force, this kind of program 
should not reach dan level on a 19x19 board within 20 years.
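
To make the arithmetic behind this guess explicit (the growth figures below are
my own assumptions for illustration, not numbers from the message): twenty years
of Moore's law buys roughly a factor of 10^4 in computing power, which an
exponential dependence on the number of board points swallows easily.

    # Back-of-the-envelope check with assumed growth figures.
    years = 20
    moore_factor = 2 ** (years / 1.5)      # compute doubling every 18 months: ~1e4

    base = 1.05                            # assumed cost growth per board point
    extra_points = 19 * 19 - 9 * 9         # 280 extra points going from 9x9 to 19x19
    cost_factor = base ** extra_points     # ~8.6e5, already larger than moore_factor

    print(moore_factor)
    print(cost_factor)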


To some extent, this is testable today by finding how a global search 
program's strength scales with board size and with thinking 
time.  For example, results in which Suzie had a week to play a 13x13 
game would be interesting.


I don't mean to imply by this message that I think I am particularly 
well qualified to have an opinion on this matter, but when someone 
writes something that surprises me, I'm inclined to argue :)





On 13x13 and especially 19x19 Suzie is still weaker than GNU Go. I 
think the hardware is still too weak to establish the same dominance 
of search for larger board sizes. But that's only a matter of time, or 
of a few million $ to build (with Chris Fant) a Go chip. Actually, 
about 100,000 Euro for an FPGA-based project would be sufficient.


Chrilly Donninger





Re: [computer-go] professional knowledge

2007-04-06 Thread Jacques Basaldúa

Darren Cook wrote:

 All except joseki-knowledge is board-size independent.

Maybe human players adapt to different board sizes without
even noticing. But if you try to model strategy with algorithms,
it is totally board-size dependent.

The extreme case is 5x5, where a black stone at 3-3 claims all four
corners. The relative size of the corners, the sides and the
center is crucial; so are move timing, distances between stones,
everything.

I like to use the two-day argument because I believe a top-level
computer go program should have two phases (but the first,
of course, will be much longer than 50 moves).

   2nd. When all local fights can be searched. It is a matter of
understanding their conditions and interactions. The program is
an endgame solver. It plays kami no itte for the last m(?) moves.
Complexity increases with board size, but in a known way.

  1st. All the rest of the game. In this part there is no certainty.
The program uses stochastic evaluation (UCT) together with
knowledge to navigate in the fog, expecting to be ahead when
the endgame solver reaches the harbor.

*All* the knowledge used in the 1st step is board size dependent.
UCT is not, but UCT's efficiency (and variance) is.

Jacques.


Re: [computer-go] professional knowledge

2007-04-06 Thread Don Dailey
On Fri, 2007-04-06 at 13:48 +0100, Jacques Basaldúa wrote:
 Darren Cook wrote:
 
   All except joseki-knowledge is board-size independent.
 
 Maybe human players adapt to different board sizes without
 even noticing. But if you try to model strategy with algorithms,
 it is totally board-size dependent.

Doesn't that just imply the model is incorrect?   If the model
captures the true spirit of the game, it shouldn't matter.

In practice things often work differently.  We choose approximations
that are not strictly correct, and they may work well for us on a certain
board size.

As far as joseki are concerned, that is not science, so I suppose
you would be correct if that's what you are referring to.

There may be some formal (but likely complex) way to describe a
correct opening-play strategy on any size board that is not 
board-size dependent.  I'll try to give an example:

  On small boards, it seems correct to play the center
  point on the first move.  Why?  The rule "always play
  the center point" might NOT be a board-size-independent
  strategy, but it might work well for boards smaller than, say, 11x11.
  But if you could capture the REASON for this, you might be
  able to formulate a better strategy for playing the first 
  move that would work on all boards.

There are underlying reasons for everything, and that is what must
be captured to achieve a board-size-independent strategy.

Of course I'm getting rather theoretical.  I understand that you
are looking for practical solutions and approximations.

- Don




Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Don Dailey
I would not be so quick to dismiss what Chrilly is saying.  I have
noticed that over time, in science, things blend together.   For 
instance, mtd(f) is a systematic way to think of aspiration search
(tampering with the alpha/beta window in a search), and it helps us to
appreciate how they are all pretty much variations of the same
basic concept.

I noticed a trend in computer chess towards throwing out more and
more moves.  Years ago it was only alpha/beta pruning, but then came
null-move pruning, then other kinds of pruning, and now the tree 
is being cut in many places.   Chess search trees now look much
more like the initial (highly selective) approach that was rejected 
just a few decades ago.

UCT and Monte Carlo.   It's not so much Monte Carlo any longer;
it's turning more into an evaluation function.  When it first started,
the play-outs were random; now they are semi-random - essentially
forming a binary evaluation function.  In fact they ARE a binary
evaluation function - since that exact node will never have this
function directly applied to it again (with the standard UCT
formulation).

So what we have is a best first search with an evaluation function!

(I would still argue that some randomness is important - but I can't
explain why in this email. A clue: it has to do with recovering
from misconceptions, if you can figure that out!)
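
To pin that reading down, here is a sketch of the play-out used as a binary
evaluation function (the position interface is hypothetical, not any particular
program's API):

    import random

    def binary_eval(position, policy=None):
        # One (semi-)random play-out: returns 1.0 if the side to move at
        # `position` wins the play-out, 0.0 otherwise.  `position` is assumed to
        # offer to_move, is_over(), legal_moves(), play() and winner().
        side = position.to_move
        while not position.is_over():
            moves = position.legal_moves()
            move = policy(position, moves) if policy is not None else random.choice(moves)
            position.play(move)
        return 1.0 if position.winner() == side else 0.0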

If you look at what mtd(f) is, it's not alpha/beta, it's more like
a hybrid - a best-first search that iterates.   But it's only a hop,
skip, and jump away from standard alpha/beta.
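
For readers who have not seen it, this is roughly the MTD(f) skeleton being
referred to (a generic sketch assuming a fail-soft alpha-beta with a
transposition table, not Chrilly's code):

    def mtdf(root, depth, first_guess, alphabeta):
        # Converge on the minimax value with repeated zero-window searches.
        # `alphabeta(root, alpha, beta, depth)` is assumed to be fail-soft and
        # backed by a transposition table, so the re-searches are cheap.
        g = first_guess
        lower, upper = float("-inf"), float("inf")
        while lower < upper:
            beta = g + 1 if g == lower else g
            g = alphabeta(root, beta - 1, beta, depth)
            if g < beta:
                upper = g          # fail low: the true value is at most g
            else:
                lower = g          # fail high: the true value is at least g
        return g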

It's not hard to imagine that, with work, the Chrilly approach will
start looking more like the UCT approach.   After all, if anything,
things have moved more TOWARDS the Chrilly approach and away from the
initial things we tried in UCT/Monte Carlo.

Maybe in a few years we will look back and see that we started from a
lot of different places and ended up at the same location.    This 
happens all the time in science.

In my opinion, the insight that Chrilly articulated was that all of a
sudden we are now all using some type of global search - the very
idea was considered blasphemy just 2 or 3 years ago.

- Don



On Fri, 2007-04-06 at 13:54 +0200, Rémi Coulom wrote:
 Chrilly wrote:
  I think on 9x9 the superiority of search-based programs is now 
  clearly demonstrated. The only question is whether UCT or Alpha-Beta is 
  superior.
 Hi Chrilly,
 
 Thanks for your report.
 
 The question of UCT versus Alpha-Beta is not open any more, in my 
 opinion. The current state of the art of Monte Carlo tree search is 
 about 500 Elo points stronger than the version of Crazy Stone you tested 
 against. Do you believe you can easily catch up with those 500 Elo 
 points? Also, I am convinced that UCT has tremendous potential for 
 further improvement. I have improved Crazy Stone by about 50 Elo points 
 per day over the past 10 days (on 9x9; the improvement on 13x13 and 19x19 
 is much larger). I am very confident that I can improve it much further.
 
 Rémi



Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread compgo123
An imperfect evaluation has errors. Is the exact value of the error known? No. 
Thus, it's random. :)
 
Daniel Liu
 
 
-Original Message-
From: [EMAIL PROTECTED]
To: computer-go@computer-go.org
Sent: Fri, 6 Apr 2007 10:57 AM
Subject: Re: [computer-go] The dominance of search (Suzie v. GnuGo)


On 4/6/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: 
 When it comes to a search, one needs to ask: is my evaluation function 
 perfect? 
 
There are exceptional cases in the late endgame and on tiny boards, 
but in general this is not an interesting question (because it 
obviously won't be perfect). 
 
In my opinion the question should be: is the evaluation function of a 
probabilistic nature? 
 
Alpha/Beta cutoffs only make sense when calling the evaluation 
function twice on the exact same position can be guaranteed to provide 
the exact same value. This is obviously not the case for MC 
evaluation, hence the success of UCT. 
 
Erik 


Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Chrilly

Thanks for your report.

The question of UCT versus Alpha-Beta is not open any more, in my
opinion. The current state of the art of Monte Carlo tree search is
about 500 Elo points stronger than the version of Crazy Stone you tested
against. Do you believe you can easily catch up with those 500 Elo
points? Also, I am convinced that UCT has tremendous potential for
further improvement. I have improved Crazy Stone by about 50 Elo points
per day over the past 10 days (on 9x9; the improvement on 13x13 and 19x19
is much larger). I am very confident that I can improve it much further.

Rémi

The main point of my mail was: search works well (at least on 9x9). I think 
we can agree on this point.


For the UCT v. Alpha-Beta question there is a simple proof of the pudding: 
send us the latest/strongest version and we will try to beat it.


Suzie is so far a very proprietary system which runs only under GoAhead's 
GUI, and GoAhead does not support GTP. GnuGo is run with a hack. The matches 
against Crazy Stone are done by hand; it's Stefan Mertin's version of watching 
TV.
I am currently working on a modern C#-based GUI which will also support 
GTP, but progress is rather slow due to my engagement with Siemens***. Once 
this GUI exists we will be able to participate in KGS tournaments, and other 
programmers can get Suzie if they like.
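
For readers who have not looked at GTP, this is roughly what supporting it
amounts to; a minimal sketch of driving an engine over pipes (GNU Go is used
here because it has a GTP mode; the small helper itself is hypothetical):

    import subprocess

    engine = subprocess.Popen(["gnugo", "--mode", "gtp"],
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                              universal_newlines=True)

    def gtp(command):
        # Send one plain-text GTP command; a reply ends with an empty line.
        engine.stdin.write(command + "\n")
        engine.stdin.flush()
        lines = []
        while True:
            line = engine.stdout.readline()
            if line.strip() == "":
                break
            lines.append(line.strip())
        return "\n".join(lines)

    print(gtp("boardsize 9"))
    print(gtp("genmove black"))    # e.g. "= E5"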


*** I have also done almost nothing to improve the search; the main progress 
is due to Peter's intensive work on the evaluation.


Chrilly



[computer-go] UCT

2007-04-06 Thread compgo123
I just realized that UCT describes the case of trading stocks perfectly. 
 
 


Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Rémi Coulom

Chrilly wrote:
The main point of my mail was: search works well (at least on 9x9). I 
think we can agree on this point.


Yes.



For the UCT v. Alpha-Beta question there is a simple proof of the 
pudding: send us the latest/strongest version and we will try to beat it.


I do not plan to distribute new versions any more, but I will connect 
Crazy Stone to the servers, CGOS and KGS.


I believe MonteGNU might be available. It is stronger on 9x9 than both 
GNU Go and the public versions of Crazy Stone.


Rémi


Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Erik van der Werf

On 4/6/07, Don Dailey [EMAIL PROTECTED] wrote:

On Fri, 2007-04-06 at 12:43 -0400, [EMAIL PROTECTED] wrote:
 Alpha/Beta cutoffs only make sense when calling the evaluation
 function twice on the exact same position can be guaranteed to
 provide
 the exact same value. This is obviously not the case for MC
 evaluation, hence the success of UCT.

I don't know if any of this is true.


Noticed the subtle "can be" in my statement above?

I'm quite aware of all the hairy details (e.g., strictly speaking the
use of a transposition table already voids the above premise).


You can apply alpha beta
cutoffs whether the evaluation function is deterministic or not.


I know quite well how alpha-beta is used in practice, but that's not
the point I tried to get across. The cool thing about alpha-beta cutoffs
is that, under certain conditions, they are guaranteed to preserve the
minimax result without searching the full tree. However, if the
evaluation function is non-deterministic, that guarantee simply doesn't
make sense any more. I'm not saying that therefore alpha-beta can't be
used; in fact I'll be the first to admit that you can apply it to
*any* domain with a finite number of known legal actions. Not that
that's a good plan, but for sure you will always get something out. In
many games it will even play quite well. In practice, pathology in
search is known to be rare; however, my impression is that in Go it
may be more relevant than in other games such as chess, where, e.g.,
noise actually helps as a weak mobility component. UCT was
specifically designed to deal with uncertainty, so that's why I think
it's better suited for highly uncertain evaluations.


UCT calls the random play-out only once on any given position
and there is no reason in principle that it couldn't be a
deterministic evaluation function instead.


Sure, and you could even argue that the random play-out *is*
deterministic. The fact remains that the uncertainty/variance of one
playout is in most cases still huge, and the tree search has to deal
with this (or in Chrilly's words, filter it). Unless the fake mobility
is helping significantly, this could be a severe disadvantage for
alpha-beta-type searchers compared to UCT.


To me, a true selective program cuts off a line forever.


I would call that a greedy algorithm (because it never reconsiders
previous decisions).


I have this idea that perhaps a good evaluation function could
replace the play-out portion of the UCT programs.


Agreed.


It only comes down to whether the inherent randomness is critical
to the success of UCT or not.   If it isn't,  then a play-out is
nothing more than just an evaluation function.


My guess is that which type of search works best for a
given evaluation function depends on the amounts of (deterministic)
bias and (probabilistic) uncertainty in the evaluations (and so far I
see MC mainly as an extremely noisy evaluation with not too much
systematic bias).
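
A toy numerical sketch of that distinction (the numbers and helpers are made up
purely for illustration): averaging more play-outs shrinks the random part by
about 1/sqrt(n), but any systematic bias survives unchanged.

    import random

    def noisy_eval(true_value, bias, noise_sd):
        return true_value + bias + random.gauss(0.0, noise_sd)

    def averaged_eval(true_value, bias, noise_sd, n):
        return sum(noisy_eval(true_value, bias, noise_sd) for _ in range(n)) / n

    # With true_value=0.5, bias=0.05, noise_sd=0.5: a single sample is dominated
    # by noise, while the average of 10000 samples sits near 0.55 -- never 0.5.
    print(averaged_eval(0.5, 0.05, 0.5, 10000))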

Erik


Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Don Dailey
On Fri, 2007-04-06 at 23:41 +0200, Erik van der Werf wrote:
 My guess is that which type of search works best for a
 given evaluation function depends on the amounts of (deterministic)
 bias and (probabilistic) uncertainty in the evaluations (and so far I
 see MC mainly as an extremely noisy evaluation with not too much
 systematic bias).

It's interesting that Sylvain seems to consider the stronger Mogo
versions more brittle in some sense.   He claimed (in so many
words) that Lazarus gets results more consistent with its ratings
and that the strongest Mogos do not.    I wonder if this is 
because the play-outs are now more systematically biased?

- Don




Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Don Dailey
I want to clarify, however.

If your evaluation function is not deterministic, aspiration
search techniques become very dicey.    This is a problem
anyway with hash-table implementations and speculative cutoffs
based on the alpha-beta window (and especially the
aspiration window), but it's worth mentioning.

However, there is nothing wrong with using alpha-beta
search with an evaluation function that is not deterministic.
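
For concreteness, the kind of aspiration driver being talked about looks roughly
like this (a generic sketch around an assumed fail-soft alpha-beta, not any
particular engine's code); with a noisy evaluation, the fail-low/fail-high tests
that trigger the re-searches are exactly what becomes unreliable.

    def aspiration_search(root, depth, prev_score, window, alphabeta):
        # Search a narrow window around the previous score and widen it on a
        # fail low or fail high.  `alphabeta(root, alpha, beta, depth)` is
        # assumed to be a fail-soft alpha-beta.
        alpha, beta = prev_score - window, prev_score + window
        while True:
            score = alphabeta(root, alpha, beta, depth)
            if score <= alpha:             # fail low: widen downward, re-search
                window *= 2
                alpha = score - window
            elif score >= beta:            # fail high: widen upward, re-search
                window *= 2
                beta = score + window
            else:
                return score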

- Don


On Fri, 2007-04-06 at 16:24 -0400, Don Dailey wrote:
 On Fri, 2007-04-06 at 12:43 -0400, [EMAIL PROTECTED] wrote:
  Alpha/Beta cutoffs only make sense when calling the evaluation 
  function twice on the exact same position can be guaranteed to
  provide 
  the exact same value. This is obviously not the case for MC 
  evaluation, hence the success of UCT. 
 
 I don't know if any of this is true.  You can apply alpha beta
 cutoffs whether the evaluation function is deterministic or not. 



Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Erik van der Werf

On 4/6/07, Don Dailey [EMAIL PROTECTED] wrote:

However, there is nothing wrong with using alpha-beta
search with an evaluation function that is not deterministic.


I agree that some limited amount of non-determinism isn't necessarily
a bad thing, and in some cases it actually helps (e.g., when mobility
is important, or to avoid the exploitation of repeatable blunders).
However, do you really believe that this still holds if the variance
causes a spread over the maximum range of possible values of the
underlying ground-truths?

Erik


Re: [computer-go] The dominance of search (Suzie v. GnuGo)

2007-04-06 Thread Darren Cook
 (R==1). An incorrect pruning decision is not taken forever.  The
 general idea is to use information from the search tree to shape the
 search tree. Ulf Lorenz from the University of Paderborn considers the search
 tree as an adaptable error filter.
 ...
 UCT and Monte Carlo.   It's not so much Monte Carlo any longer.
 Yes, exactly. I also think that the difference is fuzzy. Both methods
 fit into the adaptable error filter model of Ulf.

Hi Chrilly,
Do you have a recommendation for a good paper to read on this? Ideally
one that doesn't need specialized chess knowledge to appreciate, but I
may not have a choice: Google is giving me 0 hits on "adaptable error
filter".

Darren


-- 
Darren Cook
http://dcook.org/mlsn/ (English-Japanese-German-Chinese free dictionary)
http://dcook.org/work/ (About me and my work)
http://dcook.org/work/charts/  (My flash charting demos)