Re: [computer-go] java reference bot

2008-10-24 Thread Michael Williams

I noticed that the Java reference bot does not listen to the color parameter of 
genmove.  It alternates colors regardless of what is specified.
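
For reference, the GTP genmove command carries the color to move as its
argument, so the fix is to read that argument instead of toggling an internal
side-to-move flag.  A tiny self-contained Java sketch of just the parsing step
(this is not the reference bot's code; the class and method names are made up):

public class GenmoveColor {

    // "genmove b" / "genmove black" ask for a black move; "genmove w" /
    // "genmove white" ask for a white one.  The optional numeric command id
    // that GTP allows in front of the command is not handled here.
    static boolean genmoveIsBlack(String gtpLine) {
        String[] tok = gtpLine.trim().toLowerCase().split("\\s+");
        String color = tok[tok.length - 1];
        return color.startsWith("b");
    }

    public static void main(String[] args) {
        System.out.println(genmoveIsBlack("genmove b"));      // true
        System.out.println(genmoveIsBlack("genmove white"));  // false
    }
}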


Don Dailey wrote:

On Mon, 2008-10-13 at 23:21 -0400, Joshua Shriver wrote:

Is the source available? Would be neat to see.


Yes,  get it here:  http://cgos.boardspace.net/public/javabot.zip

It includes a simple Unix-style Makefile. 


For you Java programmers: I'm sure you won't like it - I'm not a Java
programmer, but I did try to comment it fairly well and make it readable
because it's supposed to be a reference bot.

If anyone wants to clean it up, make it more readable, or speed it up
(without uglying it up), I would be interested and would incorporate that
into the final reference bot.  I don't know Java-specific dos and don'ts
and idioms for getting faster code.

But of course first I want to find all the bugs.  


- Don






-Josh

On Mon, Oct 13, 2008 at 7:14 PM, Don Dailey [EMAIL PROTECTED] wrote:
I made a reference bot and I want someone(s) to help me check
it out
with equivalent data from their own program.  There are no
guarantees
that I have this correct of course.






Re: [computer-go] java reference bot

2008-10-24 Thread Don Dailey
On Fri, 2008-10-24 at 23:59 -0400, Michael Williams wrote:
 I noticed that the Java reference bot does not listen to the color parameter 
 of genmove.  It alternates colors regardless of what is specified.

Yes, it's hacked together.  I meant to come back to it later to clean up
stuff like that.   

The GTP stuff is horrible anyway,  I cut and pasted it from something on
the web and then modified that. 

- Don


 
 Don Dailey wrote:
  On Mon, 2008-10-13 at 23:21 -0400, Joshua Shriver wrote:
  Is the source available? Would be neat to see.
  
  Yes,  get it here:  http://cgos.boardspace.net/public/javabot.zip
  
  It includes a unix style simple Makefile. 
  
  For you java programmers:  I'm sure you won't like it - I'm not a java
  programmer but I did try to comment it fairly well and make it readable
  because it's supposed to be a reference bot.
  
  If anyone wants to clean it up, make it more readable, or speed it up
  (without uglying it up), I would be interested and would incorporate that
  into the final reference bot.  I don't know Java-specific dos and
  don'ts and idioms for getting faster code.
  
  But of course first I want to find all the bugs.  
  
  - Don
  
  
  
  
  
  -Josh
 
  On Mon, Oct 13, 2008 at 7:14 PM, Don Dailey [EMAIL PROTECTED] wrote:
  I made a reference bot and I want someone(s) to help me check
  it out
  with equivalent data from their own program.  There are no
  guarantees
  that I have this correct of course.
  
 
 

[computer-go] java reference bot

2008-10-14 Thread Denis fidaali


 Hi.
 First things first: I think the specification is enough as it is.
 I hope that we can end up with something usable by anyone,
 even non-go people, as a final goal. We'll have to get non-go-experienced
 people to beta-read it (and criticize it) for us, I suppose. And we'll probably
 have to get an HTML version with some fancy illustrations (which I won't be
 helpful for).
 
 I really look forward to being able to get non-go people involved easily :)
 I'm pretty sure a lot of those who accept this contest would end up
 being very valuable for the community :)

We'll probably have to go a bit deeper into the GTP part ultimately.
-- 

===
   5.  Scoring is Chinese scoring.  When a play-out completes, the 
  score is taken accounting for komi and statistics are kept.  
  
I think I would like it if we just spelled out how it should be done,
using the eye definition we impose anyway.

-
 I propose : 
-
  5. Scoring is done at the end of a game (two consecutive passes),
   in the following way:
   each stone on the board gives a point to the player who
   owns it. An empty intersection gives a point to the player (if any) who
   has a stone on each orthogonal intersection around it.
   If black's score is greater than 0.0 then it is scored as a black win.
   Otherwise it is scored as a white win.
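
To make that concrete, here is a minimal self-contained Java sketch of the
rule above (the board encoding - 0 empty, 1 black, 2 white, in an N x N
array - and the class name are only illustrative, not anyone's actual bot
code):

public class AreaScore {

    // Returns black's points minus white's points under the rule above
    // (stones plus empty points whose on-board orthogonal neighbours are
    // all one colour).  The caller tests (result - komi) > 0 for a black win.
    static int blackMinusWhite(int[][] b) {
        int n = b.length, black = 0, white = 0;
        int[][] orth = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int x = 0; x < n; x++) {
            for (int y = 0; y < n; y++) {
                if (b[x][y] == 1) { black++; continue; }
                if (b[x][y] == 2) { white++; continue; }
                int owner = 0;          // 0 = undecided, -1 = nobody
                for (int[] d : orth) {
                    int nx = x + d[0], ny = y + d[1];
                    if (nx < 0 || ny < 0 || nx >= n || ny >= n) continue;
                    int s = b[nx][ny];
                    if (s == 0 || (owner != 0 && owner != s)) { owner = -1; break; }
                    owner = s;
                }
                if (owner == 1) black++;
                else if (owner == 2) white++;
            }
        }
        return black - white;
    }

    public static void main(String[] args) {
        int[][] board = {
            {1, 0, 2},
            {1, 0, 2},
            {1, 0, 2},
        };
        // The middle column touches both colours, so neither side gets it.
        System.out.println(blackMinusWhite(board));   // prints 0
    }
}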
 
 
 ===
   1. Must be able to play complete games for comprehensive conformity
 testing.
 
 I do not quite understand the point. But it can't really hurt either .. :)
 
 ===
   2. In the play-out phase, the moves must be chosen in a uniformly
 random way between legal moves that do not fill 1 point eyes and
 obey the simple-ko restriction.

 When a move in the play-out is not possible, a pass is given.
 
I'd like us to be more descriptive about the simple-ko restriction, if possible.
(I'll try to propose something, but I'm getting low on time right now.)

===
  3. Play-outs stop after 2 consecutive pass moves, OR when N*N*3
 moves have been completed, except that at least 1 move gets tried
 where N is the size of the board.  So if the board is 9x9 then
   the game is stopped after 9*9*3 = 81*3 = 243 moves, assuming at
 least one move has been tried in the play-outs.
 
I don't quite get the point of the "except that at least 1 move gets tried"
part.

===
   ref-nodes - return total moves executed in play-outs
   (including both pass moves at end of each
   play-out.)

   ref-score - return total win fraction for black.  
   
I do not find ref-nodes all that descriptive for "return total moves executed
in play-outs".
Maybe it is quite standard to call that number ref-nodes? As it's only AMAF,
there are no nodes per se, are there?  What about a ref-numberOfMove command?
--
Here are the data you requested, from the implementation I want
to use as a reference. I'm not able to get values for integer komi
(my system does not account for draws).

+++
Komi 0.5
mean score =0.5244261847821416
+++
komi 5.5
mean score =0.44674397181685754
+++
Komi 6.5
mean score =0.4467712426921182
+++
Komi 7.5
mean score =0.42132957622630574
+++
87317.15777498983 Playout/sec
Time=11.461321297
Number of playout=1000770.0
Mean moves per sim 111.06128680915695
+++

It uses the Mersenne Twister for random-number generation.
But this is a 4-thread test. I use nanoTime as a help
to set up the (per-thread) seed, combined with the thread number.
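
As a concrete illustration of that kind of per-thread seeding, a small
self-contained sketch (java.util.Random stands in for the Mersenne Twister,
and the mixing constant and names are arbitrary):

import java.util.Random;

public class PerThreadSeeds {
    public static void main(String[] args) throws InterruptedException {
        int nThreads = 4;
        Thread[] workers = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            final int id = i;
            // Combine the nanosecond clock with the thread number so that
            // threads started in the same instant still get distinct seeds.
            final long seed = System.nanoTime() ^ (0x9E3779B97F4A7C15L * (id + 1));
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    Random rng = new Random(seed);   // one generator per thread
                    System.out.println("thread " + id + " first draw: "
                                       + rng.nextInt(9 * 9));
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
    }
}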

I think it's interesting to give the playouts/sec score.
Then your reference bot can be used as a reference benchmark.
That is not perfect of course, but it gives something to chew on.
(I get quite a large variation in speed from run to run
with 1,000,000 simulations, ranging from less than 80k/s to close to 90k/s,
with 4 threads over my 4 cores.)

My implementation is in Java too, and has nothing fancy to it,
so I might as well publish it later on. I should probably clean it
up a bit and make a few optimisations (by refactoring).




--
Don said :
--
I made a reference bot and I want someone(s) to help me check it out
with equivalent data from their own program.  There are no guarantees
that I have this correct of course.

Doing 1 million play-outs from the opening position I get the following
numbers for various komi:

  playouts:1,000,000
  komi:5.5
 moves:  111,030,705
 score:  0.445677

  playouts:1,000,000
  komi:6.0
 moves:  111,066,273
 score:  0.446729

  playouts:1,000,000
  komi:

Re: [computer-go] java reference bot

2008-10-14 Thread Don Dailey
Thanks for the comments and suggestions.   My responses below ...

On Tue, 2008-10-14 at 09:00 +0200, Denis fidaali wrote:
 
  Hi.
  First things first: I think the specification is enough as it is.
  I hope that we can end up with something usable by anyone,
  even non-go people, as a final goal. We'll have to get non-go-experienced
  people to beta-read it (and criticize it) for us, I suppose. And we'll probably
  have to get an HTML version with some fancy illustrations (which I won't be
  helpful for).

I don't really want to be too involved in drafting this as it's not my
forte.  However I hope someone else (who is a better writer) will be
willing.   I recognize that my draft is far from ideal and I just hoped
it was enough for those of us with experience to make sense of. Ultimately,
I think we have to explain the rules, and we could start with John Tromp's
logical rules at  http://homepages.cwi.nl/~tromp/go.html  and
modify them accordingly or just integrate them into the document.

Note that the only difference between those rules and our rules is the
suicide rule.  

  
  I really look forward to being able to get non-go people involved easily :)
  I'm pretty sure a lot of those who accept this contest would end up
  being very valuable for the community :)

It's not clear to me whether others will respond, but it would be pretty
cool and it would probably grow the computer go community at the same
time.

 
 We'll probably have to go a bit deeper into the GTP part ultimately.

Do you mean the explanation or the actual commands I added?   For the
explanation we could point them to the excellent GTP draft standard,
then explain this as 2 additional commands, and then clarify what I
wrote.

I made only 2 GTP commands as I wanted to be careful not to add anything
that would likely impose a barrier to implementation, and this
influenced the specification in many places too.   I would personally
have made slightly different choices about some things which would have
complicated the draft and the implementation.   But I resisted doing
that to keep it as generic as possible.    Examples:  my own tests show
that you should stop the AMAF tally a little early.   Christopher Birk
uses a decaying weight in his tally.   I'm sure everyone who has
implemented one of these has their own little enhancements, but we want
to keep this really simple.


 -- 
 
 ===
5.  Scoring is Chinese scoring.  When a play-out completes, the 
   score is taken accounting for komi and statistics are kept.  
   
  I think I would like it if we just spelled out how it should be done,
  using the eye definition we impose anyway.
 
 -
  I propose : 
 -
   5. Scoring is done at the end of a game (two consecutive passes),
   in the following way:
   each stone on the board gives a point to the player who
   owns it. An empty intersection gives a point to the player (if any) who
   has a stone on each orthogonal intersection around it.
   If black's score is greater than 0.0 then it is scored as a black win.
   Otherwise it is scored as a white win.

Yes, or as I mentioned we could borrow from the logical rules.

 
  ===
1. Must be able to play complete games for comprehensive conformity
  testing.
  
  I do not quite understand the point. But it can't really hurt either .. :)

It seems slightly out of place but most of this is focused on how to do
the play-outs and I wanted to make it clear that this should be able to
play a real game and real games could be used for conformance testing
(if you don't score very close to 50% against others, something is
wrong.)

  
  ===
2. In the play-out phase, the moves must be chosen in a uniformly
  random way between legal moves that do not fill 1 point eyes and
  obey the simple-ko restriction.
 
  When a move in the play-out is not possible, a pass is given.
  
  I'd like us to be more descriptive about the simple-ko restriction, if
  possible.
  (I'll try to propose something, but I'm getting low on time right now.)


Yes, this goes back to my point (and yours) that we really need to
explain the rules. 

 
 ===
   3. Play-outs stop after 2 consecutive pass moves, OR when N*N*3
  moves have been completed, except that at least 1 move gets tried
  where N is the size of the board.  So if the board is 9x9 then
  the game is stopped after 9*9*3 = 81*3 = 243 moves, assuming at
  least one move has been tried in the play-outs.
  
  I don't quite get the point of the "except that at least 1 move gets tried"
  part.

Because you don't want play-outs that stop before they begin.  In order
to get samples of moves you must play at least 1 move in the play-outs
even if you are beyond move 243.
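
To illustrate that reading, a tiny self-contained sketch (the names and the
fake move generator are invented; only the shape of the loop condition
matters):

import java.util.Random;

public class PlayoutLength {
    public static void main(String[] args) {
        int n = 9;
        int maxMoves = 3 * n * n;        // 243 on 9x9
        int gameMoves = 250;             // pretend the real game is already past 243
        int totalMoves = gameMoves;
        int playoutMoves = 0;
        int consecutivePasses = 0;
        Random rng = new Random(42);
        while (consecutivePasses < 2
               && (totalMoves < maxMoves || playoutMoves == 0)) {
            boolean pass = (rng.nextInt(10) == 0);   // stand-in for real move generation
            consecutivePasses = pass ? consecutivePasses + 1 : 0;
            totalMoves++;
            playoutMoves++;
        }
        System.out.println("play-out moves: " + playoutMoves);  // prints 1 here
    }
}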

 
 ===
ref-nodes - return total moves executed in play-outs
  (including both pass moves at end of each
play-out.)
 
ref-score - return total 

Re: [computer-go] java reference bot

2008-10-14 Thread Don Dailey
Would you be willing to take what I have and integrate this for us?

My currently official document is in the README file here:

 http://cgos.boardspace.net/public/javabot.zip

(which has been slightly corrected, the Makefile didn't produce a valid
jar file because the main class was not properly identified, but it
compiles and works now.)

I will eventually make a web page from this.  

- Don


On Wed, 2008-10-15 at 06:57 +1300, Stuart A. Yeates wrote:
 Seems like we need a short introduction too:
 
 Go is a board game played on a rectangular grid, usually 19x19.
 Pieces (or stones) are placed alternately by the black and white
 players. Pieces are played onto empty vertexes with the aim of
 surrounding and capturing the opponent's pieces. The game continues
 until both players pass. Go is scored on territory---essentially
 whoever has the most territory wins. See
 http://senseis.xmp.net/?BasicRulesOfGo for a more complete but
 informal introduction.
 
 This task aims to play the game of go using Monte Carlo simulation,
 playing many random games to determine the best next move. For an
 introduction to Monte Carlo simulation see
 http://senseis.xmp.net/?MonteCarloTreeSearch or
 http://en.wikipedia.org/wiki/Monte_Carlo_method
 
 This task uses a simple simulation and somewhat simplified
 interpretation of the rules for the sake of ease of implementation.
 
 You also need to explain the following terms you use without
 explanation: komi, play-outs, gtp, genmove, ko, superko. Explanation
 by reference to a good beginner page on http://senseis.xmp.net/ or
 http://en.wikipedia.org/wiki/ should do.
 
 cheers
 stuart
 
 
 
 
 
 
 On Tue, Oct 14, 2008 at 12:14 PM, Don Dailey [EMAIL PROTECTED] wrote:
  I made a reference bot and I want someone(s) to help me check it out
  with equivalent data from their own program.  There are no guarantees
  that I have this correct of course.
 
  Doing 1 million play-outs from the opening position I get the following
  numbers for various komi:
 
   playouts:1,000,000
   komi:5.5
  moves:  111,030,705
  score:  0.445677
 
   playouts:1,000,000
   komi:6.0
  moves:  111,066,273
  score:  0.446729
 
   playouts:1,000,000
   komi:6.5
  moves:  111,040,546
  score:  0.447138
 
   playouts:1,000,000
   komi:7.0
  moves:  111,029,204
  score:  0.4333795
 
   playouts:1,000,000
   komi:7.5
  moves:  111,047,843
  score:  0.421281
 
  (I also get a score of 0.524478 for 0.0 komi)
 
  Score is from black's point of view.  Score is not the score of the
  best move of course but the combined average score of all 1 million
  play-outs using the stated komi and ranges from zero to one.
 
  I am going to build a test harness to compare multiple bots side by
  side using gtp commands.  I made up two private gtp commands to
  facilitate this:
 
ref-nodes - return total moves executed in play-outs
(including both pass moves at end of each
play-out.)
 
ref-score - return total win fraction for black.
 
   NOTE: both commands report stats from last given genmove search.
 
 
 
  I hope to get people's opinions on the following implementation
  specification.  I'm definitely not a writer, so I need to know if this
  very informal spec is enough at least for experienced MC bot authors
  or where there are still some ambiguous points.
 
 
  I'm using the following implementation specification:
 
  [ bot implementation specification ]
 
  This is an informal implementation specification document for
  writing a simple Monte Carlo Bot program.  The idea is to build a bot
  like this in ANY language and test it for performance (and
  conformity.)  Can be used as a general language benchmark but is as much
  about the implementation as the language.This specification assumes
  some knowledge of go and Monte Carlo go programs.   (If you don't like
  it, please write a better one for me!)
 
 
 
   1. Must be able to play complete games for comprehensive conformity
  testing.
 
   2. In the play-out phase, the moves must be chosen in a uniformly
  random way between legal moves that do not fill 1 point eyes and
  obey the simple-ko restriction.
 
  When a move in the play-out is not possible, a pass is given.
 
   3. Play-outs stop after 2 consecutive pass moves, OR when N*N*3
  moves have been completed, except that at least 1 move gets tried
  where N is the size of the board.  So if the board is 9x9 then
   the game is stopped after 9*9*3 = 81*3 = 243 moves, assuming at
  least one move has been tried in the play-outs.
 
   4.  A 1 point eye is an empty point surrounded by friendly stones
   for the side to move.  Additionally, we have 2 cases.  If the
   point is NOT on any edge (where the corner is an edge) there
   must be no more than one diagonal enemy stone.  If the point in
   question is on the edge, there must be NO diagonal enemy stones.

[computer-go] java reference bot

2008-10-13 Thread Don Dailey
I made a reference bot and I want someone(s) to help me check it out
with equivalent data from their own program.  There are no guarantees
that I have this correct of course.

Doing 1 million play-outs from the opening position I get the following
numbers for various komi:

  playouts:1,000,000
  komi:5.5
 moves:  111,030,705
 score:  0.445677

  playouts:1,000,000
  komi:6.0
 moves:  111,066,273
 score:  0.446729

  playouts:1,000,000
  komi:6.5
 moves:  111,040,546
 score:  0.447138

  playouts:1,000,000
  komi:7.0
 moves:  111,029,204
 score:  0.4333795

  playouts:1,000,000
  komi:7.5
 moves:  111,047,843
 score:  0.421281

(I also get a score of 0.524478 for 0.0 komi)

Score is from black's point of view.  Score is not the score of the
best move, of course, but the combined average score of all 1 million
play-outs using the stated komi, and it ranges from zero to one.

I am going to build a test harness to compare multiple bots side by
side using gtp commands.  I made up two private gtp commands to
facilitate this:

   ref-nodes - return total moves executed in play-outs
   (including both pass moves at end of each
   play-out.)

   ref-score - return total win fraction for black.  

  NOTE: both commands report stats from last given genmove search.
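
A minimal self-contained sketch of a GTP loop that answers just these two
commands (the counter fields are placeholders a real bot would update during
its play-outs; the response framing follows the usual GTP convention of
"= result" plus a blank line):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RefGtpStub {
    // A real bot would update these while running play-outs for genmove.
    static long totalPlayoutMoves = 0;   // includes the two passes ending each play-out
    static long playouts = 0;
    static long wins = 0;                // wins for the reporting side

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            String cmd = line.trim();                // optional GTP command ids not handled
            if (cmd.equals("ref-nodes")) {
                System.out.println("= " + totalPlayoutMoves + "\n");
            } else if (cmd.equals("ref-score")) {
                double frac = (playouts == 0) ? 0.0 : (double) wins / playouts;
                System.out.println("= " + frac + "\n");
            } else if (cmd.equals("quit")) {
                System.out.println("=\n");
                break;
            } else if (cmd.length() > 0) {
                System.out.println("? unknown command\n");
            }
        }
    }
}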

   

I hope to get people's opinions on the following implementation
specification.  I'm definitely not a writer, so I need to know if this
very informal spec is enough at least for experienced MC bot authors
or where there are still some ambiguous points.


I'm using the following implementation specification:

[ bot implementation specification ]

This is an informal implementation specification document for
writing a simple Monte Carlo Bot program.  The idea is to build a bot
like this in ANY language and test it for performance (and
conformity.)  Can be used as a general language benchmark but is as much
about the implementation as the language.This specification assumes
some knowledge of go and Monte Carlo go programs.   (If you don't like
it, please write a better one for me!)



  1. Must be able to play complete games for comprehensive conformity
 testing.

  2. In the play-out phase, the moves must be chosen in a uniformly
 random way between legal moves that do not fill 1 point eyes and
 obey the simple-ko restriction.

 When a move in the play-out is not possible, a pass is given.

  3. Play-outs stop after 2 consecutive pass moves, OR when N*N*3
 moves have been completed, except that at least 1 move gets tried
 where N is the size of the board.  So if the board is 9x9 then
 the game is stopped after 9*9*3 = 81*3 = 243 moves, assuming at
 least one move has been tried in the play-outs.

  4.  A 1 point eye is an empty point surrounded by friendly stones
  for the side to move.  Additionally, we have 2 cases.  If the
  point is NOT on any edge (where the corner is an edge) there
  must be no more than one diagonal enemy stone.  If the point in
  question is on the edge, there must be NO diagonal enemy stones.
  (A sketch of this eye test, the AMAF tally of item 6 and the move
  selection of item 7 appears after this list.)

  5.  Scoring is Chinese scoring.  When a play-out completes, the 
  score is taken accounting for komi and statistics are kept.  

  6.  Scoring for game play uses AMAF - all moves as first.  In the
  play-outs, statistics are taken on moves played during the
  play-outs.  Statistics are taken only on moves that are played by
  the side to move, and only if the move in question is being
  played for the first time in the play-out (by either side.)  A
  win/loss record is kept for these moves.

  7.  The move with the highest statistical win rate is the one
  selected to play in the actual game.  In the case of moves with
  even scores the choice is made randomly between them.

  8.  Pass moves are never selected as the final move to play unless no
  other non-eye filling move is possible.

  9.  Random number generator is unspecified - your program should
  simply pass the black box test and possibly an optional
  additional test which consists of long matches between other
  known conforming bots.  Your program should score close to 50%
  against other properly implemented programs.

 10.  Suicide is not allowed in the play-outs or in the games it plays.  
 
 11.  When selecting moves to play in the actual game (not play-outs)
  positional superko is checked and forbidden.

 12.  If a move was never seen in the play-outs (it has a count
  of zero), it is ignored for move selection.  
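
Below is a self-contained Java sketch of items 4, 6 and 7 above - the eye
test, the AMAF tally and the tie-broken move selection.  It is only one
reading of this spec, not the code in javabot.zip; the board encoding
(0 empty, 1 black, 2 white) and all names are invented, and legality,
suicide and superko checks are left out.

public class SpecSketch {

    // Item 4: an empty point is a 1-point eye for `friend` (1 = black, 2 = white)
    // if every on-board orthogonal neighbour is a friendly stone, and the
    // diagonal condition holds: no enemy diagonals on the edge (corners count
    // as edge), at most one enemy diagonal in the interior.
    static boolean isOnePointEye(int[][] b, int x, int y, int friend) {
        int n = b.length;
        if (b[x][y] != 0) return false;
        int enemy = (friend == 1) ? 2 : 1;
        int[][] orth = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int[] d : orth) {
            int nx = x + d[0], ny = y + d[1];
            if (nx < 0 || ny < 0 || nx >= n || ny >= n) continue;
            if (b[nx][ny] != friend) return false;
        }
        boolean onEdge = (x == 0 || y == 0 || x == n - 1 || y == n - 1);
        int enemyDiagonals = 0;
        int[][] diag = {{1, 1}, {1, -1}, {-1, 1}, {-1, -1}};
        for (int[] d : diag) {
            int nx = x + d[0], ny = y + d[1];
            if (nx < 0 || ny < 0 || nx >= n || ny >= n) continue;
            if (b[nx][ny] == enemy) enemyDiagonals++;
        }
        return onEdge ? enemyDiagonals == 0 : enemyDiagonals <= 1;
    }

    // Item 6: after one play-out, credit a win or a loss to every point whose
    // FIRST occurrence in the play-out (by either side) was a move by the side
    // to move.  moves[i] is the point played at step i (-1 for a pass) and
    // colors[i] is who played it.
    static void amafTally(int[] moves, int[] colors, int numMoves,
                          int sideToMove, boolean sideToMoveWon,
                          int[] winCount, int[] playCount, boolean[] seen) {
        java.util.Arrays.fill(seen, false);
        for (int i = 0; i < numMoves; i++) {
            int pt = moves[i];
            if (pt < 0 || seen[pt]) continue;        // passes and repeats don't count
            seen[pt] = true;
            if (colors[i] != sideToMove) continue;   // first played by the opponent
            playCount[pt]++;
            if (sideToMoveWon) winCount[pt]++;
        }
    }

    // Items 7 and 12: pick the sampled point with the best win rate, breaking
    // ties uniformly at random; returns -1 (pass) if nothing was ever sampled.
    static int selectMove(int[] winCount, int[] playCount, java.util.Random rng) {
        int best = -1, ties = 0;
        double bestRate = -1.0;
        for (int pt = 0; pt < playCount.length; pt++) {
            if (playCount[pt] == 0) continue;        // item 12: never sampled
            double rate = (double) winCount[pt] / playCount[pt];
            if (rate > bestRate) { bestRate = rate; best = pt; ties = 1; }
            else if (rate == bestRate && rng.nextInt(++ties) == 0) best = pt;
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] b = new int[9][9];
        b[0][1] = 1; b[1][0] = 1; b[1][1] = 1;           // black stones around (0,0)
        System.out.println(isOnePointEye(b, 0, 0, 1));   // true: corner eye, no enemy diagonal
        b[1][1] = 2;                                     // enemy stone on the diagonal
        System.out.println(isOnePointEye(b, 0, 0, 1));   // false: edge points allow none
    }
}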




Re: [computer-go] java reference bot

2008-10-13 Thread Don Dailey
A minor correction in the GTP ref-score command.   Score is not from
black's point of view, but from the point of view of the player whose
turn it is to move.

- Don


On Mon, 2008-10-13 at 19:14 -0400, Don Dailey wrote:
 I made a reference bot and I want someone(s) to help me check it out
 with equivalent data from their own program.  There are no guarantees
 that I have this correct of course.
 
 Doing 1 million play-outs from the opening position I get the following
 numbers for various komi:
 
   playouts:1,000,000
   komi:5.5
  moves:  111,030,705
  score:  0.445677
 
   playouts:1,000,000
   komi:6.0
  moves:  111,066,273
  score:  0.446729
 
   playouts:1,000,000
   komi:6.5
  moves:  111,040,546
  score:  0.447138
 
   playouts:1,000,000
   komi:7.0
  moves:  111,029,204
  score:  0.4333795
 
   playouts:1,000,000
   komi:7.5
  moves:  111,047,843
  score:  0.421281
 
 (I also get a score of 0.524478 for 0.0 komi)
 
 Score is from black's point of view.  Score is not the score of the
 best move of course but the combined average score of all 1 million
 play-outs using the stated komi and ranges from zero to one.
 
 I am going to build a test harness to compare multiple bots side by
 side using gtp commands.  I made up two private gtp commands to
 facilitate this:
 
ref-nodes - return total moves executed in play-outs
  (including both pass moves at end of each
play-out.)
 
ref-score - return total win fraction for black.  
 
   NOTE: both commands report stats from last given genmove search.
 

 
 I hope to get people's opinions on the following implementation
 specification.  I'm definitely not a writer, so I need to know if this
 very informal spec is enough at least for experienced MC bot authors
 or where there are still some ambiguous points.
 
 
 I'm using the following implementation specification:
 
 [ bot implementation specification ]
 
 This is an informal implementation specification document for
 writing a simple Monte Carlo Bot program.  The idea is to build a bot
 like this in ANY language and test it for performance (and
 conformity.)  Can be used as a general language benchmark but is as much
 about the implementation as the language.This specification assumes
 some knowledge of go and Monte Carlo go programs.   (If you don't like
 it, please write a better one for me!)
 
 
 
   1. Must be able to play complete games for comprehensive conformity
  testing.
 
   2. In the play-out phase, the moves must be chosen in a uniformly
  random way between legal moves that do not fill 1 point eyes and
  obey the simple-ko restriction.
 
  When a move in the play-out is not possible, a pass is given.
 
   3. Play-outs stop after 2 consecutive pass moves, OR when N*N*3
  moves have been completed, except that at least 1 move gets tried
  where N is the size of the board.  So if the board is 9x9 then
   the game is stopped after 9*9*3 = 81*3 = 243 moves, assuming at
  least one move has been tried in the play-outs.
 
   4.  A 1 point eye is an empty point surrounded by friendly stones
   for the side to move.  Additionally, we have 2 cases.  If the
   point is NOT on any edge (where the corner is an edge) there
   must be no more than one diagonal enemy stone.  If the point in
   question is on the edge, there must be NO diagonal enemy stones.
 
   5.  Scoring is Chinese scoring.  When a play-out completes, the 
   score is taken accounting for komi and statistics are kept.  
 
   6.  Scoring for game play uses AMAF - all moves as first.  In the
   play-outs, statistics are taken on moves played during the
   play-outs.  Statistics are taken only on moves that are played by
   the side to move, and only if the move in question is being
   played for the first time in the play-out (by either side.)  A
   win/loss record is kept for these moves.
 
   7.  The move with the highest statistical win rate is the one
   selected for move in the actual game.  In the case of moves with
   even scores the choice is randomly made between them.
 
   8.  Pass moves are never selected as the final move to play unless no
   other non-eye filling move is possible.
 
   9.  Random number generator is unspecified - your program should
   simply pass the black box test and possibly an optional
   additional test which consists of long matches between other
   known conforming bots.  Your program should score close to 50%
   against other properly implemented programs.
 
  10.  Suicide not allowed in the play-outs or in games it plays.  
  
  11.  When selecting moves to play in the actual game (not play-outs)
   positional superko is checked and forbidden.
 
  12.  If a move was never seen in the play-outs (it has a count
   of zero), it is ignored for move selection.

Re: [computer-go] java reference bot

2008-10-13 Thread Joshua Shriver
Is the source available? Would be neat to see.

-Josh

On Mon, Oct 13, 2008 at 7:14 PM, Don Dailey [EMAIL PROTECTED] wrote:

 I made a reference bot and I want someone(s) to help me check it out
 with equivalent data from their own program.  There are no guarantees
 that I have this correct of course.



Re: [computer-go] java reference bot

2008-10-13 Thread Don Dailey

On Mon, 2008-10-13 at 23:21 -0400, Joshua Shriver wrote:
 Is the source available? Would be neat to see.

Yes,  get it here:  http://cgos.boardspace.net/public/javabot.zip

It includes a simple Unix-style Makefile. 

For you Java programmers: I'm sure you won't like it - I'm not a Java
programmer, but I did try to comment it fairly well and make it readable
because it's supposed to be a reference bot.

If anyone wants to clean it up, make it more readable, or speed it up
(without uglying it up), I would be interested and would incorporate that
into the final reference bot.  I don't know Java-specific dos and don'ts
and idioms for getting faster code.

But of course first I want to find all the bugs.  

- Don





 
 -Josh
 
 On Mon, Oct 13, 2008 at 7:14 PM, Don Dailey [EMAIL PROTECTED] wrote:
 I made a reference bot and I want someone(s) to help me check
 it out
 with equivalent data from their own program.  There are no
 guarantees
 that I have this correct of course.
 
 
 