[computer-go] Request for : Recursive Simulation Data + Tree decision standard

2008-10-22 Thread Denis fidaali

Hi.


--
 Recursive Simulation Data

Has anyone got hard data on the performance of a recursive simulator bot?
(What the hell does that mean? :) )

The idea: instead of a very fast, nearly 100% random playout, you would use, for
example, an AMAF strategy to build each simulation of how the game may go.

[
My tests show that even with as few as 10 AMAF simulations per move, the win rate
against the pure-random strategy is almost 100%. (There is nothing recursive about
those results.)
]

 So I really wonder how an engine using this simulation policy would behave:
 for each move, select one of the moves with the highest AMAF score after 10 simulations.
If all simulations give the same result (black wins, or black loses), the game is over,
the winner being whoever the simulations show.
Obviously you can use more simulations per move too.
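
To make the policy concrete, here is a minimal Java sketch of one such recursive
simulation. The Board/Move interfaces and the AMAF bookkeeping are hypothetical
placeholders, not taken from any reference bot:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecursiveSimulator {

    public interface Move {}

    public interface Board {
        List<Move> legalMoves();
        void play(Move m);
        boolean gameOver();
        boolean blackWins();
        Board copy();
        // One light (near-random) playout from this position; adds AMAF
        // credit for the moves it tried and returns true if black wins.
        boolean lightPlayoutWithAmaf(Map<Move, Double> amafScore);
    }

    private final int innerPlayouts;   // e.g. 10

    public RecursiveSimulator(int innerPlayouts) {
        this.innerPlayouts = innerPlayouts;
    }

    // Runs one outer "recursive" simulation and returns true if black wins.
    public boolean simulate(Board board) {
        while (!board.gameOver()) {
            Map<Move, Double> amafScore = new HashMap<>();
            int blackWinCount = 0;
            for (int i = 0; i < innerPlayouts; i++) {
                if (board.copy().lightPlayoutWithAmaf(amafScore)) {
                    blackWinCount++;
                }
            }
            // If all inner playouts agree, treat the game as decided.
            if (blackWinCount == 0 || blackWinCount == innerPlayouts) {
                return blackWinCount == innerPlayouts;
            }
            // Otherwise play the legal move with the highest AMAF score.
            Move best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (Move m : board.legalMoves()) {
                double s = amafScore.getOrDefault(m, 0.0);
                if (s > bestScore) { bestScore = s; best = m; }
            }
            board.play(best);
        }
        return board.blackWins();
    }
}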

 This idea is very basic, so I would like to have data for the following cases:

 - Using AMAF over the recursive simulations
 - Using a well-known algorithm (like UCT) with those recursive simulations.

The reason it may not have been tried much is that it would obviously consume
a lot of simulations (and thus be inefficient).
But I think it would be really interesting to have those data (which I probably
should be working on myself, but maybe someone has already done that particular
study). Monte Carlo is well known to be inefficient anyway :)

--
Tree decision standard


Now that we have a standard light policy, maybe we should get our hands dirty
on a standard tree-decision algorithm based on that standard light policy.
This basic algorithm is known to scale well, although it is inefficient if it
does not use AMAF or RAVE or the like. Still, it's something that almost everybody
has done ... so it's standard.

I would also like to discuss non-deterministic alternatives to UCT, since
multi-threading comes more naturally with them.
So this is about a standard tree-decision algorithm that is multi-threading
friendly.


I used a simple stochastic alternative to UCT with good results.
+ Let nmove be the number of legal moves from the node.
+ Let best_nvisit_child be the number of visits to the best child of this node.

 I used the following strategy:
draw a number R between 0 and best_nvisit_child + nmove. If R < best_nvisit_child,
explore the best move; otherwise pick another one at random.
I think it's an epsilon-greedy equivalent. I didn't try to tune it. It is
stochastic, and though I have no hard data to give you, it gave me a fair
result.
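
As an illustration, a small Java sketch of that selection rule (the Node interface
here is an assumption; a real tree node would carry more):

import java.util.List;
import java.util.Random;

final class StochasticSelect {

    interface Node {
        List<Node> children();   // one child per legal move
        int visits();
        double meanScore();      // current value estimate from simulations
    }

    private static final Random RNG = new Random();

    // Draw R in [0, best_nvisit_child + nmove): if R < best_nvisit_child,
    // descend into the current best child; otherwise explore another child.
    static Node select(Node parent) {
        List<Node> children = parent.children();
        Node best = children.get(0);
        for (Node c : children) {
            if (c.meanScore() > best.meanScore()) best = c;
        }
        int nmove = children.size();
        int r = RNG.nextInt(best.visits() + nmove);
        if (r < best.visits()) {
            return best;                               // exploit
        }
        Node other = children.get(RNG.nextInt(nmove)); // explore
        while (other == best && nmove > 1) {
            other = children.get(RNG.nextInt(nmove));
        }
        return other;
    }
}

Because the descent is a single random draw rather than a deterministic argmax
over UCB values, several threads can walk the tree concurrently without all
following the same path.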

I made a bot that scored 30% wins against a version of GNU Go level 0, using this
tree exploration strategy and 1000 light simulations. This bot, however, had a
lot of experimental stuff aimed at optimizing the per-light-simulation performance.
It scaled very poorly as the number of simulations increased
(something we easily run into when messing with AMAF :) ).

cheers.


[computer-go] survey : Debuging a board implementation.

2008-10-22 Thread Denis fidaali


 Hi.

-
WHAT TOOLS DO YOU USE FOR DEBUGGING BOARD IMPLEMENTATIONS ?
-




-
Here is how i do it :
-

I have made quite a few board implementations in the past years.
Initially I used a custom-made, dirty Java Swing visualizer. It was very
useful to get started, as we had the goal of reaching the 20k simulations/s mark.
We used structures that were not very human-readable, so the visualizer was very
helpful for testing purposes: it let me easily view different representations of
the board state.

Later on, when switching to C and assembly, I fell back on ASCII output. I
would then scroll through it looking for anything suspicious.
When working with that ASCII output, I had the most success with a two-phase
approach.
In the first phase I used a simple fixed move sequence and verified that the
structures were still coherent at the end of it:

+*+*+
+*+*+
+*.*+


This shape is built up, move after move, positioned against an edge.
Then black plays to capture the two white stones inside the shape,
then white plays inside the eye to capture black,
then black captures the white stone again.

+ represents white
* represents black

For the second phase I would simply make random moves and verify that all went well
(especially capture detection). I usually deactivated the eye-filling interdiction
for this phase; that lets the board loop a few times. I would then make 1000 moves
and check that everything was still fine, even after the board had been wiped out a
few times.
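
For what it's worth, the second phase boils down to something like this Java sketch
(Board/Move and the invariant check are placeholders for whatever structures the
implementation actually keeps):

import java.util.List;
import java.util.Random;

final class RandomSoakTest {

    interface Move {}

    interface Board {
        List<Move> legalMoves();   // eye filling allowed in this phase
        void play(Move m);
        boolean invariantsHold();  // recompute chains/liberties from scratch
                                   // and compare with the incremental state
    }

    static void run(Board board, int moves, long seed) {
        Random rng = new Random(seed);   // fixed seed so failures reproduce
        for (int i = 0; i < moves; i++) {
            List<Move> legal = board.legalMoves();
            if (legal.isEmpty()) break;
            board.play(legal.get(rng.nextInt(legal.size())));
            if (!board.invariantsHold()) {
                throw new AssertionError("board state corrupt after move " + i);
            }
        }
    }
}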


-
WHAT TOOLS DO YOU USE FOR DEBUGGING BOARD IMPLEMENTATIONS ?
-


[computer-go] Re: AMAF

2008-10-22 Thread Darren Cook
(I'd like to hijack Denis's mail; I've changed the subject)

 My tests shows that even with as few as 10 amaf simulation per move,
 the win rate against the pure-random strategy is almost 100%.

I'd thought people were saying that AMAF only helped with weak bots.
Once the playouts were cleverer, and/or you were doing a lot of them, it
didn't help, or even became a liability.

But that is just an impression I'd picked up from day to day reading of
this list; and at CG2008 people were still talking about AMAF in their
papers (*). Can someone with a strong bot confirm or deny that AMAF is
still useful?

*: But those papers may have been comparing programs with relatively few
playouts (i.e. weak programs), due to the excessive CPU cycles needed to
statistically-significantly compare programs with more playouts.

 I used the following strategy ... though i have no hard-data to give
 you, it gave me a fair result.

Another thing I've picked up from this list is that when you get that
hard, statistically significant data you can frequently be surprised ;-).

Darren


-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)


[computer-go] Re: AMAF (from daren)

2008-10-22 Thread Denis fidaali

--
Darren Cook darren at dcook.org
--

(I'd like to hijack Denis's mail; I've changed the subject)

 My tests shows that even with as few as 10 amaf simulation per move,
 the win rate against the pure-random strategy is almost 100%.

I'd thought people were saying that AMAF only helped with weak bots.
Once the playouts were cleverer, and/or you were doing a lot of them, it
didn't help, or even became a liability.

But that is just an impression I'd picked up from day to day reading of
this list; and at CG2008 people were still talking about AMAF in their
papers (*). Can someone with a strong bot confirm or deny that AMAF is
still useful?

*: But those papers may have been comparing programs with relatively few
playouts (i.e. weak programs), due to the excessive CPU cycles needed to
statistically-significantly compare programs with more playouts.

---
Answers : 
---
My results suggest that you have to be prudent whenever using AMAF.
Still, I think that MoGo used it for a long time, calling it RAVE
(I guess you'd have to look for a paper :) I never really tried
to understand what exactly was going on there).

My personal tests were exclusively with light simulations. I used AMAF
to bias the score in the tree phase, switching gradually to a more
standard node-scoring policy. The results I had were great for low simulation
counts. (I never really tried to make the thing scale,
because I wanted something quick to test.)
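
Roughly, the AMAF bookkeeping I mean looks like this Java sketch (the Move type
and the exact crediting rule are placeholders; real bots differ a lot in the details):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class AmafRecorder {

    interface Move {}

    // move -> {wins, tries}, credited as if each move had been played first
    private final Map<Move, int[]> stats = new HashMap<>();

    // After one playout, every move played by the winning colour counts as
    // a win for that move, every move played by the loser counts as a loss.
    void record(List<Move> movesByWinner, List<Move> movesByLoser) {
        for (Move m : movesByWinner) bump(m, 1);
        for (Move m : movesByLoser)  bump(m, 0);
    }

    private void bump(Move m, int win) {
        int[] s = stats.computeIfAbsent(m, k -> new int[2]);
        s[0] += win;
        s[1] += 1;
    }

    double amafMean(Move m) {
        int[] s = stats.get(m);
        return s == null ? 0.5 : (double) s[0] / s[1];
    }
}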
  Zoe used heavy playouts (a project by Ivan Dubois that I was
distantly involved in). Using AMAF for hard pruning showed impressive results.
Zoe was about 2000 on CGOS, although it had some scaling trouble. It used
neither pondering nor Zobrist hashing. I think it remains to be shown what
Zobrist hashing really does for you in the context of Go bots.

So I really think that AMAF is still promising, although you have to be careful
about what you try to do with it. And you can't avoid a lot of testing, including
thorough scalability testing, before trying to conclude anything.
 However, I experienced serious issues with scaling. Demonstrating the scaling
property is very, very CPU-consuming, but it can hardly be avoided before stating
that something works (or doesn't, for that matter).
For example, the MoGo team reported that with a really huge number of simulations
they suddenly wanted more randomness in the playouts. That makes one wonder whether
light playouts wouldn't suddenly become efficient with really high numbers of
simulations. We probably won't know until we get more powerful hardware :)
 That's why I grew so interested in the per-light-playout efficiency study:
it was CPU-friendly. That's also why I believe that having a really fast
light-playout implementation could be a good thing. But it would have to be
rock-solid. That's why I'm so happy with the standard light playout project:
you can get excellent confidence that your conforming implementation is correct
(you can even test non-conforming ones, for that matter :) ).


--
Darren Cook darren at dcook.org
--

 I used the following strategy ... though i have no hard-data to give
 you, it gave me a fair result.

Another thing I've picked up from this list is that when you get that
hard, statistically significant data you can frequently be surprised ;-).

Darren
---
Answers : 
---
I think I got the comparison sign wrong in my description anyway :)

I did a lot of testing, BUT I lost the data :)
Besides, I would really have trouble figuring out what the thing did exactly
at the time, but I tried to give a simple description of something close enough.
 That's true of course. You need a lot of tests before your statement has any
strength.
I usually go for 1000 games before assuming that I have an approximate feeling
for how things go.
Don seems to feel that 100,000 is more comfortable :)

 I think hard data is really what matters for modern Go programming. The more
you have, the more confident you can be about your results. You then have to
cross-check with somebody else's data. It's really hard to become convinced
that a bot implementation conforms to what you think it should do. Ideas
are of little value by themselves; measurements of them are the real treasure.
It can hardly be done by a lonely team and still reach high enough confidence
that everything is as it should be. I have heard so many reports of something
that proved to be wrong, although it had been thoroughly tested by one team.
I have been in this situation so many times myself :) The main issue being bugs.

Re: [computer-go] Re: AMAF

2008-10-22 Thread Magnus Persson

Quoting Darren Cook [EMAIL PROTECTED]:


But that is just an impression I'd picked up from day to day reading of
this list; and at CG2008 people were still talking about AMAF in their
papers (*). Can someone with a strong bot confirm or deny that AMAF is
still useful?


Valkyria probably has the (?) heaviest and slowest playouts of all the
stronger programs.
Thanks to AMAF, Valkyria searches extremely selectively. I have not tried
turning AMAF off. I am currently comparing Valkyria with the results
from the 13x13 scaling study.


http://cgos.boardspace.net/study/13/index.html

Here is just one data point: at level 4 the programs use 1024
simulations per move.


LeelaLight 824 Elo
Leela 1410 Elo
Mogo 1599 Elo
Valkyria 1860 after 100 games

In general it is at least 200 Elo stronger than Mogo using the same
number of simulations. But currently I think Mogo is fast enough to be
slightly stronger on equivalent hardware using normal time controls.


If I turned off AMAF for Valkyria I would probably have to retune all  
parameters to make a fair comparison, but I am quite sure that crudely  
turning it off would be very bad.


Magnus

--
Magnus Persson
Berlin, Germany


Re: [computer-go] Request for : Recursive Simulation Data + Tree decision standard

2008-10-22 Thread Don Dailey
On Wed, 2008-10-22 at 11:26 +0200, Denis fidaali wrote:
 Hi.
 
 
 --
  Recursive Simulation Data
 
 As anyone hard-data on the performances of a recursive simulator bot ?
 (What the hell does that mean ? :) )
 
 You would use for example an AMAF strategy to make up a simulation of how the 
 game may go.
 (rather than doing a very-fast nearly 100% random one).

That is one of the very first things I tried.  It's a logical thing to
do. And yes, even 1 or 2 playouts beats random.

In my implementation you could set the number of internal simulations,
and I think I tried something like 10.   Each playout was generated by
10-playout bots.    In theory you could go deeper than that, of course.

My results, however, were disappointing.  I had some trouble debugging this
because the program wasn't designed for it and I tried to hack it up
quickly, so I wondered whether perhaps I had not implemented it correctly.
But what I found was that even with the same number of high-level
simulations (taking a huge slowdown, of course) it played worse.

It doesn't make sense to me that it should play worse, but I eventually
gave up on the idea.   I figured that even if I could make it play
better at the same time controls, this method has definite limitations:
it will always be subject to a level of shortsightedness unless
a real tree is constructed.

- Don



 

Re: [computer-go] Re: AMAF

2008-10-22 Thread Don Dailey
AMAF is very powerful.   It's all in the details of course but for a
given level of CPU effort,  you can pull more information using AMAF.

I think the way it's used in strong programs is that it's gradually
phased out,  which is probably the best way to use it.   At some point
AMAF loses to pure first move statistics but it gets you going quickly
in the right direction. 
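
A minimal sketch of that gradual phase-out (the weighting schedule and the
constant k here are illustrative assumptions, not any particular program's formula):

final class AmafBlend {
    // Weight shifts from the AMAF mean to the direct Monte Carlo mean as
    // direct visits accumulate; k (an "equivalence" constant) is a tuning
    // parameter chosen only for illustration.
    static double score(double mcMean, int mcVisits, double amafMean, double k) {
        double beta = k / (k + mcVisits);   // 1 when unvisited, -> 0 with visits
        return beta * amafMean + (1.0 - beta) * mcMean;
    }
}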

Simple bots of course do not scale beyond a certain point.  AMAF reaches
the point of seriously diminishing returns very quickly.  Without AMAF
you would probably go a little farther  but it would take a lot more
effort to get there.   So a 3000 playout bot using AMAF is already close
to that point but without AMAF it would take an order of magnitude more
playouts (or more) to reach the same level.


- Don
 




Re: [computer-go] survey : Debuging a board implementation.

2008-10-22 Thread Jason House
Unit tests.

Except for speed builds, any build of my bot will run every unit test on
start-up and verify a conforming implementation.  game/go.d contains 20
unit tests.  Along with other design-by-contract elements, I have
a total of 141 assertions.

I have the ability to draw the board state during execution, and I've
used it occasionally, but I rely on my unit tests.  Within the last
month, I accidentally broke my eye detection code.  I was happy to have
my unit tests pick that up.

Here's a portion of my bot's output when I enable reporting of unit
tests:
Testing goBoardPosition converstions to/from Utf8
Testing goAction conversion to/from Utf8
Testing enumerated move generation
Testing board position structures match expectations
Testing single stone capture in corner
Testing single stone suicide in corner
Testing single stone capture on edge
Testing single stone suicide on edge
Testing single stone capture in center
Testing single stone suicide in center
Testing simple chain formation
Testing simple chain capture
Testing liberties after chain capture
Testing chain mergers
Testing anti-eye-filling rule in corner
Testing anti-eye-filling rule on edge
Testing anti-eye-filling rule in center
Testing positional zorbist hashing
Testing situational zorbist hashing
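
For example, the first capture test in that list might look roughly like this as
a JUnit sketch (written in Java against a hypothetical Board interface; the real
tests are D unittest blocks in game/go.d):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class CornerCaptureTest {

    // Hypothetical minimal board API, standing in for the engine's own class.
    public interface Board {
        void playBlack(int col, int row);
        void playWhite(int col, int row);
        boolean isEmpty(int col, int row);
    }

    // Supplied by the engine under test; a stub here for the sketch.
    private Board newBoard() {
        throw new UnsupportedOperationException("bind to the engine's board");
    }

    @Test
    public void singleWhiteStoneIsCapturedInTheCorner() {
        Board b = newBoard();
        b.playWhite(0, 0);   // white stone on A1
        b.playBlack(1, 0);   // B1 removes one liberty
        b.playBlack(0, 1);   // A2 removes the last liberty: capture
        assertTrue("corner stone should have been captured", b.isEmpty(0, 0));
    }
}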




Re: [computer-go] survey : Debuging a board implementation.

2008-10-22 Thread Mark Boon
I'm using unit-tests also, although somehow for Go programming not  
nearly as much as I usually do.


And I use CruiseControl. It monitors changes in my repository, builds  
the project and runs the tests. That way I don't have to think about  
it, it happens automatically.


Another thing that I find useful is to make comprehensive  
implementations of the toString() method. Even if I don't print it on  
the screen, I can always see it in the debugger, which uses the  
toString() method to display variable contents.


Mark





RE: [computer-go] survey : Debuging a board implementation.

2008-10-22 Thread David Fotland
I have many assertions, but no unit tests.  When I use incremental data
structures I have code in the debug build that calculates the same results
non-incrementally, and asserts if they don't compare.
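
For something like liberty counts, that cross-check can be as small as this sketch
(Java here, with made-up Chain/Board names; the real code surely differs):

final class IncrementalCheck {

    interface Chain { int incrementalLiberties(); }

    interface Board {
        Iterable<Chain> chains();
        int countLibertiesByScan(Chain c);   // slow, non-incremental recount
    }

    static final boolean DEBUG_BUILD = true; // stand-in for a real build flag

    static void verify(Board board) {
        if (!DEBUG_BUILD) return;
        for (Chain c : board.chains()) {
            int slow = board.countLibertiesByScan(c);
            assert slow == c.incrementalLiberties()
                : "incremental liberty count drifted: scan=" + slow
                  + " incremental=" + c.incrementalLiberties();
        }
    }
}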

David



Re: [computer-go] survey : Debuging a board implementation.

2008-10-22 Thread Peter Drake

I use JUnit unit tests.

Peter Drake
http://www.lclark.edu/~drake/





[computer-go] Some performance number

2008-10-22 Thread Don Dailey
I have some preliminary performance numbers for various programming
languages with reference bots; I now have reference bots in 3
languages.

I also used the same data structures, and pretty much the same techniques,
for each language.    The Vala program is almost a direct port of the
Java version (the languages are VERY similar).

My disclaimer: I am sure the performance numbers for any given platform
could easily be improved, perhaps even with different compiler options
or simple changes to the code.   I am no expert in any of these languages.

Below is the data as I have it now.

- Don



platform:   core 2 duo e6700 

  vendor_id : GenuineIntel
  cpu family: 6
  model : 15
  model name: Intel(R) Core(TM)2 CPU  6700  @ 2.66GHz
  stepping  : 6
  cpu MHz   : 2667.000
  cache size: 4096 KB

boardsize  9
komi   0.5

Starting position

PROGRAM       SPEED   TIME    MOVE   TOTAL NODES    AVE SCORE
-----------   -----   -----   ----   ------------   ---------
crefgo        1.00    40.2    E5     111,040,853    0.523997
valago        1.38    55.70   E5     111,060,969    0.523488
jrefgo.jar    1.69    68.16   E5     111,037,354    0.524622
jrefgo        1.75    70.5    E5     111,048,045    0.523744



crefgo 
---
gcc version  4.2.3   
options: -O3 -march=native


valago
---
Vala version 0.3.5
compile:  valac --Xcc=-O3  --disable-assert --disable-checking 
  --disable-non-null -o vgo valaGo.vala   


jrefgo.jar  

version: ibm java 1.6.0
compile: javac -O


jrefgo  

gcj version 4.2.3   (gcc native code compiler)
compile:  gcj -O3 




Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Mark Boon

I'm getting close to something I'd like to show people and get feedback.

One thing to decide is how to make it public. Previously I used  
dev.java.net to host my project. But I stopped using it because I had  
a very slow internet connection and I was getting annoyed with the  
time it took to communicate with Subversion on the remote host. At  
the moment I have a bit more decent internet connection, but it's  
still not fast. Nor reliable. So ideally I'd like to keep the stuff  
in my local repository. Like a two-step process. I can store versions  
locally and when I want I can migrate or merge it with the one  
online. I know ClearCase is ideal for this kind of settup. But too  
expensive and I doubt there's an online service that supports it.  
Does anyone know if something like this is possible to setup with  
Subversion? anyone having experience with something like this?


One of the main benefits I see in making a plugin architecture is  
that I get to configure my bot in an XML file. I simply specify which  
search strategy to combine with which playout strategy and with what  
parameters. At some point I had three different search strategies and  
something like half a dozen different playout strategies. Combine  
that with potentially different values of K in UCT, a different mercy  
threshold and who knows what other parameters and it quickly became a  
real headache to test different configurations against each other.  
And error-prone.


And I'm going to try to do the same with the unit-tests. I have a  
test-set of some positions to test a MCTS bot. But it soon ran into  
the same combinatorial problem as above. So I'm planning to make the  
tests such that you can specify a list of engine-configurations in an  
XML file, which are then all run past the same test-set.


Lastly: I started out making the first pluggable components, which  
consist of a straightforward UCT-search and a light Monte-Carlo  
playout strategy that uses pseudo liberties. More components will  
soon follow. When I run my playouts 1,000,000 times I get the  
following stats:


Komi 7.5, 114.749758 moves per position and Black wins 43.8657%.

That's a bit different from the 111 moves and 42% Don got in his  
reference bot. I haven't looked at Don's implementation (yet) and I
wonder what may be different.


Mark





Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Urban Hafner
Mark Boon wrote:
 I'm getting close to something I'd like to show people and get feedback.
 
 One thing to decide is how to make it public. Previously I used
 dev.java.net to host my project. But I stopped using it because I had a
 very slow internet connection and I was getting annoyed with the time it
 took to communicate with Subversion on the remote host. At the moment I
 have a bit more decent internet connection, but it's still not fast. Nor
 reliable. So ideally I'd like to keep the stuff in my local repository.
 Like a two-step process. I can store versions locally and when I want I
 can migrate or merge it with the one online. I know ClearCase is ideal
 for this kind of settup. But too expensive and I doubt there's an online
 service that supports it. Does anyone know if something like this is
 possible to setup with Subversion? anyone having experience with
 something like this?

How about git or mercurial? Or any other of the distributed SCMs. As
your code is open source you could create a public repository for free
on http://github.com/ .

Urban


Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Ian Osgood


On Oct 22, 2008, at 11:16 AM, Mark Boon wrote:

I'm getting close to something I'd like to show people and get  
feedback.


One thing to decide is how to make it public. Previously I used  
dev.java.net to host my project. But I stopped using it because I  
had a very slow internet connection and I was getting annoyed with  
the time it took to communicate with Subversion on the remote host.  
At the moment I have a bit more decent internet connection, but  
it's still not fast. Nor reliable. So ideally I'd like to keep the  
stuff in my local repository. Like a two-step process. I can store  
versions locally and when I want I can migrate or merge it with the  
one online. I know ClearCase is ideal for this kind of settup. But  
too expensive and I doubt there's an online service that supports  
it. Does anyone know if something like this is possible to setup  
with Subversion? anyone having experience with something like this?


I have been using git for all of my new projects. It is distributed;
users get a clone of the repository. Very fast and proven. Many
hosting alternatives are listed here: http://git.or.cz/gitwiki/GitHosting
and it is possible to set up your own hosting if you have a public server.


Ian



Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Gunnar Farnebäck

Mark Boon wrote:
 I'm getting close to something I'd like to show people and get feedback.

 One thing to decide is how to make it public. Previously I used
 dev.java.net to host my project. But I stopped using it because I had a
 very slow internet connection and I was getting annoyed with the time it
 took to communicate with Subversion on the remote host. At the moment I
 have a bit more decent internet connection, but it's still not fast. Nor
 reliable. So ideally I'd like to keep the stuff in my local repository.
 Like a two-step process. I can store versions locally and when I want I
 can migrate or merge it with the one online. I know ClearCase is ideal
 for this kind of settup. But too expensive and I doubt there's an online
 service that supports it. Does anyone know if something like this is
 possible to setup with Subversion? anyone having experience with
 something like this?

Subversion 1.4 can do repository replication (see heading svnsync at
http://subversion.tigris.org/svn_1.4_releasenotes.html), which should
work for a one-way mirroring. There's also svk,
http://svk.bestpractical.com, which adds distributed version control
on top of subversion. But I really agree with the people recommending
git or mercurial. I haven't used mercurial myself yet but I have very
positive experience with git.

/Gunnar


Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Don Dailey
On Wed, 2008-10-22 at 16:16 -0200, Mark Boon wrote:
 When I run my playouts 1,000,000 times I get the  
 following stats:
 
 Komi 7.5, 114.749758 moves per position and Black wins 43.8657%.
 
 That's a bit different from the 111 moves and 42% Don got in his  
 reference bot. I haven't looked at Don't implementation (yet) and I  
 wonder what may be different.

For one thing,  komi is different.   I used 0.5 for running this test.

I would have used 0.0, but some implementations don't like even komis.

- Don





Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Don Dailey
I use subversion and git.   Git mostly for just my own personal
repository but it rocks and is my preference.

- Don




Re: [computer-go] Some performance number

2008-10-22 Thread Don Dailey
If anyone wants to TEST their program against one of the reference bots,
you should get close to 50% score when playing lots of games.

If you play 1000 games, you should expect your score to be within the
range  0.46838 - 0.53162  in order to be within two standard deviations,
the amount usually considered to be within a reasonable amount of
noise.  

If you play 10,000 games, you need to be within 49 - 51 percent. 

These ranges are such that you should expect the result to be outside
that range less than 5% of the time (actually, about 4.5 percent) if the
programs really are equal.  
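
For reference, those ranges follow from the binomial standard deviation
sqrt(0.25/n) for two equal programs; a quick check:

public class TwoSigma {
    public static void main(String[] args) {
        for (int n : new int[] { 1000, 10000 }) {
            double sigma = Math.sqrt(0.25 / n);   // std dev of the mean score
            System.out.printf("n=%d: %.5f - %.5f%n",
                              n, 0.5 - 2 * sigma, 0.5 + 2 * sigma);
        }
        // prints roughly 0.46838 - 0.53162 for 1000 games
        // and 0.49000 - 0.51000 for 10000 games, matching the figures above
    }
}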

So far the Vala bot is well within 2 STD after almost 2000 games.  It
has a minor implementation defect so I am looking to see if it gets
picked up.   When 2 moves have equal scores,  it does not choose between
them randomly, but chooses the one first in the list (the highest rank
first, then the lowest file.)   With 1000 simulations that should make
very little difference perhaps not even easily measurable.   It's not
even clear if it would make it play 1 or 2 ELO stronger or weaker if it
does make a difference.

Note: I can make a test that catches this too.


- Don






[computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Denis fidaali

[computer-go] From zero to playing on CGOS in 10 minutes
---
Don Dailey drdailey at cox.net
---
For one thing,  komi is different.   I used 0.5 for running this test.
I would have use 0.0  but some implementations don't like even komi's.
- Don


--
--
The 114 average length is worrisome. Have you tested it against one of Don's
reference bots?

(By the way, I have great trouble managing all the links and URLs everyone gives
all the time :) Is there a way to get them such that I can bookmark something and
have access to all of them? A wiki page connected with this list, maybe?)

Currently my own implementation seems to have the komi wrong somehow. It seems
I give one point more to black :)
My numbers for black wins are a bit higher than Don's. I'll have to investigate,
I guess.



Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-22 Thread Don Dailey
On Wed, 2008-10-22 at 20:29 -0200, Mark Boon wrote:
 On Wed, Oct 22, 2008 at 6:07 PM, Don Dailey [EMAIL PROTECTED] wrote:
 
  For one thing,  komi is different.   I used 0.5 for running this test.
 
  I would have use 0.0  but some implementations don't like even komi's.
 
 
 But the komi should have no effect on the playout length. I started
 out with 103 moves, but that was because of a mercy-rule. Without it
 it's 114. You have 111 rather consistently. And I assume you don't use
 a mercy-rule. Nor super-ko checking, is that correct?

The score is what I was looking at, but you are right about the play-out
length because the play-outs don't care what komi is.

Do you use the 3X rule?  Have you checked the other specs?

My guess is that your playouts are not uniformly random - that is very
easy to get wrong.   Of course that is just a wild guess, it may very
well be something completely different.
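
One classic way playouts end up non-uniform (just an illustration of the kind of
bug meant here, not a diagnosis of Mark's code): scanning from a random start
point and taking the first legal point favours points that follow long illegal
stretches. Collecting the legal moves first and indexing uniformly avoids it:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

final class UniformMoveChoice {

    interface Board {
        int points();                 // number of points on the board
        boolean isLegal(int point);
    }

    // Biased: a point's probability depends on how many illegal points precede it.
    static int biasedPick(Board b, Random rng) {
        int start = rng.nextInt(b.points());
        for (int i = 0; i < b.points(); i++) {
            int p = (start + i) % b.points();
            if (b.isLegal(p)) return p;
        }
        return -1;   // no legal move: pass
    }

    // Uniform: every legal point has exactly the same probability.
    static int uniformPick(Board b, Random rng) {
        List<Integer> legal = new ArrayList<>();
        for (int p = 0; p < b.points(); p++) {
            if (b.isLegal(p)) legal.add(p);
        }
        if (legal.isEmpty()) return -1;   // pass
        return legal.get(rng.nextInt(legal.size()));
    }
}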

- Don



 
 Another thing I have never looked at is AMAF. But that also shouldn't
 affect playout length I assume.
 
 By the way, thanks for all the pointers to 'git from everyone.' It's
 new to me and at first glance the specs look good, so I'll definitely
 give it a go.
 
 Mark



Re: [computer-go] Some performance number

2008-10-22 Thread Mark Boon

FWIW, I don't have exactly the same outcome yet, but my stats are:

Mac Pro 2.8Ghz 2 x Quad-Core Intel Xeon
Memory is 4Gb 667Mhz DDR2
L2 cache per processor (of 4 cores) 12Mb

Java version: Java HotSpot version 1.5.0.16
Run with java -server (javac optimization level of Eclipse is unknown  
to me.)


Time for 1 million playouts: 42 secs.

Java version: 64-bit Java HotSpot version 1.6.0_07

Time for 1 million playouts: 36 secs.

Interestingly, Java 5 is faster for multiple processing, 12 secs. vs.  
17 sec. for Java 6 running on 8 cores. But those numbers look way too  
high anyway, I used to get more like a factor 6-7 out of 8 processors.


I spent some time optimizing the playouts, but I believe the UCT-tree  
building is still very inefficient as it takes about a third of the  
total time. I built it for clarity, reusability and scalability
rather than speed. Maybe if I improve the tree-building it will be  
faster than crefgo on equal hardware. (That's just a wild guess of  
course.)


Mark

