RE: [computer-go] Great Wall Opening by Bruce Wilcox

2009-10-19 Thread dave.devos
I heard that a pro commented that the Great Wall and similar openings are quite 
feasible, but also quite easily thwarted. If your opponent plays it, you simply 
prevent him from completing it by taking the last point yourself. This should 
leave him with an inferior position. If you let him complete it, he probably has 
a superior position.
 
Dave
 



From: computer-go-boun...@computer-go.org on behalf of Ingo Althöfer
Sent: Fri 16-10-2009 20:55
To: computer-go@computer-go.org
Subject: [computer-go] Great Wall Opening by Bruce Wilcox



In the year 2000 I bought the book
EZ-GO: Oriental Strategy in a Nutshell,
by Bruce and Sue Wilcox (Ki Press, 1996).

I can only recommend it for its many fresh ideas.
A few days ago I found time to read in it again.

This time I was impressed by Bruce Wilcox's strange
"Great Wall" opening, where Black starts with a loose
wall of five stones spanning the whole board.

Bruce proposes to play this setup as a surprise weapon,
even against stronger opponents.

Now I have run some autoplay tests, starting from the end position
given in the appendix of this mail.
* one game with Leela 3.16; Black won.
* four games with MFoG 12.016; two wins each for Black and White.
So there is some indication that the Great Wall works even
for bots, which are not affected by psychology.

I would like to know how other bots perform in autoplay
after this opening.

Cheers, Ingo.




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] bots and handicaps (Re: New CGOS)

2009-06-07 Thread dave.devos
Don Dailey wrote:
so for instance a professional player with a relatively low professional 
ranking does not need as many stones
as indicated by his opponent's ranking to beat another professional WHO IS 
SEVERAL RANKS HIGHER.  
 
For centuries, pro ranks were defined by the handicap required against other 
pros. But the difference between pro dans is smaller than a stone per rank.
 
rank difference = handicap
0p = alternating black and white
1p = black in two out of three games
2p = black in all games
3p = two stones in one out of three games, black in two out of three games
4p = two stones in two out of three games, black in one out of three games
5p = two stones in all games
...
8p = three stones in all games
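Read as stones per game (counting "taking black" as one stone), the table fits a simple closed form. The sketch below is my own interpolation of the quoted table, not an official formula:

```python
def average_handicap(rank_diff):
    """Average handicap in stones per game implied by the pre-2003 table,
    counting 'taking black' as one stone. For d >= 1 the table follows
    h = (d + 1) / 3; at equal rank the players simply alternate colours,
    i.e. black in half the games."""
    if rank_diff == 0:
        return 0.5
    return (rank_diff + 1) / 3.0
```

So each pro rank is worth about a third of a stone, which is exactly the point that the difference between pro dans is smaller than a stone per rank.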
 
These were the handicaps defining pro ranks in Japan up until 2003.
 
(In 2003 they dropped the handicap system, but it was not changed into a rating 
system either: http://senseis.xmp.net/?NihonKiInNewPromotionSystem. In fact I 
think the new system is quite arbitrary and unpredictable.)
 
Dave de Vos



From: computer-go-boun...@computer-go.org on behalf of Don Dailey
Sent: Sun 7-6-2009 18:10
To: computer-go
Subject: Re: [computer-go] bots and handicaps (Re: New CGOS)


Here is why the handicap system is broken. (We can talk about how the Elo 
system is broken too, but that is another discussion.)

At 1 or 2 stones difference, the handicap system works well.   At greater 
handicaps it's skewed.   The coincidence I'm talking about is that it still 
works to a reasonable degree at larger handicaps.

The handicap system is based on the idea that no matter what your level of 
play, you can give 7 stones to a player 7 stones weaker.   Everyone knows 
this is not true.   It is very accurate at low handicaps, and progressively 
less accurate at high handicaps (as well as at high levels); so for instance a 
professional player with a relatively low professional ranking does not need as 
many stones as indicated by his opponent's ranking to beat another professional 
WHO IS SEVERAL RANKS HIGHER.  

Is this not correct? When I say it's imperfect, that is what I am talking 
about.

There is nothing wrong with defining a 1 stone handicap as a rank; I don't 
view that definition as imperfect, and it is a reasonable basis for defining 
the ranking system in Go.   What's imperfect is the assumption that 9 stones 
makes a perfectly even match whenever you are 9 ranks apart, no matter where 
along the scale you are.  

If you wanted to know what handicap is needed for an even match at various 
levels from beginner to world champion, you would need a two-dimensional 
table lookup, because it's not as simple as just taking the difference in the 
players' rankings.   With the table lookup and a huge amount of data to back 
it up, you would have a predictor as good as Elo.  
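That two-dimensional table could look like the toy sketch below; all entries are invented for illustration (the real table would need the huge amount of data mentioned above):

```python
# Hypothetical 2-D lookup: the fair handicap depends on BOTH players'
# absolute levels, not just on the difference of their ranks.
# All entries here are invented for illustration.
FAIR_HANDICAP = {
    # (weaker rank, stronger rank): stones for an even match
    ("15k", "6k"): 9,   # 9 ranks apart near beginner level: 9 stones about right
    ("1d",  "9d"): 6,   # 8 ranks apart near the top: noticeably fewer stones
    ("1p",  "8p"): 3,   # 7 pro ranks apart: only about 3 stones (see the pro table)
}

def needed_handicap(weaker, stronger):
    """Two-dimensional lookup; a one-dimensional model h(stronger - weaker)
    cannot reproduce such a table."""
    return FAIR_HANDICAP[(weaker, stronger)]
```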

Does that clarify what I meant when I said the handicap system is imperfect?
As we have discussed many times on this forum and in private email, a one stone 
handicap has a different meaning at different levels - it's just an awkward 
system to deal with mathematically on a go server for computers, where wild 
mismatches will be common.





the fact that chess doesn't have a fine-grained way to
handicap a game, in fact, the fact that most games don't,
doesn't mean that it's hard to deal with.


There are simple ways to deal with it.  One could simply increase or decrease 
your rank as soon as you start winning or losing your games at your current 
handicap level.    We don't need Elo for that, and it's simple to do.  

What I see on most servers in this modern day and age is a move away from the 
centuries-old system, not an embrace of it as superior.    Of course I 
understand there is a sentimental attachment to it.    It was like this in 
chess many years ago, when the old-fashioned descriptive chess notation was 
replaced in the USA with algebraic notation in order to keep up with the 
modern world, and to many people it was just not chess anymore.  





my guess is that any go playing program that doesn't
depend upon an opening book for a lot of its strength
is going to adapt just fine.  experiments between players
of different CGOS-measured strengths could find this out for
sure -- time for another set of experiments?  i'll donate some
cpu time.


It took a lot of work and energy to do these studies - I'll have to think about 
this one.

Of course they would adapt if that were the system.   Even if this idea were 
not good for current MCTS programs, they would adjust if that were what was 
required to do well in competition. 

 



if it helps to encourage the authors, just keep in mind
that winning with handicap is extremely convincing
evidence to regular go players that one player is much
stronger than another.  plus, it takes exponentially fewer
   

RE: [computer-go] bots and handicaps (Re: New CGOS)

2009-06-07 Thread dave.devos
I am 3k on KGS and I'd say that I win nearly every game that I 
give handicap and lose over half the games that I receive handicap. That would 
seem to imply that more handicap should be given near my rank. 

That is because the KGS rating system is not based on handicaps. It is based on 
winning rates in even games, like an Elo system (but it is probably weighted 
like the EGF system, so that it stays more or less aligned with the traditional 
go rank system). None of these rating systems seems to have found a satisfactory 
set of weights to achieve good alignment, but I think the EGF does it better 
than KGS.
 
Dave de Vos



RE: [computer-go] re-CGT approximating the sum of subgames

2009-03-10 Thread dave.devos
Hi Phillip,
 
Thank you for your help and explanations :)
 
Dave



From: computer-go-boun...@computer-go.org on behalf of Philipp Hennig
Sent: Sat 7-3-2009 23:50
To: computer-go@computer-go.org
Subject: Re: [computer-go] re-CGT approximating the sum of subgames



Hi Dave,

sorry that I'm so slow in replying.

You are right that there seems to be some confusion about the origin 
of the Thompson heuristic, and I am not entirely clear about its 
history myself. My understanding so far is that the heuristic wasn't 
actually invented by Thompson as such. Rather, it uses a principle 
first shown by him, in the 1933 paper I cited. It works like this:

Assume you don't know precise values for a set of variables (like, the 
value of several available moves), but only a set of possible values 
for each of them, together with probabilities for these values. It is 
possible, then, to obtain one sample each from these probability 
distributions.
Then the probability that a given element of the set has the 
maximal value under your current beliefs is equal to the probability 
that the corresponding sample is the largest among the set of 
samples. I don't know who was the first to use this idea to guide 
exploration (you may want to have a look at David Stern's PhD thesis 
on uncertainty in Go; he might be among the first people to use it in 
this way. I'd like to provide you with a link, but I currently can't 
access his website. A search for "David Stern Microsoft" should help).

So the name "Thompson heuristic" is shorthand for the idea of keeping 
track of a whole distribution of possible values for each move and 
guiding exploration by choosing greedily among samples drawn from 
these distributions. In doing so, we end up with a policy that starts 
out with entirely random exploration and converges, in a principled 
way, to a deterministic, greedy policy over time as our beliefs 
converge.
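The principle described above can be sketched in a few lines. The Beta-distribution bandit below is a generic illustration of the idea, not code from any Go program; the `Move` class and its counts are assumptions made for the sketch:

```python
import random

# Thompson-heuristic sketch: keep a Beta distribution over each move's
# win probability, draw one sample per move, play the move whose sample
# is largest.

class Move:
    def __init__(self):
        self.wins, self.losses = 1, 1  # Beta(1, 1) prior: uniform

    def sample(self):
        return random.betavariate(self.wins, self.losses)

def select(moves):
    # Greedy among samples: with uniform priors this is random exploration;
    # as counts accumulate the Betas sharpen and the choice becomes greedy.
    return max(range(len(moves)), key=lambda i: moves[i].sample())

def update(move, won):
    if won:
        move.wins += 1
    else:
        move.losses += 1
```

Early on, the Beta(1, 1) priors make all samples uniform, so exploration is random; once one move has accumulated many wins its samples concentrate near its empirical win rate and the argmax over samples is effectively deterministic.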


On the ideas you mention in your mail: It might well be possible to 
approximately detect sub-games, then approximate their temperature by 
some statistical measure and then focus UCT on areas of the game that 
seem interesting under that measure. Many other AI challenges have 
seen surprising improvements based on unexpected approximate 
simplifications.
But you have to be aware that each of the approximations you propose 
introduces an error of unknown size and direction into your inference. 
(For example, very hot games have a habit of looking perfectly cold, 
much more so than less hot games, because there is just _one_ 
particularly good move, and everything else about them is really 
boring. If your method misses such games, it will miss the very 
winning move it is looking for.) So you may have to spend a lot of 
time optimising your method by trial and error to avoid such mishaps, 
and it may be very hard to test for such issues short of actually 
playing hundreds of individual games against your machine and judging 
its gameplay yourself. (Remember that we have no reference values to 
compare against for the temperature of opening positions in Go. There 
are only a handful of endgame positions for which people have 
painstakingly derived the temperature; see Elwyn Berlekamp and David 
Wolfe's book Mathematical Go: Chilling Gets the Last Point.)

It was this prospect that drove me away from the idea of divide and 
conquer. I wanted a method that is guaranteed to converge to the 
right solution eventually, so I turned away from CGT. But my 
yardstick is getting a PhD in probability theory, not building the 
world's best Go machine, so my scepticism shouldn't worry you too 
much. :-)

Good luck, and happy hacking!

Philipp

RE: [computer-go] re-CGT approximating the sum of subgames

2009-03-02 Thread dave.devos
Hi Philipp,
 
Thank you for replying. I've seen your article before, but I had only read it 
superficially then. By the way, I couldn't find any online source about the 
Thompson heuristic. Do you know if there are any?
 
Firstly, I want to use ownership and local correlation statistics from light 
Monte Carlo playouts for discovering subgames. 
I'm not particularly focused on combining UCT and CGT directly.
I don't want to perform global UCT or minimax tree search. This kind of tree 
search should be confined to local subgames only (to improve the accuracy of 
the mean and temperature of subgames).
I only want to use global lookahead to discover subgames, not to explore the 
global game tree.
 
Secondly, discovering the temperature and mean of subgames:
Perhaps it is possible to get a rough approximation of the mean and temperature 
of subgames from ownership statistics and their correlations.
Perhaps it is possible to represent large, cool subgames (ownership 
distribution near random, low correlation with surroundings), like the center 
in the opening, by simple statistical properties (the environment) until more 
details are discovered.
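As a concrete sketch of that first step, ownership statistics from light playouts might be collected like this. The `playout` argument is a stand-in for a real light-playout engine (an assumption of the sketch), and the settled/contested reading is the heuristic described above:

```python
def ownership_map(points, n_playouts, playout):
    """Fraction of playouts in which Black ends up owning each point.
    `playout` plays one light random game to the end and returns a dict
    {point: "black" or "white"} of final ownership."""
    counts = {p: 0 for p in points}
    for _ in range(n_playouts):
        final = playout()
        for p, owner in final.items():
            if owner == "black":
                counts[p] += 1
    return {p: c / n_playouts for p, c in counts.items()}
```

Points whose ownership is near 0 or 1 are settled; clusters of points near 0.5 whose final owners are strongly correlated across playouts would be candidates for a common subgame.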
 
Thirdly, early in the game, moves don't just affect subgames; they also change 
how the global game divides into subgames. 
This is a major complication.
Also, early in the game, subgames can have fuzzy (but cool) edges (like a 
joseki).
I don't have a clear picture yet of how to deal with dynamic subgames with 
fuzzy edges. I'm envisioning a structure containing subgame divisions, pointing 
to subgames in a hash table.
This structure would represent the superposition of the subgame divisions 
discovered so far. But this is still very unclear.
Also, lookahead within subgames may stumble upon subgame interference in a 
portion of their subgame trees (which may or may not affect the canonical form 
of the combined subgame). I have no idea yet how to deal with this.
 
I know, this is just a bunch of vague ideas with a lot of ifs.
So I have some work to do ;)
 
Dave



From: computer-go-boun...@computer-go.org on behalf of Philipp Hennig
Sent: Mon 2-3-2009 1:04
To: computer-go@computer-go.org
Subject: RE: [computer-go] re-CGT approximating the sum of subgames



Hi Martin, Dave and others,

I have never posted to this list before (so, hi everyone! :-), but 
your discussion on approximate CGT caught my eye.

Dave, you mentioned that you are interested in trying to mix UCT and 
CGT:
Dave Devos wrote on 19 Feb 2009, 19:33 GMT:
 Ok, I'll look into your temperature discovery article. I noticed 
 that Amazons is mentioned a lot in CGT, but I am not familiar with 
 the game. I want to use montecarlo to discover subgames. Then I want 
 to use CGT techniques to evaluate and sum subgames to get a full 
 board evaluation. If this is possible, it could dramatically reduce 
 the combinatorial explosion on large boards (like you said, totally 
 killing full board search) This is the idea. I don't even know if it 
 is possible, but I definitely want to give it a try.


I worked  on a similar idea during a short project early on in my PhD 
(which has since taken other directions). We (David Stern, Thore 
Graepel and myself) tried to use UCT to measure the temperature of 
Amazons games. You can find a technical report at

http://www.inference.phy.cam.ac.uk/ph347/CPGS-Report_Hennig.pdf
(I turned this into my first-year report, so you will have to excuse 
the length and the rambling at the beginning. If you know about UCT 
and CGT, you can directly jump to chapters 2.3 or 3.0).

The short of it is: the results were not very encouraging. The problem 
is that TDS is searching for an event (the switch from the stack of 
coupons to play on the board) which happens at some depth in the 
game tree, and UCT is not very good at performing inference on such 
quantities. The only thing it is really good at is inference about the 
value of the topmost nodes, where the statistical fidelity is good 
(nodes even a few steps deep into the tree will often only be visited 
a handful of times, so there is not much data available to make a good 
estimate from).
Further, adding a coupon stack to the overall game seems to sometimes 
create situations where the roll-outs become biased towards one 
player. In such situations, it can take UCT very long to discover the 
right solution, wasting a lot of search time expanding the wrong part 
of the tree. In chapter 5.1.1 of my report, you will find an 
experimental study of this behaviour using an artificial game tree.
Then, there are a few more subtleties to keep in mind. One is that Go 
games rarely are truly separable into sub-games (End-Games are an 
exception, as Martin showed in his paper mentioned in his earlier 
mail). Independence may hinge on just one particularly ingenious 
move deep in the tree and therefore might be hard to find by Monte-
Carlo search (but this is pure conjecture, so don't let yourself be 
stopped from trying). 

RE: [computer-go] re-CGT approximating the sum of subgames

2009-03-02 Thread dave.devos
Thank you for helping, but there seem to be many Thompson heuristics, 
and I didn't find the one that Philipp is referring to: 
 
W.R. Thompson: On the likelihood that one unknown probability exceeds another 
in view of two samples. Biometrika, 25:275-294, 1933.
 
vs
 
Clark Thompson: A Greedy Heuristic for the Rectilinear Steiner Tree Problem, 
September 1986
Richard Thompson et al.: A Heuristic for Determining Efficient Staffing 
Requirements for Call Centers
(etc.)
 
If I google "On the likelihood that one unknown probability exceeds another in 
view" I get 90 hits, but as far as I can see, they are citations only.
 
Dave

 


From: computer-go-boun...@computer-go.org on behalf of terry mcintyre
Sent: Mon 2-3-2009 21:18
To: computer-go
Subject: Re: [computer-go] re-CGT approximating the sum of subgames


If you google "thompson heuristic" (don't forget the p in Thompson), you'll 
find a number of hits.

 
Terry McIntyre terrymcint...@yahoo.com


-- Libertarians Do It With Consent! 




RE: [computer-go] re-CGT approximating the sum of subgames

2009-02-19 Thread dave.devos
Ok, I'll look into your temperature discovery article.
I noticed that Amazons is mentioned a lot in CGT, but I am not familiar with 
the game.
 
I want to use Monte Carlo to discover subgames. Then I want to use CGT 
techniques to evaluate and sum the subgames to get a full-board evaluation.
If this is possible, it could dramatically reduce the combinatorial explosion 
on large boards (like you said, totally killing full-board search).
 
This is the idea. I don't even know if it is possible, but I definitely want 
to give it a try.
 
So far, I've built a GTP engine, a light playouts engine, a transposition table 
and a CGT calculator, so I have some basic building blocks.
I'm about to start experimenting with subgame discovery now (but I have a 
family and a day job, so progress is slow).
 
Dave de Vos



From: computer-go-boun...@computer-go.org on behalf of Martin Mueller
Sent: Thu 19-2-2009 0:39
To: computer-go
Subject: [computer-go] re-CGT approximating the sum of subgames




 I can see in your examples here that the space complexity of the sum 
itself grows like 2^(number of terms) if the terms are simple forms but not 
numbers.
(I also took the liberty of adding your examples to my unit tests :)
Does computation time grow much faster than this?


Not sure how bad the worst case is. But it does get worse than this, because of 
incomparable options, which multiply in number if both players have lots of 
incomparable options to choose from, etc. In my Go example there was at least 
always a unique best move, which keeps the complexity down somewhat. 

In the game Amazons you get lots more incomparable options, e.g. try 
Amazons(.l...,x...r) in CGSuite, and then Amazons(l,.,xxx.r). 
Then add one more empty square and you can go for a long walk :)
In Go, we are often lucky since only one, or a small number, of incomparable 
incentives exist. But I am sure you can construct examples that are almost as 
bad as in Amazons.



 
I've also stumbled on some complicated articles about loopy games (ko), 
so I realize that there is some ground to cover in that direction too :)
 
I'll read your article about temperature discovery search; thanks for 
pointing me to it.
Are the techniques in that article improvements over the techniques in 
http://www.icsi.berkeley.edu/ftp/global/pub/techreports/1996/tr-96-030.pdf ?


The topics are a bit different. In the 1996 tech report, we compute 
thermographs for small (but potentially already messy) ko fights. The game 
graphs (the moves of the Go game and the results) were input by hand, and the 
program just computed the thermographs.

In the TDS paper, we deal only with the loop-free case (no ko). We do a 
forward search from the starting position, whereas all previous methods used 
bottom-up analysis from end positions. We did the experiments in Amazons since 
we already had the bottom-up data to compare with. In the second part of the 
paper, we try the method on subgames that are too complicated for building the 
whole local game tree. We do a time-limited forward search only, but can still 
get pretty good approximations to the real temperature. At least it is good 
enough to completely kill full-board alpha-beta :)

Extending TDS for Go (with ko fights) has been on our to-do list for a long 
time. We have an incomplete implementation for that case. But there are still 
unresolved research questions, e.g. how to approximately model ko threats in a 
way that makes the search computationally feasible yet informative. No progress 
in recent years, since that Monte-Carlo business has sucked up all my spare 
time :)

Martin


RE: [computer-go] CGT approximating the sum of subgames

2009-02-18 Thread dave.devos
Are you saying that 
- calculating the exact sum is too expensive?
- calculating the exact sum does not give the best move?
- the existence of ko in go makes it impossible to split the board into 
independent subgames?
 
A subgame may not be a number in many cases, but using canonical forms 
backtracked from terminal nodes, wouldn't we still be able to calculate the 
exact sum of subgames without ambiguity? (If implemented according to 
definition 2.7 in http://arxiv.org/PS_cache/math/pdf/0410/0410026v2.pdf, 
pruning dominated and reversible options.)
Is this incorrect?
 
A full canonical subgame tree would only have a branching factor of at most 
two (each subgame consists of only a left and a right stop, representing the 
confusion interval recursively).
That does not seem too bad to me, so I wonder why it is so expensive to 
calculate the sum of subgames.
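For what it's worth, the recursive sum itself is easy to write down. The toy sketch below (games as pairs of option tuples, no number shortcuts, and crucially no canonicalisation) shows where the cost lives:

```python
# A game is (left options, right options); 0 = { | }, * = {0 | 0}.
ZERO = ((), ())
STAR = ((ZERO,), (ZERO,))

def add(g, h):
    """G + H = { G^L + H, G + H^L | G^R + H, G + H^R }.
    Without pruning dominated and reversible options, the number of
    positions in a sum multiplies; this is where the expense comes from."""
    gl, gr = g
    hl, hr = h
    left = tuple(add(x, h) for x in gl) + tuple(add(g, x) for x in hl)
    right = tuple(add(x, h) for x in gr) + tuple(add(g, x) for x in hr)
    return (left, right)
```

Even in this tiny example, add(STAR, STAR) already carries four options (all equal to *) before any reduction; with genuinely incomparable options that no reduction removes, the forms themselves blow up, which is what Martin's remarks about Amazons in this thread are about.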
 
Dave 



From: computer-go-boun...@computer-go.org on behalf of Eric Boesch
Sent: Wed 18-2-2009 16:50
To: computer-go
Subject: Re: [computer-go] CGT approximating the sum of subgames



On Tue, Feb 17, 2009 at 4:39 PM,  dave.de...@planet.nl wrote:
 I've been looking into CGT lately and I stumbled on some articles about
 approximating strategies for determining the sum of subgames (Thermostrat,
 MixedStrat, HotStrat etc.)
 It is not clear to me why approximating strategies are needed. What is the
 problem? Is Ko the problem? Is an exact computation too slow?
 Can anyone shed some light on this?

I'm sure someone else could give a better answer, but it does come
down to slowness. It's the same as assuming that the value of a gote move
equals the position's value if black moves first averaged with its value
if white moves first, and that the game score equals the original score plus
half the value of the largest move on the board -- these assumptions
are wrong, and the estimates are not guaranteed to yield either the
correct score or the biggest play on the board, but you have to do
something if you can't perfectly read out the rest of the game. If CGT
values were all nice real numbers that summed normally when combining
subgames, then CGT would be a lot simpler.
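The two approximations above, in symbols, with a tiny worked example (the numbers are invented for illustration):

```python
def gote_value(black_first, white_first):
    """Mean value and size (swing) of a gote position, from its score
    with Black playing first vs. with White playing first."""
    mean = (black_first + white_first) / 2
    size = black_first - white_first
    return mean, size

def estimated_score(current_score, largest_move_size):
    """'Game score = current score + half the largest move on the board'
    (from the point of view of the player to move)."""
    return current_score + largest_move_size / 2

# A corner worth 6 points to Black if he plays first and 0 if White plays
# first has mean 3 and size 6; with Black to move, the naive estimate
# credits him half the swing, i.e. 3 points.
```

As noted above, both are only estimates: sente, interacting positions and ko all break them.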

The possibility of wanting to play a ko threat in another part of the
board prevents the two areas from being separate subgames in the
technical sense -- the information sets are shared.

RE: [computer-go] CGT approximating the sum of subgames

2009-02-18 Thread dave.devos
Or is it a problem to extract the best move when the sum of subgames is 
known but not equal to a number (which will often be the case)?
 






RE: [computer-go] static evaluators for tree search

2009-02-17 Thread dave.devos
A simple alpha-beta searcher will only get a few plies deep on 19x19, so it 
won't be very useful (unless your static evaluation function is so good that 
it doesn't really need the alpha-beta searcher).



From: computer-go-boun...@computer-go.org on behalf of George Dahl
Sent: Tue 17-2-2009 18:27
To: computer-go
Subject: [computer-go] static evaluators for tree search



At the moment I (and another member of my group) are doing research on
applying machine learning to constructing a static evaluator for Go
positions (generally by predicting the final ownership of each point
on the board and then using this to estimate a probability of
winning).  We are looking for someone who might be willing to help us
build a decent tree-search bot whose static evaluator can easily be
swapped out, so we can create systems that actually play over GTP.
As much as we try to find quantitative measures for how well our
static evaluators work, the only real test is to build them into a
bot.

Also, if anyone knows of an open source simple tree search bot
(perhaps alpha-beta or something almost chess like) for Go, we might
be able to modify it ourselves.

The expertise of my colleague and I is in machine learning, not in
tree search (although if worst comes to worst I will write my own
simple alpha-beta searcher).  We would be eager to work together with
someone on this list to try and create a competitive bot.  We might at
some point create a randomized evaluator that returns win or loss
nondeterministically for a position instead of a deterministic score,
so an ideal collaborator would also have some experience with
implementing monte carlo tree search (we could replace playouts with
our evaluator to some extent perhaps).  But more important is
traditional, chess-like searching algorithms.

If anyone is interested in working with us on this, please let me
know!  We have a prototype static evaluator complete that is producing
sane board ownership maps, but we will hopefully have many even better
ones soon.

- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] static evaluators for tree search

2009-02-17 Thread dave.devos
I agree with you, but I wouldn't qualify MC evaluation with MCTS as a static 
evaluation function on top of a pure alpha-beta search to a fixed depth (I have 
the impression that this is what George Dahl is talking about).
 
Dave



Van: computer-go-boun...@computer-go.org namens Don Dailey
Verzonden: di 17-2-2009 20:57
Aan: computer-go
Onderwerp: RE: [computer-go] static evaluators for tree search



On Tue, 2009-02-17 at 20:04 +0100, dave.de...@planet.nl wrote:
 A simple alpha-beta searcher will only get a few plies deep on 19x19, so
 it won't be very useful (unless your static evaluation function is so
 good that it doesn't really need an alpha-beta searcher)

I have to say that I believe this is very simplistic, and a lot of
people have said things like this so you are certainly not alone.

But how deep it goes is not particularly relevant, it's only a very
small part of the picture.  Before MCTS, programs were playing
reasonably well with continuous refinements to 1-ply searches.

You can probably conceptually separate a tree searching program into 3
parts, and in this sense alpha/beta is no different than MCTS:

   1. search
   2. selectivity
   3. evaluation

Selectivity is a nebulous term which involves all kinds of decisions
about what to do inside the tree.  It all boils down to one thing, however:
when to stop and when not to (and possibly what score to assume).

Evaluation has always been the most important part of any tree searching
program and alpha/beta just doesn't work without it.   Even if you
search to the very end of a game, such as tic-tac-toe,  you must know
whether the game is a win, loss or draw, so you are forced to evaluate
in the non-recursive sense.  When you cannot go to the end of the
game, which is true of any interesting game before it's played out, you
really must have a high quality evaluation.

The interesting part of MCTS, is not the TS part, it's the MC part.
That's the part that does the evaluation.   I am convinced that the tree
search part could be done in a number of ways which can be seen as more
or less equivalent.   Even in the papers on MCTS the tree search was
shown to be a type of mini-max search. 

It's possible to structure alpha/beta to have incredibly low branching
factors,  with modern chess programs for instance it is about 2.   That
involves a lot of domain specific knowledge and assumptions about the
tree.  This same thing can be done in GO and would also require a lot of
domain specific knowledge.   That knowledge could be obtained with the
same methods used by MCTS, statistics gathered by playout analysis.  The
evaluation function could also be merged into the nodes in a similar way
to MCTS.  
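The payoff of a low effective branching factor can be sketched with illustrative numbers (these figures are assumptions, not from the post): with branching factor b, a budget of N node visits reaches a depth of roughly log_b(N).

```python
import math

# Reachable depth ~ log_b(N) for a fixed budget of N node visits.
# b = 2 models a modern selective chess searcher, b = 250 a naive
# unpruned 19x19 go searcher; all numbers are illustrative.
N = 10**9
for b in (2, 6, 250):
    print(b, int(math.log(N, b)))  # 2 -> ~29 plies, 250 -> ~3 plies
```

The same node budget buys ten times the depth when domain knowledge cuts the branching factor from hundreds to single digits.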

I believe what makes go different, is that you can be a lot more
selective than you can in chess, it's much easier to evaluate bad moves
in GO, but it's much harder to evaluate the quality of positions.

You stated that alpha/beta can only go a few ply,  but that is also true
of MCTS.  It only goes a few ply unless you count the Monte Carlo part -
in which case you also do that with alpha/beta.  

So in my opinion the jury is still out.  Is the TS in MCTS really so
fundamentally different than alpha/beta with selectivity?   I'm not so
sure, and I don't think anyone has really tried very hard, probably due to
the wonderful success of the original bandit algorithm.

There are a bunch of things in computer science that were rejected, and
then later found much success  due to some simple discovery or even
research that showed people were just looking at it wrong,  or missed
something relatively simple about it,  or it simply went out of fashion
because something else came along that at the time appeared to be better or
simpler and became the fad.   That could easily happen here I think.  


- Don




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



[computer-go] CGT approximating the sum of subgames

2009-02-17 Thread dave.devos
I've been looking into CGT lately and I stumbled on some articles about 
approximating strategies for determining the sum of subgames (Thermostrat, 
MixedStrat, HotStrat etc.)
It is not clear to me why approximating strategies are needed. What is the 
problem? Is Ko the problem? Is an exact computation too slow?
Can anyone shed some light on this?
 
Dave
 
 
 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Poll: how long until computers are as strong as pros?

2009-02-13 Thread dave.devos
I think this estimate is a reasonable educated guess. The uncertainties are 
quite big. 
I would say your estimate has a total margin of error of at least 50% (it will 
probably take between 15 and 50 years), but I don't think it's possible to 
estimate much more accurately at this stage.
 
Dave



Van: computer-go-boun...@computer-go.org namens Bob Hearn
Verzonden: vr 13-2-2009 6:42
Aan: computer-go
Onderwerp: [computer-go] Poll: how long until computers are as strong as pros?



How long until a computer beats a pro -- any pro -- in an even game?
How long until a computer can routinely beat the best pros?

Not a very scientific poll, I realize, but I'd like some numbers to 
use in my AAAS talk on Saturday.

FWIW, this is a back-of-the-envelope calculation I did in August, when 
MoGo beat Myungwan Kim 8p at H9:

 After the match, one of the MoGo programmers mentioned that doubling 
 the computation led to a 63% win rate against the baseline version, 
 and that so far this scaling seemed to continue as computation power 
 increased.

 So -- quick back-of-the-envelope calculation, tell me where I am 
 wrong. 63% win rate = about half a stone advantage in go. So we need 
 4x processing power to increase by a stone. At the current rate of 
 Moore's law, that's about 4 years. Kim estimated that the game with 
 MoGo would be hard at 8 stones. That suggests that in 32 years a 
 supercomputer comparable to the one that played in this match would 
 be as strong as Kim.

 This calculation is optimistic in assuming that you can meaningfully 
 scale the 63% win rate indefinitely, especially when measuring 
 strength against other opponents, and not a weaker version of 
 itself. It's also pessimistic in assuming there will be no 
 improvement in the Monte Carlo technique.

 But still, 32 years seems like a surprisingly long time, much longer 
 than the 10 years that seems intuitively reasonable. Naively, it 
 would seem that improvements in the Monte Carlo algorithms could 
 gain some small number of stones in strength for fixed computation, 
 but that would just shrink the 32 years by maybe a decade.
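The arithmetic of the back-of-the-envelope estimate above can be checked directly. This is a sketch of the post's stated assumptions only: a 63% win rate per hardware doubling taken as half a stone, and a Moore's-law doubling every 2 years.

```python
doublings_per_stone = 2   # two 63% doublings ~ one stone, i.e. 4x power
years_per_doubling = 2    # Moore's-law pace assumed in the post
stones_to_close = 8       # Kim's estimated fair handicap against MoGo

years = stones_to_close * doublings_per_stone * years_per_doubling
print(years)  # 32
```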


Thanks,
Bob Hearn

-
Robert A. Hearn
Neukom Institute for Computational Science, Dartmouth College
robert.a.he...@dartmouth.edu
http://www.dartmouth.edu/~rah/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] Re: Hardware limits

2009-01-15 Thread dave.devos
Still, in the unlimited class of a rally from Baghdad to Beijing, it is probably 
neither the drag racer nor the solar car that wins, but the Land Rover with good 
road maps and a GPS.
I agree with most posters that in the end, you have to find the best thing to 
do the job. Hardware could play a major role in your solution, but you need 
more than just that to win.
 
Dave



Van: computer-go-boun...@computer-go.org namens Ryan Grant
Verzonden: vr 16-1-2009 3:55
Aan: computer-go
Onderwerp: Re: [computer-go] Re: Hardware limits



On Thu, Jan 15, 2009 at 8:47 AM, steve uurtamo uurt...@gmail.com wrote:
 this isn't an asymptotic problem requiring an algorithmic
 solution.  this is a fixed-size board requiring a best of show
 answer.  whoever gets there, however they get there, deserves
 to win, as long as the machines are choosing their own
 moves.

scalability is still essential and beautiful.
the unlimited class winner will always be important.

hardware access is very unevenly distributed.
yet access to hardware currently matters in every tournament.
some people like that part of the game, some don't.

there is the potential for people in this community to speak up
and create a culture that rewards good algorithm design as much
as possible.  to do so requires at least noticing hardware power.

--
- Ryan
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] Black/White winning rates with random playout?

2009-01-11 Thread dave.devos
Just a remark, no advice: the suicide rule, the ko rule and the score 
depend on the rules under which the game is played.
 
Dave



Van: computer-go-boun...@computer-go.org namens Ernest Galbrun
Verzonden: za 10-1-2009 22:01
Aan: computer-go
Onderwerp: Re: [computer-go] Black/White winning rates with random playout?


Hello everyone, 

I am trying to do some genetic experiments with virtual go players I programmed 
using basic neural network technology. The principle is to test my randomly 
mutated players against each other and to kill the losers. I have used the Opengo 
library to make my players play against each other. The problem is that Opengo 
does not have any scoring capability, so I am never certain about the result of 
a game; and Opengo still has some bugs, especially when making computer 
players play against each other.

Could someone tell me what program (if any) I could use to make the players 
play against each other, given that :
- My players don't know anything about the suicide and the ko rule.
- They can't count the score by themselves.

Ernest Galbrun

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] UEC cup

2008-12-18 Thread dave.devos
Michael:

Let's say that active Pros should have 2800+, though players 
with 2750+
might still be professional strength.

I think by that definition there would be many players with a professional rank 
who wouldn't have professional strength.
I don't see any point in arbitrarily picking an EGF rating number and then 
defining it as the dividing line for professional strength. What is the basis 
for the number 2750? And why would 2750 be better than 2700?
 

Michael:

Catalin was over 2800 during his time as an active pro
(peaking at 2821 in 2004). He has obviously gotten weaker since 
he
stopped playing pro tournaments, just like Guo, who has been 
out of the
pro scene for so long that I think it's fair to say she doesn't 
have pro
strength anymore.

 
I can imagine that Catalin and Guo won't be able to compete at 5p level 
anymore, but you think it is fair to say they have dropped below professional 
strength altogether.
Why do you think that's fair to say?
 
It's true that professional ranks reflect lifetime achievements and not current 
strength, but strong players don't just lose a couple of stones in strength 
when they stop competing actively.
They may lose some strength, but only a little: about one stone at most. So a 
former 1p may drop below professional level, but a former 5p is likely to keep 
professional strength with progressing age.
 
To me, having a professional rank defines professional strength (perhaps 
excepting pensioned 1p and 2p pros). And if amateurs can compete with weaker 
professionals, they have professional strength. That's it.
 
Dave
 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] UEC cup

2008-12-17 Thread dave.devos
What you are saying is that many professionals are overrated or underrated 
(sometimes by as much as two stones). The same goes for amateur ranks too.
So a rank estimate from a series of 7 stone games against a 4p will still have 
an error margin of one or perhaps two stones.
I agree with that.



Van: computer-go-boun...@computer-go.org namens Michael Goetze
Verzonden: wo 17-12-2008 20:26
Aan: computer-go
Onderwerp: Re: [computer-go] UEC cup



Nick Wedd wrote:
 So what _is_ reality nowadays?  Your previous email did not make this
 clear.  Are Japanese pro grades now closer together than a third of a
 stone, or farther apart?

The reality is that the correlation between ranks and playing strengths
is very low, and that knowing that player A is x-dan professional and
player B is y-dan professional does not actually tell you very much
about how likely player A is to win against player B.

Regards,
Michael
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] UEC cup

2008-12-17 Thread dave.devos
I think a 7 stone handicap against a 4p would be normal for an EGF 1d, not for 
a Japanese 1d.
A Japanese 1d is about 3k EGF. He would require more than 9 stones.
 
Dave

 


Van: computer-go-boun...@computer-go.org namens Darren Cook
Verzonden: wo 17-12-2008 2:48
Aan: computer-go
Onderwerp: Re: [computer-go] UEC cup



 No one mentioned Korean professionals. But, as far as I know, a Japanese
 7p should be able to give a Japanese 1p 2 stones and win 50% of the
 time. Roughly.

 I don't agree.  Japanese Professionals' ranks never decrease.

Hi,
Are we talking about different things? All I meant to say was that I
thought in Japanese professional ranks that one rank is worth a third of
a handicap stone. So when there are 6 ranks difference then two handicap
stones should give an even game.

I also think a 1-dan Japanese professional is equivalent to about a
7-dan amateur. So, going back to the original 7 handicap against a 4p
situation, then if it is an even game it implies black is about 1 dan
(Japanese).

With all the usual disclaimers about the large error margin on a sample
of just 1 game :-).

Darren


--
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] UEC cup

2008-12-17 Thread dave.devos
When the Japanese audience stated that CrazyStone was playing like a 4d or 5d, 
they were talking about Japanese ranks.
This suggests that it played like a 1d EGF or 2d EGF according to the audience.
 
Dave



Van: computer-go-boun...@computer-go.org namens dave.de...@planet.nl
Verzonden: wo 17-12-2008 20:48
Aan: computer-go
Onderwerp: RE: [computer-go] UEC cup


I think a 7 stone handicap against a 4p would be normal for an EGF 1d, not for 
a Japanese 1d.
A Japanese 1d is about 3k EGF. He would require more than 9 stones.
 
Dave

 


Van: computer-go-boun...@computer-go.org namens Darren Cook
Verzonden: wo 17-12-2008 2:48
Aan: computer-go
Onderwerp: Re: [computer-go] UEC cup



 No one mentioned Korean professionals. But, as far as I know, a Japanese
 7p should be able to give a Japanese 1p 2 stones and win 50% of the
 time. Roughly.

 I don't agree.  Japanese Professionals' ranks never decrease.

Hi,
Are we talking about different things? All I meant to say was that I
thought in Japanese professional ranks that one rank is worth a third of
a handicap stone. So when there are 6 ranks difference then two handicap
stones should give an even game.

I also think a 1-dan Japanese professional is equivalent to about a
7-dan amateur. So, going back to the original 7 handicap against a 4p
situation, then if it is an even game it implies black is about 1 dan
(Japanese).

With all the usual disclaimers about the large error margin on a sample
of just 1 game :-).

Darren


--
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] UEC cup

2008-12-15 Thread dave.devos
I was in Japan for a year in 1990. I was 1k - 1d EGF at that time and I was 4d 
in Japan. I think the grade difference between EGF and Japan is more like 3 
grades than 2 grades.
I also participated in a Japan-Netherlands friendship match by the Japanese 
Embassy in Holland. We played against Japanese players with full handicap but 
our grades were increased by 3.
At that time I was a 3d EGF but I had to give handicap as a 6d Japanese. The 
match was won by Holland by a wide margin even with the 3 grades correction.
Also, a 4p is not a 7p. The difference should be about one stone. 4p is 
equivalent to 8d EGF. So I would say winning with seven stones against a 4p 
suggests that Crazy Stone is about 1d - 2d.
This is similar to the result achieved by Mogo.
 
Dave



Van: computer-go-boun...@computer-go.org namens Darren Cook
Verzonden: ma 15-12-2008 9:34
Aan: computer-go
Onderwerp: [computer-go] UEC cup



I was reading a report on the UEC Cup [1] on a (Japanese) blog, and if I
understood correctly the results were:
  1. Crazy Stone
  2. Fudogo (?) (Hideki Kato's program)
  3. Many Faces
  4. Katsunari

(Mogo had time trouble and pulled out?)

Crazy Stone then played against Kaori Aoba, 4p, at a 7-stone handicap
and won by resignation. Making Crazy Stone 4 or 5 dan, by Japanese
standards. Maybe 2-3 dan European?

Can anyone post more information? What hardware was Crazy Stone running on?

Darren

[1]: The home page is here, but no information here yet:
 http://jsb.cs.uec.ac.jp/~igo/2008/eng/

--
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] MC Opening stage

2008-12-10 Thread dave.devos
And 6-7 every now and then (humans imitating MC bots?).
 
Do you play go competitively, Tony?
 
Dave



Van: [EMAIL PROTECTED] namens Heikki Levanto
Verzonden: wo 10-12-2008 21:50
Aan: computer-go
Onderwerp: Re: [computer-go] MC Opening stage



On Wed, Dec 10, 2008 at 12:39:55PM -0800, terry mcintyre wrote:
 I once heard a simple rule which seems to cover just about everything
 interesting: consider only moves which are on the 3rd and 4th lines,
 and/or within a manhattan distance of n, for some small n, of some other
 stone already on the board. If memory serves, David Fotland mentioned this
 at the Portland Congress. Some players favor opening moves on the fifth
 line, however.

And the occasional funny guy playing the center point...

-H

--
Heikki Levanto   In Murphy We Turst heikki (at) lsd (dot) dk

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] MC Opening stage

2008-12-10 Thread dave.devos
It might be a good idea then to look at some games of competitive players to 
get an idea of how a game develops from the opening to the middle game and the 
endgame.
There are some online game collections, but you could also register an account 
on an online go server to watch games while they are being played (KGS, for 
instance: www.gokgs.com).
 
Dave



Van: [EMAIL PROTECTED] namens tony tang
Verzonden: wo 10-12-2008 22:08
Aan: go mailing list
Onderwerp: RE: [computer-go] MC Opening stage


Grandad taught me how to play when i was a kid (touching story etc)
But no, just thought i'll do a go project for fun.
 
Did anyone record how much memory it took to simulate and record 360 moves? 
From my understanding you have to play at least a few hundred games from each 
position to give a fair estimation (light playout)?






Subject: RE: [computer-go] MC Opening stage
Date: Wed, 10 Dec 2008 21:57:18 +0100
From: [EMAIL PROTECTED]
To: computer-go@computer-go.org



And 6-7 every now and then (humans imitating MC bots?).
 
Do you play go competitively, Tony?
 
Dave



Van: [EMAIL PROTECTED] namens Heikki Levanto
Verzonden: wo 10-12-2008 21:50
Aan: computer-go
Onderwerp: Re: [computer-go] MC Opening stage


On Wed, Dec 10, 2008 at 12:39:55PM -0800, terry mcintyre wrote:
 I once heard a simple rule which seems to cover just about everything
 interesting: consider only moves which are on the 3rd and 4th lines,
 and/or within a manhattan distance of n, for some small n, of some other
 stone already on the board. If memory serves, David Fotland mentioned this
 at the Portland Congress. Some players favor opening moves on the fifth
 line, however.

And the occasional funny guy playing the center point...

-H

--
Heikki Levanto   In Murphy We Turst heikki (at) lsd (dot) dk

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/





___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Re: Opportunity to promote ...

2008-11-19 Thread dave.devos
I think that would not be enough, because that would only fix one point. 
EGF ratings are not pure Elo ratings. EGF ratings are weighted to fit 100 
points for one handicap stone, which happens to match about 65% winning 
percentage in even games for medium level players (around 3k).
Also, I am not aware that there exists a histogram of the worldwide go 
population. 
So it is hard to match Elo to EGF by using numerical data only. That's why the 
discussion on that page was mainly based on qualitative arguments.
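For comparison, the standard Elo expected-score formula puts a 100-point gap at about 64%, close to the ~65% the EGF observes around 3 kyu; a sketch using the ordinary Elo formula (the EGF itself does not use it directly):

```python
def elo_expected(diff):
    """Standard Elo expected score for a `diff`-point rating advantage."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# Pure Elo: 100 points -> ~64%. The EGF instead *defines* 100 points
# as one handicap stone, which only happens to match ~65% near 3 kyu.
print(round(elo_expected(100), 3))  # 0.64
```

Away from that middle range the stone-based EGF scale and a pure Elo scale diverge, which is exactly why a single shift-and-scale conversion fails.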
 
Dave



Van: [EMAIL PROTECTED] namens Christoph Birk
Verzonden: wo 19-11-2008 18:19
Aan: computer-go
Onderwerp: Re: [computer-go] Re: Opportunity to promote ...




On Nov 18, 2008, at 11:28 AM, [EMAIL PROTECTED] 
[EMAIL PROTECTED] wrote:
 It depends very much on what exactly you mean by amateur master 
 level. Is it a level that compares to amateur master level in chess?
 And what is amateur master level in chess? USCF master, FIDE master 
 or international master?
 Some time ago I participated in a discussion about comparing chess 
 titles to go ranks by evaluating effort, prestige and other factors 
 [1].

 In my opinion:
 1: USCF master compares to about 4d
 2: FIDE master compares to about 6d
 3: International master compares to about 7d/1p

I suggest to overlay the histrograms of player ratings and shift one
along the x-axis until the mode (peak in the distribution) are at the
same point. From that it should be easy to compare chess ELO
ratings with go ratings (or ranks).

Christoph
 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] Re: Opportunity to promote ...

2008-11-19 Thread dave.devos
That is wonderful!
 
When I look at this table it seems to support my claim that USCF master (2200 
USCF Elo) compares to about 4d (2400 EGF) and that 2d compares to a strong USCF 
class A player.
So this table, too, suggests that computer go has not yet reached amateur 
master level in the USCF sense of the term.
 
Dave
 
P.S.: 
A minor issue, not relevant to the question of amateur master level: the 
sparsity of data at the top of the EGF (no 9p players play in European 
tournaments) seems to have a negative effect on accuracy in that range.
I don't think 9p equals 2800 EGF. There are two or three EGF players with 
ratings of 2800, but they are actually about 2p - 5p, not 9p.
Also, if you look at the lower end of the EGF range in the graph on that page, 
you can clearly see the distorting effect of the EGF policy of clamping the 
lowest ratings at 100 EGF.
All in all, my feeling is that this page confirms my speculations.
 



Van: [EMAIL PROTECTED] namens Andy
Verzonden: wo 19-11-2008 19:26
Aan: computer-go
Onderwerp: Re: [computer-go] Re: Opportunity to promote ...


Here you go!

http://senseis.xmp.net/?RatingHistogramComparisons

  %   AGA  KGS  USCFEGF

 ---   | -- | -- | - | - |
  1%   | -34.61 | -24.26 |   444 |   100 |
  2%   | -32.58 | -22.30 |   531 |   100 |
  5%   | -27.69 | -19.20 |   663 |   153 |
 10%   | -23.47 | -15.36 |   793 |   456 |

 20%   | -18.54 | -11.26 |   964 |   953 |
 30%   | -13.91 |  -8.94 |  1122 |  1200 |
 40%   |  -9.90 |  -7.18 |  1269 |  1387 |
 50%   |  -7.10 |  -5.65 |  1411 |  1557 |
 60%   |  -4.59 |  -4.19 |  1538 |  1709 |

 70%   |  -1.85 |  -2.73 |  1667 |  1884 |
 80%   |   2.10 |  -1.28 |  1807 |  2039 |
 90%   |   4.71 |   2.52 |  1990 |  2217 |
 95%   |   6.12 |   3.88 |  2124 |  2339 |
 98%   |   7.41 |   5.29 |  2265 |  2460 |

 99%   |   8.15 |   6.09 |  2357 |  2536 |
 99.5% |   8.70 |   7.20 |  2470 |  2604 |
 99.9% |   9.64 |   pro  |  2643 |  2747 |
 top   |  10.12 |9p  |  2789 |  2809 |

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Re: Opportunity to promote ...

2008-11-19 Thread dave.devos
Let me rephrase a bit: USCF master level is about 3% away from the human top, 
but 7-8 handicap stones (an estimate of Mogo's distance from the top) is about 
13% away from the human top.
 
A program would have to win about 50% of its games with a 5 stone handicap 
against a strong pro before claims can be made that computer go is within 3% 
of the human top.
 
Dave



Van: [EMAIL PROTECTED] namens [EMAIL PROTECTED]
Verzonden: wo 19-11-2008 20:19
Aan: computer-go
Onderwerp: RE: [computer-go] Re: Opportunity to promote ...


That is wonderful!
 
When I look at this table it seems to support my claim that USCF master (2200 
USCF Elo) compares to about 4d (2400 EGF) and that 2d compares to a strong USCF 
class A player.
So this table, too, suggests that computer go has not yet reached amateur 
master level in the USCF sense of the term.
 
Dave
 
P.S.: 
A minor issue, not relevant to the question of amateur master level: the 
sparsity of data at the top of the EGF (no 9p players play in European 
tournaments) seems to have a negative effect on accuracy in that range.
I don't think 9p equals 2800 EGF. There are two or three EGF players with 
ratings of 2800, but they are actually about 2p - 5p, not 9p.
Also, if you look at the lower end of the EGF range in the graph on that page, 
you can clearly see the distorting effect of the EGF policy of clamping the 
lowest ratings at 100 EGF.
All in all, my feeling is that this page confirms my speculations.
 



Van: [EMAIL PROTECTED] namens Andy
Verzonden: wo 19-11-2008 19:26
Aan: computer-go
Onderwerp: Re: [computer-go] Re: Opportunity to promote ...


Here you go!

http://senseis.xmp.net/?RatingHistogramComparisons

  %   AGA  KGS  USCFEGF

 ---   | -- | -- | - | - |
  1%   | -34.61 | -24.26 |   444 |   100 |
  2%   | -32.58 | -22.30 |   531 |   100 |
  5%   | -27.69 | -19.20 |   663 |   153 |
 10%   | -23.47 | -15.36 |   793 |   456 |

 20%   | -18.54 | -11.26 |   964 |   953 |
 30%   | -13.91 |  -8.94 |  1122 |  1200 |
 40%   |  -9.90 |  -7.18 |  1269 |  1387 |
 50%   |  -7.10 |  -5.65 |  1411 |  1557 |
 60%   |  -4.59 |  -4.19 |  1538 |  1709 |

 70%   |  -1.85 |  -2.73 |  1667 |  1884 |
 80%   |   2.10 |  -1.28 |  1807 |  2039 |
 90%   |   4.71 |   2.52 |  1990 |  2217 |
 95%   |   6.12 |   3.88 |  2124 |  2339 |
 98%   |   7.41 |   5.29 |  2265 |  2460 |

 99%   |   8.15 |   6.09 |  2357 |  2536 |
 99.5% |   8.70 |   7.20 |  2470 |  2604 |
 99.9% |   9.64 |   pro  |  2643 |  2747 |
 top   |  10.12 |9p  |  2789 |  2809 |

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Re: Opportunity to promote ...

2008-11-19 Thread dave.devos
I doubt that much can be learned from comparing the overall length of rating 
scales.
 
1: The large draw margin in chess compresses the high end of the chess Elo 
range compared to go.
 
It takes a fairly large difference in skill for one very strong chess player to 
win 65% against another very strong chess player, because they will draw a 
large portion of their games.
In go, draws are absent, so it takes a smaller difference in skill for one very 
strong player to score 65% against another very strong player.
If go ratings were pure Elo ratings, the high end would be more stretched than 
the high end of chess ratings, just because of the absence of draws in go.  
 
2: And then there is the fact that EGF is not pure Elo:
 
What if chess ratings were not based on winning percentages, but on 100 points 
for every handicap step of move odds? (That is precisely what go ranks and 
ratings are supposed to reflect)
Such a scale would probably stretch the range at the top and compress it at 
the bottom (move odds being much more valuable to strong players than to weak 
players).
I guess it would be possible to convert such handicap-based ratings to real Elo 
ratings by collecting statistics, but the fit would probably be more complex 
than a simple shift and constant scale factor.
But what else would be learned from comparing the length of this scale to 
the length of the normal Elo scale? Not much, I think. It's just a different 
system.
 
So I agree with Christoph Birk that percentiles (when available), give better 
absolute skill comparisons when comparing different games with different rating 
systems.
Better at least than a rating conversion based on a simple shift and constant 
scale factor.
 
Dave



From: [EMAIL PROTECTED] on behalf of Don Dailey
Sent: Wed 19-11-2008 20:59
To: computer-go
Subject: RE: [computer-go] Re: Opportunity to promote ...



On Wed, 2008-11-19 at 10:24 -0800, Christoph Birk wrote:
 That should not matter much. The typical chess player should be
 as strong as the typical Go player and I also expect the strength
 distribution to follow similar lines.

Larry Kaufman,  a chess Grandmaster and also an expert in many games
once told me that there are many more levels in Go than in chess.

For instance in chess, your rating can range from about 500 to 2800 or
so (roughly.)   You can have a rating less than 500 of course and some
do, but 500 represents something like someone who just learned the rules
give or take.   But for argument's sake let's say that 99% of all players
are well within a span of 3000 ELO points to be generous.

For 19x19 go, assuming we were to use ELO ratings the range would be
much higher according to Larry.So it could be more like 4000 or
5000, I don't know.  

So I don't know for sure what you mean when you say the strength
distribution follows similar lines.

For comparison purposes maybe we need to identify the median player
somehow and extrapolate from there.But if you take the median for
both games, and assign them some fixed elo,  I think you would find the
top GO players had ELO ratings hundreds of points higher than the top
Chess players.

There is yet no ceiling on how strong chess players can be either, so I
assume the same for GO, even more so.The top playing chess program
seems to be at least 200 ELO stronger than the best human and they
continue to improve every year.   And it's still very clear that they
could improve a lot. 


- Don






RE: [computer-go] Re: Opportunity to promote ...

2008-11-19 Thread dave.devos
Erratum: "Such a scale would probably stretch the range at the top and 
compress it at the bottom ..." should read "Such a scale would probably compress 
the range at the top and stretch it at the bottom ...".
 
It requires many more Elo points to compensate for a chess handicap at the high 
end than at the low end.



From: [EMAIL PROTECTED] on behalf of [EMAIL PROTECTED]
Sent: Wed 19-11-2008 22:07
To: [EMAIL PROTECTED]; computer-go
Subject: RE: [computer-go] Re: Opportunity to promote ...


 quoted message snipped 

RE: [computer-go] Re: Opportunity to promote ...

2008-11-18 Thread dave.devos
It depends very much on what exactly you mean by "amateur master level". Is it 
a level that compares to amateur master level in chess?
And what is amateur master level in chess? USCF master, FIDE master or 
international master?
Some time ago I participated in a discussion about comparing chess titles to go 
ranks by evaluating effort, prestige and other factors [1].
 
In my opinion:
1: USCF master compares to about 4d
2: FIDE master compares to about 6d
3: International master compares to about 7d/1p
 
If we assume that mogo on Huygens is about 2d, I think it hasn't quite reached 
amateur master level in any chess meaning of the word. It would only compare 
to a fairly strong USCF class A chess player.
 
Dave
 
[1] discussion and my proposal in a table near the bottom of 
http://senseis.xmp.net/?FIDETitlesAndEGFGoRatings



From: [EMAIL PROTECTED] on behalf of Jason House
Sent: Tue 18-11-2008 14:25
To: computer-go
Subject: Re: [computer-go] Re: Opportunity to promote ...



On Nov 18, 2008, at 7:43 AM, Michael Williams [EMAIL PROTECTED]
  wrote:

 The "well" at the end of the title is implied.  And computers still 
 can't play 19x19 Go anywhere near the master level.


I'm not very familiar with go terms, but I think "kyu" means student and 
"dan" means master.

It may be better to say something like "computers may have reached 
amateur master using supercomputers, but are still far away from 
professional play".





 Ingo Althöfer wrote:
 Dear Bob Hearn,
 it is not what you have been looking for, but nevertheless
 I want to ask you if the title of your talk
 "Games Computers Can't Play" is still up-to-date.
 I would accept something like "Games Computers Could Not Play Well 
 Before 2003",
 but Monte Carlo has changed our world. Ingo Althofer.


RE: [computer-go] Monte carlo play

2008-11-15 Thread dave.devos
Hello Tony,
 
I'm just speaking for myself here, but I suspect you may not get the answers 
you are looking for, because you're not asking the right questions.
 
I get the impression that the situation is similar to this scenario:
 

A young guy with a surfboard and a Hawaiian shirt turns up at 
basecamp on Mount Everest and says:
"Hey dudes, what's up!
This morning I had this wonderful idea: I wanna climb Mount 
Everest!
So here I am, and I even brought my hiking shoes!
But can anyone give me directions to the nearest McDonalds, cuz 
I'm starving."
 
Everybody in the tent stares at him speechlessly.

I may be wrong, but that's my impression.
I'm not even sure whether I myself should be here at basecamp, even though I 
am a 4d in Go and even though I have 25 years of programming experience in a 
couple of different languages (including the last 8 years as a fulltime 
professional developer.)
It's only logical that ever since I learned Go 20 years ago I've had a 
special interest in computer-go and I spent some time over the years attempting 
to build a strong go playing program myself (although not nearly as much time 
as some others here).
I'm a mountaineer, but climbing Mount Everest is something else. I feel a 
bit like a rookie among all the seasoned mountaineers hanging around here at 
the Everest basecamp. Most of the time I keep my mouth shut when one of them is 
talking, because I'm a bit apprehensive that I might say something that is 
utterly obvious to the others or even outright stupid.
 
And then this guy turns up in his Hawaiian shirt.
 
I hope you don't feel offended. Indeed you took up a wonderful endeavour, but I 
sense that you're not quite ready to go for the summit today.
 
Best Regards,
Dave



From: [EMAIL PROTECTED] on behalf of tony tang
Sent: Thu 13-11-2008 22:03
To: go mailing list
Subject: RE: [computer-go] Monte carlo play


Thanks Darren,
 
After reading a couple of interesting papers in this field, I am applying 
pattern matching to limit the number of possibilities of the Monte Carlo tree 
(on 19x19 it appears to take a day to play).
I was wondering if you could comment on whether my following concept is right 
on the evaluation part.
 
The possible patterns for the current situation are passed to a class where 
self-play begins. Each different pattern runs on an instance, and at the end the 
results are written into a file. The game tree will be constructed using the 
stats from the file. 
 
Thanks again
 
Tony
 
 


 Date: Wed, 12 Nov 2008 10:34:21 +0900
 From: [EMAIL PROTECTED]
 To: computer-go@computer-go.org
 Subject: Re: [computer-go] Monte carlo play
 
  I am new to programming go, could someone explain to me how a monte
  carlo based evaluation manages to play random games by itself? ie:
  who/what is the opponent which supplies the opposing moves which
  allows another move to be randomly played after making the initial
 
 It is self-play, so both players are random.
 
 I'm not aware of any programs that use different playout strategies for
 black/white, though it has been briefly mentioned before on this list,
 and I personally think it might be interesting as a way of discovering
 and more accurately evaluating the imbalances present in most go
 positions (i.e. usually one side has more strength and one side has more
 territory, and so to correctly understand the position one side has to
 play aggressive moves and the other has to play defensive moves).
 
  move at the root. I am implementing in java, is there a
  package/framework which allows me to train my software.
 
 This page, recently started by Eric Marchand, might be a good starting
 place:
 http://ricoh51.free.fr/go/engineeng.htm
 
 Darren
 
 
 -- 
 Darren Cook, Software Researcher/Developer
 http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
 open source dictionary/semantic network)
 http://dcook.org/work/ (About me and my work)
 http://dcook.org/blogs.html (My blogs and articles)

RE: [computer-go] Monte carlo play

2008-11-15 Thread dave.devos
You are absolutely right. Tony did not imply he wanted to go for the summit 
today. Also, Tony's qualifications are surely a lot better than a Hawaiian 
shirt and a surfboard. 
 
My analogy was an exaggeration with the intention of explaining to Tony that he 
has more catching up to do than he might have expected. 
By no means was my intention to put Tony down. I hope Tony understands this and 
I sincerely hope that Tony won't give up on joining us in this wonderful 
endeavour, because we need all the help we can get.
 
Best Regards,
Dave



From: [EMAIL PROTECTED] on behalf of Nick Wedd
Sent: Sat 15-11-2008 15:48
To: computer-go
Subject: Re: [computer-go] Monte carlo play



In message
[EMAIL PROTECTED],
[EMAIL PROTECTED] writes

 Hawaiian shirt analogy snipped 

I hope you don't feel offended. Indeed you took up a wonderful
endeavour, but I sense that you're not quite ready to go for the summit
today.

But Tony never expressed any interest in going for the summit.  Maybe
he just wants to write a simple MC implementation.  Here is what he
wrote:

I am new to programming go, could someone explain to me how a monte
carlo based evaluation manages to play random games by itself?
ie: who/what is the opponent which supplies the opposing moves which
allows another move to be randomly played after making the initial move
at the root.

I am implementing in java, is there a package/framework which allows me
to train my software.

I will try to answer these questions myself.

 who/what is the opponent which supplies the opposing moves

It is the same piece of your program that supplies your own moves.  For
each random game or rollout, your program places both black and
white stones, alternately, at random.

is there a package/framework which allows me
to train my software.

Nothing trains your software.  This isn't a neural net.  The way it
works is:
  you write a program
  A: it plays, badly
 you modify it so it plays differently, and with luck, better
 repeat from A

Nick
--
Nick Wedd[EMAIL PROTECTED]

RE: [computer-go] Monte carlo play

2008-11-15 Thread dave.devos
Just a quickstart here, and I'm not one of the real experts, so it may contain 
inaccuracies. Perhaps you already know all this, perhaps not.
 
The basis of many game-playing programs (like chess programs) is the minimax 
(or negamax) algorithm.
It is a simple algorithm by itself and you can find a lot of background and 
examples for chess or other games if you google it.
Basically, minimax explores all possible variations stemming from the current 
position and then selects the best move.
 
This algorithm works in principle, but there is a problem with it:
For interesting games like chess and go, it will usually take the program 
billions of years to find the best move, given a game position.
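As a sketch (my own illustration, not from the mail; `Nim` is a hypothetical toy position class, but the same shape applies to chess or go positions), the negamax formulation of minimax is only a few lines:

```python
def negamax(pos):
    """Best achievable score for the side to move, exploring every
    variation to the end of the game (the 'billions of years' part)."""
    if pos.is_terminal():
        return pos.score()
    return max(-negamax(pos.play(m)) for m in pos.legal_moves())

# Toy game to make the sketch runnable: take 1 or 2 stones from a pile;
# whoever takes the last stone wins.
class Nim:
    def __init__(self, stones):
        self.stones = stones
    def is_terminal(self):
        return self.stones == 0
    def score(self):
        return -1  # side to move faces an empty pile: the previous player won
    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]
    def play(self, m):
        return Nim(self.stones - m)

print(negamax(Nim(3)))  # -1: a pile of 3 is lost for the side to move
print(negamax(Nim(4)))  # 1: a pile of 4 is won (take 1, leaving 3)
```

For real games the recursion is cut off at some depth and a heuristic evaluation is substituted, which is where the adaptations discussed next come in.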
 
In the past decades several adaptations of the minimax algorithm were invented 
to enable the algorithm to find a reasonably good move in a timespan that 
humans are prepared to wait for (like evaluation functions, alpha-beta pruning 
and transposition tables). The program keeps investigating possibilities for 
some time and then the move seeming best so far is played. The longer the 
program is allowed to think (and the faster the computer), the better the move 
it will come up with.
 
These adaptations work by reducing the number of possibilities the program 
explores, without weakening the program too much. The program must ignore many 
possibilities to save time (this is called pruning), but the more possibilities 
it ignores, the higher the risk that it accidentally ignores the best move. 
Also, there is always the danger of the horizon effect: The program 
misinterprets the situation because it simply doesn't get enough time to fully 
investigate every possibility. 
 
The challenge for the programmer is to strike an optimal balance between time 
consumption and losing by accidentally ignoring or misinterpreting the best 
move. Some game programs are even able to fine-tune themselves to optimize this 
balance over time (learning).
 
A combination of these adaptations has proven to work well enough to enable 
chess programs on current hardware to beat any human with reasonable thinking 
time.
But computer-go is a harder problem than computer-chess. The adaptations that 
are successful in computer-chess are just not enough for computer-go, mainly 
because there are far more possibilities (and I mean FAR more) to explore in go. 
 
But now there is a recent adaptation that has proven to be fairly successful in 
computer-go: Monte Carlo playouts.
The program reduces the work to be done by gambling a number of times and 
measuring the average payoff. If the average payoff is good, the move is 
assumed to be good for the time being. Also, it doesn't automatically ignore 
moves with a low payoff (remember the risk of ignoring the best move). The 
program just postpones further investigation of seemingly bad moves until the 
moves seeming good initially give more and more disappointing payoffs after 
further investigation. 
Although this invention has had only moderate success in computer-chess, it has 
proven to be fairly successful in computer-go.
 
But at the heart of a Monte Carlo go program, you can still recognize the 
minimax algorithm.
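The "gamble a number of times and measure the average payoff" idea, in its flattest form, can be sketched like this (the `Toy` position and its fixed win probabilities are my own stand-in for real random playouts, not anything from the thread):

```python
import random

class Toy:
    """Stand-in for a game position: each move leads to a state whose
    random playouts are won with a fixed (hidden) probability."""
    def __init__(self, win_probs):
        self.win_probs = win_probs
    def legal_moves(self):
        return range(len(self.win_probs))
    def playout(self, move):
        # one random game played to the end: 1 = win, 0 = loss
        return 1 if random.random() < self.win_probs[move] else 0

def best_move(pos, n=500):
    # average payoff of n playouts per move; highest average wins
    avg = {m: sum(pos.playout(m) for _ in range(n)) / n
           for m in pos.legal_moves()}
    return max(avg, key=avg.get)

random.seed(42)
print(best_move(Toy([0.2, 0.8, 0.5])))  # almost surely 1, the best move
```

A real program replaces `Toy.playout` with a full random game on the board, and (as described below in the thread) spends its playouts selectively rather than uniformly.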
 
Best Regards,
Dave
 
 


From: [EMAIL PROTECTED] on behalf of tony tang
Sent: Wed 12-11-2008 2:23
To: computer-go@computer-go.org
Subject: [computer-go] Monte carlo play


Greetings all go gurus,
 
I am new to programming go, could someone explain to me how a monte carlo 
based evaluation manages to play random games by itself?
ie: who/what is the opponent which supplies the opposing moves which allows 
another move to be randomly played after making the initial move at the root.
I am implementing in java, is there a package/framework which allows me to 
train my software?
 
Thank you
 
Kind regards
 
Tony





[computer-go] FW: computer-go] Monte carlo play?

2008-11-15 Thread dave.devos
From: tony tang [mailto:[EMAIL PROTECTED]]
Sent: Sat 15-11-2008 21:22
To: [EMAIL PROTECTED]
Subject: [RE:computer-go] Monte carlo play?


Hello Dave,

Thank you for a thorough introduction to the theory, and I sincerely hope I am 
not wasting your time with amateur questions.
Because it was quite late when I sent the question, I think I didn't word it 
correctly and miscommunicated 
my question. 

Being a computer scientist but new to go, I can grasp some of the theory. The 
question I was trying to get across 
was:

In a game of self-play, if both parties are employing only Monte Carlo, surely 
it's not a good conceptual representation
of a human, and if the reinforcement learning is based on random simulations, 
wouldn't it be very weak when playing a real human?
According to Nick Wedd and some playing around, I am clearly mistaken, because 
it works quite well.
Is it because they are training (not in the neural network sense) their Monte 
Carlo based engine against another engine with prior knowledge or something
of that nature?

I have seen papers implementing patterns and prior knowledge with Monte Carlo 
(dated 2006); has that become a standard
now, and when people refer to Monte Carlo, do they no longer mean absolutely random? 

Thank you

Kind regards

Tony





[computer-go] FW: computer-go] Monte carlo play?

2008-11-15 Thread dave.devos
From: [EMAIL PROTECTED]
Sent: Sat 15-11-2008 22:44
To: tony tang
Subject: RE: computer-go] Monte carlo play?


Hello Tony,
 
Ok, so you know about the minimax algorithm and the like. My impression was 
wrong and I'm very sorry for my analogy.
 
I'm no expert on montecarlo, so I can only say what I know. But most of the 
real experts have been rather quiet during the last week, so perhaps I can act 
as a stand-in.
 
The basis is indeed absolutely random play (alternating turns and removing 
captured stones as in normal play) until the game finishes. The only exceptions 
to random play are occupied intersections, simple ko recaptures, suicide moves 
(not sure if all programs prune suicide if the rules allow it) and moves that 
fill one-point eyes of the color who is to play (there may be slight 
differences in the exact definition of a one-point eye).
 
Without the one eye pruning, random games would never finish, because both 
random players would keep on killing their own stones and refilling the empty 
space after the opponent has captured. 
Even with eye detection and simple ko detection, the game could end up in a 
cycle occasionally because of a triple ko or more pathological situations, so one 
should probably also limit the number of moves and discard the result if that 
limit is exceeded. This is probably a much cheaper solution to the cycle 
problem than more sophisticated detection. 
 
A purely random game is played out until both players are forced to pass. This 
is called a "light playout". Claus Reinke recently dubbed the term "random 
fill-in" for this process 
(http://www.mail-archive.com/computer-go@computer-go.org/msg09749.html). I like 
this term better than "light playout", but it doesn't seem to catch on here.
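The fill-in loop just described (uniformly random legal moves, never filling a one-point eye of your own color, a move cap against cycles) can be sketched against a hypothetical board interface; the `StubBoard` at the end is a degenerate stand-in just to show the loop terminates, not a real Go board with captures and ko:

```python
import random

def light_playout(board, max_moves=400):
    """Random fill-in: both colors play uniformly random legal moves
    (skipping own one-point eyes) until two consecutive passes.
    Returns the winner, or None if the move cap is hit (cheap guard
    against triple-ko style cycles: discard such playouts)."""
    passes = 0
    for _ in range(max_moves):
        moves = [m for m in board.legal_moves()
                 if not board.fills_own_eye(m)]
        if not moves:
            passes += 1
            if passes == 2:
                return board.winner()  # game over: count the board
            board.play_pass()
        else:
            passes = 0
            board.play(random.choice(moves))
    return None  # suspected cycle

class StubBoard:
    """Degenerate stand-in: nine empty points, no captures, black 'wins'."""
    def __init__(self):
        self.empty = list(range(9))
    def legal_moves(self):
        return self.empty
    def fills_own_eye(self, move):
        return False
    def play(self, move):
        self.empty.remove(move)
    def play_pass(self):
        pass
    def winner(self):
        return "B"

print(light_playout(StubBoard()))  # B
```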
 
The board is then counted to determine the winner and this binary result is 
added to a statistic for the move that is currently being evaluated. The 
accumulated statistic is effectively the evaluation value for the position. It 
is then backtracked to the game tree root in a minimax way.
 
In go it is hard to quickly evaluate a position statically (or even slowly for 
that matter). So it is nice to have a dynamic evaluation that becomes more 
accurate as you spend more CPU cycles on it. You can see that it's important 
that random fill-ins are fast, but it's still a rather large investment to do a 
lot of playouts, when you compare this with a static evaluation function. So it 
is even more important to avoid wasting effort in evaluating positions by 
random fill-ins. The trick is to use crude statistics (few fill-ins) to decide 
which statistic to refine by investing more fill-ins. This is where the UCT 
value comes in (http://senseis.xmp.net/?UCT). It is derived from the statistic 
and it decides which position to investigate further, preferring good 
statistics over bad ones, but also preferring crude statistics over accurate 
ones.
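The UCT selection value Dave links to is the UCB1 formula; a sketch (the constant `c` is a tunable exploration weight, and 1.4 is just a common textbook choice, not from the thread):

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """Exploitation (win rate) plus an exploration bonus that is large
    for children with crude statistics (few visits)."""
    if visits == 0:
        return float('inf')  # unvisited children are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(stats, parent_visits):
    # stats: list of (wins, visits) pairs, one per child
    return max(range(len(stats)),
               key=lambda i: ucb1(*stats[i], parent_visits))

# A child with a good but crude statistic (10/12) is explored before a
# slightly worse child with an accurate one (60/100):
print(select_child([(60, 100), (10, 12)], 112))  # 1
```

This is exactly the trade-off described above: good statistics are preferred over bad ones, but crude statistics are preferred over accurate ones.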
 
There is no training involved and there is no a priori knowledge involved here 
except for the rules and eye detection. The program has absolutely no idea 
about go concepts.
 
Would you call this process learning?
 
Programs using this algorithm are called pure MC programs and although they 
get better with more CPU cycles, they aren't very strong with normal time 
limits on current hardware (or hardware in the near future). You are right, by 
itself pure MC is just not effective enough.
 
So programmers started to build in more Go knowledge to improve on this. This 
makes the fill-in process slower, so they are called heavy playouts. And this 
is also where my knowledge ends. I may guess that instead of selecting moves in 
a uniform random way, moves are weighted by a priori Go knowledge built in by 
the programmer, but I know no details and I don't know if there have been 
publications on this. Perhaps there aren't any, because it is also a 
competitive field and commercial interests might be involved too.
 
Other contributors can probably tell you more about this.
 
Best Regards,
Dave 
 



 quoted message snipped 

[computer-go] FW: computer-go] Monte carlo play?

2008-11-15 Thread dave.devos
From: tony tang [mailto:[EMAIL PROTECTED]]
Sent: Sat 15-11-2008 23:08
To: [EMAIL PROTECTED]
Subject: RE: computer-go] Monte carlo play?


Thanks Dave, that was incredibly helpful; hopefully this new hobby of mine will 
materialise into a decent project.
 
All the best
 
Tony
 




Subject: RE: computer-go] Monte carlo play?
Date: Sat, 15 Nov 2008 22:44:51 +0100
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]



 quoted message snipped 

RE: [computer-go] Monte-Carlo and Japanese rules

2008-11-06 Thread dave.devos
What if the playout uses the AGA rule of paying 1 point for a pass and 
requiring white to pass last (so the game does not end by two passes if black 
plays the second pass)?
Wouldn't the score then be equivalent to the Japanese score?
 
Dave



From: [EMAIL PROTECTED] on behalf of Mark Boon
Sent: Thu 6-11-2008 17:11
To: computer-go
Subject: Re: [computer-go] Monte-Carlo and Japanese rules



Although what Don writes is all correct, I understood the question to 
be rather different. It's not a matter of being able to determine the 
right score at the end or the right way to play, it's a matter of 
determining the right score after each playout. For performance 
reasons MC programs will cut corners there, which could be taken 
advantage of when playing by Japanese rules, because after the 
playout the program is prone to getting the wrong score in certain situations.

Mark

On 6-nov-08, at 12:12, Don Dailey wrote:

 On Thu, 2008-11-06 at 09:19 +0100, Ingo Althöfer wrote:
 Hello all, two questions.

 (i) Do there exist strong 9x9-go programs on Monte-Carlo base
 for Japanese rules?

 (ii) Having available only programs for Chinese rules, but playing
 in a tournament with Japanese rules, which special tricks and
 settings should be used to maximise winning chances? (This is meant
 especially in the light of MC's tendency to win games by 0.5
 points according to the rules implemented.)

 I've thought about those questions myself from time to time.  Let me
 think out loud concerning this.   I am by no means an expert in
 Japanese scoring or even GO in general, so I'm just giving some 
 thoughts
 here and a plan for building a Japanese simple bot that you can be
 free to criticize:

 It seems to me the primary difference between the two is knowing 
 when to
 stop playing and of course scoring dead groups.   The Chinese style 
 bots
 do not technically need to know about scoring.

 You can look at the combined statistics at the end of the games for a
 given point to get a sense of whether that point is still in play or
 whether it's a foregone conclusion.  You can do the same to determine
 dead groups.   I don't know how well that works in all cases, but I 
 have
 used it and it works pretty well.

 But we also want to recognize dame,  and not play to dame points early
 in the game even if it doesn't affect the final Chinese outcome.   So
 here is my idea:

   1. If ALL the stones of a particular group belong to the opponent 
 with
 high certainty,  they are dead.

   2. If there are open spaces that belong to you or the opponent with
 high certainty don't move to them.

   3. If an uncertain point is touching stones of both colors and both
 colors have high certainty for the color they belong to, it is 
 probably
 dame and you shouldn't move to them.

 example:   White has a stone on d4 that is clearly alive.
Black has a stone on f4 that is clearly alive.
An empty point on e4 is highly uncertain.
Do not play to e4 - it is probably dame.

question:  Is that a reasonably good rule or does it need some 
 work?


   4. If you have no moves other than these cases, you should pass.

 You can test this idea by playing a bot on KGS under Japanese rules.
 You may have to tweak what you consider your uncertainty margin.   
 Also,
 I'm not considering seki here but we would want to find a way to cope
 with that.
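 A toy sketch of rules 1-3 above from playout ownership statistics (all
 names and thresholds are illustrative, not a real implementation):

```python
# ownership[p] in [-1.0, 1.0] is the average final owner of point p
# over many playouts (+1 = Black, -1 = White).
CERTAIN = 0.9    # |ownership| >= CERTAIN counts as "high certainty"
UNCERTAIN = 0.2  # |ownership| <= UNCERTAIN counts as "highly uncertain"

def certainly_owned_by(p, ownership, color):
    return ownership[p] >= CERTAIN if color == 'B' else ownership[p] <= -CERTAIN

def is_dead(p, ownership, stone_at):
    """Rule 1: a stone whose point certainly belongs to the opponent."""
    c = stone_at(p)
    return c is not None and certainly_owned_by(p, ownership,
                                                'W' if c == 'B' else 'B')

def probably_dame(p, ownership, stone_at, neighbors):
    """Rule 3: an uncertain empty point touching certainly-alive
    stones of both colors is probably dame; don't play there."""
    if stone_at(p) is not None or abs(ownership[p]) > UNCERTAIN:
        return False
    alive = {stone_at(q) for q in neighbors(p)
             if stone_at(q) and certainly_owned_by(q, ownership, stone_at(q))}
    return alive == {'B', 'W'}

# The e4 example: White d4 and Black f4 clearly alive, e4 uncertain.
stones = {'d4': 'W', 'f4': 'B'}
own = {'d4': -0.95, 'f4': 0.95, 'e4': 0.05}
nbrs = {'e4': ['d4', 'f4', 'e3', 'e5']}
assert probably_dame('e4', own, stones.get, nbrs.__getitem__)  # dame: avoid
assert not is_dead('d4', own, stones.get)                      # White alive
```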

 - Don



 Ingo.
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Monte-Carlo and Japanese rules

2008-11-06 Thread dave.devos
And of course black should pay 1 point for each extra handicap stone. 
http://www.britgo.org/rules/compare.html#coun
 
Dave



Van: [EMAIL PROTECTED] namens [EMAIL PROTECTED]
Verzonden: do 6-11-2008 19:28
Aan: computer-go
Onderwerp: RE: [computer-go] Monte-Carlo and Japanese rules


What if the playout uses the AGA rule of paying 1 point for a pass and 
requiring white to pass last (so the game does not end by two passes if black 
plays the second pass).
Wouldn't the score then be equivalent to the Japanese score?
 
Dave





___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] RE: Ending games by two passes

2008-10-24 Thread dave.devos
ERRATUM:
 
Sorry, I made a small mistake in my example.
The komi should be 3.5 so white wins by 0.5 if 2 passes end the game.
 
Dave



Van: [EMAIL PROTECTED]
Verzonden: vr 24-10-2008 10:00
Aan: computer-go
Onderwerp: Ending games by two passes


Is it correct to end games by 2 consecutive passes?
 
When I learned go 20 years ago I was taught that 3 consecutive passes are 
required to end a game of go.
In practice 2 passes are sufficient in nearly all cases, but sometimes 2 passes 
is not enough.
Suppose we have this position in a 5x5 game with area scoring and 2.5 komi:
 
(0 = white, # = black)
 
  ABCDE
5 00###
4 00#+#
3 +0###
2 00##+
1 0#+##
 
Black has just played C4.
 
The controller is very simple. It only prohibits simple ko (superko is not 
checked) and all stones left on the board when the game ends are considered 
alive.
White now plays at C1. Black has no choice but to pass, and then white quickly 
passes too. What happens now?
 
If 2 passes end the game, the controller will award a win to white by the komi.
If 3 passes are required to end the game, black captures at B1, white has no 
choice but pass, then black captures at A3 and will (probably) win the game.
 
One could argue that real controllers are smarter than the controller in my 
example, so 2 passes are usually sufficient in practice, because the controller 
will query the engines for dead stones.
But in my example, wouldn't both engines be justified in declaring the white 
stones alive because of the 2-pass rule?
 
Also, if I am correct, (light) playouts are usually controlled by an internal 
controller that is very similar to the controller in my example. Wouldn't 
they be vulnerable to this type of situation?
 
Why not avoid this issue simply by requiring 3 consecutive passes to end the 
game?
 
Am I missing something here?
 
Dave
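To make the difference concrete, here is a toy pass-counting loop (illustrative only, no board logic): the termination rule alone decides whether Black's captures at B1 and A3 ever happen.

```python
def run(moves, passes_to_end):
    """Play out a scripted move list; stop after N consecutive passes."""
    consecutive, played = 0, []
    for mv in moves:
        if mv == 'pass':
            consecutive += 1
            if consecutive == passes_to_end:
                break
        else:
            consecutive = 0
            played.append(mv)
    return played

# White C1, forced passes, then Black's captures if play resumes:
seq = ['C1', 'pass', 'pass', 'B1', 'pass', 'A3', 'pass', 'pass', 'pass']
assert run(seq, 2) == ['C1']              # ends before Black captures
assert run(seq, 3) == ['C1', 'B1', 'A3']  # Black gets to capture
```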
 
 
 
 
 
 
 
 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Ending games by two passes

2008-10-24 Thread dave.devos
Please correct me if I'm wrong, but are you saying that white is alive under 
TT rules (= Tromp-Taylor?) or other rulesets with positional superko if black 
does not have enough eyes left to fill as ko threats?
If that's true, I would be disgusted if positional superko were ever 
accepted as a rule in human vs. human games.
 
Dave



Van: [EMAIL PROTECTED] namens Erik van der Werf
Verzonden: vr 24-10-2008 11:32
Aan: computer-go
Onderwerp: Re: [computer-go] Ending games by two passes



Hi Dave,

This is a well-known problem with overly simplified rulesets.
TT-advocates don't care about the rare anomalies.

Did you notice that under positional superko you cannot take back the
ko after *any* number of consecutive passes? This is yet another
reason why in some cases filling an eye or playing in sure territory
may be the best move...

In your engine you don't want to use 3 passes unless absolutely
necessary because of horizon effects. In my experience it is best to
use 3 passes only if there is exactly one basic ko, and in all other
cases use 2 passes to end the game.

Erik



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] RE: Ending games by two passes

2008-10-24 Thread dave.devos
I know white is dead, but what matters is that the controller does not know.
The only way for the controller to know that white is dead is by requiring 
black to capture white before ending the game.
And when 2 passes end the game, black is unable to do that. So the controller 
will have to assume that white is alive.
 
Dave



Van: [EMAIL PROTECTED] namens Li Li
Verzonden: vr 24-10-2008 11:23
Aan: computer-go
Onderwerp: Re: [computer-go] RE: Ending games by two passes


The wrong assertion is: all stones left on the board when the game ends are 
considered alive.
The result has nothing to do with how many passes there are. White is dead when 
the game is finished. 




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Ending games by two passes

2008-10-24 Thread dave.devos
I'm glad we agree on this :)
 
Your previous response suggests that this issue has been debated before on this 
list, so I'll probably be able to find references about it.
I wouldn't want to restart a debate here about positional superko :)
 
Thanks,
 
Dave



Van: [EMAIL PROTECTED] namens Erik van der Werf
Verzonden: vr 24-10-2008 12:18
Aan: computer-go
Onderwerp: Re: [computer-go] Ending games by two passes



On Fri, Oct 24, 2008 at 11:47 AM,  [EMAIL PROTECTED] wrote:
 Please correct me if I'm wrong, but are you saying that white is alive with
 TT-rules (=Tromp-Taylor?) or other rulesets with positional superko if black
 has not enough eyes left to fill as ko threats?

Yes.

 If that's true, I would be disgusted if positional superko would ever be
 accepted as a rule in human vs. human games.

Does it really matter if it is human vs. human games? Why have
different (inferior?) standards for computers anyway?

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Ending games by two passes

2008-10-24 Thread dave.devos
In my opinion the goal of a ko rule is to prevent games from not ending.
 
1: If one player can force a game to an end even when the other player aims at 
not ending the game, then the rule is good enough.
In my previous example I would consider it an undesired side effect of a ko 
rule that white would win the game. There would be no danger of the game not 
ending if the ko rule were less restrictive than positional superko. Black 
should be allowed to capture white.
 
2: In rare cases the only non-losing strategy for either player could be to aim 
for an everlasting game, like a triple ko. In that case an everlasting game is 
near-optimal play for both players. 
 
3: I guess a ko rule does not have to be so restrictive as to prevent 
everlasting games in general. If it can solve situation 2 while still allowing 
1, then that is good enough.
 
Is it mathematically impossible to construct a ko rule that allows 1 and avoids 
2? If not, I would prefer to keep 1 and leave 2 undefined.
 
I am no rules expert, but I cannot explain more clearly why I would be 
disgusted by positional superko. It is overly restrictive.
 
Dave



Van: [EMAIL PROTECTED] namens Robert Jasiek
Verzonden: vr 24-10-2008 14:10
Aan: computer-go
Onderwerp: Re: [computer-go] Ending games by two passes



[EMAIL PROTECTED] wrote:
 Is it correct to end games by 2 consecutive passes?

It is correct to end games according to the used rules. Different rules
use different numbers of passes, meanings of passes, or procedures
associated with passes. Some examples of numbers of passes in actually used
rulesets are:

2 passes
3 passes
2 passes + optionally once 2 passes
2 passes + optionally an arbitrary occurrence of yet 2 more passes
2 passes + 2 mandatory further passes
after the first, 2 passes
etc.

One cannot say in general which is better because this depends on one's
aims. E.g., if the aim is the shortest procedure on the rules level, then
one would choose 2 passes. E.g., if the aim is to always allow each
player a pass as a ko threat while a pass does relieve ko bans
sufficiently, then one might choose 3 passes. E.g., if the first pass
generates a conditional compensation and one wants to allow each player
the filling of 1-sided dame regardless, then one might choose after the
first, 2 passes. Etc.

 white is alive [under] rulesets with positional superko if black has not
  enough eyes left to fill as ko threats?
 If that's true, I would be disgusted if positional superko would ever be
  accepted as a rule in human vs. human games.

Why would you be disgusted?

The so-called 1-eye-flaw occurs in much less than 1 in 10,000,000 games
on the 19x19 board. In the entire history of go, it is reported to have
occurred exactly once on the 9x9 board. Why do you dislike rules that
enable something possible in theory but never occurring in practice?

What do you have against 1-eye-flaw staying on the board at the game
end? a) That it is a group with only 1 eye, b) that it is a group with
only 1 ko, or c) that there is a string with only 1 liberty?

Discussion of (a), (b), and (c):

All rulesets used by humans allow games to end with groups with only 1
liberty. Example:

# O # . # O . O # O
# O # . # O . O # O
# O # . # O . O # O
# O # . # O . O # O
. O # . # O . O # .

This example shows two stable anti-sekis. By symmetry, it would be
superfluous to prolong the game to dissolve either.

If you are disgusted by 1-eye-flaw, then you should be even more
disgusted by anti-sekis. I.e., you are disgusted by all rulesets
currently used by humans.

Strings with 1 liberty at the game end can also occur in hane-sekis,
double ko sekis, quadruple kos, etc.

Maybe you would want human rules to be changed by a so-called greedy rule,
like "A player may not pass if there is at least one string with exactly
1 liberty on the board." Such a rule would dissolve all those disgusting
things. One can even be more brutal in rules design, like dissolving all
those disgusting ordinary sekis, too. :)

If you want to criticise positional superko, then state your first-order
aims! Which are they? "I hate 1-eye-flaw"? Why should one particular
shape be all-important while we do not know some 100^500 other
shapes yet? List them all, and then tell us what makes 1-eye-flaw so
special :)

More importantly, why are you worried about a shape at all? Shapes are
the consequences of move-sequences and strategic decisions, see
http://home.snafu.de/jasiek/j2003.html
for a basis with which I defined "eye" formally. Write down your
disgusting rules with such a design to enable yourself to define
particular shapes in the first place so that you won't overlook any of
your potentially hated disgusting shapes...

BTW, positional superko IS accepted in some human rulesets like Chinese,
Simplified Ing, or World Mind Sports Games 2008.

--
robert jasiek
___
computer-go mailing list
computer-go@computer-go.org

RE: [computer-go] Ending games by two passes

2008-10-24 Thread dave.devos
After reading up a bit on this issue, I didn't find a clear positive consensus 
in this list about a preferred ruleset for computer-go, human-computer-go, real 
life go and go servers.
(I did find a negative consensus about the current Japanese rules, though)
 
I'm curious if there exists a positive consensus about a particular ruleset 
from the point of view of a go software user who is also very familiar with 
real life go playing (so not from a mathematical or pedagogical point of view, 
but from the point of view of a go player playing on a server and/or playing 
against a bot). This user could prefer area counting or territory counting.
 
Specifically: could the current AGA rules be a serious competitor 
(http://www.usgo.org/resources/downloads/completerules.pdf)?
 
Dave
 


Van: [EMAIL PROTECTED] namens Erik van der Werf
Verzonden: vr 24-10-2008 12:18
Aan: computer-go
Onderwerp: Re: [computer-go] Ending games by two passes



On Fri, Oct 24, 2008 at 11:47 AM,  [EMAIL PROTECTED] wrote:
 Please correct me if I'm wrong, but are you saying that white is alive with
 TT-rules (=Tromp-Taylor?) or other rulesets with positional superko if black
 has not enough eyes left to fill as ko threats?

Yes.

 If that's true, I would be disgusted if positional superko would ever be
 accepted as a rule in human vs. human games.

Does it really matter if it is human vs. human games? Why have
different (inferior?) standards for computers anyway?

Erik
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Analysis of 6x6 Go

2008-09-29 Thread dave.devos
I (EGF 4d) am probably not strong enough to give well-founded comments on 9x9 
games, but already move 2 at D3 seems strange from a shape point of view 
(whatever that may be worth on 9x9).
The continuation B C3 B4 D5 seems the most natural continuation once D3 is 
played, but on 19x19 this kind of exchange is usually bad for white (he gets 
a hane on the head of two stones).
Black's last move at D5 would definitely be better than D2 on 19x19 and I would 
be very surprised if D2 were better on 9x9.
 
I'm speculating that Leela's tendency to answer B C4 at D3 is the cause of the 
discrepancy between the 2.0 komi from Leela and the 4.0 komi from Erik. 
Might W D3 be 2 points worse than the optimal white move (unknown to me)?
 
Is there any support for W D3 being good from professional 9x9 games? I've 
never seen it in professional play, but I'm not a specialist on 9x9.
 
Dave



Van: [EMAIL PROTECTED] namens Don Dailey
Verzonden: do 25-9-2008 22:14
Aan: computer-go
Onderwerp: Re: [computer-go] Analysis of 6x6 Go



On Wed, 2008-09-24 at 19:48 +0200, Erik van der Werf wrote:
 On Wed, Sep 24, 2008 at 6:30 PM, Don Dailey [EMAIL PROTECTED] wrote:
  I don't know if even size boards are special, but it seems to me that
  such small boards should have very high komi's.   4.0 seems pretty low
  but then I'm really no expert on komi's and I'm a pretty weak player so
  I'm not in any position to really say.

 The center is the best opening move for all small odd size boards.
 Small even size boards have a lower komi because there is no center
 point.

 I'm quite confident that 4.0 is the correct komi for 6x6.

I am playing games with Leela at 5 minutes per side on a loaded core 2
duo computer. 

From the evidence I have now, which I admit is not enough to base a
solid conclusion on,  it looks like 2.0 is the correct komi.

When I set komi to 1.5,  black has won 10 out of 10 games.

When I set komi to 2.5, black only wins 16.667%, or 2 out of 12 games. 

When I did the 7x7 study over a year ago (or maybe 2) I noticed that at
reasonably strong levels it tended to be very one-sided in one direction
or the other based on how you set komi. 

My plan is to run a LOT of games at 2.5 komi and then analyze the
results based on the move sequences looking to see if some common early
black blunder is preventing wins for black at 2.5 komi.  

When I do this I will try to reorient the move sequence to some
canonical representation so that we are not looking at too many
equivalent games with different orientations. 
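Reorienting to a canonical representation can be done by taking the
minimum over the 8 board symmetries; a small sketch (0-based
coordinates, names illustrative):

```python
SIZE = 6  # board size used in this study

# The 8 symmetries of a square board, as coordinate maps.
SYMS = [
    lambda x, y: (x, y),
    lambda x, y: (SIZE - 1 - x, y),
    lambda x, y: (x, SIZE - 1 - y),
    lambda x, y: (SIZE - 1 - x, SIZE - 1 - y),
    lambda x, y: (y, x),
    lambda x, y: (SIZE - 1 - y, x),
    lambda x, y: (y, SIZE - 1 - x),
    lambda x, y: (SIZE - 1 - y, SIZE - 1 - x),
]

def canonical(moves):
    """Lexicographically smallest image of a move sequence under the 8
    board symmetries, so rotated/reflected games compare equal."""
    return min(tuple(s(x, y) for x, y in moves) for s in SYMS)

# A game and its transposition canonicalize to the same record:
assert canonical([(2, 3), (3, 2)]) == canonical([(3, 2), (2, 3)])
```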

Superficially, I noticed this:

  1 W C4 D3 C3 D4 D2 E2
  1 B C4 D3 C3 D4 D5 C2

Which means when black played D5 on move 5 he won, but when he played D2
he lost.

   1 B C4 D3 C3 D4 D2 E2 D5 E5 E3
   1 W C4 D3 C3 D4 D2 E2 D5 E5 E4

Same above - Black played 2 different moves and got two different
results.


The other games vary before this but could be transpositions of these
positions - I don't have the time right now to compute all the
transpositions to check this out.

I didn't actually look at those moves so I don't know if they are game
changing or not.   Are there any strong players willing to comment on
these 2 diversions?

The other possibility is that white is supposed to WIN all those games
and is making the occasional error.   The results indicate that is a
more likely possibility.


Here is the complete list of games up to the 9th move.  The first column
is the number of times this exact result/sequence was played.

  1 W D4 C3 C4 D3 B3 B2 D2 C2 E4
  1 W D4 C3 D3 C4 C5 B5 B3 C2 D6
  1 B C4 D3 C3 D4 D2 E2 D5 E5 E3
  1 W C4 D3 C3 D4 D2 E2 D5 E5 E4
  1 B C4 D3 C3 D4 D5 C2 E5 B2 D2
  1 W C4 D3 C3 D4 D5 E5 D2 E2 E4
  1 W C4 D3 D4 C3 B3 B2 E3 E2 D2
  1 W C3 D4 C4 D3 D2 C5 E2 B5 E4
  1 W C3 D4 C4 D3 D2 E2 D5 E5 E4
  2 W C3 D4 C4 D3 D5 E5 D2 E2 E4
  1 W C3 D4 D3 C4 B3 E3 E2 E4 B5
  1 W C3 D4 D3 C4 B4 B5 D5 B3 B2
  2 W C3 D4 D3 C4 B4 B5 E4 E5 D5




- Don





  1 W C4 D3 C3 D4 D2 E2
  1 B C4 D3 C3 D4 D5 C2



But for now, perhaps you stronger go players can look at the following 6
moves sequences that represent the games.   The first column is how many
times this exact result/sequence occurred.   For instance you see that
white won 3 times when the game started C3 D4 D3 C4 B4 B5

Does anyone see any obviously bad moves for black?

  1 W D4 C3 C4 D3 B3 B2
  1 W D4 C3 D3 C4 C5 B5
  1 B C4 D3 C3 D4 D2 E2
  1 W C4 D3 C3 D4 D2 E2
  1 B C4 D3 C3 D4 D5 C2
  1 W C4 D3 C3 D4 D5 E5
  1 W C4 D3 D4 C3 B3 B2
  1 W C3 D4 C4 D3 D2 C5
  1 W C3 D4 C4 D3 D2 E2
  1 W C3 D4 C4 D3 D5 E5
  1 W C3 D4 D3 C4 B3 E3
  3 W C3 D4 D3 C4 B4 B5

What I see that is slightly interesting (just from this data, not
looking at the actual position) is that  C4 D3 C3 D4 D2




 Erik
 ___
 computer-go mailing list
 computer-go@computer-go.org
 

RE: [computer-go] Analysis of 6x6 Go

2008-09-29 Thread dave.devos
Sorry, I just realized this is about 6x6 go. Please ignore my previous response.
 
Dave



Van: [EMAIL PROTECTED]
Verzonden: ma 29-9-2008 20:09
Aan: [EMAIL PROTECTED]; computer-go; computer-go
Onderwerp: RE: [computer-go] Analysis of 6x6 Go


I (EGF 4d) am probably not strong enough to give well founded comments on 9x9 
games, but already move 2 at D3 seems strange from a shape point of view 
(whatever that may be worth on 9x9)
The continuation B C3 B4 D5 seems the most natural continuation once D3 is 
played, but on 19x19 this is kind of exchange is usually bad for white (he gets 
a hane on the head of two stones).
Black's last move at D5 would definitely be better than D2 on 19x19 and I would 
be very surpised if D2 would be better on 9x9.
 
I'm speculating Leela's tendency to respond B C4 at D3 to be the cause of the 
discrepancy between the 2.0 komi from Leela and the 4.0 komi from Erik. 
Might W D3 be 2 points worse then the optimal white move (unknown to me)?
 
Is there any support for W D3 being good from professional 9x9 games? I've 
never seen it in professional play, but I'm not a specialist on 9x9.
 
Dave



Van: [EMAIL PROTECTED] namens Don Dailey
Verzonden: do 25-9-2008 22:14
Aan: computer-go
Onderwerp: Re: [computer-go] Analysis of 6x6 Go



On Wed, 2008-09-24 at 19:48 +0200, Erik van der Werf wrote:
 On Wed, Sep 24, 2008 at 6:30 PM, Don Dailey [EMAIL PROTECTED] wrote:
  I don't know if even size boards are special, but it seems to me that
  such small boards should have very high komi's.   4.0 seems pretty low
  but then I'm really no expert on komi's and I'm a pretty weak player so
  I'm not in any position to really say.

 The center is the best opening move for all small odd size boards.
 Small even size boards have a lower komi because there is no center
 point.

 I'm quite confident that 4.0 is the correct komi for 6x6.

I am playing games with Leela at 5 minutes per side on a loaded core 2
duo computer. 

From the evidence I have now, which I admit is not enough to base a
solid conclusion on,  it looks like 2.0 is the correct komi.

When I set komi to 1.5,  black has won 10 out of 10 games.

When I set komi to 2.5, black onl wins 16.667% or 2 out of 12 games. 

When I did the 7x7 study over a year ago (or maybe 2) I noticed that at
reasonably strong levels it tended to be very one sided in one direction
or other based on how you set komi. 

My plan is to run a LOT of games at 2.5 komi and then analyze the
results based on the move sequences looking to see if some common early
black blunder is preventing wins for black at 2.5 komi.  

When I do this I will try to reorient the move sequence to some
canonical representation so that we are not looking at too many
equivalent games with different orientations. 

Superficially, I noticed this:

  1 W C4 D3 C3 D4 D2 E2
  1 B C4 D3 C3 D4 D5 C2

Which means when black played D5 on move 5 he won, but when he played D2
he lost.

   1 B C4 D3 C3 D4 D2 E2 D5 E5 E3
   1 W C4 D3 C3 D4 D2 E2 D5 E5 E4

Same above - Black played 2 different moves and got two different
results.


The other games vary before this but could be transpositions of these
positions - I don't have the time right now to compute all the
transpositions to check this out.

I didn't actually look at those moves so I don't know if they are game
changing or not.   Are there any strong players willing to comment on
these 2 diversions?

The other possibility is that white is supposed to WIN all those games
and is making the occasional error.   The results indicate that is a
more likely possibility.


Here is the complete list of games up to the 9th move.  The first column
is the number of times this exact result/sequence was played.

  1 W D4 C3 C4 D3 B3 B2 D2 C2 E4
  1 W D4 C3 D3 C4 C5 B5 B3 C2 D6
  1 B C4 D3 C3 D4 D2 E2 D5 E5 E3
  1 W C4 D3 C3 D4 D2 E2 D5 E5 E4
  1 B C4 D3 C3 D4 D5 C2 E5 B2 D2
  1 W C4 D3 C3 D4 D5 E5 D2 E2 E4
  1 W C4 D3 D4 C3 B3 B2 E3 E2 D2
  1 W C3 D4 C4 D3 D2 C5 E2 B5 E4
  1 W C3 D4 C4 D3 D2 E2 D5 E5 E4
  2 W C3 D4 C4 D3 D5 E5 D2 E2 E4
  1 W C3 D4 D3 C4 B3 E3 E2 E4 B5
  1 W C3 D4 D3 C4 B4 B5 D5 B3 B2
  2 W C3 D4 D3 C4 B4 B5 E4 E5 D5




- Don





  1 W C4 D3 C3 D4 D2 E2
  1 B C4 D3 C3 D4 D5 C2



But for now, perhaps you stronger go players can look at the following 6
moves sequences that represent the games.   The first column is how many
times this exact result/sequence occurred.   For instance you see that
white won 3 times when the game started C3 D4 D3 C4 B4 B5

Does anyone see any obviously bad moves for black?

  1 W D4 C3 C4 D3 B3 B2
  1 W D4 C3 D3 C4 C5 B5
  1 B C4 D3 C3 D4 D2 E2
  1 W C4 D3 C3 D4 D2 E2
  1 B C4 D3 C3 D4 D5 C2
  1 W C4 D3 C3 D4 D5 E5
  1 W C4 D3 D4 C3 B3 B2
  1 W C3 D4 C4 D3 D2 C5
  1 W C3 D4 C4 D3 D2 E2
  1 W C3 D4 C4 D3 D5 E5
  1 W C3 D4 D3 C4 B3 E3
  3 W C3 D4 
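For what it's worth, the first-column counts above can be reproduced mechanically by tallying games on their opening prefix. A small sketch (the `games` list below is just a hand-typed sample of a few rows from the 9-move listing, not the full data):

```python
from collections import Counter

# A hand-typed sample of game records: (winner, move sequence).
games = [
    ("W", "D4 C3 C4 D3 B3 B2 D2 C2 E4"),
    ("B", "C4 D3 C3 D4 D2 E2 D5 E5 E3"),
    ("W", "C4 D3 C3 D4 D2 E2 D5 E5 E4"),
    ("W", "C3 D4 C4 D3 D5 E5 D2 E2 E4"),
]

# Tally results by the first six moves, as in the listing above.
tally = Counter()
for winner, moves in games:
    prefix = " ".join(moves.split()[:6])
    tally[(prefix, winner)] += 1

# Print in the same "count winner sequence" format.
for (prefix, winner), n in sorted(tally.items()):
    print(f"{n:3d} {winner} {prefix}")
```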

RE: Teaching Go (was Re: [computer-go] Re: Disputes under Japaneserules)

2008-09-18 Thread dave.devos
When I teach beginners, I use area scoring on 9x9 until they are advanced 
enough to understand territory scoring without disputes (which usually does not 
take very long).
 
Dave



From: [EMAIL PROTECTED] on behalf of Peter Drake
Sent: Thu 18-9-2008 6:14
To: computer-go
Subject: OT: Teaching Go (was Re: [computer-go] Re: Disputes under 
Japaneserules)



I'm inclined to agree, but it bothers me to have to explain life and 
death before scoring. Life and death thereby become part of the 
rules rather than an emergent consequence of the rules. I want to 
be able to give a tiny set of rules and then let players loose to 
discover things on their own.

I would probably simply use AGA rules, but just about all English 
introductory books (e.g., Learn to Play Go by Janice Kim and Jeong 
Soo-hyun) use the Japanese rules.

Peter Drake
http://www.lclark.edu/~drake/




On Sep 16, 2008, at 7:25 PM, Ross Werner wrote:

 Also, I think when teaching beginners Go, the "trust me, you lost 
 here even though you cannot understand it" approach is a gigantic 
 mistake no matter which ruleset you are using. Play it out, and 
 show the beginner exactly why those disputed stones are dead (or 
 alive). This is possible no matter what kind of scoring you use. If 
 you're using territory scoring, you will get the exact same 
 (relative) score unless one player passes multiple times, which 
 shouldn't happen in a play-out with a beginner who doesn't 
 understand what is going on.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Re: Disputes under Japanese rules

2008-09-16 Thread dave.devos
It seems to be more efficient for humans to count territory instead of area 
during the game.
I've heard that even Chinese professionals save time by estimating the score 
during the game by counting territory Japanese-style and correcting for stones 
captured (you have to remember captures, which is not that hard even for 
amateur dan players). 
Then, late in the endgame, it will become clear whether there will be an odd 
number of dame, which gives the player who fills both the first and the last 
dame a small gain compared to Japanese scoring.
 
Dave

 


From: [EMAIL PROTECTED] on behalf of Don Dailey
Sent: Tue 16-9-2008 19:47
To: computer-go
Subject: RE: [computer-go] Re: Disputes under Japanese rules



I bet with practice and using Chinese scoring, you could very rapidly
calculate the score without touching the board.

In fact, if I were trying to become a dan-level player I would think
that under Chinese scoring I would want to be able to quickly sum the board
like this.   In really close games I would want to know that winning some
small group would either do the job, or not do the job and I should
concentrate elsewhere.  

But I am told that good players don't think like that,  they just grab
at everything. 

- Don



On Tue, 2008-09-16 at 10:06 -0700, David Fotland wrote:
 I was speaking of how people count, not computers.  Chinese players count by
 taking all the stones off the board and putting them in piles of ten.

 I've done (and seen) point by point counting on a real board, and it is
 really hard to get a correct result.  You have to count at least twice to
 verify, and usually 3 or 4 times to get two counts that are the same.  So no
 one does it this way.

 Clearly Chinese counting is easier for computers, but Japanese counting
 seems easier to most people.

 David

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:computer-go-
  [EMAIL PROTECTED] On Behalf Of Robert Jasiek
  Sent: Tuesday, September 16, 2008 9:56 AM
  To: computer-go
  Subject: Re: [computer-go] Re: Disputes under Japanese rules
 
  David Fotland wrote:
Japanese rules' [...] the actual counting [...] The position is
  preserved
 
  Japanese counting destroys the position by
  - removal of dead stones
  - filling in of (most) prisoners
  - rearrangements of stones
  - rearrangements of borders
  - border stone colour changes
 
  After the removal of dead stones, these counting methods do NOT destroy
  the position:
  - point by point counting
  - point by point half counting
  - some algorithmic virtual counting like flood-filling
 
  --
  robert jasiek
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Kaori-Crazystone

2008-09-04 Thread dave.devos
I'm not surprised that the data for games with 90% winning chances is lacking.
The McMahon pairing system is widely used in western go tournaments to prevent 
mismatched games, 
because most players don't like mismatched games (either as the stronger or the 
weaker player).
 
Rating systems are relatively new in the go world and they don't seem very 
popular (or even known) in the Far East, where 99% of the go population lives.
So I guess go rating systems are still quite immature:
 
AGA, KGS and EGF all have different conversion functions to map winning 
probability to dan and kyu ranks.
These conversions are probably based on different statistical data sources, 
and probably also on other considerations like calculation convenience and 
the differing theoretical preferences of their inventors.
 
Dave
 
 
 



From: [EMAIL PROTECTED] on behalf of Don Dailey
Sent: Thu 4-9-2008 20:55
To: computer-go
Subject: Re: [computer-go] Kaori-Crazystone



Here is something interesting from this page:

Note how different the expectations of each system are regarding even
games between players of unequal strength. If you can win 90% of even
games against a 2 kyu player, the AGA believes you are 1.33 ranks
higher, the EGF believes you are 2.42 ranks higher, and the IGS believes
you are 2.80 ranks higher. The lack of agreement stems from a tradition
of playing handicap games between players of different ranks, so there
is a lack of data regarding non-handicap games between mismatched
opponents.

- Don



On Thu, 2008-09-04 at 11:02 -0700, terry mcintyre wrote:
 This page http://en.wikipedia.org/wiki/Go_ranks_and_ratings gives a table of 
 win probabilities versus rank differences.

 I haven't yet found such a table for handicap games.

  Terry McIntyre [EMAIL PROTECTED]


 Go is very hard. The more I learn about it, the less I know. -Jie Li, 9 dan


  
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Go rating math information

2008-01-31 Thread dave.devos
There seems to be a discrepancy: 11.5% between 2d and 5d in the EGF rating 
system versus 2.0% between 2d and 5d in the KGS rating system.
I think this can be explained by hidden biases in the EGF statistics:
 
1: Being a 5d in real tournaments in Europe does not mean your rating is 
between 2450 and 2550 EGF. Ranks are self-proclaimed in most European 
countries. Many 5d players in fact have a rating below 2450, but they are 
registered as 5d in the EGF system, because that is the rank they claim to 
have. Even in the Netherlands, where dan ranks are awarded and not 
self-claimed, it still is highly unusual for a player who has been awarded a 
5d rank to be demoted to 4d when his rating drops below 2450. KGS ranks, by 
contrast, are guaranteed to be in the rating range for the rank. So the rating 
spread of the 5d rank is much wider in the EGF system than in the KGS system. 
This results in a bias towards larger-than-expected upset percentages.
 
2: EGF data originates mostly from tournaments using the McMahon pairing 
system. An average 2d will rarely play against an average 5d in McMahon-paired 
tournaments. The fact that a 2d is playing a 5d at all implies that a strong 
2d is playing a weak 5d. This too results in a bias towards 
larger-than-expected upset percentages.
  
So if you are looking for the odds that an average 2d wins against an average 
5d, the KGS statistics are more reliable than the EGF statistics. 
And even KGS seems to overestimate the upset odds in the high-dan region. As a 
result, strong players get highly exaggerated ranks on KGS. There are many 
examples of players who are 7d EGF in real life but 11d on KGS (from the KGS 
rating graph; 9d is the maximum rating-related rank awarded on KGS). These 
real-life 7d EGF players hardly ever lose an even game against real-life 
4d-6d EGF players (just as in real life), and as a result their rank rises 
sky-high on KGS.
 
Dave de Vos 



From: [EMAIL PROTECTED] on behalf of Andy
Sent: Thu 31-1-2008 18:31
To: computer-go
Subject: [computer-go] Go rating math information


There were some questions about the effective Elo difference between two 
players 3 ranks apart.  Here are some links to information about go rating 
formulas, and some statistics:

http://senseis.xmp.net/?KGSRatingMath
http://gemma.ujf.cas.cz/~cieply/GO/gor.html
http://gemma.ujf.cas.cz/~cieply/GO/statev.html

Both KGS and EGF scale the k factor according to the player's strength.  For 
weaker players the probability of an upset is greater.

According to KGS formula:
8k vs 5k: 7.2% chance of upset (~440 Elo)
2d vs 5d: 2.0% chance of upset (~676 Elo)

According to EGF even game statistics:
Generally for weaker kyu players the chance of upset is around 30-40% (~80-~150 
ELO), for stronger players it goes down:
2d vs 5d: 11.5% chance of upset (~350 Elo)
3d vs 6d:  7.8% chance of upset (~432 Elo)
4d vs 7d:  3.3% chance of upset (~590 Elo)
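The Elo gaps quoted alongside these upset percentages follow from the standard logistic rating model; a quick sketch to check the conversion (the helper name `elo_gap` is mine):

```python
import math

def elo_gap(p_upset: float) -> float:
    """Elo difference at which the weaker player wins with
    probability p_upset, under the logistic model
    P(favourite wins) = 1 / (1 + 10 ** (-d / 400))."""
    return 400 * math.log10((1 - p_upset) / p_upset)

# Reproduces the figures quoted above (within rounding):
print(elo_gap(0.020))   # KGS 2d vs 5d: ~676
print(elo_gap(0.115))   # EGF 2d vs 5d: ~354
print(elo_gap(0.033))   # EGF 4d vs 7d: ~587
```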



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Go rating math information

2008-01-31 Thread dave.devos
The other way around happens too: in 2006 I had a 4 month pause on KGS and my 
rank dropped from 4d to 4k.



From: [EMAIL PROTECTED] on behalf of Jason House
Sent: Thu 31-1-2008 20:33
To: computer-go
Subject: Re: [computer-go] Go rating math information




On Jan 31, 2008 2:20 PM, Don Dailey [EMAIL PROTECTED] wrote:


So if I get rated on KGS all I have to do is stop playing and my rank
will shoot up a few ranks?


It's a pretty common phenomenon on KGS...  I've seen it happen many times

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] update on networking from phils with new CGOSconfiguration

2008-01-05 Thread dave.devos
It might be possible to automatically compensate for lag by looking up the 
geographic location of a bot's ISP. For instance via 
http://www.hostip.info/use.html .
 
Dave



From: [EMAIL PROTECTED] on behalf of Peter Christopher
Sent: Sat 5-1-2008 12:25
To: computer-go@computer-go.org
Subject: [computer-go] update on networking from phils with new 
CGOSconfiguration



Strangely enough, today the network from the Philippines to boardspace.net
is so bad that even Don's new configuration doesn't make up for the
additional network lag (though I think it alleviates the bot's pain).

With .2 seconds of thinking time, time_remaining still decreases by
an average of 1 sec per move.

I did run this same bot from a friend's server at the same
configuration, .2 sec/move, and the bot always still has 300 seconds
remaining after every move [a .7 s/move bot ALSO always still had 300 seconds],
so this means that Don's change works fine in CGOS in general; the
problem is the network from the Philippines.

Strangely, my ping from the Philippines to boardspace.net at the time
of this testing is 700ms.  BUT my ping from the Philippines to MIT.edu
is only 350ms.  So, I thought, "Maybe boardspace.net has a network
problem."  I tested from the server in the US.  Ping to boardspace.net:
20ms.  Ping to mit.edu: 12ms.

I definitely cannot understand those results.

For the present time, it appears I may have to throw in the towel on
getting dependable results running the bots from my laptop in the
philippines.

Raw output of ping  cgos script follows.

Peter

ps I did notice Fat1800 resign a lost game against my bot from my
laptop in the phils, thanks ;-)  Now, if my bot could only achieve
winning positions against Fat1800 on a regular basis ;)

-PING FROM PHILIPPINES 

0 [EMAIL PROTECTED] ping mit.edu
PING mit.edu (18.7.22.69) 56(84) bytes of data.
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=1 ttl=238 time=354 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=2 ttl=238 time=354 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=3 ttl=238 time=363 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=4 ttl=238 time=371 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=5 ttl=238 time=346 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=6 ttl=238 time=355 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=7 ttl=238 time=330 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=8 ttl=238 time=338 ms

--- mit.edu ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7086ms
rtt min/avg/max/mdev = 330.218/351.896/371.914/12.419 ms

0 [EMAIL PROTECTED] ping  208.100.19.102 [boardspace.net]
PING 208.100.19.102 (208.100.19.102) 56(84) bytes of data.
64 bytes from 208.100.19.102: icmp_seq=1 ttl=49 time=615 ms
64 bytes from 208.100.19.102: icmp_seq=2 ttl=49 time=668 ms
64 bytes from 208.100.19.102: icmp_seq=3 ttl=49 time=761 ms
64 bytes from 208.100.19.102: icmp_seq=4 ttl=49 time=651 ms
64 bytes from 208.100.19.102: icmp_seq=5 ttl=49 time=740 ms

--- 208.100.19.102 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4026ms
rtt min/avg/max/mdev = 615.330/687.537/761.351/55.010 ms



--PING FROM AMAZON EC2 --



127 [EMAIL PROTECTED] ping boardspace.net
PING boardspace.net (208.100.19.102) 56(84) bytes of data.
64 bytes from boardspace.net (208.100.19.102): icmp_seq=1 ttl=50 time=20.6 ms
64 bytes from boardspace.net (208.100.19.102): icmp_seq=2 ttl=50 time=20.6 ms
64 bytes from boardspace.net (208.100.19.102): icmp_seq=3 ttl=50 time=20.7 ms
64 bytes from boardspace.net (208.100.19.102): icmp_seq=4 ttl=50 time=20.9 ms
64 bytes from boardspace.net (208.100.19.102): icmp_seq=5 ttl=50 time=20.9 ms

--- boardspace.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 20.622/20.782/20.960/0.228 ms

0 [EMAIL PROTECTED] ping mit.edu
PING mit.edu (18.7.22.69) 56(84) bytes of data.
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=1 ttl=239 time=12.1 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=2 ttl=239 time=12.4 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=3 ttl=239 time=12.4 ms
64 bytes from WEB.MIT.EDU (18.7.22.69): icmp_seq=4 ttl=239 time=12.9 ms

--- mit.edu ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2997ms
rtt min/avg/max/mdev = 12.171/12.511/12.929/0.292 ms





-

Output from the Philippines of the .2 sec/move bot with the cgos script

18:51:55S-C genmove b 30
18:51:55C-E time_left b 300 0
18:51:55E-C =
18:51:55C-E genmove b
18:51:55E-C = G2
18:51:55C-S G2
18:52:10S-C info Estimated time until next round: 09:56
18:52:10Estimated time until 

RE: [computer-go] more network delay specifics

2008-01-04 Thread dave.devos
It might be possible to estimate lag by looking up the geographic location of a 
bot's ISP. For instance via http://www.hostip.info/use.html .
 
Dave
 



From: [EMAIL PROTECTED] on behalf of Peter Christopher
Sent: Fri 4-1-2008 5:27
To: computer-go@computer-go.org
Subject: [computer-go] more network delay specifics



steve wrote a few paragraphs:

 --- 208.100.19.102 ping statistics ---
 22 packets transmitted, 19 received, 13% packet loss, time 21134ms
 rtt min/avg/max/mdev = 327.380/352.887/425.192/25.698 ms

this is really pretty darn good, given the setup.

the primary delay inbetween japan and the US is the speed of light
delay inbetween the continents, plus ~5ms per switch delay once
you filter down that far.  they have an excellent infrastructure.

one can successfully (albeit somewhat painfully) write code over a 250+ms
link, so a bot should have no problem sending single go moves over a 350ms
link.  :)

s.

/steve

I calculate 50ms round trip from Japan/Manila to USA at the speed of
light via cables, so either the 250ms (Japan) or 350ms  (Manila) must
be via satellite or getting additional switch delay/packaging.

In any case, I am appending below a sample of the cgos output when my
bot is set at exactly .2 seconds thinking per move.  As you can see,
even though ping is 350ms, the actual time deducted from my clock is
about .8 seconds per move.  If you calculate the implications of this
for a potential 200-move game, that's over half the game that I'd have
to account for as being spent in network lag.  I currently already
assume about 25% of the game is in network lag and feel ok about it
(the bot does ponder).

My first suggestion (for those bots that can easily resign) was to
resign, if you want good statistics on your bot's go-playing ability.
I also hear that some bots can't easily resign, so it's only a
partial solution.  I do like Don's suggestion to use Fischer rules,
and actually I haven't heard anyone at all provide any drawbacks to
his implementing it.
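To make the arithmetic concrete, here is a toy clock simulation (the function name `simulate_clock` and all parameter values are my assumptions, not CGOS settings). As long as a Fischer-style per-move increment covers thinking time plus lag, the clock never runs down, whereas under absolute time the same lag drains it steadily:

```python
def simulate_clock(moves, think, lag, base=300.0, increment=0.0):
    """Toy clock simulation: every move costs thinking time plus
    network lag; a Fischer-style increment (if any) is added back
    after each move.  Returns remaining time, or None on timeout."""
    t = base
    for _ in range(moves):
        t -= think + lag
        if t <= 0:
            return None            # flagged: lost on time
        t += increment
    return t

# Absolute time (the situation described above): a .2 s/move bot
# behind .8 s of lag loses 1 s per move, i.e. 200 s of a 300 s
# budget over a 200-move game.
print(simulate_clock(200, think=0.2, lag=0.8))                  # 100.0
# With a 2 s Fischer increment the same bot gains time instead.
print(simulate_clock(200, think=0.2, lag=0.8, increment=2.0))   # 500.0
```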

-Peter

11:33:56C-E time_left w 299 0
11:33:56E-C =
11:33:56C-E genmove w
11:33:57E-C = F5
11:33:57C-S F5
11:33:58S-C play b F6 297573
11:33:58C-E play b F6
11:33:58E-C =
11:33:58S-C genmove w 298613
11:33:58C-E time_left w 298 0
11:33:58E-C =
11:33:58C-E genmove w
11:33:59E-C = E8
11:33:59C-S E8
11:34:03S-C play b G8 294312
11:34:03C-E play b G8
11:34:03E-C =
11:34:03S-C genmove w 297898
11:34:03C-E time_left w 297 0
11:34:03E-C =
11:34:03C-E genmove w
11:34:03E-C = E6
11:34:03C-S E6
11:34:08S-C info Estimated time until next round: 09:46
11:34:08Estimated time until next round: 09:46
11:34:12S-C play b E4 285841
11:34:12C-E play b E4
11:34:12E-C =
11:34:13S-C genmove w 297243
11:34:13C-E time_left w 297 0
11:34:13E-C =
11:34:13C-E genmove w
11:34:13E-C = D5
11:34:13C-S D5
11:34:23S-C info Estimated time until next round: 09:32
11:34:23Estimated time until next round: 09:32
11:34:26S-C play b D6 272928
11:34:26C-E play b D6
11:34:26E-C =
11:34:27S-C genmove w 296479
11:34:27C-E time_left w 296 0
11:34:27E-C =
11:34:27C-E genmove w
11:34:27E-C = C5
11:34:27C-S C5
11:34:38S-C play b C3 263541
11:34:38C-E play b C3
11:34:38E-C =
11:34:38S-C genmove w 295053
11:34:38C-E time_left w 295 0
11:34:38E-C =
11:34:38C-E genmove w
11:34:38E-C = E7
11:34:38C-S E7
11:34:38S-C info Estimated time until next round: 09:18
11:34:38Estimated time until next round: 09:18
11:34:53S-C info Estimated time until next round: 09:03
11:34:53Estimated time until next round: 09:03
11:35:08S-C info Estimated time until next round: 08:48
11:35:08Estimated time until next round: 08:48
11:35:12S-C play b B5 230184
11:35:12C-E play b B5
11:35:12E-C =
11:35:13S-C genmove w 294273
11:35:13C-E time_left w 294 0
11:35:13E-C =
11:35:13C-E genmove w
11:35:13E-C = J3
11:35:13C-S J3
11:35:18S-C play b F4 225846
11:35:18C-E play b F4
11:35:18E-C =
11:35:18S-C genmove w 293434
11:35:18C-E time_left w 293 0
11:35:18E-C =
11:35:18C-E genmove w
11:35:19E-C = C4
11:35:19C-S C4
11:35:23S-C info Estimated time until next round: 08:34
11:35:23Estimated time until next round: 08:34
11:35:24S-C play b G5 220597
11:35:24C-E play b G5
11:35:24E-C =
11:35:25S-C genmove w 292780
11:35:25C-E time_left w 292 0
11:35:25E-C =
11:35:25C-E genmove w
11:35:25E-C = F7
11:35:25C-S F7
11:35:33S-C play b D4 213174
11:35:33C-E play b D4
11:35:33E-C =
11:35:33S-C genmove w 292003
11:35:33C-E time_left w 292 0
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/