Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Magnus Persson

Quoting Gunnar Farnebäck [EMAIL PROTECTED]:



             10k    100k   1M
GNU Go CVS   0.079  0.387  0.475


This position seems to fit the extra knowledge of Valkyria well, but
not perfectly:


           500   1k    10k   100k
Valkyria   0.76  0.76  0.64  0.64

It gets it right immediately, but I cannot understand why the winrate
drops with more simulations. Perhaps it gets it right for the wrong
reason initially?


Under normal time constraints it plays the single good move after 1.5
seconds, spending 3k simulations on it; in a single test run the second-best
move received just 46 simulations. The principal variation was also
correct, taking the captures in the right order.


-Magnus


Re: [computer-go] A thought about Bot-server communications

2007-12-13 Thread Jacques Basaldúa

Nick Wedd wrote:

In one of the British Championship Match games, a bit over ten years 
ago, Zhang Shutai made an illegal ko move against Matthew Macfadyen, and 
immediately conceded that he had lost the game.


Is the game record available? I am interested because I have only found 2
situations in real games:

a. Triple ko
b. Double ko when the group sharing the kos has nothing better than life in 
seki.

Both have cycles smaller than 8 ply and my software doesn't check longer cycles.
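
(As an aside: checking cycles of arbitrary length is cheap if the engine
keeps a history of whole-board Zobrist hashes. A minimal sketch, with
hypothetical names rather than anyone's actual code:)

  // Minimal sketch of full-history repetition detection via Zobrist
  // hashing (hypothetical names, not the code of any program in this
  // thread). Catches triple ko, double ko in seki, and longer cycles.
  #include <cstdint>
  #include <unordered_set>

  struct PositionHistory {
      std::unordered_set<uint64_t> seen;  // hashes of all earlier positions

      // True if 'hash' repeats an earlier whole-board position.
      bool repeats(uint64_t hash) const { return seen.count(hash) != 0; }
      void record(uint64_t hash) { seen.insert(hash); }
  };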

I guess any human player would recognize these situations. So if a strong
player didn't, it must have been something more complicated.


Jacques.



Re: [computer-go] A thought about Bot-server communications

2007-12-13 Thread Vlad Dumitrescu
On Dec 13, 2007 12:17 PM, Jacques Basaldúa [EMAIL PROTECTED] wrote:
 [...]
 I guess any human player would recognize these situations. So if a strong
 player didn't, it must have been something more complicated.

It doesn't have to be. It can simply be that he recaptured the ko directly.

I remember (but can't find it) that a Japanese 9-dan pro made the same
mistake relatively recently.

regards,
Vlad


Re: [computer-go] Lisp time

2007-12-13 Thread Joachim Pimiskern
Hi,

 It seems to me like in some ways Scheme is less feature-bloated than
 Common Lisp.

In my DOS version of AUGOS, I embedded a small Lisp interpreter (Inflisp)
into the Pascal code. The Lisp files, which make up the inference engine,
are included in the runtime version downloadable from my web page.

Currently I'm writing Augos 7 in C#. This language has many of
the advantages of Lisp, for example polymorphism and garbage collection,
so Lisp isn't needed any more.

For Delphi programmers, I made the source code of Inflisp available at

http://www.augos.com/software/inflisp.html

Regards,
Joachim





Re: [computer-go] Non-global UCT

2007-12-13 Thread Jason House
On Dec 12, 2007 10:19 PM, David Fotland [EMAIL PROTECTED] wrote:

  Many Faces' life and death search is best first and probability based,
 but I don't use UCT to select moves.  I select the move that has the highest
 probability of changing the value of the root (from success to fail or vice
 versa).  I don't use MC to evaluate the endpoints.  I look forward down one
 line until the result is clear (alive or dead), but I follow the best move
 from the move generator, not random moves.


Out of curiosity, how does that compare to proof-number search?  Proof number
uses a heuristic like "for this outcome to occur, N child nodes must have
outcome X".  It picks the path with the smallest N.
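
(For reference, the basic proof-number bookkeeping looks like this - a
textbook sketch, not Many Faces' or any other program's code:)

  // Textbook proof-number updates (a reference sketch only). At an OR
  // node the prover needs just one child to succeed; at an AND node it
  // needs all of them, which is where the "N child nodes" count enters.
  #include <algorithm>
  #include <climits>
  #include <vector>

  struct PnNode {
      bool or_node;                  // true if the proving side moves here
      std::vector<PnNode*> children;
      int pn = 1, dn = 1;            // proof/disproof numbers (leaf init)
  };

  void back_up(PnNode* n) {
      if (n->children.empty()) return;
      int min_pn = INT_MAX, sum_pn = 0, min_dn = INT_MAX, sum_dn = 0;
      for (PnNode* c : n->children) {
          min_pn = std::min(min_pn, c->pn);  sum_pn += c->pn;
          min_dn = std::min(min_dn, c->dn);  sum_dn += c->dn;
      }
      if (n->or_node) { n->pn = min_pn; n->dn = sum_dn; }  // one child proves
      else            { n->pn = sum_pn; n->dn = min_dn; }  // all must be proven
  }

  // Descent rule: at an OR node follow the child with the smallest pn,
  // at an AND node the child with the smallest dn - "the smallest N".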

Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Łukasz Lew
This is an artifact of using the mercy rule.
You can change it in config.cpp:
use_mercy_rule = true

Should I make it default?
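
(For readers who haven't looked at the source: a mercy rule cuts a playout
short once one side has captured overwhelmingly more stones, scoring the
playout as a win for that side. A rough sketch of the idea, with a made-up
threshold - illustrative only, not libego's actual code:)

  // Sketch of a mercy rule for MC playouts: stop early when the capture
  // difference is hopeless. It makes playouts faster but can distort
  // winrates in positions like Heikki's - the artifact discussed here.
  const int kMercyThreshold = 25;   // illustrative value, not libego's

  // Returns +1 (score as black win), -1 (white win), 0 (keep playing).
  int mercy_check(int black_captures, int white_captures) {
      if (black_captures - white_captures >= kMercyThreshold) return +1;
      if (white_captures - black_captures >= kMercyThreshold) return -1;
      return 0;
  }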

Thanks,
Lukasz

On Dec 10, 2007 11:41 PM, Heikki Levanto [EMAIL PROTECTED] wrote:
 On Mon, Dec 10, 2007 at 04:08:48PM -0500, Don Dailey wrote:
  Would you rather be 95% confident of a win or 90% confident?  There is
  only 1 correct answer to that question.

 Yes, if you can offer me reliable confidence numbers. We all (should) know
 that MC evaluations suffer from systematic problems that cannot just be
 averaged away statistically.

 Compare these two positions:

 playout_benchmark 10000
 = Initial board:
 komi 7.5
A B C D E F G H J
  9 . . . . . O O O O 9
  8 O O O O O O O O O 8
  7 O O O O O O O O O 7
  6 O O O O O O O O O 6
  5 # # # # # # # # # 5
  4 O O O # # # # # # 4
  3 O O O O . # # # # 3
  2 . O O O . # # # . 2
  1 # . O O . # # . # 1
A B C D E F G H J
 Performance:
   10000 playouts
   0.032002 seconds
   312.481 kpps
 Black wins = 1937
 White wins = 8063
 P(black win) = 0.1937


 playout_benchmark 10000
 = Initial board:
 komi 7.5
A B C D E F G H J
  9 . # . . . O O O O 9
  8 O O O O O O O O O 8
  7 O O O O O O O O O 7
  6 O O O O O O # # # 6
  5 # # # # # # # # # 5
  4 O O O # # # # # # 4
  3 O O O O . # # # # 3
  2 . O O O . # # # . 2
  1 . . O O . # # . # 1
A B C D E F G H J
 Performance:
   10000 playouts
   0.084006 seconds
   119.039 kpps
 Black wins = 7746
 White wins = 2254
 P(black win) = 0.7746


 Which one is better, 77% or 19%?


 I must admit I don't quite understand the results myself. But they are made
 with libego's MC, so I trust they show what traditional MC simulation would
 see. The top group is obviously alive but can die in an MC simulation. The
 bottom group is obviously unsettled, so whoever plays there first should
 win the game. I suppose in some follow-ups white may be able to live in the
 resulting space, and win the game that way too.

 This has nothing to do with the argument about attacking the larger group
 first - rather the contrary. But it shows that MC evaluations can give
 results so badly off that there is not always much point in distinguishing
 between your 80% and 85% confidence.

 Regards

Heikki

 --
 Heikki Levanto   In Murphy We Turst heikki (at) lsd (dot) dk




Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Jason House
On Dec 13, 2007 2:03 AM, Harald Korneliussen [EMAIL PROTECTED] wrote:

 Wed, 12 Dec 2007 07:14:48 -0800 (PST) terry mcintyre wrote:

 Heading back to the central idea, of tuning the predicted winning
 rates and evaluations: it might be useful to examine lost games, look
 for divergence between expectations and reality, repair the predictor,
 and test the new predictor against a large database of such blunders.

 Sounds a little like Temporal Difference Learning to me. I understand
 both MoGo and Crazy Stone use patterns; does anyone know whether they use
 such machine learning techniques to assign weights to them?


MoGo uses TD to predict win rates.  I haven't heard of any other methods to
predict winning rates.  I have seen some successful stuff with predicting
the next move.

[computer-go] MC evaluation and go knowledge

2007-12-13 Thread Magnus Persson
I just want to make some comments about MC evaluation to remove some  
common misunderstandings.


I have seen some complaints about misevaluation, such as a program
having a 65% chance of winning in a game which is lost, and the other
way around. For example, arguments have been proposed along the lines
of "since evaluation is so bad, there has to be a better way of doing it".


I just want to point out one thing: any winrate except 0% and 100% is
wrong assuming perfect play. 1% or 99% (or anything in between) means
that the program is not sophisticated enough to either a) always play
a winning move in the simulations or b) search deep enough to solve
the game, proving a win/loss.


*BUT*

Having an incorrect evaluation is unimportant, as long as the program  
plays the best move. What really matters is which move has the highest  
winrate relative to all other candidate moves.


MC programs with no knowledge except avoiding eye-filling often
evaluate all positions as being very close to 50%. As soon as one adds
appropriate knowledge, there is a much larger range between the best
and worst move on the board. Normally this range is correlated with
the strength of the program. (Be aware that buggy programs might have
even larger ranges, though.)


Also, with UCT the winrate at the root has little to do with any
objective probability of the program winning. If one looks at the
principal variation at the end of the game, one will notice a large
difference in winrate at each depth. The winrate at the root changes
very slowly even if it is 0% or 100% at the leaves, but still the
relative ordering of the moves at the root is often correct.


*FINALLY*

The use of MC-eval and UCT are *completely orthogonal* to using go  
knowledge in the program. You can add any kind of knowledge at all  
stages to the basic search algorithm and possibly benefit from it.


The problem is engineering. If your implementation of the knowledge is
too slow, the program gets weaker with fixed time limits. The knowledge
you added might also make the program weaker for a number of reasons.


Arguments such as "I saw program X misplay situation Y and therefore
MC-eval is flawed" are just plain wrong. They just mean that the
specific program X has a flaw, nothing else.


What one can argue is: "I wrote a program with a nice new method for
evaluation and a new approach to searching that plays situation Y
correctly, and it also happens to beat program X all the time using
the same hardware." Until I see that argument, I will continue to
believe that methods similar to MC-eval and UCT-search are the future
of computer go.


-Magnus


Re: [computer-go] MC evaluation and go knowledge

2007-12-13 Thread Don Dailey
This was right on the mark!   It exposed a lot of misconceptions and
wrong thinking about MC and evaluation.

- Don




Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Don Dailey


steve uurtamo wrote:
 Currently there is no evidence whatsoever that probability estimates
 are inferior and they are the ones playing the best GO right now

 are they?

Yes - in both 9x9 and 19x19 go.
Yes - in both 9x9 and 19x19 go.

- Don




Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread steve uurtamo
 Currently there is no evidence whatsoever that probability estimates are
 inferior and they are the ones playing the best GO right now

are they?

s.



  



Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Eric Boesch
On 12/11/07, Mark Boon [EMAIL PROTECTED] wrote:
 Question: how do MC programs perform with a long ladder on the board?

 My understandig of MC is limited but thinking about it, a crucial
 long ladder would automatically make the chances of any playout
 winning 50-50, regardless of the actual outcome of the ladder.

No, 50/50 would make too much sense. It might be higher or it might be
lower, depending on whose move it is in the ladder and what style of
playouts you use, but exactly 50/50 would be like flipping a coin and
having it land on its edge. In test cases, MC-UCT evaluations tend to
cluster near 50/50 in any case, because MC-UCT, especially dumb
uniform MC-UCT, tends to be conservative about predicting the winner,
especially in 19x19 where the opportunities for luck to overwhelm the
actual advantage on the board are greater. But if you accept this as
just a moot scaling issue -- that a clearly lopsided exchange can mean
just a 2% increment in winning percentage even if read more or less
correctly -- then the numbers may not look so even after all. It's
certainly possible for MC-UCT to climb a broken ladder in a winning
position (and climbing a broken ladder in an even position is at least
half as bad as that anyhow).

I tried testing this on 19x19 using libego at 1 million playouts per
move. The behavior was not consistent, but the numbers trended in the
defender's favor as the sides played out the ladder. In one bizarre
case, the attacker played out the ladder until there were just 17
plies left, and then backed off.

Why would the attacker give up a winning ladder? It appears the MC-UCT
was never actually reading the ladder to begin with; just four or five
plies in, sometimes just a few thousand simulations were still
following the key line. 1 million playouts were not nearly enough for
that in this case; maybe 100 million would be enough, but I couldn't
test that. Also, after enough simulations, decisively inferior moves
lead to fewer losses than slightly inferior ones. Suppose you have
three moves available: one wins 75% of the time, one 50%, and one 25%.
In the long run, the 75% move will be simulated almost all the time,
but the middle move will be simulated roughly four times as often as
the 25% one, which, compared to the best move available, is twice as
bad; four times the simulations at half the excess loss per simulation
adds up to twice the total excess losses. That is
apropos here, because giving up on an open-field ladder once it has
been played out for a dozen moves is much more painful for the
defender than for the attacker. The longer the ladder got, the more
the evaluations trended in the defender's favor, and my best
explanation would be the fact that -- until you actually read the
ladder all the way out and find that the defender is dead -- every
move except pulling out of atari is so obviously bad that even uniform
MC-UCT did a better job of focusing on that one good move.
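
(Eric's four-to-one arithmetic is easy to reproduce with a toy bandit. A
minimal UCB1 sketch - illustrative constants, not any program's actual
selection rule:)

  // Toy 3-armed bandit with plain UCB1 selection, checking the 75/50/25
  // arithmetic above. Expect the 50% arm to be visited roughly 4x as
  // often as the 25% arm, per the regret bound ~ ln(N) / gap^2.
  #include <cmath>
  #include <cstdio>
  #include <random>

  int main() {
      const double p[3] = {0.75, 0.50, 0.25};   // winrate of each move
      double wins[3] = {0, 0, 0};
      long   n[3]    = {1, 1, 1};               // one forced visit each
      std::mt19937 rng(42);
      std::uniform_real_distribution<double> u(0.0, 1.0);

      for (long t = 3; t < 1000000; ++t) {
          int best = 0;
          double best_score = -1;
          for (int i = 0; i < 3; ++i) {         // UCB1: mean + exploration
              double score = wins[i] / n[i]
                           + std::sqrt(2.0 * std::log((double)t) / n[i]);
              if (score > best_score) { best_score = score; best = i; }
          }
          wins[best] += (u(rng) < p[best]);     // simulate one playout
          ++n[best];
      }
      std::printf("visits: %ld %ld %ld\n", n[0], n[1], n[2]);
  }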

(Incidentally, the conservative nature of MC-UCT ratings largely
explains why maximizing winning probabilities alone is not a bad
strategy, at least in even games. The classic beginner mistake, when
you already have a clear lead in theory, is to fail to fight hard to
grab still more points as blunder insurance. But an MC-UCT evaluation
of 90% typically means a 90% probability of actually winning against
even opposition, not just a 90% likelihood of a theoretical win.
Assigning a 65% evaluation to an obvious THEORETICAL win allows plenty
of room to assign higher evaluations to even more lopsided advantages.
As Don said, when MC-UCT starts blatantly throwing away points for no
obvious reason, it's almost certainly because the game is REALLY over,
because MC-UCT's errors tend to be probabilistic instead of absolute
-- it may in effect evaluate a dead group as 75% alive, but it won't
call it 100% alive except in the rare cases when the underlying random
playout rules forbid the correct line of play.)


RE: [computer-go] Non-global UCT

2007-12-13 Thread David Fotland
It's quite different from PN.  PN expands a leaf node one ply and backs up
values to the root.  I play a line as many ply as needed until I get a
high-confidence evaluation of win or loss.  In this sense I am doing something
like UCT with non-random playouts.  PN typically doesn't use move-ordering
information.  I have a good move generator that sorts candidate moves by
probability of winning, so I always pick the highest-probability unevaluated
child to expand.

 

Say I want to do a life-and-death search on a group to kill it.  The move
generator suggests 5 moves.  Make the best one, and evaluate.  If the status
is still unclear, I call the generator for a move to live.  I pick the
best and make it, etc.  Say I go 8 ply deep and find that the group dies.

 

Now I have one line of play that works.   To pick the next node to expand I
want to try a node that can change the root from win to loss.  So I only have
to look at opponent moves.  I pick the move that has the highest probability
of changing the root to a loss (which can be at any depth), and play out
another line until I have a stable result.
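
(A structural sketch of that selection step - hypothetical types and names,
not Many Faces' code:)

  // Hypothetical sketch of the best-first re-expansion described above:
  // after one line has been read to a stable result, find the opponent
  // node anywhere in the tree with the highest estimated probability of
  // flipping the root's result, and extend a new line from there.
  #include <vector>

  struct LdNode {
      bool opponent_to_move;
      double flip_prob;              // move generator's estimate that
                                     // expanding here flips the root
      std::vector<LdNode> children;
  };

  LdNode* most_promising(LdNode& n) {
      LdNode* best = n.opponent_to_move ? &n : nullptr;
      for (LdNode& c : n.children) {
          LdNode* cand = most_promising(c);
          if (cand && (!best || cand->flip_prob > best->flip_prob))
              best = cand;
      }
      return best;
  }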

 

This models the way I read life and death problems.

 

David

 


Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Don Dailey

Eric,

Yes, as Magnus also stated, MC playouts don't really accurately
estimate the real winning probability, but they still get the move order
right most of the time.

The situation is that if the position is really a win, it doesn't mean
that an MC search is able to find the proof tree.   But it is easier
to find wins than losses, and so the score as expressed by a winning
percentage goes up - but not to 1.0.

I have also found that if Lazarus says 60%, it is going to win much
more than 60% against equal opposition.   Of course it's no surprise if
it beats weaker opposition from this point.

I thought of mapping these percentages to actual percentages by playing
a few thousand self-play games, but this is pretty much futile.  If
the score is, for instance, 65%, it will tend to grow higher and higher
depending on how long I let it think.   Unless it discovers a clever
defense, that is - so it's possible that it will start declining at a
deep level.

So these numbers are really meaningless as absolute figures and have
everything to do with the current context of the search.

- Don





Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Gian-Carlo Pascutto
Jason House wrote:

 MoGo uses TD to predict win rates.  

Really? Where did you get that information?

-- 
GCP


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey
Christoph,

Your bayeselo rating is 1942 on CGOS.  I compiled a table of all
players with 50 games or more; it can be found here:

  http://cgos.boardspace.net/9x9/hof2.html


- Don










Christoph Birk wrote:
 On Tue, 11 Dec 2007, Don Dailey wrote:
 Christoph,
 Let me know when you are finished,  what name you are playing under and
 I will do the bayeselo thing to get a better figure.

 I am playing using the 'tast-3k' account. Right now, I have 71 games
 and a rating of 1979 ELO.

  Also, I can
 throw out any games that were irregular if you can identify them,  such
 as if  a match started when you were not looking or your interface got
 glitchy or something.

 Since I added the GUI I lost no games due to software problems.
 Only a few won games were lost to human stupidity :-)

 I will take a break over the holidays, maybe playing a few more
 games in the new year, but I guess for my purposes a zero point of
    3k-AGA ~=~ 2000 CGOS-ELO
 is close enough. Unless we get some other (AGA or KGS) rated
 players, it does not make sense to get a more precise rating for the
 scale.

 Christoph


Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Mark Boon

Don,

This has taken me some time to formulate an answer, mainly because
you are making so many assumptions about what I understand or imagine
and what not. It makes for a confused discussion, and I didn't feel
like getting into arguments like "no, that's not what I meant", etc.


Let me therefore change the discussion a bit to see if this will make
things more clear. Consider a chess-playing program with an
unorthodox search method. When playing a human, after a while it
announces checkmate in thirty-four moves. Yet the human can clearly
see it's checkmate in four. "Ah," one could say, "but the computer is
100% confident of winning either way, so it doesn't care which one it
chooses. It doesn't matter whether a human thinks mate in four is
more beautiful."


Now it so happens that with chess we're pretty confident that when a
chess program announces checkmate, it is in fact correct. But
what if there could be a sliver of doubt? Maybe the program has no
doubt, but the programmer might. Bugs happen. Wouldn't you say it's
better to choose the shorter mate sequence? Regardless of whether
humans may find it more beautiful?


It's with this example in mind that I say winning with more points is
better. An MC Go program may show 100% confidence in winning, but all
that means is that all playouts ended up with a win. But we know that
it could still be wrong, and with a much higher likelihood than the
chess program described above.


Maybe you're right and a bigger win should only be favoured if the MC
playouts show the same probability of winning. That seems to me the
most conservative approach, and it should be followed at least.
Somehow I feel there's a bit more to be gained, though, by giving the
size of the win slightly more weight than that. But I'm also willing
to accept that that may just be an illusion.


Mark



Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey
I'm going to estimate that 100 ELO is roughly 1 rank based on this:

 http://en.wikipedia.org/wiki/Go_ranks_and_ratings

This may not hold for 9x9.  If a 1 kyu beats a 2 kyu about 64% of the
time in an even game at 19x19, it doesn't imply that he will do the
same at 9x9, but until I have a reason to believe differently I will
set this as our first guess in the formula.  We can always come back
and modify this later.

We may be able to borrow KGS data of well-established players playing
9x9 games against each other to estimate this. Would anyone like to
volunteer to do this?

So as it stands, our formula is:

   3 kyu = 1942 cgos_elo

   dan = (your_elo - 2142) / 100
   kyu = -dan + 1

   1 Dan = 2242 cgos
   1 Kyu = 2142 cgos
   2 Kyu = 2042 cgos
   3 Kyu = 1942 cgos
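
(The formula above as code - a direct transcription of this first-guess
scale, nothing more:)

  // Convert a CGOS Elo rating to the first-guess kyu/dan scale above
  // (3 kyu = 1942 CGOS, 100 Elo per rank).
  #include <cstdio>

  void cgos_to_rank(double elo) {
      double dan = (elo - 2142.0) / 100.0;
      if (dan >= 1.0) std::printf("%.0f CGOS ~ %.1f dan\n", elo, dan);
      else            std::printf("%.0f CGOS ~ %.1f kyu\n", elo, 1.0 - dan);
  }

  // cgos_to_rank(2621) prints "2621 CGOS ~ 4.8 dan" - the "almost 5 Dan"
  // figure for greenpeep0.5.1 mentioned below.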

The strongest bot on the CGOS all-time list seems to be greenpeep0.5.1
(http://cgos.boardspace.net/9x9/cross/greenpeep0.5.1.html) with a rating
of 2621.

This implies it is almost equal to a 5 dan player - which doesn't sound
right to me.   However, this could be a fluke since it is at the
extreme end of the scale.   It would be great if this same program
could play some strong humans at the equivalent time control on KGS at
9x9, and we could adjust the difference between ranks accordingly.

I suspect there is more than 100 ELO between ranks at 9x9.

- Don





Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Rémi Coulom

Don Dailey wrote:

We may be able to borrow KGS data of well established players playing
9x9 games against each other to estimate this. Would anyone like to
volunteer to do this?


Bill Shubert kindly provided this data to me. I am working on a study 
about rating systems for the game of Go. I expect I will make a 
preliminary version available in January.


Rémi


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey
It would be great if you would provide recommendations for a simple
conversion formula when you are ready, based on this study.   Also,
if you have any suggestions in general for CGOS ratings, the
cgos-developers would be willing to listen.

- Don





Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Hideki Kato
Hi Don,

There is not enough evidence to believe this.

Tast-3k has too few matches against each program (less than ten games)
and has no matches against the strongest programs, including Crazy
Stone, MoGo and greenpeep.  In addition, there seems to be some bias:
his winning rate against gnugo-3.7.10(s) is less than 50%, but is
100% against FatMan, while their ratings are almost the same, about
1800 ELO.
http://cgos.boardspace.net/9x9/cross/tast-3k.html

-Hideki

--
[EMAIL PROTECTED] (Kato)


Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Jason House
On Dec 13, 2007 11:39 AM, Gian-Carlo Pascutto [EMAIL PROTECTED] wrote:

 Jason House wrote:

  MoGo uses TD to predict win rates.

 Really? Where did you get that information?


I can't seem to load http://www.lri.fr/~gelly/MoGo.htm at the moment, but I
found it there.  One of the papers you can find from there is very heavy in
ML terminology.  It took me a very long time to work my way through it since
I knew nothing about ML.  The paper introduces RAVE and near the end talks
about using heuristics for initial parameter estimation.  The heuristic they
used was based on TD.
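
(The general idea of seeding UCT statistics from a heuristic, sketched with
made-up numbers - an illustration of prior initialization in general, not
MoGo's actual method:)

  // Illustration of heuristic prior initialization for a UCT node: seed
  // the node with n_prior fictitious visits at the heuristic's winrate,
  // so early selection follows the heuristic until real playouts
  // dominate. Made-up numbers, not MoGo's actual scheme.
  struct UctStats {
      double wins;
      int visits;
  };

  UctStats seeded_node(double heuristic_winrate, int n_prior = 50) {
      return UctStats{heuristic_winrate * n_prior, n_prior};
  }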

Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Don Dailey
Hi Mark,

It wasn't my intention to sound argumentative about this; I apologize
for that.

Yes, I agree that the shorter mate sequence should be chosen and also
that if all else is equal,  the bigger win should be the course to follow.

There is a misconception that MC favors winning by the smallest possible
margin.   This is NOT the case.   It seems to happen a lot, but only
because it is often the case that an MC program prefers to make a loss
absolutely impossible - even when you could probably win big with very
little risk (but some).   I'm not saying that you have this
misconception, but I know that many do.

So far the empirical evidence says that it's hard to improve on this by
considering the magnitude of the win.   Several of us have tried, for
one or both of these reasons:

   1.  Hopes that it makes the program a little better.

   2.  Some of us think this is silly behavior on the part of the
program.

I'm all for it if it makes the program stronger.  

One thing I tried is dynamically changing the komi once the program gets
confident.  This was an utter failure, and it's not hard to see
why: you are dynamically making it harder to win!

This would sometimes cause it to ignore the winning strategy, because you
just told it that a win is not enough.   This insight helped me
understand why it's a problem in general, no matter how you do it.

Whenever you tell it that it's better to win big than to win small,
you are actually giving it conflicting goals.  No matter how you look at
it, to some extent you have to tell the computer that a win is not good
enough.
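
(To make the mechanism concrete, here is the kind of adjustment being
described - hypothetical thresholds, and note this is the variant that
failed, shown only so the conflicting-goals problem is visible:)

  // Sketch of dynamic komi: once the root winrate is high, score the
  // playouts against a stiffer komi so the program must win by more.
  // Hypothetical thresholds; raising the bar literally tells the
  // program that a win is not enough, which is the failure described.
  double adjust_komi(double komi, double root_winrate) {
      if (root_winrate > 0.85) return komi + 2.0;  // demand a bigger margin
      if (root_winrate < 0.45) return komi - 2.0;  // back off when struggling
      return komi;
  }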

I don't think it's a silly idea, however.   An indirect goal COULD be
better if it's more implementable in some sense.   In computer chess
you can't tell a chess program that only checkmate matters; your
evaluation function must contain lots of intermediate goals that
substitute in some sense for the real goal.   Winning material is such
an intermediate goal.   It doesn't matter one bit whether you are down
a queen if you checkmate the opponent, but it sure makes it easier to
win if you have an extra queen.

Nevertheless, the simple fact is that this hasn't been made to work in MC
as of this time.   I'm all for the idea if someone can make it work.


- Don







Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Rémi Coulom

Don Dailey wrote:

It would be great if you would provide recommendations for a simple
conversion formula when you are ready based on this study.   Also,
if you have any suggestions in general for CGOS ratings the
cgos-developers would be willing to listen to your suggestions.

- Don
My suggestion would be to tell programmers to use a different login each
time they change version or hardware (most do that already), and use
bayeselo to rank the programs.


This would be best if combined with a mechanism to recognize that two 
logins are versions of the same program (for instance, if they use the 
same password), and avoid pairing them.


Regarding correspondence with human ranks, and handicap value, I cannot
tell yet. It is very clear to me that the Elo-rating model is very wrong 
for the game of Go, because strength is not one-dimensional, especially 
when mixing bots and humans. The best way to evaluate a bot in terms of 
human rating is to make it play against humans, on KGS for instance. 
Unfortunately, there is no 9x9 rating there. I will compute 9x9 ratings 
with the KGS data I have.


What I have observed with Crazy Stone is that gaining Elo points against 
humans is more difficult than gaining Elo points against GNU Go, which 
is more difficult than gaining Elo points against MC programs, which is 
more difficult than gaining Elo points against itself. But it is more an 
intuition than a scientific study.


Rémi


[computer-go] MC-UCT and tactical information

2007-12-13 Thread dhillismail
I'd like to start a more specific discussion about ways to combine tactical 
information with MC-UCT. Here's the scenario.

It's the bot's turn and, prior to starting any playouts, it runs a tactical 
analyzer (for want of a better name) that labels each string as unconditionally 
alive, unconditionally dead, conditionally alive, uncertain, or too strong to 
bother testing. For each conditionally alive string, it finds the move(s) to 
kill/rescue it. Each possible move has a critical score indicating the net 
number of stones whose life/death status would be changed for the better if the 
bot moved there. Moves with a non-zero critical score are critical moves.

I can only afford to run the tactical analyzer at the root node (if that!).

The tactical analyzer (also MC-UCT based) makes mistakes. There are several 
parameters that trade off reliability vs. speed. The parameters can be adjusted 
on the fly. For uncertain strings, there is a meaningful probability score 
available. Finding the right trade-offs to use in the tactical analyzer 
must be done in the context of how the information will be used. (Replacing the 
tactical analyzer with a better one would doubtless help.)

So how should I make use of this, and perhaps additional related information? 
Here are some things I have tried, or at least considered.

- SINS OF OMISSION
* If the highest critical score passes some threshold, force UCT to consider 
only moves (at the root, remember) above that threshold. (Please don't give 
me a helpful suggestion for when there is only one such move ;-).) A sketch 
of this root filter appears after this list.

If the threshold is pretty high and the parameters are set for good 
reliability, this never seems to produce worse moves and occasionally saves 
the bot from game-losing blunders. Even when global UCT is cranked up to a 
high number of playouts, this technique can still rescue it from itself. The 
downside is that this rule doesn't trigger very often.

If the threshold is a bit lower, quality of moves suffers. My gut feeling is 
that, for 9x9 at least, it's best to apply this rule only for game-swinging 
moves.

* Put critical moves first in progressive widening. They are almost always 
already there from other rules, so this makes no difference for me. 

* Identify important strings and add rules in the playouts to encourage contact 
moves with these strings. This did not work for me and I'm sick of dealing with 
it.

* Add a small bias to the UCT score so that critical moves get tested more 
often, or just use it directly as a tie breaker. Doesn't seem to help; maybe 
with careful tweaking...
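
(The root filter from the first bullet, sketched with hypothetical names and
an illustrative threshold:)

  // Sketch of the root filter: if the tactical analyzer reports a
  // critical score past the threshold, restrict the UCT root to
  // critical moves only. Hypothetical names, not Antigo's actual code.
  #include <algorithm>
  #include <vector>

  struct CandidateMove {
      int point;            // board coordinate
      int critical_score;   // net stones whose status the move changes
  };

  std::vector<CandidateMove> root_moves(const std::vector<CandidateMove>& all,
                                        int threshold) {
      int best = 0;
      for (const CandidateMove& m : all)
          best = std::max(best, m.critical_score);
      if (best < threshold) return all;         // rule doesn't trigger
      std::vector<CandidateMove> critical;
      for (const CandidateMove& m : all)
          if (m.critical_score >= threshold) critical.push_back(m);
      return critical;                          // UCT sees only these
  }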

- SINS OF COMMISSION
* Disallow moves that would kill your own string. My guess is that this rule 
should only be applied when the move would kill a pretty large string.

* Disallow moves inside live strings or contact moves with dead strings unless 
those moves are also critical moves. Don't see much impact.

- OTHER

* Serious disagreements between the tactical analyzer and the global UCT might 
be a warning sign that UCT isn't going to be able to understand the current 
position even with more playouts.

* When to pass. On KGS, Antigo might know it's won, but not the status of every 
group. That would be awkward in the scoring phase, so it only passes when it 
can prove life or death for every string.

* Connectivity. I confess to having no clue, but wise people on this list 
doubtless do.

* Collect information from all those local searches and somehow use it in the 
global search. This is my long-term goal. Maybe n-grams...

 I think that there is a crossover point where, once the CPU time needed to 
do a decent tactical analysis at the root becomes a small fraction of the 
total available for a move, it becomes worthwhile to do so. My 
program/hardware isn't there yet, but it's just a matter of time.

- Dave Hillis






Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Forrest Curo

It's the approach I believe to be more human-like.   Not necessarily the
playing style.


Human beings chunk.

What all this fuss suggests to me is a meta-MC program... You  
include routines that work out good sequences, as a human would - and  
then you have the random part of the program include the more  
promising sequences, where applicable, as if they were individual moves.


Forrest Curo




Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Álvaro Begué
On Dec 13, 2007 2:28 PM, Forrest Curo [EMAIL PROTECTED] wrote:

  It's the approach I believe to be more human-like.   Not necessarily the
  playing style.

 Human beings chunk.

 What all this fuss suggests to me is a meta-mc program... You
 include routines that work out good sequences, as a human would--and
 then you have the random part of the program include the more
 promising sequences, where applicable, as if they were individual moves.


You can't do that in two-player games, unless you are convinced that the
opponent is forced throughout the entire sequence.

Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey
I am considering enforcing this basic protocol on the server soon:

 Programs of the same family will not be paired against each other.

A family of programs has the same name up to the first hyphen and the
same password.

So if I have these programs:

Name          Password
----          --------
Lazarus-1.2   foobar
Lazarus-1.3   foobar
Lazarus-1.4   foobar
Lazarus-1.5   winniepooh

Then Lazarus-1.5 will be allowed to play any of the other programs
listed, but those other programs will not be allowed to play each other,
as they are considered relatives.

We cannot prevent programs from playing each other no matter what we do;
they can always change the name and password.   However, this gives
programmers the ability to prevent multiple versions of their program from
playing each other if they choose.   Most programmers probably log onto
CGOS in order to play other people's programs.

From time to time I have put highly experimental and very different
programs on CGOS, and I don't care if they play themselves - they are not
really different versions of the same program.   In that case I always
give them different names anyway.   I pretty much always use the same
password, so I can control this easily with the name.
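
(The rule above as code - an illustrative sketch, not the server's actual
implementation:)

  // Illustrative sketch of the family rule (not the CGOS server code):
  // two logins are relatives when they share a password and the same
  // name up to the first hyphen, and relatives are never paired.
  #include <string>

  std::string family_name(const std::string& login) {
      return login.substr(0, login.find('-'));  // "Lazarus-1.2" -> "Lazarus"
  }

  bool same_family(const std::string& name_a, const std::string& pass_a,
                   const std::string& name_b, const std::string& pass_b) {
      return pass_a == pass_b && family_name(name_a) == family_name(name_b);
  }

  // same_family("Lazarus-1.2", "foobar", "Lazarus-1.4", "foobar") -> true
  // same_family("Lazarus-1.2", "foobar", "Lazarus-1.5", "winniepooh") -> false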

- Don



   

  


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey
From time to time I have put highly experimental and very different
programs on CGOS and I don't care if they play themselves


What I meant to say is that I don't care if they play other programs of
mine.

- Don




RE: [computer-go] RE: Microsoft Research Lectures: Akihiro Kishimoto

2007-12-13 Thread David Fotland
Many Faces still finds the correct move on the first trial, but now it takes
74 nodes to prove the first move works, rather than one node.
It looks at a total of 114 nodes to prove that no other move works.

David

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:computer-go-
 [EMAIL PROTECTED] On Behalf Of Harri Salakoski
 Sent: Wednesday, December 12, 2007 9:04 PM
 To: computer-go
 Subject: Re: [computer-go] RE: Microsoft Research Lectures: Akihiro
 Kishimoto
 
 5 ...
 4 WW.
 3 BBB..W.
 2 ...B.W.
 1 ..B..W.
   ABCDEFG
 
 Ha, I gave Black more room to play, and the shape database did not find
 it, or maybe I used it wrong :).
 But this search now finds B2 in 1258437 nodes :| which is quite a lot and
 takes a couple of seconds.
 I don't know, but I really hope it improves so that it becomes more useful.
 
 t. Harri
 
 - Original Message -
 From: Dave Dyer [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Thursday, December 13, 2007 5:52 AM
 Subject: [computer-go] RE: Microsoft Research Lectures: Akihiro
 Kishimoto
 
 
  My tsumego applet determines without a search that black can kill,
  and white might live if he moves first.
  http://www.andromeda.com/people/ddyer/go/shape/ShapeApplet.html
 
  A table lookup is a little better than searching 162438 nodes :)
 
 
 
   For example, the current version (not released) goes through 162438
  nodes before it proves black B2 kills (without any ordering help).
 
   . . . . . . .
   . . . . . . .
   B B B B B . .
   w w w . B . .
   . . . w B . .
   . . w . B . .
   A B C D E F G
 
 
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] CGOS 19x19 down

2007-12-13 Thread David Fotland
It looks like CGOS 19x19 is down again.

-David

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:computer-go-
 [EMAIL PROTECTED] On Behalf Of Don Dailey
 Sent: Thursday, December 13, 2007 9:47 AM
 To: Don Dailey
 Cc: computer-go
 Subject: Re: [computer-go] Where and How to Test the Strong Programs?
 
 I'm going to estimate that 100 ELO is roughly 1 rank based on this:
 
  http://en.wikipedia.org/wiki/Go_ranks_and_ratings
 
 This may not hold for 9x9. If a 1 kyu beats a 2 kyu about 64% of the
 time in an even game at 19x19, it doesn't imply that he will do the
 same at 9x9, but until I have a reason to believe differently I will
 set this as our first guess in the formula. We can always come back
 and modify this later.
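 
 (For reference, the 64% figure is just the standard Elo expectation,
 a property of the rating model rather than anything measured here:
 E = 1 / (1 + 10^(-D/400)), and D = 100 gives E ~ 0.64.)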
 
 We may be able to borrow KGS data of well established players playing
 9x9 games against each other to estimate this. Would anyone like to
 volunteer to do this?
 
 So as it stands, our formula is:
 
   3 kyu = 1942 CGOS Elo
 
dan =  (your_elo - 2142) / 100
kyu  = -dan + 1
 
 1 Dan  =  2242 cgos
 1 Kyu   =  2142 cgos
 2 Kyu   =  2042 cgos
 3 Kyu   =  1942 cgos
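 
 A minimal sketch of this conversion in C, assuming the provisional
 anchors above (1 kyu = 2142 CGOS Elo, 100 Elo per rank; both numbers
 are first guesses, not calibrated values):

   #include <stdio.h>

   /* Convert a CGOS 9x9 Elo rating to a rough kyu/dan estimate,
      under the provisional 100-Elo-per-rank assumption. */
   static void print_rank(double cgos_elo)
   {
       double dan = (cgos_elo - 2142.0) / 100.0;
       if (dan >= 1.0)
           printf("%4.0f CGOS Elo ~ %.1f dan\n", cgos_elo, dan);
       else
           printf("%4.0f CGOS Elo ~ %.1f kyu\n", cgos_elo, 1.0 - dan);
   }

   int main(void)
   {
       print_rank(2242.0);   /* 1 dan anchor */
       print_rank(1942.0);   /* 3 kyu anchor */
       print_rank(2621.0);   /* greenpeep0.5.1: roughly 4.8 dan */
       return 0;
   }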
 
 The strongest bot on CGOS all time list seems to be  greenpeep0.5.1
 http://cgos.boardspace.net/9x9/cross/greenpeep0.5.1.html with a
 rating
 of 2621.
 
 This implies it is almost equal to a 5 Dan player - which doesn't sound
 right to me. However, this could be fluky since it is at the
 extreme end of the scale.  It would be great if this same program
 could play some strong humans at the equivalent time control on KGS at
 9x9 and we could adjust the difference between ranks accordingly.
 
 I suspect there is more than 100 ELO between ranks at 9x9.
 
 - Don
 
 
 
 Don Dailey wrote:
  Christoph,
 
  Your bayeselo rating is 1942 on CGOS.  I compiled a table that
 has
  all players with 50 games or more which can be found here:
 
http://cgos.boardspace.net/9x9/hof2.html
 
 
  - Don
 
 
 
 
 
 
 
 
 
 
  Christoph Birk wrote:
 
  On Tue, 11 Dec 2007, Don Dailey wrote:
 
  Christoph,
  Let me know when you are finished,  what name you are playing under
 and
  I will do the bayeselo thing to get a better figure.
 
  I am playing using the 'tast-3k' account. Right now, I have 71 games
  and a rating of 1979 ELO.
 
 
   Also, I can
  throw out any games that were irregular if you can identify them,
 such
  as if  a match started when you were not looking or your interface
 got
  glitchy or something.
 
  Since I added the GUI I lost no games due to software problems.
  Only a few won games lost to human stupidity :-)
 
  I will take a break over the holidays, maybe playing a few more
  games in the new year, but I guess for my purposes a zero-point
 3k-AGA ~=~ 2000 CGOS-ELO
  is close enough. Unless we get some other (AGA or KGS) rated
  players, it does not make sense to get a more precise rating for the
  scale.
 
  Christoph
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Hideki Kato
Hi Rémi,

Rémi Coulom: [EMAIL PROTECTED]:
Don Dailey wrote:
 It would be great if you would provide recommendations for a simple
 conversion formula when you are ready based on this study.   Also,
 if you have any suggestions in general for CGOS ratings the
 cgos-developers would be willing to listen to your suggestions.

 - Don
My suggestion would be to tell programmers to use a different login each 
time they change version or hardware (most do that, already), and use 
bayeselo to rank the programs.

Sure, it should be strongly recommended.  Before the overall ratings, I
was not so careful about changing login names, as I just wanted to know
the recent rating, which was enough.  In fact, I misestimated the time
consumption of my latest version of ggmc and it lost many games on time
for a week or so.  I noticed it and changed its time-control setting a
few days ago.  Its rating improved a lot, so the difference between its
overall (2011) and current (2043) ratings is relatively large.  I
strongly believe that this kind of _minor_ fix has been made frequently
in the past.

In conclusion, the current overall ratings are hard to trust or,
perhaps, may include a lot of error.  I'd like to suggest postponing
this excellent idea until almost all participants reliably change their
login names whenever they modify their programs, not only when changing
versions, and excluding older programs from the count.  

-Hideki

This would be best if combined with a mechanism to recognize that two 
logins are versions of the same program (for instance, if they use the 
same password), and avoid pairing them.

Regarding correspondence with human ranks, and handicap value, I cannot 
tell yet. It is very clear to me that the Elo-rating model is very wrong 
for the game of Go, because strength is not one-dimensional, especially 
when mixing bots and humans. The best way to evaluate a bot in terms of 
human rating is to make it play against humans, on KGS for instance. 
Unfortunately, there is no 9x9 rating there. I will compute 9x9 ratings 
with the KGS data I have.

What I have observed with Crazy Stone is that gaining Elo points against 
humans is more difficult than gaining Elo points against GNU Go, which 
is more difficult than gaining Elo points against MC programs, which is 
more difficult than gaining Elo points against itself. But it is more an 
intuition than a scientific study.

Rémi
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread David Fotland
Isn't Greenpeep an alpha-beta searcher, not UCT/MC?

Since Go ranks are based on handicap stones, and 100 ELO points implies a
particular winning percentage, it would be an unlikely coincidence if 1 rank
is 100 ELO points.  Any web site that claims this must be wrong :) and
should have little credibility.

David


 
 The strongest bot on CGOS all time list seems to be  greenpeep0.5.1
 http://cgos.boardspace.net/9x9/cross/greenpeep0.5.1.html with a
 rating
 of 2621.
 
 This implies it is almost equal to a 5 Dan player - which doesn't sound
 right to me. However, this could be fluky since it is at the
 extreme end of the scale.  It would be great if this same program
 could play some strong humans at the equivalent time control on KGS at
 9x9 and we could adjust the difference between ranks accordingly.
 
 I suspect there is more than 100 ELO between ranks at 9x9.
 
 - Don
 
 
 
 Don Dailey wrote:
  Christoph,
 
  Your bayeselo rating is 1942 on CGOS.  I compiled a table that
 has
  all players with 50 games or more which can be found here:
 
http://cgos.boardspace.net/9x9/hof2.html
 
 
  - Don
 
 
 
 
 
 
 
 
 
 
  Christoph Birk wrote:
 
  On Tue, 11 Dec 2007, Don Dailey wrote:
 
  Christoph,
  Let me know when you are finished,  what name you are playing under
 and
  I will do the bayeselo thing to get a better figure.
 
  I am playing using the 'tast-3k' account. Right now, I have 71 games
  and a rating of 1979 ELO.
 
 
   Also, I can
  throw out any games that were irregular if you can identify them,
 such
  as if  a match started when you were not looking or your interface
 got
  glitchy or something.
 
  Since I added the GUI I lost no games due to software problems.
  Only a few won games lost to human stupidity :-)
 
  I will take a break over the holidays, maybe playing a few more
  games in the new year, but I guess for my purposes a zero-point
 3k-AGA ~=~ 2000 CGOS-ELO
  is close enough. Unless we get some other (AGA or KGS) rated
  players, it does not make sense to get a more precise rating for the
  scale.
 
  Christoph
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey


David Fotland wrote:
 Isn't Greenpeep an alpha-beta searcher, not UCT/MC?

 Since Go ranks are based on handicap stones, and 100 ELO points implies a
 particular winning percentage, it would be an unlikely coincidence if 1 rank
 is 100 ELO points.  Any web site that claims this must be wrong :) and
 should have little credibility.
   
I don't believe it's 100 ELO either.   It's probably something between
100 and 200.

 David


   
 The strongest bot on CGOS all time list seems to be  greenpeep0.5.1
 http://cgos.boardspace.net/9x9/cross/greenpeep0.5.1.html with a
 rating
 of 2621.

 This implies it is almost equal to a 5 Dan player - which doesn't sound
 right to me. However, this could be fluky since it is at the
 extreme end of the scale.  It would be great if this same program
 could play some strong humans at the equivalent time control on KGS at
 9x9 and we could adjust the difference between ranks accordingly.

 I suspect there is more than 100 ELO between ranks at 9x9.

 - Don



 Don Dailey wrote:
 
 Christoph,

 Your bayeselo rating is 1942 on CGOS.  I compiled a table that
   
 has
 
 all players with 50 games or more which can be found here:

   http://cgos.boardspace.net/9x9/hof2.html


 - Don










 Christoph Birk wrote:

   
 On Tue, 11 Dec 2007, Don Dailey wrote:

 
 Christoph,
 Let me know when you are finished,  what name you are playing under
   
 and
 
 I will do the bayeselo thing to get a better figure.

   
  I am playing using the 'tast-3k' account. Right now, I have 71 games
 and a rating of 1979 ELO.


 
  Also, I can
 throw out any games that were irregular if you can identify them,
   
 such
 
 as if  a match started when you were not looking or your interface
   
 got
 
 glitchy or something.

   
 Since I added the GUI I lost no games due to software problems.
 Only a few won games lost to human stupidity :-)

 I will take a break over the holidays, maybe playing a few more
 games in the new year, but I guess for my purposes a zero-point
3k-AGA ~=~ 2000 CGOS-ELO
 is close enough. Unless we get some other (AGA or KGS) rated
  players, it does not make sense to get a more precise rating for the
 scale.

 Christoph
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


 
   
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
 

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread Jason House
On Dec 13, 2007 2:17 PM, [EMAIL PROTECTED] wrote:

 I'd like to start a more specific discussion about ways to combine
 tactical information with MC-UCT. Here's the scenario.

 It's the bot's turn and, prior to starting any playouts, it runs a
 tactical analyzer (for want of a better name) that labels each string as
 unconditionally alive, unconditionally dead, conditionally alive, uncertain,
 or too strong to bother testing. For each conditionally alive string, it
  finds the move(s) to kill/rescue it. Each possible move has a critical
 score indicating the net number of stones whose life/death status would be
 changed for the better if the bot moved there. Moves with a non-zero
 critical score are critical moves.



As worded, I don't know if I agree with that formalization of what
information will be useful.  It's in the right ballpark for the discussion,
but will likely shift a bit with any final design.



 * Connectivity. I confess to having no clue, but wise people on this list
 doubtless will.


Connectivity of stones and LD for stones are intimately tied together.  If
I'm alive, I don't need a connection.  If I connect to another living group,
I don't need to live.

I've been tempted to insert forced responses into my random playouts to
preserve connections, but I didn't get that far since I don't have a
tactical analyzer.  One day I will...




 * Collect information from all those local searches and somehow use it in
 the global search. This is my long term goal. Maybe n-grams...


What's an n-gram?



  I think that there is a crossover point where, once the cpu time
 needed to do a decent tactical analysis at the root becomes a small fraction
 of the total available for a move, it becomes worthwhile to do so. My
 program/hardware isn't there yet, but it's just a matter of time.


Don't forget that local tactical analysis can be reused many moves later if
the local area has remained unaffected.  In a multi-core system, it may
become increasingly valuable to dedicate a core to tactical analysis.
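
For concreteness, here is a minimal sketch of the data such a tactical
analyzer might hand to the search; every name is illustrative, not any
real program's API:

  /* Labels from the scenario above. */
  typedef enum {
      UNCONDITIONALLY_ALIVE,
      UNCONDITIONALLY_DEAD,
      CONDITIONALLY_ALIVE,    /* has killing/rescuing moves attached */
      UNCERTAIN,
      TOO_STRONG_TO_TEST
  } StringStatus;

  typedef struct {
      int point;              /* board coordinate of the candidate move */
      int critical_score;     /* net stones whose status improves if played */
  } CriticalMove;

  typedef struct {
      StringStatus status;
      CriticalMove moves[8];  /* kill/rescue moves for conditional strings */
      int          n_moves;
  } StringInfo;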
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Jason House
On Dec 13, 2007 2:37 PM, Don Dailey [EMAIL PROTECTED] wrote:

  I am considering enforcing this basic protocol on the server soon:

  Programs of the same family will not be paired against each other.


I frequently look at the games between my bot versions more than at games
against other bots.  This is unfortunately a side-effect of having few
(9x9) bots in the 1200-1400 ELO range.  I like using similar names and the
same password since it lets me keep my sanity when switching between
machines!

If the family feature is added, I'd like another mechanism to control the
family settings.  I suspect that a programmer's decision to have their bots
play each other (or not) may change with time.  It'd be nice if that was
supported.

PS: Will this new feature be committed into the subversion repository for
CGOS?
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Jason House
On Dec 13, 2007 3:09 PM, David Fotland [EMAIL PROTECTED] wrote:

 Isn't Greenpeep an alpha-beta searcher, not UCT/MC?


I could have sworn I heard it described as UCT/MC with MoGo-like
enhancements.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Chris Fant
Seems like the final solution to this would need to build out the
search tree to the end of the game, finding a winning line.  And then
search again with a different evaluation function (one based on
points).  If the second search cannot find a line that wins bigger
than the first search did, just play the move returned by the first
 search.  And you could get more clever by allowing the second search
to start with some information from the first search.  Note that when
I say winning line, I mean all the way to the end.  No MC here.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread terry mcintyre
Jason House: 
 Don't forget that local tactical analysis can be reused many moves
later if the local area has remained unaffected.  
 In a multi-core
system, it may become increasingly valuable to dedicate a core to
tactical analysis.

In another post, libego with a million playouts per move had trouble with a 
ladder. 

Humans cache and reuse tactical analysis. This can lead to enormously faster 
search time, and (in many situations) more accurate evaluations of the 
overall status.

The tricky part is figuring out how to use that information efficiently. 



  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread John Fan
My program StoneGrid calculates unconditional life and death at every move,
in the UCT tree and in the random playout. I think it helps its strength
a little bit, especially in the end game. In the beginning of the game, it seems
to be completely useless. It is slow. But it makes the random playout
slightly shorter. The random playout is about 80 moves per game. I do not
have a comparison if the unconditional life and death is disabled, since it
is not easy to do in the current data structure.

On Dec 13, 2007 3:20 PM, Jason House [EMAIL PROTECTED] wrote:

 On Dec 13, 2007 2:17 PM, [EMAIL PROTECTED] wrote:

  I'd like to start a more specific discussion about ways to combine
  tactical information with MC-UCT. Here's the scenario.
 
  It's the bot's turn and, prior to starting any playouts, it runs a
  tactical analyzer (for want of a better name) that labels each string as
  unconditionally alive, unconditionally dead, conditionally alive, uncertain,
  or too strong to bother testing. For each conditionally alive string, it
   finds the move(s) to kill/rescue it. Each possible move has a critical
  score indicating the net number of stones whose life/death status would be
  changed for the better if the bot moved there. Moves with a non-zero
  critical score are critical moves.



 As worded, I don't know if I agree with that formalization of what
 information will be useful.  It's in the right ballpark for the discussion,
 but will likely shift a bit with any final design.



  * Connectivity. I confess to having no clue, but wise people on this
  list doubtless will.


 Connectivity of stones and LD for stones are intimately tied together.
 If I'm alive, I don't need a connection.  If I connect to another living
 group, I don't need to live.

 I've been tempted to insert forced responses into my random playouts to
 preserve connections, but I didn't get that far since I don't have a
 tactical analyzer.  One day I will...




  * Collect information from all those local searches and somehow use it
  in the global search. This is my long term goal. Maybe n-grams...


 What's an n-gram?



   I think that there is a crossover point where, once the cpu time
  needed to do a decent tactical analysis at the root becomes a small fraction
  of the total available for a move, it becomes worthwhile to do so. My
  program/hardware isn't there yet, but it's just a matter of time.


 Don't forget that local tactical analysis can be reused many moves later
 if the local area has remained unaffected.  In a multi-core system, it may
 become increasingly valuable to dedicate a core to tactical analysis.

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Chris Fant
On Dec 13, 2007 3:33 PM, Chris Fant [EMAIL PROTECTED] wrote:
 Seems like the final solution to this would need to build out the
 search tree to the end of the game, finding a winning line.  And then
 search again with a different evaluation function (one based on
 points).  If the second search cannot find a line that wins bigger
 than the first search did, just play the move returned by the first
  search.  And you could get more clever by allowing the second search
 to start with some information from the first search.  Note that when
 I say winning line, I mean all the way to the end.  No MC here.



Actually, I suppose it need not be to the absolute end of the game.
As long as all MC sims that finish out the game prior to scoring lead
to a win, then you can consider the tree portion a guaranteed winning
line and try the second search to maximize points.
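
A minimal control-flow sketch of the two-search idea in C; Board,
search(), and the Result fields are hypothetical placeholders, not any
real engine's API:

  #include <stdio.h>

  typedef struct { int dummy; } Board;   /* placeholder board type */
  typedef struct { int move; int proven_win; int score; } Result;

  /* Hypothetical search: by_points == 0 evaluates win/loss only,
     by_points == 1 maximizes the final score instead. */
  static Result search(const Board *b, int by_points)
  {
      (void)b; (void)by_points;
      Result r = {0, 0, 0};              /* stub so the sketch compiles */
      return r;
  }

  static int choose_move(const Board *b)
  {
      Result win_line = search(b, 0);    /* phase 1: find a sure win */
      if (!win_line.proven_win)
          return win_line.move;          /* no guaranteed win found */

      /* Phase 2: try to win bigger without giving up the sure win. */
      Result point_line = search(b, 1);
      if (point_line.proven_win)
          return point_line.move;
      return win_line.move;
  }

  int main(void)
  {
      Board b = {0};
      printf("move %d\n", choose_move(&b));
      return 0;
  }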
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Gian-Carlo Pascutto
Jason House wrote:

 The paper introduces RAVE and
 near the end talks about using heuristics for initial parameter
  estimation.  The heuristic they used was based on TD.

Ah, you're talking about RLGO. RLGO was trained with TD, but MoGo itself
doesn't use TD (directly).

There are posts from Sylvain and David here saying that the latest MoGo
versions use a simpler and faster heuristic which works just as well.

-- 
GCP
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Álvaro Begué
At the end of a playout there is probably some code that says something
like
  reward = (score > komi) ? 1.0 : 0.0;

You can just replace it with
  reward = 1 / (1 + exp(- K * (score - komi)));

A huge value of K will reproduce the old behaviour, a tiny value will result
in a program that tries to maximize expected score, and values in the middle
will blend both things nicely. Of course you would precompute this in a
table.
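
A minimal sketch of that precomputed table in C (the value of K and the
half-point indexing are illustrative choices, not tuned values):

  #include <math.h>
  #include <stdio.h>

  /* On 9x9 the margin score - komi lies in roughly [-82, 82].
     Index the table in half-point units so fractional komi fits. */
  #define HALF_POINTS (2 * 82)
  static double reward_table[2 * HALF_POINTS + 1];

  /* Precompute reward = 1 / (1 + exp(-K * margin)) for every margin. */
  static void init_reward_table(double K)
  {
      for (int i = -HALF_POINTS; i <= HALF_POINTS; i++)
          reward_table[i + HALF_POINTS] =
              1.0 / (1.0 + exp(-K * (0.5 * i)));
  }

  /* Look up the blended reward at the end of a playout. */
  static double playout_reward(double score, double komi)
  {
      int i = (int)lround(2.0 * (score - komi));
      if (i < -HALF_POINTS) i = -HALF_POINTS;
      if (i >  HALF_POINTS) i =  HALF_POINTS;
      return reward_table[i + HALF_POINTS];
  }

  int main(void)
  {
      init_reward_table(0.5);                       /* illustrative K */
      printf("%.3f\n", playout_reward( 3.0, 0.5));  /* small win  */
      printf("%.3f\n", playout_reward(-3.0, 0.5));  /* small loss */
      return 0;
  }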

This seems elegant and simple to me. Now we only need to know how it affects
performance. I bet there are values of K that would make everyone happy (no
measurable loss in strength, still play good-looking moves even if the game
is decided).


Álvaro.


On Dec 13, 2007 3:42 PM, Chris Fant [EMAIL PROTECTED] wrote:

 On Dec 13, 2007 3:33 PM, Chris Fant [EMAIL PROTECTED] wrote:
  Seems like the final solution to this would need to build out the
  search tree to the end of the game, finding a winning line.  And then
  search again with a different evaluation function (one based on
  points).  If the second search cannot find a line that wins bigger
  than the first search did, just play the move returned by the first
   search.  And you could get more clever by allowing the second search
  to start with some information from the first search.  Note that when
  I say winning line, I mean all the way to the end.  No MC here.
 


 Actually, I suppose it need not be to the absolute end of the game.
 As long as all MC sims that finish out the game prior to scoring lead
 to a win, then you can consider the tree portion a guaranteed winning
 line and try the second search to maximize points.
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey
I don't want to add more mechanisms.   You can build your own mechanism
by making your own password naming convention or bot naming
convention. For instance, you can use the underscore character to
build separate families of bots and still keep your own branding. 

We might at some point make a web page that groups bots by family so
that it's publicly visible and easy to see which bots do not play each
other.  

 PS: Will this new feature be committed into the subversion repository
for CGOS?

Yes, when I do this I will use the sourceforge system.  

- Don


 
Jason House wrote:


 On Dec 13, 2007 2:37 PM, Don Dailey [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

  I am considering enforcing this basic protocol on the server soon:

  Programs of the same family will not be paired against each
 other.


 I frequently look at the games between my bot versions more than at
 games against other bots.  This is unfortunately a side-effect of
 having few (9x9) bots in the 1200-1400 ELO range.  I like using
 similar names and the same password since it lets me keep my sanity
 when switching between machines!

 If the family feature is added, I'd like another mechanism to control
 the family settings.  I suspect that a programmer's decision to have
 their bots play each other (or not) may change with time.  It'd be
 nice if that was supported.

 PS: Will this new feature be committed into the subversion repository
 for CGOS?
 

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread Jason House
On Dec 13, 2007 3:40 PM, terry mcintyre [EMAIL PROTECTED] wrote:

 Jason House:
  Don't forget that local tactical analysis can be reused many moves later
 if the local area has remained unaffected.
  In a multi-core system, it may become increasingly valuable to dedicate
 a core to tactical analysis.

 In another post, libego with a million playouts per move had trouble with
 a ladder.


Please, please don't turn this thread into that one!



 The tricky part is figuring out how to use that information efficiently.


That's what I hope the point of this thread is all about and the purpose of
my comments.  If that's not the point of this thread, I'll shut up and stop
reading this thread too.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Jason House
On Dec 13, 2007 3:52 PM, Gian-Carlo Pascutto [EMAIL PROTECTED] wrote:

 Jason House wrote:

  The paper introduces RAVE and
  near the end talks about using heuristics for initial parameter
   estimation.  The heuristic they used was based on TD.

 Ah, you're talking about RLGO. RLGO was trained with TD, but MoGo itself
 doesn't use TD (directly).

 There are posts from Sylvain and David here saying that the latest MoGo
 versions use a simpler and faster heuristic which works just as well.



Is it possible for you to provide a link to those posts?  I've missed them
but would be highly interested in them.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Don Dailey
Nice idea and worth a try. I predict that this will weaken the
program no matter what value you use, but that there may indeed be a
reasonable compromise that gives you the better behavior with only a
very small decline in strength.  

I think this bothers people so much that they would be willing to
sacrifice a tiny bit of strength to get the greedy behavior.

- Don


Álvaro Begué wrote:
 At the end of a playout there is probably some code that says
 something like
   reward = (score > komi) ? 1.0 : 0.0;

 You can just replace it with
   reward = 1 / (1 + exp(- K * (score - komi)));

 A huge value of K will reproduce the old behaviour, a tiny value will
 result in a program that tries to maximize expected score, and values
 in the middle will blend both things nicely. Of course you would
 precompute this in a table.

 This seems elegant and simple to me. Now we only need to know how it
 affects performance. I bet there are values of K that would make
 everyone happy (no measurable loss in strength, still play
 good-looking moves even if the game is decided).


 Álvaro.


 On Dec 13, 2007 3:42 PM, Chris Fant [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

 On Dec 13, 2007 3:33 PM, Chris Fant [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:
  Seems like the final solution to this would need to build out the
  search tree to the end of the game, finding a winning line.  And
 then
  search again with a different evaluation function (one based on
  points).  If the second search cannot find a line that wins bigger
  than the first search did, just play the move returned by the first
  search.  And you could get more clever by allowing the second search
  to start with some information from the first search.  Note that
 when
  I say winning line, I mean all the way to the end.  No MC here.
 


 Actually, I suppose it need not be to the absolute end of the game.
 As long as all MC sims that finish out the game prior to scoring lead
 to a win, then you can consider the tree portion a guaranteed winning
 line and try the second search to maximize points.
 ___
 computer-go mailing list
 computer-go@computer-go.org mailto:computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


 

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] How does MC do with ladders?

2007-12-13 Thread Forrest Curo

Quoting Álvaro Begué [EMAIL PROTECTED]:


On Dec 13, 2007 2:28 PM, Forrest Curo [EMAIL PROTECTED] wrote:


 It's the approach I believe to be more human-like.   Not necessarily the
 playing style.

Human beings chunk.

What all this fuss suggests to me is a meta-mc program... You
include routines that work out good sequences, as a human would--and
then you have the random part of the program include the more
promising sequences, where applicable, as if they were individual moves.



You can't do that in two-player games, unless you are convinced that the
opponent is forced throughout the entire sequence.


Humans do a sort of skinny alpha-beta, automatically narrowing the  
search by applying constraints based on what they see as possible  
outcomes. (In the case of a ladder, most of this is searching a long  
branch one-move wide!) So if an opponent fails to follow the sequence  
you've expected, he might be on to something you missed, or he may  
have missed the point of the sequence. An unexpected move thus  
suggests a pair of local searches, concentrating both on the vicinity  
of that move and on the vicinity of the expected move.


But for a program to do something similar, it does need some way of  
arriving at an idea of what moves in a certain area could reasonably  
be expected to accomplish. Seeing where the groups end up  in a large  
number of playoffs might give a hint...(?)


Forrest Curo


This message was sent using IMP, the Internet Messaging Program.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Jason House
On Dec 13, 2007 4:01 PM, Don Dailey [EMAIL PROTECTED] wrote:

 I don't want to add more mechanisms.   You can build your own mechanism
 by making your own password naming convention or bot naming
 convention. For instance, you can use the underscore character to
 build separate families of bots and still keep your own branding.


And if I then lose interest in self-play games (because I've seen enough to
prove to myself whatever I was looking for)?  Do I then create a separate
account with the same bot running on the same hardware and either discard
historical data or manually combine the results?

Maybe a more generic mechanism of opponent preferences would work?  Then I
could specify bots I don't want to play (my family members) or others I want
to play more (my rival?).  Obviously, the don't-play list might be restricted to
bots in the same family or that use the same password.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread dhillismail
That's a strong program, and interesting information. For clarity, I assume 
that you mean something like Benson's algorithm, while my intended meaning was 
alive assuming perfect play. Both are relevant, we just need to keep them 
sorted out.

- Dave Hillis


-Original Message-
From: John Fan [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Thu, 13 Dec 2007 3:42 pm
Subject: Re: [computer-go] MC-UCT and tactical information


My program StoneGrid calculates unconditional life and death at every move, in 
the UCT tree and in the random playout. I think it helps its strength a 
little bit, especially in the end game. In the beginning of the game, it seems to 
be completely useless. It is slow. But it makes the random playout slightly 
shorter. The random playout is about 80 moves per game. I do not have a 
comparison if the unconditional life and death is disabled, since it is not 
easy to do in the current data structure. 



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Hideki Kato
Hi Begué and Don,

I did this in my earlier version of ggmc.  The real code was:

reward = 0.5 * (1 + tanhf(K * (score - komi)));
# tanhf() is the float (not double) version of the hyperbolic tangent
# function.
# I use tanh() because exp() may overflow; note that
# 0.5 * (1 + tanh(K * x)) equals 1 / (1 + exp(-2 * K * x)), so this is
# the same logistic blend with K doubled.
# You can see the code at http://www.gggo.jp/ as it's still there.

I got the best performance around K = 10, but the gain was so small
that I'm using the simpler one now.

-Hideki

Don Dailey: [EMAIL PROTECTED]:
Nice idea and worth a try. I predict that this will weaken the
program no matter what value you use, but that there may indeed be a
reasonable compromise that gives you the better behavior with only a
very small decline in strength.  

I think this bothers people so much that they would be willing to
sacrifice a tiny bit of strength to get the greedy behavior.

- Don


Álvaro Begué wrote:
 At the end of a playout there is probably some code that says
 something like
   reward = (score > komi) ? 1.0 : 0.0;

 You can just replace it with
   reward = 1 / (1 + exp(- K * (score - komi)));

 A huge value of K will reproduce the old behaviour, a tiny value will
 result in a program that tries to maximize expected score, and values
 in the middle will blend both things nicely. Of course you would
 precompute this in a table.

 This seems elegant and simple to me. Now we only need to know how it
 affects performance. I bet there are values of K that would make
 everyone happy (no measurable loss in strength, still play
 good-looking moves even if the game is decided).


 Álvaro.


 On Dec 13, 2007 3:42 PM, Chris Fant [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

 On Dec 13, 2007 3:33 PM, Chris Fant [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:
  Seems like the final solution to this would need to build out the
  search tree to the end of the game, finding a winning line.  And
 then
  search again with a different evaluation function (one based on
  points).  If the second search cannot find a line that wins bigger
  than the first search did, just play the move returned by the first
  search.  And you could get more clever by allowing the second search
  to start with some information from the first search.  Note that
 when
  I say winning line, I mean all the way to the end.  No MC here.
 


 Actually, I suppose it need not be to the absolute end of the game.
 As long as all MC sims that finish out the game prior to scoring lead
 to a win, then you can consider the tree portion a guaranteed winning
 line and try the second search to maximize points.
 ___
 computer-go mailing list
 computer-go@computer-go.org mailto:computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


 

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey
Regarding correspondence with human ranks, and handicap value, I cannot
tell yet. It is very clear to me that the Elo-rating model is very wrong
for the game of Go, because strength is not one-dimensional, especially
when mixing bots and humans. The best way to evaluate a bot in terms of
human rating is to make it play against humans, on KGS for instance.
Unfortunately, there is no 9x9 rating there. I will compute 9x9 ratings
with the KGS data I have.

I'm only interested in measuring the ELO gaps between 9x9 players of
different (19x19) rankings.   This you can do by simply taking the
statistics of wins and losses between players of various strengths. 
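
(For reference, inverting the standard Elo expectation gives the gap
directly from such statistics: a measured win rate p implies a gap of
D = 400 * log10(p / (1 - p)) Elo, so p = 0.64 gives D of roughly 100.
This is a property of the Elo model itself, not a measured result.)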

I don't really know what you mean by one-dimensional.   My
understanding of playing strength is that it's not one-dimensional,
meaning that it is foiled by intransitivities between players with
different styles. You may be able to beat me, but I might be able to
beat someone that you cannot.  If that's what you are saying, how
does the kyu/dan system handle it that makes it superior to ELO for
predicting who will win? Is there some mechanism that measures
playing styles? I don't see this.  

What I THINK you mean is that the gap between various GO ranks is not
consistent in ELO terms.   In other words there is no single constant
that works in my simple formula. I definitely think this is probably
the case but surely it can be easily handled by a slightly more
sophisticated formula that fits the curve.

So surely, the average 2 dan player will beat the average 1 dan player
some statistically measurable percentage of the time at 9x9 go.   This
is what I want to know.Then I want to know if that percentage is the
same at different points in the scale.   If not,  then we find an
appropriate fit statistically.   

Once this is done then we still have the problem of calibrating CGOS -
we have to determine which ELO rating on CGOS corresponds to 1 dan (or
some arbitrary AGA or KGS ranking.)

Once all of this is done, we at least have something that doesn't yet
exist.   A credible way to claim your 9x9 program would likely hold it's
own against a 19x19 player of a given level.

Of course this will be somewhat noisy,  as any ranking system is.   It
will be subject to intransitivities just like ELO and go rankings
are.  But you have to admit that there has been talk about certain
9x9 programs playing at the dan level and so on. Technically this
makes no sense,  but intuitively we know exactly what we mean when we
say that - we mean that it is the equal of a 1 dan 19x19 player.   
This is what I want to capture as a footnote on the 9x9 server.

I agree with you about programs playing different versions of
themselves.   I can throw out games where a program plays another
version of itself if you want to study that too (I would go by
password.) Just let me know and I will run another hall of fame
using that criterion, or I can send you the data from CGOS in a compact (1
line per game) representation if you think it would be useful to help
you understand this.   Or I can send you the pgn files I produced to be
compatible with bayeselo.


- Don


 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread John Fan
Yes, StoneGrid only uses Benson's algorithm.

On Dec 13, 2007 4:30 PM, [EMAIL PROTECTED] wrote:

 That's a strong program, and interesting information. For clarity, I
 assume that you mean something like Benson's algorithm, while my intended
 meaning was alive assuming perfect play. Both are relevant, we just need
 to keep them sorted out.

 - Dave Hillis


 -Original Message-
 From: John Fan [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Thu, 13 Dec 2007 3:42 pm
 Subject: Re: [computer-go] MC-UCT and tactical information

 My program StoneGrid calculates unconditional life and death at every
 move, in the UCT tree and in the random playout. I think it helps its
 strength a little bit, especially in the end game. In the beginning of the
 game, it seems to be completely useless. It is slow. But it makes the random
 playout slightly shorter. The random playout is about 80 moves per game. I
 do not have a comparison if the unconditional life and death is disabled,
 since it is not easy to do in the current data structure.

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] unconditional life and death

2007-12-13 Thread George Dahl
Please excuse me if this question has been answered before, my brief
look through the archives I have did not find it.  How does one
compute unconditional life and death?  Ideally, in an efficient
manner.  In other words, I want to know, for each group of stones on
the board that share a common fate, if they are alive with certainty
or if their fate is unknown.  I only want to find results that would
be totally obvious to a very weak human player and provably correct.
What algorithms are used?
- George
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] unconditional life and death

2007-12-13 Thread Jason House
On Dec 13, 2007 4:40 PM, George Dahl [EMAIL PROTECTED] wrote:

 Please excuse me if this question has been answered before, my brief
 look through the archives I have did not find it.  How does one
 compute unconditional life and death?  Ideally, in an efficient
 manner.  In other words, I want to know, for each group of stones on
 the board that share a common fate, if they are alive with certainty
 or if their fate is unknown.  I only want to find results that would
 be totally obvious to a very weak human player and provably correct.
 What algorithms are used?


The standard one is Benson's algorithm
http://senseis.xmp.net/?BensonsAlgorithm
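
For flavor, a minimal sketch of Benson's fixed-point iteration in C; it
assumes the caller has already extracted chains and candidate enclosed
regions from the board, and all names here are illustrative:

  #include <stdbool.h>
  #include <stdio.h>

  #define MAX_CHAINS  64
  #define MAX_REGIONS 64

  typedef struct {
      int  n_chains, n_regions;
      /* border[r][c]: chain c borders region r (precomputed). */
      bool border[MAX_REGIONS][MAX_CHAINS];
      /* vital[r][c]: every empty point of region r is a liberty of
         chain c, i.e. region r is vital to chain c (precomputed). */
      bool vital[MAX_REGIONS][MAX_CHAINS];
  } BensonInput;

  /* On return, alive[c] is true iff chain c is unconditionally alive:
     chains with fewer than two vital regions are discarded, regions
     touching discarded chains are discarded, until a fixed point. */
  static void benson(const BensonInput *in, bool alive[MAX_CHAINS])
  {
      bool region_ok[MAX_REGIONS];
      for (int c = 0; c < in->n_chains; c++) alive[c] = true;

      bool changed = true;
      while (changed) {
          changed = false;
          for (int r = 0; r < in->n_regions; r++) {
              region_ok[r] = true;
              for (int c = 0; c < in->n_chains; c++)
                  if (in->border[r][c] && !alive[c])
                      region_ok[r] = false;
          }
          for (int c = 0; c < in->n_chains; c++) {
              if (!alive[c]) continue;
              int vitals = 0;
              for (int r = 0; r < in->n_regions; r++)
                  if (region_ok[r] && in->vital[r][c])
                      vitals++;
              if (vitals < 2) { alive[c] = false; changed = true; }
          }
      }
  }

  int main(void)
  {
      BensonInput in = {0};   /* one chain with two one-point eyes */
      in.n_chains  = 1;
      in.n_regions = 2;
      in.border[0][0] = in.vital[0][0] = true;
      in.border[1][0] = in.vital[1][0] = true;

      bool alive[MAX_CHAINS];
      benson(&in, alive);
      printf("chain 0 pass-alive: %d\n", alive[0]);   /* prints 1 */
      return 0;
  }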
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] unconditional life and death

2007-12-13 Thread George Dahl
Thanks!
- George

On 12/13/07, Jason House [EMAIL PROTECTED] wrote:


 On Dec 13, 2007 4:40 PM, George Dahl [EMAIL PROTECTED] wrote:
  Please excuse me if this question has been answered before, my brief
  look through the archives I have did not find it.  How does one
  compute unconditional life and death?  Ideally, in an efficient
  manner.  In other words, I want to know, for each group of stones on
  the board that share a common fate, if they are alive with certainty
  or if their fate is unknown.  I only want to find results that would
  be totally obvious to a very weak human player and provably correct.
  What algorithms are used?

 The standard one is Benson's algorithm
 http://senseis.xmp.net/?BensonsAlgorithm

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] MC-UCT and tactical information

2007-12-13 Thread David Fotland
I think Martin Mueller published an improvement to Benson's algorithm that
is also proved correct.

 

David

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of John Fan
Sent: Thursday, December 13, 2007 1:36 PM
To: computer-go
Subject: Re: [computer-go] MC-UCT and tactical information

 

Yes, StoneGrid only uses Benson's algorithm.

On Dec 13, 2007 4:30 PM, [EMAIL PROTECTED] wrote:

That's a strong program, and interesting information. For clarity, I assume
that you mean something like Benson's algorithm, while my intended meaning
was alive assuming perfect play. Both are relevant, we just need to keep
them sorted out. 

- Dave Hillis




-Original Message-
From: John Fan [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Thu, 13 Dec 2007 3:42 pm
Subject: Re: [computer-go] MC-UCT and tactical information

My program StoneGrid calculates unconditional life and death at every move,
in the UCT tree and in the random playout. I think it helps its strength
a little bit, especially in the end game. In the beginning of the game, it seems
to be completely useless. It is slow. But it makes the random playout
slightly shorter. The random playout is about 80 moves per game. I do not
have a comparison if the unconditional life and death is disabled, since it
is not easy to do in the current data structure. 



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey
Are you suggesting a mechanism that allows you to turn this off and on
at will and that is separate from the naming and password convention?   

One thing I definitely would not do is allow you to select opponents you
prefer to play or not to play - whatever control we have will be limited
to our own bots. The main reason for CGOS was to ensure variety and
guarantee that players could not be selective about their opponents
the way bots on the chess servers are. It is claimed that many
players on the chess servers manipulate their ratings by carefully
selecting their opponents. I think this is CGOS's strength: it does
not allow this. This makes ratings far more trustworthy. This is
also why you are not allowed to abort a game (once you see who your
opponent is.)   Otherwise it would be very easy to use this as a
mechanism to control your own pairings.  

Do you have a suggestion for a specific mechanism for this?

- Don
   



Jason House wrote:


 On Dec 13, 2007 4:01 PM, Don Dailey [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

 I don't want to add more mechanisms.   You can build your own
 mechanism
 by making your own password naming convention or bot naming
 convention. For instance, you can use the underscore character to
 build separate families of bots and still keep your own branding. 


 And if I then lose interest in self-play games (because I've seen
 enough to prove to myself whatever I was looking for)?  Do I then
 create a separate account with the same bot running on the same
 hardware and either discard historical data or manually combine the
 results?

 Maybe a more generic mechanism of opponent preferences would work? 
 Then I could specify bots I don't want to play (my family members) or
 others I want to play more (my rival?).  Obviously, the don't-play list
 might be restricted to bots in the same family or that use the same
 password. 
 

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] erm...

2007-12-13 Thread terry mcintyre
There's some value to human-human games in this proposed tournament, I think. 
Some humans might play worse at 5 minute time controls. Comparison with 
longer games might be interesting.
 
Terry McIntyre [EMAIL PROTECTED]
They mean to govern well; but they mean to govern. They promise to be kind 
masters; but they mean to be masters. -- Daniel Webster

- Original Message 
From: steve uurtamo [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Wednesday, December 5, 2007 8:18:34 AM
Subject: Re: [computer-go] erm...


two followups, and i'm sorry for not referencing the
original notes directly:

i) i agree that 9x9 has fewer standard deviations of
skill.  there's simply less to be good at (ladders are
tiny, life and death can only be so large, the difference
between influence and territory is skewed, etc.).  this
doesn't mean that dan level play is meaningless, and
i'm sorry for making that implication.

ii) i do think that organized play by a range (not too low,
or people will get irritated) of bots against a large batch
of humans with solid kgs ranks is a good idea.  a tournament
is the best way to do this, and i don't even think that strictly
human-bot games are necessary: human-human games
will give us quite a bit of information about 9x9 ranks as
well, and how the conversion should go.  that data is probably
at least as useful as the human-bot games.  also, there's no
real reason to exclude bot-bot games on kgs.  it'll provide
equally good data about how a tournament works.

to this end, if anyone wants to organize a 9x9 tournament on
kgs (i'm not sure how this is done) where computer players
are allowed (encouraged) to play, with cgos-like time controls,
i'd be happy to do the ELO-rank analysis after the fact (although
i imagine that jacques would do a better job.  :)

the more games the better, and with reasonable time controls,
there's no reason that this couldn't be something like the ironman
tournaments that kgs occasionally runs, in the sense that a
multi-day tournament during various times of the day would provide
for a rich and wide variety of players.  i'd ignore everything but
the game results, of course, and would just need kgs rank at start
of tournament for each player, and elo information at start of
tournament for each bot.

s.




  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/






  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Harald Korneliussen
Mark Boon wrote:
Let me therefore change the discussion a bit to see if this will make
things more clear. Consider a chess-playing program with an
unorthodox search method. When playing a human after while it
announces check-mate in thirty-four moves. Yet the human can clearly
see it's check-mate in four. Ah, one could say, but the computer is
100% confident winning either way so it doesn't care which one it
chooses. It doesn't matter whether a human thinks mate in four is
more beautiful.

Now it so happens that with chess we're pretty confident that when a
chess-program announces check-mate that it is in fact correct. But
what if there could be a sliver of doubt? Maybe the program has no
doubt, but the programmer might. Bugs happen. Wouldn't you say it's
better to choose the shorter mate sequence? Regardless of whether
humans may find it more beautiful?

Any probabilistic algorithm should of course prefer a quick win to a
tedious one, since the fewer assumptions it has to make, the less
likely it is that one of them is disastrously wrong. But wouldn't they
actually take that into account in their win estimates?

Anyway, turns to win is a completely different measure than score at end.

But I found something that may interest you. Julien Kloetzer,
Hiroyuki Iida, and Bruno Bouzy implemented
a UCT program for playing Amazons. They found that for that game,
performance increased when the algorithm took into account winning
margin in addition to win/loss.

I wonder what that means. Perhaps greedy
strategies work better in Amazons than in Go?

The program lost big in the olympiad, though, against a more traditional
program, so it might be premature to draw conclusions...

Here's the research paper:
www.math-info.univ-paris5.fr/~bouzy/publications/KIB-MCAmazons-CGW07.pdf
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Re: unconditional life and death

2007-12-13 Thread Dave Dyer


The standard one is Benson's algorithm
http://senseis.xmp.net/?BensonsAlgorithm

The standard caveat is that this algorithm alone is very weak - it
typically applies to zero stones on a position played out using
Japanese rules.  But you have to start somewhere.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Jason House
On Dec 13, 2007 4:51 PM, Don Dailey [EMAIL PROTECTED] wrote:

 Do you have a suggestion for a specific mechanism for this?


I was mostly just thinking of a file that CGOS looks for that includes bot
names and the preferences.  The don't-play list would need obvious
restrictions like what you've already suggested.  A preferred opponent was
intended more to get additional games against a particular bot in a shorter
period of time.  I was thinking it'd be tough to do and would only affect
the probability of playing a particular opponent.  Honestly, I wouldn't care
if these extra games didn't count for a rating.  If the bias isn't too bad,
maybe it wouldn't really matter that much.
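
For example, something as simple as this hypothetical preferences file is all
the mechanism I mean (format and names made up for illustration):

# cgos_prefs for myBot-1.2
avoid:  myBot-1.1      # never pair me against my own older version
prefer: botX-0.9       # seek extra (possibly unrated) games vs this bot
prefer_weight: 3       # roughly 3x the normal pairing probability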

Here's a motivational example:
  I'm a bot owner who's noticed that against bot X, I lose 80% of my games
because of some peculiar bug in my code.  I fix that code and, like a well
behaved CGOS participant, use a new login.  I then have to wait for my
rating to (somewhat) stabilize and then wait for the games against that bot
to accumulate enough samples to be sure I fixed the problem.  Letting new
bots specify their starting rating may help this process.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread Jason House
On Dec 13, 2007 4:50 PM, David Fotland [EMAIL PROTECTED] wrote:

  I think Martin Mueller published an improvement to benson's algorithm
 that is also proved correct.


Yes.  Safety under alternating play.  It's more generally applicable but I
didn't think it met the needs of the original request.  There's some
guesswork/backupretry that occurs in certain situations with that
algorithm.  I've also seen plenty of other papers on safety solvers.  All
are far more complex than Benson's.  I've always seen Benson's as the
classic example of a safety solver (all papers mention it)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Re: unconditional life and death

2007-12-13 Thread terry mcintyre
Thomas Wolf wrote a life-and-death program some while back, with much stronger 
abilities; he mentions that it only works for fully enclosed positions. 

http://www.qmw.ac.uk/~ugah006/gotools/

You may wish to read http://lie.math.brocku.ca/twolf/papers/mono.pdf

 Dave Dyer is too modest to refer to his own work at 
http://www.andromeda.com/people/ddyer/go/shape-library.html


Terry McIntyre [EMAIL PROTECTED]
They mean to govern well; but they mean to govern. They promise to be kind 
masters; but they mean to be masters. -- Daniel Webster

- Original Message 
From: Dave Dyer [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org; computer-go 
computer-go@computer-go.org
Sent: Thursday, December 13, 2007 2:03:33 PM
Subject: [computer-go] Re: unconditional life and death






The standard one is Benson's algorithm


http://senseis.xmp.net/?BensonsAlgorithm



The standard caveat is that this algorithm alone is very weak - it

typically applies to zero stones on a position played out using

Japanese rules.  But you have to start somewhere.








  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread dhillismail


 -Original Message-
 From: Jason House [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Thu, 13 Dec 2007 3:20 pm
 Subject: Re: [computer-go] MC-UCT and tactical information


 On Dec 13, 2007 2:17 PM, [EMAIL PROTECTED] wrote:


I'd like to start a more specific discussion about ways to combine tactical 
information with MC-UCT. Here's the scenario.

It's the bot's turn and, prior to starting any playouts, it runs a tactical 
analyzer (for want of a better name) that labels each string as unconditionally 
alive, unconditionally dead, conditionally alive, uncertain, or too strong to 
bother testing. For each conditionally alive string, it finds the move(s) to 
kill/rescue it. Each possible move has a critical score indicating the net 
number of stones whose life/death status would be changed for the better if the 
bot moved there. Moves with a non-zero critical score are critical moves. 



 As worded, I don't know if I agree with that formalization of what 
 information will be useful. 
 It's in the right ballpark for the discussion, but will likely shift a bit 
 with any final design. 

 Right. It's not meant to be comprehensive. By... um, coincidence, that's 
the information my bot has access to. I'm soliciting ideas on what other 
information should be included as well as how to use it. It's also a hint at 
the minimum level of detail I'm hoping for.
 ...

 What's an n-gram?

 The idea would be to use n-grams in much the same way as they are used in 
machine language translation. Wikipedia has good descriptions for both. To give 
the flavor of it, an n-gram is a sequence of length n. In machine translation, 
they are usually sequences of words. In this case, it would be a sequence 
of moves. I have a body of examples, the local searches, and I calculate a 
score for each short sequence of moves that came up. Now, in my global 
playouts, when I encounter one of these sequences again, I have a measure of 
goodness for the next move in that sequence. This would be a way of trying to 
generalize between different lines of play.

    A move could be just the coordinates of the stone, or it could include 
pattern information, or whatever. What I *don't* want to do is wind up 
storing every huge hairy hash table that came up in the local searches.
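
To make that concrete, here is a toy sketch of such an n-gram table
(illustration only; names made up, and a real version might key on pattern
info rather than raw coordinates):

from collections import defaultdict

N = 3  # n-gram length: score each move given the previous N-1 moves

# stats[context][move] = [wins, visits]
stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

def record_sequence(moves, won):
    """Harvest n-grams from one local-search line (or playout)."""
    for i in range(len(moves) - N + 1):
        *context, move = moves[i:i + N]
        s = stats[tuple(context)][move]
        s[0] += won
        s[1] += 1

def ngram_score(recent_moves, candidate):
    """Estimated goodness of `candidate` after the last N-1 moves."""
    wins, visits = stats[tuple(recent_moves[-(N - 1):])][candidate]
    return wins / visits if visits else 0.5  # prior for unseen n-grams

# Usage: feed it lines from the local searches, then query in playouts.
record_sequence([(3, 4), (3, 5), (4, 4), (2, 2)], won=1)
print(ngram_score([(3, 5), (4, 4)], (2, 2)))   # -> 1.0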

- Dave Hillis

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Rémi Coulom

Don Dailey wrote:

I don't really know what you mean by one-dimensional.   My
understanding of playing strength is that it's not one-dimensional,
meaning that it is foiled by intransitivities between players with
different styles.    You may be able to beat me,  but I might be able to
beat someone that you cannot.    If that's what you are saying, how
does the kyu/dan system handle it that makes it superior to ELO for
predicting who will win?    Is there some mechanism that measures
playing styles?    I don't see this.  
  


That's what I am saying. The kyu/dan system does not handle it better.


What I THINK you mean is that the gap between various GO ranks is not
consistent in ELO terms.   In other words there is no single constant
that works in my simple formula. I definitely think this is probably
the case but surely it can be easily handled by a slightly more
sophisticated formula that fits the curve.
  


This is not what I meant, but I agree.

What I mean is that if human player H beats computer C1 65% of the time, 
and computer C2 also beats computer C1 65% of the time, then I would 
expect that H would be stronger than C2, especially if both C1 and C2 
are MC programs. If it is the case, then it would make it difficult to 
compare human scale to computer scale. But that is just my intuition.


For instance, against computers, I estimate that Crazy Stone improved 
about 3 stones between this summer and now. But it clearly did not 
improve 3 stones on KGS. I vaguely remember that Sylvain also noticed 
that MoGo could beat GNU go with a 4-stone handicap, but was only 2 
stones stronger than GNU on KGS.


Rémi
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mc analogue

2007-12-13 Thread Forrest Curo
Here is a card game I thought of while considering how to chunk  
moves based on mc outcomes...


It is not in any way equivalent to programming go, but there are  
significant similarities.


You have a deck of 360 cards numbered sequentially. (This is not as  
complex as go, but the tree of potential moves is going to be of  
similar size, narrower in places but always extending to 360 moves  
total. The size could be changed without significantly changing the  
game, so long as the number is even.)


To make a legal move:

1) If your opponent has played a card, and there is a lower card  
available, you must play a lower card.


2) If 1) does not apply, you can play any card.

The game ends when all cards are played, and each player scores the  
total of the numbers on the cards it has played.
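
For anyone who wants to experiment, here is a minimal simulator under one
reading of the rules (assumptions mine: a single shared pool of face-up cards
that either player may play from, and "lower card available" means lower than
the opponent's last card and still in the pool):

import random

def legal_moves(pool, last_played):
    """If the opponent just played and a lower card remains, you must
    play a lower card; otherwise any card in the pool is legal."""
    if last_played is not None:
        lower = [c for c in pool if c < last_played]
        if lower:
            return lower
    return list(pool)

def random_playout(pool, last_played, scores, to_move):
    """Finish the game with uniformly random legal moves; return scores."""
    pool, scores = set(pool), list(scores)
    while pool:
        card = random.choice(legal_moves(pool, last_played))
        pool.remove(card)
        scores[to_move] += card
        last_played, to_move = card, 1 - to_move
    return scores

def mc_choose(pool, last_played, to_move, playouts=100):
    """Flat Monte Carlo: pick the legal move with the best mean result,
    judged only by final scores, per the restriction above."""
    best, best_mean = None, float('-inf')
    for move in legal_moves(pool, last_played):
        rest = set(pool) - {move}
        total = 0
        for _ in range(playouts):
            s = random_playout(rest, move, [0, 0], 1 - to_move)
            total += s[to_move] + move - s[1 - to_move]
        if total / playouts > best_mean:
            best, best_mean = move, total / playouts
    return best

print(mc_choose(set(range(1, 13)), None, 0))   # toy 12-card deck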


A human player can readily analyze the game, seeing that the first  
player wins with correct play, but loses against correct play if he  
misses the right first move.


Our computer players, however, are to be limited to whatever  
information they can derive from the final scores at the end of the  
potential playouts from a particular move. They aren't allowed to read  
the actual numbers on the cards or count the number of branches  
resulting from a move, although they can of course distinguish  
whichever moves they've made in a playout from those the opponent  
played, and when, as an aid to choosing which moves are better in the  
sense of working more often against limited-information play.


So--How many playouts to find the best line of play against 1) another  
computer with the same limitations or 2) a human? At what point in the  
game does it become possible for the poor artificially-handicapped  
program to find a winning line, if one remains?


If two programs play one another under these limitations, it is  
obvious that some moves will be more likely to win than others,  
depending on which cards have been played. Some responses, while  
falling short of ideal play, will still leave better odds of the  
opponent missing whatever winning lines they've left  open, and will  
thus be objectively better than others. Can an mc player recognize  
them? At what point?


Forrest Curo


This message was sent using IMP, the Internet Messaging Program.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Re: unconditional life and death

2007-12-13 Thread Dave Dyer

There's a sort of hierarchy of life-and-death methods, for which
Benson's algorithm is the base.   

My status database is next above that, but it is actually a lookup table
based on a problem solver, such as Wolf's or mine.  The unique thing
about the database is that it could be dropped in to a program which
was prepared to provide the inputs it requires.

Thomas and I compared results and cross-validated our problem solvers
back in the day, so for the set of positions both solvers accepted,
very similar results were produced.  My solver was not limited to closed
shapes, but his was always orders of magnitude faster.  In any case,
other than discussing the techniques used, these solvers aren't any
use to practical programs - there's no solver module anyone can load
and use.

Above this level (of solvers which work on very simple positions)
there is the problem of applying it to a realistic board position.
Much more go-specific knowledge is needed here.  I worked some on
the problem near endgame, but earlier than that is pretty much 
unexplored territory, except of course by real playing programs
which have to apply some heuristics just to function.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread Jason House
On Dec 13, 2007 5:33 PM, [EMAIL PROTECTED] wrote:



  -Original Message-
  From: Jason House [EMAIL PROTECTED]
  To: computer-go computer-go@computer-go.org
  Sent: Thu, 13 Dec 2007 3:20 pm
  Subject: Re: [computer-go] MC-UCT and tactical information

  On Dec 13, 2007 2:17 PM, [EMAIL PROTECTED] wrote:

  I'd like to start a more specific discussion about ways to combine
  tactical information with MC-UCT. Here's the scenario.
 
  It's the bot's turn and, prior to starting any playouts, it runs a
  tactical analyzer (for want of a better name) that labels each string as
  unconditionally alive, unconditionally dead, conditionally alive, uncertain,
  or too strong to bother testing. For each conditionally alive string, it
  finds the move(s) to kill/rescue it. Each possible move, has a critical
  score indicating the net number of stones whose life/death status would be
  changed for the better if the bot moved there. Moves with a non-zero
  critical score are critical moves.



  As worded, I don't know if I agree with that formalization of what
 information will be useful.
  It's in the right ballpark for the discussion, but will likely shift a
 bit with any final design.

  Right. It's not meant to be comprehensive. By... um coincidence,
 that's the information my bot has access to. I'm soliciting ideas on
 what other information should be included as well as how to use it. It's
 also a hint at the minimum level of detail I'm hoping for.
  ...


"Change for the better" seems to imply only a one-sided analysis.  I would
imagine any analysis would have to include both sides.  There are also
important concepts that can come into play, like the number of moves that can
be ignored (or must be ignored) before something happens.  This can be
critical info for ko fights and semeai.

I've done some dabbling (thought experiments) with how I'd like to cache
search results and I'm not yet happy with any of them.  Not taking into
account miai and such logic could lead to excessive storage bloat.  I'd love
to enter a discussion talking just about how to store cached LD search
results.

When I go down this road, I'd probably initially focus on maintaining
connections in playouts and then extend it to maintaining life.  There's
some tricky parts about when severing a connection or giving up a group is
ok.  Deep in a random game, it may be ok to conservatively protect stuff,
giving a sort of quiescence search.
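
As a seed for that storage discussion, the sort of minimal scheme I have in
mind looks like this (a sketch; the key and the cached fields are guesses,
not an established design):

import hashlib

# Cache of local life-and-death results, keyed by a canonical encoding of
# the local region so transpositions from different playouts can share it.
ld_cache = {}

def region_key(stones, region):
    """Canonical key for a local region: sorted (point, color) entries.
    `stones` maps point -> 'b'/'w'; `region` is the set of points searched."""
    enc = ','.join('%d:%d:%s' % (x, y, stones.get((x, y), '.'))
                   for x, y in sorted(region))
    return hashlib.md5(enc.encode()).hexdigest()

def store_result(stones, region, status, tenuki_budget, key_moves):
    """status: 'alive'/'dead'/'unsettled'; tenuki_budget: moves that can be
    ignored before the status changes; key_moves: killing/saving moves."""
    ld_cache[region_key(stones, region)] = (status, tenuki_budget, key_moves)

def lookup(stones, region):
    return ld_cache.get(region_key(stones, region))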
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MC-UCT and tactical information

2007-12-13 Thread terry mcintyre
The rule of thumb I try to follow is to connect when possible; try to 
disconnect enemy groups - but don't bother with cuts if the separated groups 
are alive; don't let yourself be cut off if you can't make two eyes. 

These rules seem to make good sense; they're not just human style for the 
sake of style, but are rooted in the fundamentals of the game. I don't know how 
to capture the subtlety of connecting or cutting only when it makes a difference 
to the outcome in the playouts, but this should make a difference, since it tends 
to favor more efficient play. It's all about making the most of your one play, 
before your opponent gets his turn.

I was playing against a 7 or 8 dan (AGA rating) player last night, and he 
reminded me of a terrible truth: we take turns placing stones. So frustrating 
that I don't get a two-stone tesuji to solve my problems. ;)


Terry McIntyre [EMAIL PROTECTED]
They mean to govern well; but they mean to govern. They promise to be kind 
masters; but they mean to be masters. -- Daniel Webster

- Original Message 
From: Jason House [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Thursday, December 13, 2007 3:30:27 PM
Subject: Re: [computer-go] MC-UCT and tactical information




On Dec 13, 2007 5:33 PM, [EMAIL PROTECTED] wrote:

 -Original Message-
 From: Jason House [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Thu, 13 Dec 2007 3:20 pm
 Subject: Re: [computer-go] MC-UCT and tactical information

 On Dec 13, 2007 2:17 PM, [EMAIL PROTECTED] wrote:

I'd like to start a more specific discussion about ways to combine tactical
information with MC-UCT. Here's the scenario.

It's the bot's turn and, prior to starting any playouts, it runs a tactical
analyzer (for want of a better name) that labels each string as unconditionally
alive, unconditionally dead, conditionally alive, uncertain, or too strong to
bother testing. For each conditionally alive string, it finds the move(s) to
kill/rescue it. Each possible move has a critical score indicating the net
number of stones whose life/death status would be changed for the better if the
bot moved there. Moves with a non-zero critical score are critical moves.

 As worded, I don't know if I agree with that formalization of what
 information will be useful.
 It's in the right ballpark for the discussion, but will likely shift a bit
 with any final design.

 Right. It's not meant to be comprehensive. By... um, coincidence, that's
the information my bot has access to. I'm soliciting ideas on what other
information should be included as well as how to use it. It's also a hint at
the minimum level of detail I'm hoping for.

 ...

"Change for the better" seems to imply only a one-sided analysis.  I would
imagine any analysis would have to include both sides.  There are also
important concepts that can come into play, like the number of moves that can
be ignored (or must be ignored) before something happens.  This can be
critical info for ko fights and semeai.

I've done some dabbling (thought experiments) with how I'd like to cache
search results and I'm not yet happy with any of them.  Not taking into
account miai and such logic could lead to excessive storage bloat.  I'd love
to enter a discussion talking just about how to store cached LD search
results.

When I go down this road, I'd probably initially focus on maintaining
connections in playouts and then extend it to maintaining life.  There's some
tricky parts about when severing a connection or giving up a group is ok.
Deep in a random game, it may be ok to conservatively protect stuff, giving a
sort of quiescence search.








  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] Generative Code Specialisation for High-Performance Monte Carlo Simulations

2007-12-13 Thread Ian Osgood
This might be of interest given the recent interest in Go programming  
in functional languages (Lisp).


http://lambda-the-ultimate.org/node/2533

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Don Dailey


 What I mean is that if human player H beats computer C1 65% of the
 time, and computer C2 also beats computer C1 65% of the time, then I
 would expect that H would be stronger than C2, especially if both C1
 and C2 are MC programs. If it is the case, then it would make it
 difficult to compare human scale to computer scale. But that is just
 my intuition.

 For instance, against computers, I estimate that Crazy Stone improved
 about 3 stones between this summer and now. But it clearly did not
 improve 3 stones on KGS. I vaguely remember that Sylvain also noticed
 that MoGo could beat GNU go with a 4-stone handicap, but was only 2
 stones stronger than GNU on KGS.
This happened in computer chess many years ago.  There was a period of
time when chess playing computers were relatively unknown but then
suddenly became very common.  This is anecdotal, but it appears that
for a certain amount of time the computers continued to improve, while
at the same time humans adapted very quickly to them.  Humans quickly
became educated - but they didn't become educated until computers
approached their playing strength.  

So there is a fairly significant ELO advantage to the human that has
experience playing computers.   I think it's very possible that this is
what you observed.   The advantage of a human, however, has limits.   You
are not going to beat a player 1000 ELO stronger just because you know
how he plays.

Also, we don't see any serious discrepancy in computer vs human ratings
in chess, although it was always imagined.   If you look in the sky you
can imagine interesting shapes, but only because you are looking for
them.   Whenever we think we observe something based on a few data
points it's extremely subject to error.   Thus people often believed
that some given program might be 200 ELO stronger than other programs
but that it would translate into something very modest against
humans.   This NEVER turned out to be true - it was fantasy based on
the continuing need to believe that the improvements were just too good
to be true.  

A little common sense will tell you that this cannot be true, except to a
very limited extent if any.   You would have to believe that the
intransitive rubber band stretches to infinity.   Or you have to do
what Einstein did and regretted, which is to impose an artificial
explanation such as some kind of constant that pushes this back at
higher strength levels.   The main point is that in computer chess
computers improved approximately 2000 ELO against humans over a couple
of decades or so.   But when it's claimed that ELO improvement against
other computers is 4 times that, you imply 8000 ELO points for
computer vs computer!   This is clearly not the case.   That was
generally the claim I heard - 100 ELO improvement but it's someone's
belief that against humans it's only about 25 ELO.  Nonsense.

I could see one thing possibly happening, however.   You might make a
real improvement that doesn't hit a human weakness that hard, and at the
local limited horizon it really may not translate to the same
improvement against humans.   But intransitivity in playing strength is
like a rubber band.   You can only stretch it so far and it fights
back.   You are not going to get A beats B 99% of the time, B beats
C 99% of the time, but A cannot beat C.   Not unless you go way out
of your way to construct programs that have this behavior.   
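
To put numbers on that rubber band (standard Elo formulas; purely
illustrative):

import math

def expected_score(d):
    # Logistic Elo model: expected score at rating gap d.
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

def elo_gap(p):
    # Inverse: rating gap implied by an expected score p.
    return -400.0 * math.log10(1.0 / p - 1.0)

gap = elo_gap(0.99)                   # a 99% score ~ 798 ELO points
print(gap, expected_score(2 * gap))   # transitivity: A vs C ~ 99.99%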

I don't believe multi-dimensional playing strength is much of an
issue.   It exists, but it's not severe.   You are not going to have to
worry that some 4 dan player will beat the world champion because he
happens to have a playing style that the world champion cannot handle.
Otherwise all rating/ranking systems would be useless.   And it
would be easy to construct some really ludicrous cases of intransitivity. 


- Don













 Rémi
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread dhillismail

Impasse: noun,
1. There is no argument so elegant and compelling that it will prove the 
negative, i.e. that making UCT greedier could not possibly lead to more won 
games.
2. Everyone who has tried it one way will have tried some variations. It's not 
as if it takes a lot of code. No one has reported improvement to date. Maybe 
they will, but not without getting their hands dirty and doing some tests.

- Dave Hillis

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread Don Dailey


[EMAIL PROTECTED] wrote:

 Impasse: noun,
 1. There is no argument so elegant and compelling that it will prove
 the negative that making UCT greedier could not possibly lead to more
 won games.
I could hardly fail to disagree with you less.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Hall of fame for CGOS

2007-12-13 Thread Hideki Kato
Hi Don,

Don Dailey: [EMAIL PROTECTED]:
I want to clarify this:

The new CGOS chart uses bayeselo to recalculate all the ratings for the
players - it does not use CGOS ratings.

Hm, now I remember that quite a few games ended wrongly and were
mis-scored because of server hang-ups.  In addition, my bot has
recently lost some games due to, perhaps, network delays of over one
minute.  Moreover there might be many bots with different minor
versions running under the same login name.

Such accidents and/or troubles were not so severe before, because we
could see and mind only the current ratings, but that is changing now.

I think this excellent idea is premature unless we can exclude
wrongly scored games.  Otherwise the ratings will include errors that
we cannot estimate.  That would not be suitable for the name 'Hall
of fame'.

-Hideki

I may update this list each month.   I wanted to make it a top 100 list
but FatMan does not even make the top 100.


- Don





Don Dailey wrote:
 I put up a web page that displays EVERY player who has played at least
 200 games on CGOS.

 It uses the bayeselo program that Rémi authored. 


   http://cgos.boardspace.net/9x9/hof.html


 I'm not sure I used the program correctly - it's rather complicated and
 I'm not that great with statistics.   If anyone is interested in the
 settings I used I can provide that.

 - Don

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Hall of fame for CGOS

2007-12-13 Thread Don Dailey
Many strong programs have 100% scores against many opponents and many
games.   They cannot be hanging up very often.

When the server hangs,  the current game you are playing is not scored. 
  I don't think there is a major problem here.  

As far as network problems go, CGOS considers them part of the computing
system.   If your provider is having glitches that affect your program I
can't account for that.   It's the same if you lose your connection and
lose on time as a result.   It's part of your equipment - as if
you had automobile failure at the Nascar races or your tennis racket
broke during an important point.

- Don

 


Hideki Kato wrote:
 Hi Don,

 Don Dailey: [EMAIL PROTECTED]:
   
 I want to clarify this:

 The new CGOS chart uses bayeselo to recalculate all the ratings for the
 players - it does not use CGOS ratings.
 

 Hm, now I remembered that there were not so few games wrongly ended 
 and scored by server's hang-up.  In addition, recently my bot has 
 lost some games by, perhaps, network delay over one minute.  
 Moreover there might be many bots with different minor versions 
 under same login name.

 Such accidents and/or troubles were not so severe because we could see 
 and mind only current ratings but it's changing now.

 I think this excellent idea is too early to do unless we can exclude 
 wrongly scored games.  Otherwise the ratings include some error that 
 we cannot estimate.  It should not be suitable for the name of 'Hall 
 of fame'.

 -Hideki

   
 I may update this list each month.   I wanted to make it a top 100 list
 but FatMan does not even make the top 100.


 - Don





 Don Dailey wrote:
 
 I put up a web page that displays EVERY player who has played at least
 200 games on CGOS.

 It uses the bayeselo program that Rémi authored. 


   http://cgos.boardspace.net/9x9/hof.html


 I'm not sure I used the program correctly - it's rather complicated and
 I'm not that great with statistics.   If anyone is interested in the
 settings I used I can provide that.

 - Don

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
   
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
 
 --
 [EMAIL PROTECTED] (Kato)
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] low-hanging fruit - yose

2007-12-13 Thread dhillismail
I got a smile out of that. 

_ Dave Hillis

-Original Message-
From: Don Dailey [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Thu, 13 Dec 2007 8:52 pm
Subject: Re: [computer-go] low-hanging fruit - yose





[EMAIL PROTECTED] wrote:

 Impasse: noun,
 1. There is no argument so elegant and compelling that it will prove
 the negative that making UCT greedier could not possibly lead to more
 won games.
I could hardly fail to disagree with you less.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



More new features than ever.  Check out the new AIM(R) Mail ! - 
http://webmail.aim.com
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Hall of fame for CGOS

2007-12-13 Thread Hideki Kato
Why don't you mention the problem of several versions under one
login name?

And I considered CGOS not a Nascar-type commercial race but a
field to help developers improve their programs, say, in some
academic sense.

What is your reason for naming it a 'Hall of fame'?  I'm not Western
and can only estimate the weight of the name, but it's a heavy and
important one, isn't it?

-Hideki

Don Dailey: [EMAIL PROTECTED]:
Many strong programs have 100% scores against many opponents and many
games.   They cannot be hanging up very often.

When the server hangs,  the current game you are playing is not scored. 
  I don't think there is a major problem here.  

As far as network problems CGOS considers that part of the computing
system.   If you provider is having glitches that affect your program I
can't account for that.It's the same if you lose your connection and
lose on time as a result.It's part of your equipment, it's as if
you had automobile failure at the Nascar races or your tennis racket
breaks during an important point.

- Don

 


Hideki Kato wrote:
 Hi Don,

 Don Dailey: [EMAIL PROTECTED]:
   
 I want to clarify this:

 The new CGOS chart uses bayeselo to recalculate all the ratings for the
 players - it does not use CGOS ratings.
 

 Hm, now I remembered that there were not so few games wrongly ended 
 and scored by server's hang-up.  In addition, recently my bot has 
 lost some games by, perhaps, network delay over one minute.  
 Moreover there might be many bots with different minor versions 
 under same login name.

 Such accidents and/or troubles were not so severe because we could see 
 and mind only current ratings but it's changing now.

 I think this excellent idea is too early to do unless we can exclude 
 wrongly scored games.  Otherwise the ratings include some error that 
 we cannot estimate.  It should not be suitable for the name of 'Hall 
 of fame'.

 -Hideki

   
 I may update this list each month.   I wanted to make it a top 100 list
 but FatMan does not even make the top 100.


 - Don





 Don Dailey wrote:
 
 I put up a web page that displays EVERY player who has played at least
 200 games on CGOS.

 It uses the bayeselo program that Rémi authored. 


   http://cgos.boardspace.net/9x9/hof.html


 I'm not sure I used the program correctly - it's rather complicated and
 I'm not that great with statistics.   If anyone is interested in the
 settings I used I can provide that.

 - Don

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
   
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
 
 --
 [EMAIL PROTECTED] (Kato)
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Python bindings for libego?

2007-12-13 Thread Darren Cook
 I was thinking that it could be quicker to do prototyping in something
 like python, while having fast low-level functions in C. ...
 
 I have done a Python binding for the current libego. You can get it from
 http://mjw.woodcraft.me.uk/2007/pyego/ .
 
 I did this as an exercise in using Pyrex (in fact, I ended up using a
 variant called Cython) to wrap a C++ library. I'm not planning to use it
 myself, and I'm not particularly planning to update it for future
 releases of libego.

Hi,
I wonder if you have anything to say about how the development went? I'm
especially interested in whether there was some aspect of the way
libego is written that made it either hard work for you, or made it
inefficient to wrap?

In notes.txt you say the speed dropped from 70-75 kpps to 43 kpps when
used in the wrapper (compared to 1 or 2.5 kpps in native Python). And
you say this is due to the overhead of converting and checking
parameters? If you increase the number of playouts by a factor of 10,
does it close in on 70 kpps? I wondered if it might be possible that some
compiler optimization got missed, or is just not possible, when built as
an extension?
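
(A quick back-of-the-envelope check of those numbers, assuming the slowdown
is purely a fixed per-call cost:)

r_native, r_wrapped = 70e3, 43e3            # playouts per second
overhead = 1 / r_wrapped - 1 / r_native     # ~9.0e-6 s of fixed cost per call
for k in (1, 10, 100):                      # playouts batched per call
    print(k, round(1 / (1 / r_native + overhead / k)))
# -> 1: 43000, 10: ~65900, 100: ~69600; batching should close most of the gap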

Darren
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Where and How to Test the Strong Programs?

2007-12-13 Thread Christopher Rosin
Hi.  My program greenpeep is currently UCT-based, with some MoGo-like
enhancements and some additional learning.
I described it more here:
http://computer-go.org/pipermail/computer-go/2007-October/011438.html
http://computer-go.org/pipermail/computer-go/2007-November/011865.html

Regarding the current discussion, at this point I have very little data for
greenpeep against humans.

-Chris Rosin


On Dec 13, 2007 12:09 PM, David Fotland [EMAIL PROTECTED] wrote:

 Isn't Greenpeep an alpha-beta searcher, not UCT/MC?

 Since Go ranks are based on handicap stones, and 100 ELO points implies a
 particular winning percentage, it would be an unlikely coincidence if 1
 rank were exactly 100 ELO points.  Any web site that claims this must be
 wrong :) and should have little credibility.
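
(For reference, the logistic Elo model behind that winning percentage, as a
quick check:)

def expected_score(d):
    # Expected score for a player d rating points stronger.
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

print(expected_score(100))   # ~0.64: what "1 rank = 100 ELO" would imply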

 David


 
  The strongest bot on CGOS all time list seems to be  greenpeep0.5.1
  http://cgos.boardspace.net/9x9/cross/greenpeep0.5.1.html with a
  rating
  of 2621.
 
  This implies it is almost equal to a 5 Dan player - which doesn't sound
  right to me.However,  this could be fluky since it is as at the
  extreme end of the scale.  It would be great if this same program
  could play some strong humans at the equivalent time control on KGS at
  9x9 and we could adjust the difference between ranks accordingly.
 
  I suspect there is more than 100 ELO between ranks at 9x9.
 
  - Don
 
 
 
  Don Dailey wrote:
   Christoph,
  
   Your bayeselo rating is 1942 on CGOS.  I compiled a table that
  has
   all players with 50 games or more which can be found here:
  
 http://cgos.boardspace.net/9x9/hof2.html
  
  
   - Don
  
  
  
  
  
  
  
  
  
  
   Christoph Birk wrote:
  
   On Tue, 11 Dec 2007, Don Dailey wrote:
  
   Christoph,
   Let me know when you are finished,  what name you are playing under
  and
   I will do the bayeselo thing to get a better figure.
  
   I am playing using the 'tast-3k' account. Right, now I have 71 games
   and a rating of 1979 ELO.
  
  
Also, I can
   throw out any games that were irregular if you can identify them,
  such
   as if  a match started when you were not looking or your interface
  got
   glitchy or something.
  
   Since I added the GUI I lost no games due to software problems.
   Only a few won games lost to human stupidity :-)
  
   I will take a break over the holidays, maybe playing a few more
   games in the new year, but I guess for my purposes a zero-point
  3k-AGA ~=~ 2000 CGOS-ELO
   is close enough. Unless we get some other (AGA or KGS) rated
    players it does not make sense to get a more precise rating for the
   scale.
  
   Christoph
   ___
   computer-go mailing list
   computer-go@computer-go.org
   http://www.computer-go.org/mailman/listinfo/computer-go/
  
  
  
  
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Hall of fame for CGOS

2007-12-13 Thread Don Dailey


Hideki Kato wrote:
 Why don't you mention the several versions on one login name 
 problem?
   
I don't consider it a major problem.  The theory is that a big
improvement against versions of the same program might not translate to
equivalent improvements vs other programs.   I want to see that proven
convincingly before I buy into it; it would take thousands of games to
prove this unless the effect was quite large, so I won't accept anecdotal
evidence.   I'm not saying this doesn't happen, but it's a
superstition until proven otherwise.  

Still, I would prefer not to rate games between members of the same
family, just for the sake of appearances and to avoid accusations of
nepotism.  (Although you can't really prevent nepotism.) 

 And, I considered CGOS is not the Nascar type commercial races but a 
 field to help developers to improve their progrms, say, in some 
 academic sense.
   
I think it is an appropriate analogy.   It's part of your equipment. 
Every sport or field of endeavor has these variables beyond your control.

But more to the point,  if I could take this variable out of the
equation I would gladly do so.   But I cannot detect the difference
between a network outage and cheating. 

 What is your reason to name it as 'Hall of fame'?  I'm not Western 
 and can just estimate the value of the name but it's so heavy and 
 important, isn't it?

   
Hall of fame is not a good name and it's not really called that.   It's
the 9x9 all-time ratings, but I almost called it hall of fame because
originally I intended to only include the top 50 players.   I chose not
to for 3 reasons:

   1.  Many players are represented multiple times.
   2.  It's more useful to be able to see every program.
   3.  Fame doesn't imply you are the best.  You might just be
sentimental favorite like gnugo.

- Don


 -Hideki

 Don Dailey: [EMAIL PROTECTED]:
   
 Many strong programs have 100% scores against many opponents and many
 games.   They cannot be hanging up very often.

 When the server hangs,  the current game you are playing is not scored. 
  I don't think there is a major problem here.  

 As far as network problems CGOS considers that part of the computing
 system.   If you provider is having glitches that affect your program I
 can't account for that.It's the same if you lose your connection and
 lose on time as a result.It's part of your equipment, it's as if
 you had automobile failure at the Nascar races or your tennis racket
 breaks during an important point.

 - Don




 Hideki Kato wrote:
 
 Hi Don,

 Don Dailey: [EMAIL PROTECTED]:
   
   
 I want to clarify this:

 The new CGOS chart uses bayeselo to recalculate all the ratings for the
 players - it does not use CGOS ratings.
 
 
 Hm, now I remembered that there were not so few games wrongly ended 
 and scored by server's hang-up.  In addition, recently my bot has 
 lost some games by, perhaps, network delay over one minute.  
 Moreover there might be many bots with different minor versions 
 under same login name.

 Such accidents and/or troubles were not so severe because we could see 
 and mind only current ratings but it's changing now.

 I think this excellent idea is too early to do unless we can exclude 
 wrongly scored games.  Otherwise the ratings include some error that 
 we cannot estimate.  It should not be suitable for the name of 'Hall 
 of fame'.

 -Hideki

   
   
 I may update this list each month.   I wanted to make it a top 100 list
 but FatMan does not even make the top 100.


 - Don





 Don Dailey wrote:
 
 
 I put up a web page that displays EVERY player who has played at least
 200 games on CGOS.

 It uses the bayeselo program that Rémi authored. 


   http://cgos.boardspace.net/9x9/hof.html


 I'm not sure I used the program correctly - it's rather complicated and
 I'm not that great with statistics.   If anyone is interested in the
 settings I used I can provide that.

 - Don

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
   
   
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
 
 
 --
 [EMAIL PROTECTED] (Kato)
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
   
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
 
 --
 [EMAIL PROTECTED] (Kato)
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
___
computer-go mailing list

Re: [computer-go] Hall of fame for CGOS

2007-12-13 Thread Hideki Kato
Your sentences make me strongly believe it's too early.

I am not against your idea; again, I am just claiming it's too early.
Following your analogy to sports, there should be some guarantee of
fairness and agreement among participants.

Our presupposition was that only recent results were important.  Any
temporal confusion of the ratings would be fixed soon.  So I could ignore,
or didn't mind, wrongly scored games.  You are, however, changing that
without any notification or agreement.

I think the problem of different versions running under the same
login name cannot be ignored.  We have to announce this and make sure
almost all participants won't do it.

Yes, network troubles are out of our control, and other troubles
and/or accidents will happen.  That's why we need some
experimentation before using the name 'Hall of fame'.

-Hideki

Don Dailey: [EMAIL PROTECTED]:


Hideki Kato wrote:
 Why don't you mention the several versions on one login name 
 problem?
   
I don't consider it a major problem.  The theory is that a big
improvement against versions of the same program might not translate to
equivalent improvements vs other programs.I want to see that proven
convincingly before I buy into it,  it would take thousands of games to
prove this unless the effect was quite large so I won't accept anecdotal
evidence.I'm not saying this doesn't happen,   but it's a
superstition until proven otherwise.  

Still, I would prefer to not rate games between members of the same
family just for the sake of appearance and accusations of nepotism.  
(Although you can't really prevent nepotism.) 

 And, I considered CGOS is not the Nascar type commercial races but a 
 field to help developers to improve their progrms, say, in some 
 academic sense.
   
I think it is an appropriate analogy.   It's part of your equipment. 
Every sport or field of endeavor has these variables beyond your control.

But more to the point,  if I could take this variable out of the
equation I would gladly do so.   But I cannot detect the difference
between a network outage and cheating. 

 What is your reason to name it as 'Hall of fame'?  I'm not Western 
 and can just estimate the value of the name but it's so heavy and 
 important, isn't it?

   
Hall of fame is not a good name and it's not really called that.   It's
the 9x9 all time ratings but I almost called it hall of fame because
originally I intended to only include the top 50 players.I chose not
to for 3 reasons:

   1.  Many players are represented multiple times.
   2.  It's more useful to be able to see every program.
   3.  Fame doesn't imply you are the best.  You might just be
sentimental favorite like gnugo.

- Don


 -Hideki

 Don Dailey: [EMAIL PROTECTED]:
   
 Many strong programs have 100% scores against many opponents and many
 games.   They cannot be hanging up very often.

 When the server hangs,  the current game you are playing is not scored. 
  I don't think there is a major problem here.  

 As far as network problems CGOS considers that part of the computing
 system.   If you provider is having glitches that affect your program I
 can't account for that.It's the same if you lose your connection and
 lose on time as a result.It's part of your equipment, it's as if
 you had automobile failure at the Nascar races or your tennis racket
 breaks during an important point.

 - Don




 Hideki Kato wrote:
 
 Hi Don,

 Don Dailey: [EMAIL PROTECTED]:
   
   
 I want to clarify this:

 The new CGOS chart uses bayeselo to recalculate all the ratings for the
 players - it does not use CGOS ratings.
 
 
 Hm, now I remembered that there were not so few games wrongly ended 
 and scored by server's hang-up.  In addition, recently my bot has 
 lost some games by, perhaps, network delay over one minute.  
 Moreover there might be many bots with different minor versions 
 under same login name.

 Such accidents and/or troubles were not so severe because we could see 
 and mind only current ratings but it's changing now.

 I think this excellent idea is too early to do unless we can exclude 
 wrongly scored games.  Otherwise the ratings include some error that 
 we cannot estimate.  It should not be suitable for the name of 'Hall 
 of fame'.

 -Hideki

   
   
 I may update this list each month.   I wanted to make it a top 100 list
 but FatMan does not even make the top 100.


 - Don





 Don Dailey wrote:
 
 
 I put up a web page that displays EVERY player who has played at least
 200 games on CGOS.

 It uses the bayeselo program that Rémi authored. 


   http://cgos.boardspace.net/9x9/hof.html


 I'm not sure I used the program correctly - it's rather complicated and
 I'm not that great with statistics.   If anyone is interested in the
 settings I used I can provide that.

 - Don

 ___
 computer-go mailing list
 computer-go@computer-go.org
 

Re: [computer-go] Hall of fame for CGOS

2007-12-13 Thread Don Dailey
Don't worry Hideki,

Nothing has changed on CGOS - only something has been added, and it has
no effect on what is already there.  

The standard current standings page also stays the same.   No change I
promise. 

Different versions of a program running on CGOS have never been an issue
before, and nothing has changed in that regard either.   

Right now if you have 2 programs running on CGOS they might occasionally
play each other.   They each get their own rating independent of the
other.That's how it's always been and that has not changed nor is it
broken.

But that is the ONLY thing we might actually change later.   And no
matter how you look at it, you cannot ENFORCE that change.   

Also, even though we can ask people to never change their program unless
they give it a new login name,  we can't enforce that, nor is it
reasonable to try. I might have a program with an on-line learning
algorithm which improves itself over time - it would be unreasonable to
ask me not to put that on CGOS.   

Are you bothered by the fact that programs which have improved over
time will suffer on the all-time list because their prior bad results
still count against them?    Don't worry about that.   I will
probably put a time-limit on the games - perhaps I will only include the
games of the previous 12 months.   This is going to be a list that is
updated every month.  

Also, there is no Hall of Fame.   It's only a  list to show ALL
players over time and it uses bayeselo instead of the standard CGOS elo
system (which doesn't change.) 
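
For the curious, the core of what bayeselo computes is a Bradley-Terry fit
by minorization-maximization; a bare-bones sketch, leaving out bayeselo's
prior and its draw/first-move-advantage handling (example data made up):

import math
from collections import defaultdict

def fit_elo(games, iters=200):
    """Bare-bones Bradley-Terry fit via minorization-maximization.
    games: list of (winner, loser) pairs.  Returns ratings centered on 0."""
    players = {p for g in games for p in g}
    gamma = {p: 1.0 for p in players}
    wins = defaultdict(float)
    pair_count = defaultdict(float)   # games per unordered pair
    for w, l in games:
        wins[w] += 1.0
        pair_count[frozenset((w, l))] += 1.0
    for _ in range(iters):
        for p in players:
            # MM update: gamma_p = wins_p / sum_q n_pq / (gamma_p + gamma_q)
            denom = sum(n / (gamma[p] + gamma[next(iter(pair - {p}))])
                        for pair, n in pair_count.items() if p in pair)
            if denom:
                gamma[p] = wins[p] / denom
    mean = sum(math.log10(g) for g in gamma.values()) / len(gamma)
    return {p: round(400.0 * (math.log10(g) - mean)) for p, g in gamma.items()}

games = [("A", "B")] * 2 + [("B", "A")] + [("B", "C")] * 3 + [("C", "B")]
print(fit_elo(games))   # A a bit above B, B well above C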

- Don




Hideki Kato wrote:
 Your sentences make me strongly believe it's too early.

 I won't be against your idea. Again, just claiming it's too early.  
 Following your analogy to sports, there should be some gurantee of 
 fairness and agreement of participants.

 Our presupposition was that only recent results were important.  Any 
 temporal confusion of ratings would be fixed soon.  So I could ignore 
 or didn't mind wrong scored games.  You are, however, changing it 
 without any notification nor agreement.

 I think the problem of different versions are running with the same 
 login names cannot be ignored.  We have to announce and make sure 
 almost all perticipants won't do it.

 Yes, network troubles are out of our control, some other troubles 
 and/or accidents will happen.  That's why we have to have some 
 experiments before using the name of 'Hall of fame'.

 -Hideki

 Don Dailey: [EMAIL PROTECTED]:
   
 Hideki Kato wrote:
 
 Why don't you mention the several versions on one login name 
 problem?
   
   
 I don't consider it a major problem.  The theory is that a big
 improvement against versions of the same program might not translate to
 equivalent improvements vs other programs.I want to see that proven
 convincingly before I buy into it,  it would take thousands of games to
 prove this unless the effect was quite large so I won't accept anecdotal
 evidence.I'm not saying this doesn't happen,   but it's a
 superstition until proven otherwise.  

 Still, I would prefer to not rate games between members of the same
 family just for the sake of appearance and accusations of nepotism.  
(Although you can't really prevent nepotism.) 

 
 And, I considered CGOS is not the Nascar type commercial races but a 
 field to help developers to improve their progrms, say, in some 
 academic sense.
   
   
 I think it is an appropriate analogy.   It's part of your equipment. 
 Every sport or field of endeavor has these variables beyond your control.

 But more to the point,  if I could take this variable out of the
 equation I would gladly do so.   But I cannot detect the difference
 between a network outage and cheating. 

 
 What is your reason to name it as 'Hall of fame'?  I'm not Western 
 and can just estimate the value of the name but it's so heavy and 
 important, isn't it?

   
   
 Hall of fame is not a good name and it's not really called that.   It's
 the 9x9 all time ratings but I almost called it hall of fame because
 originally I intended to only include the top 50 players.I chose not
 to for 3 reasons:

   1.  Many players are represented multiple times.
   2.  It's more useful to be able to see every program.
   3.  Fame doesn't imply you are the best.  You might just be
 sentimental favorite like gnugo.

 - Don


 
 -Hideki

 Don Dailey: [EMAIL PROTECTED]:
   
   
 Many strong programs have 100% scores against many opponents and many
 games.   They cannot be hanging up very often.

 When the server hangs,  the current game you are playing is not scored. 
  I don't think there is a major problem here.  

 As far as network problems CGOS considers that part of the computing
 system.   If you provider is having glitches that affect your program I
 can't account for that.It's the same if you lose your connection and
 lose on time as a result.It's part of your equipment, it's as if
 you had automobile 

Re: [computer-go] Re: Lisp time

2007-12-13 Thread Stefan Nobis
Don Dailey [EMAIL PROTECTED] writes:

 I thinks it's very difficult to outperform C since C really is just
 about at the level of assembly language.

No, in special cases it's not that hard to outperform C, because the
language spec dictates some not-so-efficient details. C has an ABI and
its specification is optimized for the general case. Nearly every
design decision has advantages and disadvantages. So if you look at
the spec it's clearly possible to construct cases that are
exceptionally bad for this specific spec. If you have a language
without these restrictions it's quite possible to get better results in
such constructed cases.

So yes, it may be difficult and maybe most or even all cases in which
it's possible to outperform C with Lisp are unimportant for any
practical purposes, but nevertheless it's possible.

And by the way I don't think C is really the best language for maximum
performance. Just think about Fortran. AFAIK in its main domain it
easily outperforms C. There are also languages like OCaml, Ada, Forth
and quite some others with really capable compilers and language specs
that leave more freedom to compiler writers and thus leave more room
for different kinds of optimizations.

So the claim that C is the one high-level language with which
you can get maximum performance is quite an urban legend and
completely unjustified.

 I think the right approach to a language faster than C is to simply
 write a better C that is specifically designed to have more
 opportunities for optimization.

C is a really crappy language, especially from the performance point of
view. It's so low-level that you are able to get really good
performance despite all the odds of the language. You have to program
around all the performance pitfalls; you have to do all performance
optimizations yourself. C is no help here. So why should C be a good
starting point for a language striving to make high performance
computing easy?

BTW: No, Lisp is also not a very good starting point for this special
goal, but I would say Lisp would make a much better start than C,
because Lisp shows you how much a good language and compiler can ease
the pain for the developer, while in C he is all alone.

 I really don't think a truly higher level language will ever beat C
 or a performance improved C.

Do you have any real evidence - have you done any scientific benchmarks with
languages like OCaml, Oz, Forth, Fortran, Ada, Scheme, and many others -
to make your claim any more than pure guesswork?

 There is some hope that a JIT can do some optimizations not possible
 with a static compiler - but even if this is the case C could follow
 this path too.

It's not easy, and maybe not even possible, for C to follow this route,
because C code (source and object) doesn't carry very much
information. Higher-level languages provide much more information
about what's going on, what data is in play, and the like. Or they
impose many more restrictions (like restrictive type systems). All this
additional information and/or these restrictions make some algorithms
for inference tractable or even fast. Without them you are lost. So
no, there are no hints that C would ever get such optimizations.

-- 
Until the next mail...,
Stefan.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/