Re: [computer-go] 9x9 study rolloff

2008-01-31 Thread Sylvain Gelly
Hi Don,

As I said, it is confusing :). limitTreeSize does not make much sense
now that collectorLimitTreeSize exists, so it is more or less
historical. Just set limitTreeSize to something bigger than
collectorLimitTreeSize and forget about it. collectorLimitTreeSize is
the size at which the garbage collection occurs; it is the only
interesting variable. But if limitTreeSize is smaller than that, the
garbage collection will not occur. Sorry for the confusion.
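
To make the relationship concrete, here is a hypothetical sketch of such
a two-limit scheme (illustrative Python with invented names and numbers,
not MoGo's actual code):

    # Hypothetical two-limit scheme; names and numbers are invented.
    COLLECTOR_LIMIT_TREE_SIZE = 400_000   # garbage collection triggers here
    LIMIT_TREE_SIZE = 1_000_000           # hard cap, never reached if GC runs first

    class Tree:
        def __init__(self):
            self.num_nodes = 0

        def prune_least_visited(self):
            # GC step: drop rarely visited subtrees to free memory (details omitted).
            self.num_nodes //= 2

        def maybe_add_node(self):
            if self.num_nodes >= LIMIT_TREE_SIZE:
                return False                   # hard limit: stop growing
            if self.num_nodes >= COLLECTOR_LIMIT_TREE_SIZE:
                self.prune_least_visited()     # normal case: collect, then keep growing
            self.num_nodes += 1
            return True

If the hard limit were set below the collector limit, the tree would stop
growing before the collector ever ran, which is exactly the situation
described above.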


Sylvain

2008/2/1, Don Dailey <[EMAIL PROTECTED]>:
> Sylvain,
>
> These 2 parameters are confusing to me.   --collectorLimitTreeSize
> sounds like it limits the tree size to whatever your settings are,  but
> so does  --limitTreeSize. What is the difference between a tree and
> a collector tree?
>
> I assume the collector is a garbage collector of some sort?  My guess
> is that the collector limit is the largest tree allowed after collection
> and the --limitTreeSize is how big it must get before collection?
>
> - Don
>
>
>
>
> Sylvain Gelly wrote:
> > Hi,
> >
> > With such a large number of playouts, the tree size limit (and so
> > heavy pruning) is certainly a possible hypothesis. The simplest way to
> > test it would be to run the same MoGo_17 or _18 with a much bigger
> > tree (taking more memory). --collectorLimitTreeSize is by default
> > 40 (number of nodes). If you want to increase beyond 100, you
> > should also add --limitTreeSize 200 (this limitTreeSize does not
> > make much sense with the pruning, but it is a hard limit which,
> > whatever happens, will not be reached... modulo bugs ;))
> >
> > Sylvain
> >
> > 2008/1/31, Janzert <[EMAIL PROTECTED]>:
> >
> >> I haven't seen anyone else mention this, although I may have missed it
> >> in one of the previous discussions.
> >>
> >> I find it pretty amazing that both Mogo and Fatman are leveling off at
> >> exactly, or almost exactly, the same number of playouts (i.e. Fatman lvl
> >> 14 == Mogo lvl 18 == 8388608 playouts). Could it simply be that they
> >> have run out of memory to build a larger tree and are starting to prune
> >> branches that would become critical if they had the space to explore them?
> >>
> >> Janzert
> >>
> >> ___
> >> computer-go mailing list
> >> computer-go@computer-go.org
> >> http://www.computer-go.org/mailman/listinfo/computer-go/
> >>
> >>
> > ___
> > computer-go mailing list
> > computer-go@computer-go.org
> > http://www.computer-go.org/mailman/listinfo/computer-go/
> >
> >
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 9x9 study rolloff

2008-01-31 Thread Don Dailey
Sylvain,

These 2 parameters are confusing to me.   --collectorLimitTreeSize
sounds like it limits the tree size to whatever your settings are,  but
so does  --limitTreeSize. What is the difference between a tree and
a collector tree?   

I assume the collector is a garbage collector of some sort?  My guess
is that the collector limit is the largest tree allowed after collection
and the --limitTreeSize is how big it must get before collection?  

- Don




Sylvain Gelly wrote:
> Hi,
>
> With such a large number of playouts, the tree size limit (and so
> heavy pruning) is certainly a possible hypothesis. The simplest way to
> test it would be to run the same MoGo_17 or _18 with a much bigger
> tree (taking more memory). --collectorLimitTreeSize is by default
> 40 (number of nodes). If you want to increase beyond 100, you
> should also add --limitTreeSize 200 (this limitTreeSize does not
> make much sense with the pruning, but it is a hard limit which,
> whatever happens, will not be reached... modulo bugs ;))
>
> Sylvain
>
> 2008/1/31, Janzert <[EMAIL PROTECTED]>:
>   
>> I haven't seen anyone else mention this, although I may have missed it
>> in one of the previous discussions.
>>
>> I find it pretty amazing that both Mogo and Fatman are leveling off at
>> exactly, or almost exactly, the same number of playouts (i.e. Fatman lvl
>> 14 == Mogo lvl 18 == 8388608 playouts). Could it simply be that they
>> have run out of memory to build a larger tree and are starting to prune
>> branches that would become critical if they had the space to explore them?
>>
>> Janzert
>>
>> ___
>> computer-go mailing list
>> computer-go@computer-go.org
>> http://www.computer-go.org/mailman/listinfo/computer-go/
>>
>> 
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Re: Move Prediction and Strength in Monte-Carlo Go Program

2008-01-31 Thread Hideki Kato
Hi Rémi and all,

It's not the final version of his thesis; rather, it has some (or a
lot of :) errors.  Please wait for the final version.

-Hideki

Rémi Coulom: <[EMAIL PROTECTED]>:
>Hi,
>
>I found the Master Thesis of Nobuo Araki is available online:
>http://ark.qp.land.to/main.pdf
>
>Abstract:
>Recently in the Go program, there was a breakthrough by the Monte-Carlo 
>method using
>a game tree search method called UCT (UCB applied to trees, UCB stands 
>for Upper Confidence
>Bounds) in combination with the reduction of search space by move 
>prediction. By
>this method, Go programs easily become stronger than existing programs. 
>However, there
>are hardly any studies concerning the relationship between the strength 
>of a program, and
>the accuracy of move prediction, which is integrated into the 
>Monte-Carlo method; therefore,
>we cannot assume the direction of future research that makes stronger 
>programs. In this
>study, we developed a move prediction system based on machine learning 
>techniques, and
>researched the relationship between the accuracy of move prediction, and 
>the strength of
>Monte-Carlo method. Our move prediction system based on the maximum 
>entropy method
>attained top level accuracies of those days. Furthermore, it became 
>clear that even when
>the move prediction accuracy goes higher, the programs do not always 
>become stronger. We
>investigated the reasons behind this result. Additionally, we have 
>attempted to create a Go
>player by enforcing move prediction, but the result was not beyond 
>satisfactory. We will also
>describe the reasons behind this result.
>
>Rémi
>___
>computer-go mailing list
>computer-go@computer-go.org
>http://www.computer-go.org/mailman/listinfo/computer-go/
--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] not "Go knowledge and UCT"

2008-01-31 Thread Christoph Birk

On Thu, 31 Jan 2008, terry mcintyre wrote:
It has to be said that the game of Go differs from Chess in an 
important way.
There are many games where a skilled player can definitively say, this 
game is won by so-and-so regardless of how clever the opponent may be, 
unless so-and-so makes an egregious blunder.


This is true for both Chess and Go.

Christoph

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Re: Move Prediction and Strength in Monte-Carlo Go Program

2008-01-31 Thread 荒木伸夫
Hello, Coulom. I'm Nobuo Araki.

Thank you for reading my thesis. However, this thesis is a first version, not 
the final version, so there are too few experiments. Mr. Hideki Kato also sent 
me many warnings about this thesis, for example "the English is too bad." You 
may be confused while reading my English... sorry.

Anyway, thanks again.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] not "Go knowledge and UCT"

2008-01-31 Thread terry mcintyre
It has to be said that the game of Go differs from Chess in an important way.

There are many games where a skilled player can definitively say, this game is 
won by so-and-so regardless of how clever the opponent may be, unless so-and-so 
makes an egregious blunder.

Unlike chess, the Go board tends to resolve during the midgame into separable 
regions which a skilled human player can definitively analyze. I have in front 
of me a slim volume "Go Proverbs"; on page 7, there is a 7x7 shape which takes 
up about a quarter of the board. If white plays correctly, black is left with a 
dead "rabbity six" eyespace, and is totally dead. Proper analysis by black 
would recognize this, and either gain compensation elsewhere, or resign. 

Just last night, I watched a 7 dan player annihilate a shodan by application of 
similar analysis.

Today's programs, impressive as they are, don't yet have a handle on this.

I'm hoping that someone will figure out how to graft a top-level tactical 
analysis program to UCT and combine the strengths of both. 




  


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] not "Go knowledge and UCT"

2008-01-31 Thread dhillismail



That's an interesting story. I just wish I hadn't precipitated it by sharing my 
blinding flash of the obvious with the whole list. You are correct, of course. 
I got so focused on something that I didn't see the forest for the trees. 

- Dave Hillis

-Original Message-
From: Don Dailey <[EMAIL PROTECTED]>
To: computer-go 
Sent: Thu, 31 Jan 2008 3:20 pm
Subject: Re: [computer-go] not "Go knowledge and UCT"




This is the famous horizon effect chess programs are suckers for.
One interesting example of it happened to my program in a human
tournament years ago.

It played the classic Bxa7, getting its bishop trapped by b6.
Computers used to be real suckers for this because it wins a pawn,  but
eventually the bishop, which has nowhere to go, will get captured.  It
has to see deeply enough to realize this.

It gets particularly ugly when it only takes a couple of moves to capture
the bishop,  but the computer pushes the inevitable over its "horizon"
with spite checks and useless attacking moves,  which to the program
appear to "save" the bishop.

In this case, it "sacked" a pawn to get a vicious attack in order to
delay the "loss" of the bishop.

As it turned out after later analysis,  the pawn capture was actually
the winning move after all, but the program did it for entirely the
wrong reasons.   The attack would not have worked if the bishop had not
moved; the pawn sacrifice was brilliant and there was no way to refute
the attack. Nevertheless, the program was simply fighting for its
life - the whole vicious assault was not for the sake of the attack
itself but a desperate measure to delay the capture of the bishop.

People watching the game believed the computer had played absolutely
brilliantly and didn't realize the whole thing was just self-deception
by the computer.

It's funny that a similar thing happened to Bobby Fischer against
Spassky in 1972.   Fischer took the pawn on h2 and everyone gasped.
This move was talked about for a long time.   It started out being
called a stupid blunder, a beginner's error and so on.   But the move
actually had some good points.   It was Fischer's way of pressing
for a win in a basically drawn endgame, and even though Fischer lost that
game, it was due to a blunder later in the game.   The move itself
probably wasn't a blunder, although it wasn't a winning move either.
It probably was the best try for a win.

- Don




[EMAIL PROTECTED] wrote:
>
> A few trick moves in a row can cause problems. But the cases where I
> am most likely to be watching my bot's play through my fingers are
> when there is an obvious (to a 20 Kyu like me) situation but it's some
> plys in the future. (Or it can be pushed into the future!)
>
> A case of seki is easy for my tree search with UCT to handle. For
> external playouts, it's likely to be played wrong. So what does my bot
> do? Each color learns not to play into the seki at shallow, internal,
> nodes. They play somewhere else on the board, often totally pointless
> moves. But when, inevitably, play hits external nodes, one side or the
> other plays into the seki. I can tweak the policy for self-atari
> moves, but that just moves the problem elsewhere-it misplays more
> nakade situations.
>
> The real issue is that UCT (or any tree-search with MC) tends to push
> confusing board situations into the future where they will be dealt
> with using playout logic. It's not hard to hit local board situations
> where tree search would lead to a win for white, but playout logic
> would give black an edge. Black responds (sorry for anthropomorphism)
> by making distracting moves elsewhere on the board, deferring the
> situation to later plys. It's a bit like a ko fight with a huge supply
> of kos.
>
> - Dave Hillis
>
> -Original Message-
> From: Don Dailey <[EMAIL PROTECTED]>
> To: computer-go 
> Cc: Joe Cepiel <[EMAIL PROTECTED]>; Chris Hayashida
> <[EMAIL PROTECTED]>
> Sent: Thu, 31 Jan 2008 1:25 pm
> Subject: Re: [computer-go] Go knowledge and UCT
>
>
>
> terry mcintyre wrote:
> >
> > I may have misunderstood, so please clarify a point.
> >
> > Let's say the game hinges on solving a life-and-death problem - if you find 
> the right move, you win the game; if not, you lose. Many such problems - as 
> found in real games - are extremely order-dependent; there is exactly one 
right 
> way to begin the sequence; all other attempts fail. It is not unusual for a 
> sequence to be ten, twenty or thirty moves long, and to lead to an absolutely 
> provable result no matter how the opponent may twist and turn.
> >
> > Does "when the depth is great enough they are still considered" mean that 
> > an 

> unlikely move would not be considered for move A at the top of the tree, but 
> would be considered for move E or F, perhaps? Or have I misunderstood?
> >   
> The top of the tree is moves near the root since the tree is normally
> displayed upside down in computer science.So moves closer to t

Re: [computer-go] 9x9 study rolloff

2008-01-31 Thread Jason House
On Jan 31, 2008 4:31 PM, Don Dailey <[EMAIL PROTECTED]> wrote:

> FatMan doesn't use a hash table to represent the tree,  it actually uses
> a tree with pointers and so on.
>
> For detection of repetition in the search part,  FatMan uses a 64 bit
> zobrist key.



How do you find a pre-existing node to point to without a hash table
lookup?  I view a hash table as a mapping between the Zobrist key and the
(pointer to) the node of interest.
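
For readers following along, a minimal sketch of the mapping Jason
describes, in Python and with invented details (this is not FatMan's or
MoGo's code): a 64-bit Zobrist key of the position indexes a table whose
values are the tree nodes.

    import random

    BOARD_POINTS = 81  # 9x9, points indexed 0..80
    random.seed(42)

    # One random 64-bit number per (point, colour) pair, plus one for the side to move.
    ZOBRIST = [[random.getrandbits(64) for _ in range(2)] for _ in range(BOARD_POINTS)]
    SIDE_TO_MOVE = random.getrandbits(64)

    def position_key(stones, black_to_move):
        """stones: dict {point_index: colour} with colour 0 = black, 1 = white."""
        key = SIDE_TO_MOVE if black_to_move else 0
        for point, colour in stones.items():
            key ^= ZOBRIST[point][colour]
        return key

    class Node:
        def __init__(self):
            self.visits = 0
            self.wins = 0

    # The table Jason describes: Zobrist key -> (pointer to) the node of interest.
    table = {}

    def lookup_or_create(stones, black_to_move):
        key = position_key(stones, black_to_move)
        if key not in table:       # with 64-bit keys, collisions are rare
            table[key] = Node()
        return table[key]

A pointer-based tree, as Don describes above, presumably skips this table
entirely: each node stores its children directly, and the Zobrist key is
only used to detect repetitions along the current line of play.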
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 9x9 study rolloff

2008-01-31 Thread Don Dailey
FatMan doesn't use a hash table to represent the tree,  it actually uses
a tree with pointers and so on.

For detection of repetition in the search part,  FatMan uses a 64 bit
zobrist key.

- Don


steve uurtamo wrote:
> how many bits are you guys using for your (presumably)
> zobrist hashes?  just curious.
>
> :)
>
> s.
>
> - Original Message 
> From: Don Dailey <[EMAIL PROTECTED]>
> To: computer-go 
> Sent: Thursday, January 31, 2008 3:33:36 PM
> Subject: Re: [computer-go] 9x9 study rolloff
>
>
>
> Janzert wrote:
> > I haven't seen anyone else mention this, although I may have missed it
> > in one of the previous discussions.
> >
> > I find it pretty amazing that both Mogo and Fatman are leveling off at
> > exactly, or almost exactly, the same number of playouts (i.e. Fatman lvl
> > 14 == Mogo lvl 18 == 8388608 playouts). Could it simply be that they
> > have run out of memory to build a larger tree and are starting to prune
> > branches that would become critical if they had the space to explore
> > them?
> That could explain Mogo and so could the floating point issue.  But it
> doesn't explain FatMan, because there are still enough nodes in its
> memory pool for this number of play-outs.
>
> - Don
>
> >
> > Janzert
> >
> > ___
> > computer-go mailing list
> > computer-go@computer-go.org 
> > http://www.computer-go.org/mailman/listinfo/computer-go/
> >
> ___
> computer-go mailing list
> computer-go@computer-go.org 
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>
> 
> 
>
> 
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 9x9 study rolloff

2008-01-31 Thread Jason House
I recently upped mine from 32 bit to 64 bit.  Once I put more checks in my
code, I found that stale data was getting reused.  I may be an exception to
the rule though because I've never implemented a way to clear out old search
data.  My engine is slow, so that's less of a problem in short CGOS games.
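
For a sense of scale, a rough birthday-bound estimate of the collision
risk behind that change; the node counts below are purely illustrative,
not measurements from any engine:

    import math

    def collision_probability(num_positions, key_bits):
        """Approximate probability of at least one key collision among
        num_positions distinct positions (birthday bound)."""
        space = 2.0 ** key_bits
        return 1.0 - math.exp(-num_positions * (num_positions - 1) / (2.0 * space))

    # Illustrative node counts only:
    for n in (100_000, 1_000_000, 10_000_000):
        print(f"{n:>10} nodes: 32-bit {collision_probability(n, 32):.3f}, "
              f"64-bit {collision_probability(n, 64):.2e}")

With a few hundred thousand positions a 32-bit key already makes a
collision likely, while a 64-bit key keeps the risk negligible.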

On Jan 31, 2008 4:14 PM, steve uurtamo <[EMAIL PROTECTED]> wrote:

> how many bits are you guys using for your (presumably)
> zobrist hashes?  just curious.
>
> :)
>
> s.
>
>
> - Original Message 
> From: Don Dailey <[EMAIL PROTECTED]>
> To: computer-go 
> Sent: Thursday, January 31, 2008 3:33:36 PM
> Subject: Re: [computer-go] 9x9 study rolloff
>
>
>
> Janzert wrote:
> > I haven't seen anyone else mention this, although I may have missed it
> > in one of the previous discussions.
> >
> > I find it pretty amazing that both Mogo and Fatman are leveling off at
> > exactly, or almost exactly, the same number of playouts (i.e. Fatman lvl
> > 14 == Mogo lvl 18 == 8388608 playouts). Could it simply be that they
> > have run out of memory to build a larger tree and are starting to prune
> > branches that would become critical if they had the space to explore
> > them?
> That could explain Mogo and so could the floating point issue.  But it
> doesn't explain FatMan, because there are still enough nodes in its
> memory pool for this number of play-outs.
>
> - Don
>
> >
> > Janzert
> >
> > ___
> > computer-go mailing list
> > computer-go@computer-go.org
> > http://www.computer-go.org/mailman/listinfo/computer-go/
> >
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 9x9 study rolloff

2008-01-31 Thread steve uurtamo
how many bits are you guys using for your (presumably)
zobrist hashes?  just curious.

:)

s.

- Original Message 
From: Don Dailey <[EMAIL PROTECTED]>
To: computer-go 
Sent: Thursday, January 31, 2008 3:33:36 PM
Subject: Re: [computer-go] 9x9 study rolloff




Janzert wrote:
> I haven't seen anyone else mention this, although I may have missed it
> in one of the previous discussions.
>
> I find it pretty amazing that both Mogo and Fatman are leveling off at
> exactly, or almost exactly, the same number of playouts (i.e. Fatman lvl
> 14 == Mogo lvl 18 == 8388608 playouts). Could it simply be that they
> have run out of memory to build a larger tree and are starting to prune
> branches that would become critical if they had the space to explore
> them?
That could explain Mogo and so could the floating point issue.  But it
doesn't explain FatMan, because there are still enough nodes in its
memory pool for this number of play-outs.

- Don

>
> Janzert
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/






  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 9x9 study rolloff

2008-01-31 Thread Don Dailey


Janzert wrote:
> I haven't seen anyone else mention this, although I may have missed it
> in one of the previous discussions.
>
> I find it pretty amazing that both Mogo and Fatman are leveling off at
> exactly, or almost exactly, the same number of playouts (i.e. Fatman lvl
> 14 == Mogo lvl 18 == 8388608 playouts). Could it simply be that they
> have run out of memory to build a larger tree and are starting to prune
> branches that would become critical if they had the space to explore
> them?
That could explain Mogo and so could the floating point issue.   But it
doesn't explain FatMan, because there are still enough nodes in its
memory pool for this number of play-outs.

- Don

>
> Janzert
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Move Prediction and Strength in Monte-Carlo Go Program

2008-01-31 Thread Rémi Coulom

Hi,

I found the Master Thesis of Nobuo Araki is available online:
http://ark.qp.land.to/main.pdf

Abstract:
Recently in the Go program, there was a breakthrough by the Monte-Carlo 
method using
a game tree search method called UCT (UCB applied to trees, UCB stands 
for Upper Confidence
Bounds) in combination with the reduction of search space by move 
prediction. By
this method, Go programs easily become stronger than existing programs. 
However, there
are hardly any studies concerning the relationship between the strength 
of a program, and
the accuracy of move prediction, which is integrated into the 
Monte-Carlo method; therefore,
we cannot assume the direction of future research that makes stronger 
programs. In this
study, we developed a move prediction system based on machine learning 
techniques, and
researched the relationship between the accuracy of move prediction, and 
the strength of
Monte-Carlo method. Our move prediction system based on the maximum 
entropy method
attained top level accuracies of those days. Furthermore, it became 
clear that even when
the move prediction accuracy goes higher, the programs do not always 
become stronger. We
investigated the reasons behind this result. Additionally, we have 
attempted to create a Go
player by enforcing move prediction, but the result was not beyond 
satisfactory. We will also

describe the reasons behind this result.

Rémi
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re : [computer-go] Re: 19x19 Study. Nakade is difficult

2008-01-31 Thread ivan dubois
In the tree search part, there is generally no restriction on the moves that can 
be played. So a UCT program should have no problem seeing that A1 is the best 
move locally. However, A1 will be considered a 50% killing move, not 100%. This 
is because UCT will have trouble looking ahead through the forced sequence that 
makes the white group 100% dead. I explained why in a previous post.
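
In UCT terms that 50% is just the running mean of playout results through
the A1 node; until the tree itself covers the whole forced sequence, the
misplayed playouts drag the estimate down. A minimal sketch of the standard
bookkeeping (illustrative Python, not any particular engine's code):

    import math

    class Node:
        def __init__(self):
            self.visits = 0
            self.wins = 0.0

        def value(self):
            # Mean playout result: this is the "50%" in the discussion above.
            return self.wins / self.visits if self.visits else 0.0

        def ucb(self, parent_visits, c=1.0):
            # Standard UCB1 selection value (simplified).
            if self.visits == 0:
                return float("inf")
            return self.value() + c * math.sqrt(math.log(parent_visits) / self.visits)

        def update(self, playout_result):   # 1.0 = win, 0.0 = loss
            self.visits += 1
            self.wins += playout_result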

- Original Message 
From: Jacques Basaldúa <[EMAIL PROTECTED]>
To: computer-go@computer-go.org
Sent: Thursday, 31 January 2008, 20:39:58
Subject: [computer-go] Re: 19x19 Study. Nakade is difficult

I mentioned nakade in a list including "not filling own eyes". Perhaps,
not "filling own eyes" is a simpler example:

| . . . . . . .
| . # # . # . .
| . O # . # . .
| O O O . # # .
| # # O O O # .
| . # # . . . .
  ---

(Unless I made a mistake: Black to play and a1 is the only move killing
white.)

All MC programs avoid eye filling. I am not claiming that this is a wrong
decision. It has been tested, it is the correct decision. It is not a bug
that can't be fixed, either. If white is blind to the a1 move, then it is
happy with this position because it thinks it is unconditionally alive.
Therefore, it should not be impossible to force white to make this shape
because white likes it.

If no program understands this, white is alive and the game continues as
if a1 were illegal. A program that finds a1 changes everything.

It is not a question of limiting the move in the playouts or in the tree.
The playouts act as an evaluation function; if they are systematically wrong,
the tree won't expand in that direction in advance. Playing go is about
reading the problems before they happen. A program that does the playouts
systematically wrong and the tree right, may be a good tsumego solver, but
not necessarily a good player, because it won't see it coming.


Jacques.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] not "Go knowledge and UCT"

2008-01-31 Thread Don Dailey
This is the famous horizon effect chess programs are suckers for.
One interesting example of it happened to my program in a human
tournament years ago.

It played the classic Bxa7, getting its bishop trapped by b6.
Computers used to be real suckers for this because it wins a pawn,  but
eventually the bishop, which has nowhere to go, will get captured.  It
has to see deeply enough to realize this.

It gets particularly ugly when it only takes a couple of moves to capture
the bishop,  but the computer pushes the inevitable over its "horizon"
with spite checks and useless attacking moves,  which to the program
appear to "save" the bishop.

In this case, it "sacked" a pawn to get a vicious attack in order to
delay the "loss" of the bishop.

As it turned out after later analysis,  the pawn capture was actually
the winning move after all, but the program did it for entirely the
wrong reasons.   The attack would not have worked if the bishop had not
moved; the pawn sacrifice was brilliant and there was no way to refute
the attack. Nevertheless, the program was simply fighting for its
life - the whole vicious assault was not for the sake of the attack
itself but a desperate measure to delay the capture of the bishop.

People watching the game believed the computer had played absolutely
brilliantly and didn't realize the whole thing was just self-deception
by the computer.

It's funny that a similar thing happened to Bobby Fischer against
Spassky in 1972.   Fischer took the pawn on h2 and everyone gasped.
This move was talked about for a long time.   It started out being
called a stupid blunder, a beginner's error and so on.   But the move
actually had some good points.   It was Fischer's way of pressing
for a win in a basically drawn endgame, and even though Fischer lost that
game, it was due to a blunder later in the game.   The move itself
probably wasn't a blunder, although it wasn't a winning move either.
It probably was the best try for a win.

- Don




[EMAIL PROTECTED] wrote:
>
> A few trick moves in a row can cause problems. But the cases where I
> am most likely to be watching my bot's play through my fingers are
> when there is an obvious (to a 20 Kyu like me) situation but it's some
> plys in the future. (Or it can be pushed into the future!)
>
> A case of seki is easy for my tree search with UCT to handle. For
> external playouts, it's likely to be played wrong. So what does my bot
> do? Each color learns not to play into the seki at shallow, internal,
> nodes. They play somewhere else on the board, often totally pointless
> moves. But when, inevitably, play hits external nodes, one side or the
> other plays into the seki. I can tweak the policy for self-atari
> moves, but that just moves the problem elsewhere-it misplays more
> nakade situations.
>
> The real issue is that UCT (or any tree-search with MC) tends to push
> confusing board situations into the future where they will be dealt
> with using playout logic. It's not hard to hit local board situations
> where tree search would lead to a win for white, but playout logic
> would give black an edge. Black responds (sorry for anthropomorphism)
> by making distracting moves elsewhere on the board, deferring the
> situation to later plys. It's a bit like a ko fight with a huge supply
> of kos.
>
> - Dave Hillis
>
> -Original Message-
> From: Don Dailey <[EMAIL PROTECTED]>
> To: computer-go 
> Cc: Joe Cepiel <[EMAIL PROTECTED]>; Chris Hayashida
> <[EMAIL PROTECTED]>
> Sent: Thu, 31 Jan 2008 1:25 pm
> Subject: Re: [computer-go] Go knowledge and UCT
>
>
>
> terry mcintyre wrote:
> >
> > I may have misunderstood, so please clarify a point.
> >
> > Let's say the game hinges on solving a life-and-death problem - if you find 
> the right move, you win the game; if not, you lose. Many such problems - as 
> found in real games - are extremely order-dependent; there is exactly one 
> right 
> way to begin the sequence; all other attempts fail. It is not unusual for a 
> sequence to be ten, twenty or thirty moves long, and to lead to an absolutely 
> provable result no matter how the opponent may twist and turn.
> >
> > Does "when the depth is great enough they are still considered" mean that 
> > an 
> unlikely move would not be considered for move A at the top of the tree, but 
> would be considered for move E or F, perhaps? Or have I misunderstood?
> >   
> The top of the tree is moves near the root since the tree is normally
> displayed upside down in computer science.So moves closer to the top
> of the tree are more likely to be considered.   The vast majority of
> time is spent near the leaf nodes, and so it makes sense to prune more
> aggressively near leaf nodes.Eventually, leaf nodes become nodes
> closer to the root as the tree expands beyond them.
>
> So if the first move of the sequence is hard to find but the rest are
> reasonably normal moves,  the situation is not so bad.An example of

Re: [computer-go] not "Go knowledge and UCT"

2008-01-31 Thread dhillismail
Right you are. Silly me.


-Original Message-
From: Álvaro Begué <[EMAIL PROTECTED]>
To: computer-go 
Sent: Thu, 31 Jan 2008 3:06 pm
Subject: Re: [computer-go] not "Go knowledge and UCT"





On Jan 31, 2008 2:59 PM, <[EMAIL PROTECTED]> wrote:

I'm going to call this the "procrastination effect." My claim is that, when 
MC-UCT encounters a critical life and death board situation that its playout 
policy consistently gets wrong, the search will naturally tend to skew the tree 
so that relevant moves continue to be made during the playouts.


This has traditionally been called the "horizon effect".







___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] Re: 19x19 Study

2008-01-31 Thread Dave Dyer
At 11:44 AM 1/31/2008, David Doshay wrote:
>That is correct.
>
>It is my understanding that the Intel machines can compile to
>a "universal binary" that will run on the G5 machines, but we
>have not verified that. I trust that it works, but have no idea
>if there is an efficiency hit.

Universal binaries just contain two sets of object code.  There's
no efficiency hit, but your program does need to be aware of potential
"big endian" issues because the same binary might be run on two 
platforms with different byte orders.
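
A tiny illustration of the byte-order point (hypothetical Python, showing
only how the two conventions serialize the same value, e.g. in data files
or network messages shared between the architectures):

    import struct

    value = 0x01020304
    print(struct.pack(">I", value).hex())  # big-endian:    01020304
    print(struct.pack("<I", value).hex())  # little-endian: 04030201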


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



Re: [computer-go] not "Go knowledge and UCT"

2008-01-31 Thread Álvaro Begué
On Jan 31, 2008 2:59 PM, <[EMAIL PROTECTED]> wrote:

> I'm going to call this the "procrastination effect." MY claim is that,
> when MC-UCT encounters a critical life and death board situation that its
> playout policy consistently gets wrong, the search will naturally tend to
> skew the tree so that relevant moves continue to be made during the
> playouts.


This has traditionally been called the "horizon effect".
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Go rating math information

2008-01-31 Thread dave.devos
The other way around happens too: in 2006 I had a 4 month pause on KGS and my 
rank dropped from 4d to 4k.



From: [EMAIL PROTECTED] on behalf of Jason House
Sent: Thu 31-1-2008 20:33
To: computer-go
Subject: Re: [computer-go] Go rating math information




On Jan 31, 2008 2:20 PM, Don Dailey <[EMAIL PROTECTED]> wrote:


So if I get rated on KGS all I have to do is stop playing and my rank
will shoot up a few ranks?


It's a pretty common phenomenon on KGS...  I've seen it happen many times

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] not "Go knowledge and UCT"

2008-01-31 Thread dhillismail
I'm going to call this the "procrastination effect." My claim is that, when 
MC-UCT encounters a critical life and death board situation that its playout 
policy consistently gets wrong, the search will naturally tend to skew the tree 
so that relevant moves continue to be made during the playouts.

- Dave Hillis


-Original Message-
From: [EMAIL PROTECTED]
To: computer-go@computer-go.org
Sent: Thu, 31 Jan 2008 2:18 pm
Subject: Re: [computer-go] not "Go knowledge and UCT"



A few trick moves in a row can cause problems. But the cases where I am most 
likely to be watching my bot's play through my fingers are when there is an 
obvious (to a 20 Kyu like me) situation but it's some plys in the future. (Or 
it can be pushed into the future!) 

A case of seki is easy for my tree search with UCT to handle. For external 
playouts, it's likely to be played wrong. So what does my bot do? Each color 
learns not to play into the seki at shallow, internal, nodes. They play 
somewhere else on the board, often totally pointless moves. But when, 
inevitably, play hits external nodes, one side or the other plays into the 
seki. I can tweak the policy for self-atari moves, but that just moves the 
problem elsewhere-it misplays more nakade situations. 

The real issue is that UCT (or any tree-search with MC) tends to push confusing 
board situations into the future where they will be dealt with using playout 
logic. It's not hard to hit local board situations where tree search would lead 
to a win for white, but playout logic would give black an edge. Black responds 
(sorry for anthropomorphism) by making distracting moves elsewhere on the 
board, deferring the situation to later plys. It's a bit like a ko fight with a 
huge supply of kos.

- Dave Hillis

-Original Message-
From: Don Dailey <[EMAIL PROTECTED]>
To: computer-go 
Cc: Joe Cepiel <[EMAIL PROTECTED]>; Chris Hayashida <[EMAIL PROTECTED]>
Sent: Thu, 31 Jan 2008 1:25 pm
Subject: Re: [computer-go] Go knowledge and UCT





terry mcintyre wrote:
>
> I may have misunderstood, so please clarify a point.
>
> Let's say the game hinges on solving a life-and-death problem - if you find 
the right move, you win the game; if not, you lose. Many such problems - as 
found in real games - are extremely order-dependent; there is exactly one right 
way to begin the sequence; all other attempts fail. It is not unusual for a 
sequence to be ten, twenty or thirty moves long, and to lead to an absolutely 
provable result no matter how the opponent may twist and turn.
>
> Does "when the depth is great enough they are still considered" mean that an 
unlikely move would not be considered for move A at the top of the tree, but 
would be considered for move E or F, perhaps? Or have I misunderstood?
>   
The top of the tree is moves near the root since the tree is normally
displayed upside down in computer science.So moves closer to the top
of the tree are more likely to be considered.   The vast majority of
time is spent near the leaf nodes, and so it makes sense to prune more
aggressively near leaf nodes.Eventually, leaf nodes become nodes
closer to the root as the tree expands beyond them.

So if the first move of the sequence is hard to find but the rest are
reasonably normal moves,  the situation is not so bad.An example of
this might be a critical move to a 1 point eye where the program uses
progressive pruning and doesn't even consider the move for a while.   
In such a case the problem might still be solved in a reasonable amount
of time once some "more likely" moves are first exhausted.
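
A minimal sketch of this kind of progressive pruning/widening (purely
illustrative, not the rule any particular program uses): the number of
candidate moves considered at a node grows with that node's visit count,
so "unusual" moves are reached much sooner at the heavily visited nodes
near the root than at the leaves.

    import math

    def num_moves_considered(node_visits, base=2, factor=1.4):
        """Consider more candidate moves as a node accumulates visits.
        The constants are invented for illustration."""
        if node_visits == 0:
            return base
        return base + int(math.log(node_visits, factor))

    # A node near the root (many visits) considers far more moves than a leaf:
    for visits in (0, 10, 1_000, 100_000):
        print(visits, "visits ->", num_moves_considered(visits), "moves considered")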

If the problem is such that many "unusual" moves must be discovered, 
moves that are normally bad patterns for instance,  then a UCT program
would truly have a difficult time with this position.   In theory it HAS
to eventually recover but of course we can find positions where this is
arbitrarily difficult for UCT. But you must understand that as you
get more difficult,  it takes more human expertise also.   So it's not
fair to fixate on just what the computers limitations are. I believe
humans are very very far away from perfect play themselves and "god"
could construct problems that would also give us fits. 

When go computers get into the several Dan strength level,   you will
start to see that they can do some things far better than humans but
will still have some obvious weaknesses that seem silly to us.   But
the only thing that matters is whether they can win. That's the
"bottom line" as the old timers like to say.

> Here are pointers to pictures of some real-life problems on the 19x19 board; 
these happened in actual play at the recent Oza 2008 West tournament.
>
> Black to play and capture 5 stones: 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158560976542534642
>
> What is the status of the white stones in the corner? 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158566963726945906
>
> White to play

Re: [computer-go] 9x9 study rolloff

2008-01-31 Thread Sylvain Gelly
Hi,

With such a large number of playouts, the tree size limit (and so
heavy pruning) is certainly a possible hypothesis. The simplest way to
test it would be to run the same MoGo_17 or _18 with a much bigger
tree (taking more memory). --collectorLimitTreeSize is by default
40 (number of nodes). If you want to increase beyond 100, you
should also add --limitTreeSize 200 (this limitTreeSize does not
make much sense with the pruning, but it is a hard limit which,
whatever happens, will not be reached... modulo bugs ;))

Sylvain

2008/1/31, Janzert <[EMAIL PROTECTED]>:
> I haven't seen anyone else mention this, although I may have missed it
> in one of the previous discussions.
>
> I find it pretty amazing that both Mogo and Fatman are leveling off at
> exactly, or almost exactly, the same number of playouts (i.e. Fatman lvl
> 14 == Mogo lvl 18 == 8388608 playouts). Could it simply be that they
> have run out of memory to build a larger tree and are starting to prune
> branches that would become critical if they had the space to explore them?
>
> Janzert
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Go rating math information

2008-01-31 Thread dave.devos
The variable k factor used in go rating systems is just a function mapping a 1 
rank difference to 100 "Elo" points difference over all ranks.
 
Dave de Vos



From: [EMAIL PROTECTED] on behalf of terry mcintyre
Sent: Thu 31-1-2008 20:26
To: computer-go
Subject: Re: [computer-go] Go rating math information


From: Andy <[EMAIL PROTECTED]>

Sorry, the KGS formula uses a constant k which is different from the K-factor 
in Elo.
P(A wins) = 1 / ( 1 + exp(k*(RankB-RankA)) )

This would be equivalent to changing the constant 400 in:
P(A wins) = 1 / ( 1 + 10^((Rb-Ra)/400) )

EGF has a similar scheme except of course they use different letters for 
equivalent constants.  So this varying of k is what accounts for the fact that 
upsets are more likely for weak kyu players than for dan players.


I think the varying k factor does not explain ( account for ) the fact that kyu 
players are more likely to win upset victories; I believe it is an attempt to  
model the inconsistency or variability in kyu-level play.  





 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 19x19 Study

2008-01-31 Thread David Doshay

That is correct.

It is my understanding that the Intel machines can compile to
a "universal binary" that will run on the G5 machines, but we
have not verified that. I trust that it works, but have no idea
if there is an efficiency hit.

Cheers,
David



On 31, Jan 2008, at 11:30 AM, terry mcintyre wrote:


The G5 macs are the power-pc version, right? the pre-Intel version.

- Original Message 

From: David Doshay <[EMAIL PROTECTED]>

These are G5 Macs,






   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Re: 19x19 Study. Nakade is difficult

2008-01-31 Thread Jacques Basaldúa

I mentioned nakade in a list including "not filling own eyes". Perhaps,
not "filling own eyes" is a simpler example:

| . . . . . . .
| . # # . # . .
| . O # . # . .
| O O O . # # .
| # # O O O # .
| . # # . . . .
 ---

(Unless I made a mistake: Black to play and a1 is the only move killing
white.)

All MC programs avoid eye filling. I am not claiming that this is a wrong
decision. It has been tested, it is the correct decision. It is not a bug
that can't be fixed, either. If white is blind to the a1 move, then it is
happy with this position because it thinks it is unconditionally alive.
Therefore, it should not be impossible to force white to make this shape
because white likes it.
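
The playout rule behind this blindness is usually a purely local "do not
fill your own true eye" test; a common sketch of one such test follows
(illustrative Python, one of several variants in use, not any specific
program's code). Under this rule a point like a1 above, whose orthogonal
neighbours are all black stones, counts as a black eye, so random playouts
never try the killing move and keep scoring white as alive.

    def is_own_eye(board, point, colour, size):
        """board maps (row, col) to 'B', 'W' or '.'; colour is 'B' or 'W'."""
        row, col = point
        neighbours = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
        diagonals = [(row - 1, col - 1), (row - 1, col + 1),
                     (row + 1, col - 1), (row + 1, col + 1)]
        on_board = lambda p: 0 <= p[0] < size and 0 <= p[1] < size

        # Every on-board orthogonal neighbour must be our own stone.
        if any(board.get(n, '.') != colour for n in neighbours if on_board(n)):
            return False

        # Diagonal rule: on the edge or in a corner no opposing diagonal is
        # allowed; in the centre at most one is.
        off_board = sum(1 for d in diagonals if not on_board(d))
        opposing = sum(1 for d in diagonals if on_board(d)
                       and board.get(d, '.') == ('W' if colour == 'B' else 'B'))
        limit = 0 if off_board > 0 else 1
        return opposing <= limit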

If no program understands this, white is alive and the game continues as
if a1 were illegal. A program that finds a1 changes everything.

It is not a question of limiting the move in the playouts or in the tree.
The playouts act as an evaluation function; if they are systematically wrong,
the tree won't expand in that direction in advance. Playing go is about
reading the problems before they happen. A program that does the playouts
systematically wrong and the tree right, may be a good tsumego solver, but
not necessarily a good player, because it won't see it coming.


Jacques.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] 9x9 study rolloff

2008-01-31 Thread Janzert

I haven't seen anyone else mention this, although I may have missed it
in one of the previous discussions.

I find it pretty amazing that both Mogo and Fatman are leveling off at
exactly, or almost exactly, the same number of playouts (i.e. Fatman lvl
14 == Mogo lvl 18 == 8388608 playouts). Could it simply be that they
have run out of memory to build a larger tree and are starting to prune
branches that would become critical if they had the space to explore them?

Janzert

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Go rating math information

2008-01-31 Thread Jason House
On Jan 31, 2008 2:20 PM, Don Dailey <[EMAIL PROTECTED]> wrote:

> So if I get rated on KGS all I have to do is stop playing and my rank
> will shoot up a few ranks?


It's a pretty common phenomenon on KGS...  I've seen it happen many times
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 19x19 Study

2008-01-31 Thread terry mcintyre
The G5 macs are the power-pc version, right? the pre-Intel version.
 
- Original Message 
> From: David Doshay <[EMAIL PROTECTED]>
> 
> These are G5 Macs,






  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Go rating math information

2008-01-31 Thread Andy
Yes, the math is attempting to model reality.  :)

That's why I also included the EGF data which is based on observed
statistical upset rate.  Of course those ratings are calculated using a
formula which pre-supposes an upset rate.  Round and round.  :)


On Jan 31, 2008 1:26 PM, terry mcintyre <[EMAIL PROTECTED]> wrote:

> From: Andy <[EMAIL PROTECTED]>
>
> Sorry, the KGS formula uses a constant k which is different from the
> K-factor in Elo.
> P(A wins) = 1 / ( 1 + exp(k*(RankB-RankA)) )
>
> This would be equivalent to changing the constant 400 in:
> P(A wins) = 1 / ( 1 + 10^((Rb-Ra)/400) )
>
> EGF has a similar scheme except of course they use different letters for
> equivalent constants.  So this varying of k is what accounts for the fact
> that upsets are more likely for weak kyu players than for dan players.
>
> I think the varying k factor does not explain ( account for ) the fact
> that kyu players are more likely to win upset victories; I believe it is an
> attempt to  model the inconsistency or variability in kyu-level play.
>
> --
> Looking for last minute shopping deals? Find them fast with Yahoo! 
> Search.
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Go rating math information

2008-01-31 Thread Andy
A little more information on what CrazyStone's real performance on KGS has
been like during the time the graph depicts:

http://www.gokgs.com/gameArchives.jsp?user=crazystone&year=2007&month=3
http://www.gokgs.com/gameArchives.jsp?user=crazystone&year=2007&month=4
http://www.gokgs.com/gameArchives.jsp?user=crazystone&year=2007&month=11
http://www.gokgs.com/gameArchives.jsp?user=crazystone&year=2007&month=12

In March 2007, CrazyStone starts to play some rated games after a long time
without playing.  It's initially [?] (totally unknown rating).  It starts
with a lucky streak, and gets up to 1k.  Then in April it plays some more,
and the rating settles to weak 2k.  It continues to play many games and
stabilizes there.

Then in November/December 2007, I assume a new version started playing (I
remember reading somewhere it was new version.  Or perhaps just better
hardware).  It quickly settled to a strong 1k.  But in Nov/Dec it played
only 12 rated games, so the sample size is quite small.

Then it quit playing again.  When such a small sample size, it's rating is
very sensitive to "drift".  KGS uses a maximum likelihood algorithm.  So if
any of CS's 12 opponents gets stronger/weaker, CS will drift with them.  I
assume that is what caused it to move to 1d.

Don asked if he could play some games, stop, and have his rating increase.
The answer is yes if your opponents' ratings increase.  If your opponents'
ratings decrease, you'll go down instead.  Generally for low kyu players,
everyone is improving quickly so there is a strong upward drift.  It's
usually not so bad for the strong kyus or dans since those players don't
generally improve very fast.  However as I said in this case CS has only
played 10 games, so a much smaller pool of opponents is involved.

- Andy

On Jan 31, 2008 12:59 PM, Andy <[EMAIL PROTECTED]> wrote:

> CrazyStone hasn't played since the initial spike to 1k in December.  The
> movement of the chart afterwards is "rating drift".
>
>
> On Jan 31, 2008 12:49 PM, Gian-Carlo Pascutto <[EMAIL PROTECTED]> wrote:
>
> > Don Dailey wrote:
> >
> > > I don't know how David figures 1000 ELO,  but I would expect the
> > > difference to be much larger than that for 19x19 go. I don't
> > believe
> > > they are yet very close to 1 Dan.
> >
> > http://www.gokgs.com/graphPage.jsp?user=CrazyStone
> >
> > You're right. They're closer to 2 Dan.
> >
> > :)
> >
> > --
> > GCP
> > ___
> > computer-go mailing list
> > computer-go@computer-go.org
> > http://www.computer-go.org/mailman/listinfo/computer-go/
> >
>
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Go rating math information

2008-01-31 Thread dave.devos
There seems to be a discrepancy: 11.5% between 2d and 5d in EGF rating system 
versus 2.0% between 2d and 5d in KGS rating system.
I think this can be explained by hidden biases in the EGF statistics:
 
1: To be a 5d in real tournaments in Europe does not mean your rating is 
between 2450 and 2550 EGF. Ranks are self proclaimed in most european 
countries. Many 5d players in fact have a rating below 2450, but they are 
registered as 5d in the EGF system, because that is the rank they claim to 
have. Even in the Netherlands, where dan ranks are awarded and not self 
claimed, it still is highly unusual for a player who has been awarded a 5d rank 
to be demoted to 4d when his rating drops below 2450. KGS ranks are of course 
guaranteed to be in the rating range for the rank. So the rating spread of the 
5d rank is much wider in the EGF system than the KGS system. This results in a 
bias towards larger than expected upset percentages.
 
2: EGF data originates mostly from tournaments using the McMahon pairing 
system. An average 2d will rarely be playing against an average 5d in McMahon 
paired tournaments. The fact of a 2d playing a 5d at all, implies that a strong 
2d is playing a weak 5d. This too results in a bias towards larger than 
expected upset percentages.
  
So if you are looking for the odds that an average 2d wins against an average 
5d, the KGS statistics are more reliable than the EGF statistics. 
And even KGS seems to overestimate the upset odds in the high dan region. As a 
result, strong players get highly exaggerated ranks in KGS. There are many 
examples of players who are 7d EGF in real life but 11d KGS (from KGS rating 
graph, 9d is the maximum rating related rank awarded in KGS). These real life 
7d EGF players hardly ever lose an even game against real life 4d-6d EGF 
players (as in real life) and as a result their rank rises sky high in KGS.
 
Dave de Vos 



From: [EMAIL PROTECTED] on behalf of Andy
Sent: Thu 31-1-2008 18:31
To: computer-go
Subject: [computer-go] Go rating math information


There were some questions about the effective ELO difference of two players 3 
ranks apart.  Here are some links to information about go rating formulas, and 
some statistics:

http://senseis.xmp.net/?KGSRatingMath
http://gemma.ujf.cas.cz/~cieply/GO/gor.html
http://gemma.ujf.cas.cz/~cieply/GO/statev.html

Both KGS and EGF scale the "k" factor according to the player's strength.  For 
weaker players the probability of an upset is greater.

According to KGS formula:
8k vs 5k: 7.2% chance of upset (~440 Elo)
2d vs 5d: 2.0% chance of upset (~676 Elo)

According to EGF even game statistics:
Generally for weaker kyu players the chance of upset is around 30-40% (~80-~150 
ELO), for stronger players it goes down:
2d vs 5d: 11.5% chance of upset (~350 Elo)
3d vs 6d:  7.8% chance of upset (~432 Elo)
4d vs 7d:  3.3% chance of upset (~590 Elo)
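
The "equivalent Elo" figures above follow from inverting the standard Elo
expectation formula; a short sketch that roughly reproduces them:

    import math

    def upset_to_elo(upset_probability):
        """Elo gap implied by a given upset probability, from inverting
        P(favourite wins) = 1 / (1 + 10^(-gap/400))."""
        favourite = 1.0 - upset_probability
        return 400.0 * math.log10(favourite / upset_probability)

    for label, p in [("KGS 8k vs 5k", 0.072), ("KGS 2d vs 5d", 0.020),
                     ("EGF 2d vs 5d", 0.115), ("EGF 3d vs 6d", 0.078),
                     ("EGF 4d vs 7d", 0.033)]:
        print(f"{label}: ~{upset_to_elo(p):.0f} Elo")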



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Go rating math information

2008-01-31 Thread terry mcintyre
From: Andy <[EMAIL PROTECTED]>


Sorry, the KGS formula uses a constant k which is different from the K-factor 
in Elo.
P(A wins) = 1 / ( 1 + exp(k*(RankB-RankA)) )

This would be equivalent to changing the constant 400 in:
P(A wins) = 1 / ( 1 + 10^((Rb-Ra)/400) )


EGF has a similar scheme except of course they use different letters for 
equivalent constants.  So this varying of k is what accounts for the fact that 
upsets are more likely for weak kyu players than for dan players.



I think the varying k factor does not explain ( account for ) the fact that kyu 
players are more likely to win upset victories; I believe it is an attempt to  
model the inconsistency or variability in kyu-level play.   





  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Go rating math information

2008-01-31 Thread Don Dailey
So if I get rated on KGS all I have to do is stop playing and my rank
will shoot up a few ranks?

- Don


Jason House wrote:
> I was shocked for a second, until I checked Crazy Stone's playing
> record.  Its rating shot up after it stopped playing!  It hasn't
> played a single rated game on KGS since Dec 2, 2007.
>
> On Jan 31, 2008 1:49 PM, Gian-Carlo Pascutto <[EMAIL PROTECTED]
> > wrote:
>
> Don Dailey wrote:
>
> > I don't know how David figures 1000 ELO,  but I would expect the
> > difference to be much larger than that for 19x19 go. I don't
> believe
> > they are yet very close to 1 Dan.
>
> http://www.gokgs.com/graphPage.jsp?user=CrazyStone
>
> You're right. They're closer to 2 Dan.
>
> :)
>
> --
> GCP
> ___
> computer-go mailing list
> computer-go@computer-go.org 
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>
> 
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] not "Go knowledge and UCT"

2008-01-31 Thread dhillismail

A few trick moves in a row can cause problems. But the cases where I am most 
likely to be watching my bot's play through my fingers are when there is an 
obvious (to a 20 Kyu like me) situation but it's some plys in the future. (Or 
it can be pushed into the future!) 

A case of seki is easy for my tree search with UCT to handle. For external 
playouts, it's likely to be played wrong. So what does my bot do? Each color 
learns not to play into the seki at shallow, internal, nodes. They play 
somewhere else on the board, often totally pointless moves. But when, 
inevitably, play hits external nodes, one side or the other plays into the 
seki. I can tweak the policy for self-atari moves, but that just moves the 
problem elsewhere-it misplays more nakade situations. 

The real issue is that UCT (or any tree-search with MC) tends to push confusing 
board situations into the future where they will be dealt with using playout 
logic. It's not hard to hit local board situations where tree search would lead 
to a win for white, but playout logic would give black an edge. Black responds 
(sorry for anthropomorphism) by making distracting moves elsewhere on the 
board, deferring the situation to later plys. It's a bit like a ko fight with a 
huge supply of kos.

- Dave Hillis

-Original Message-
From: Don Dailey <[EMAIL PROTECTED]>
To: computer-go 
Cc: Joe Cepiel <[EMAIL PROTECTED]>; Chris Hayashida <[EMAIL PROTECTED]>
Sent: Thu, 31 Jan 2008 1:25 pm
Subject: Re: [computer-go] Go knowledge and UCT





terry mcintyre wrote:
>
> I may have misunderstood, so please clarify a point.
>
> Let's say the game hinges on solving a life-and-death problem - if you find 
the right move, you win the game; if not, you lose. Many such problems - as 
found in real games - are extremely order-dependent; there is exactly one right 
way to begin the sequence; all other attempts fail. It is not unusual for a 
sequence to be ten, twenty or thirty moves long, and to lead to an absolutely 
provable result no matter how the opponent may twist and turn.
>
> Does "when the depth is great enough they are still considered" mean that an 
unlikely move would not be considered for move A at the top of the tree, but 
would be considered for move E or F, perhaps? Or have I misunderstood?
>   
The top of the tree is moves near the root since the tree is normally
displayed upside down in computer science.So moves closer to the top
of the tree are more likely to be considered.   The vast majority of
time is spent near the leaf nodes, and so it makes sense to prune more
aggressively near leaf nodes.Eventually, leaf nodes become nodes
closer to the root as the tree expands beyond them.

So if the first move of the sequence is hard to find but the rest are
reasonably normal moves,  the situation is not so bad.An example of
this might be a critical move to a 1 point eye where the program uses
progressive pruning and doesn't even consider the move for a while.   
In such a case the problem might still be solved in a reasonable amount
of time once some "more likely" moves are first exhausted.  

If the problem is such that many "unusual" moves must be discovered, 
moves that are normally bad patterns for instance,  then a UCT program
would truly have a difficult time with this position.   In theory it HAS
to eventually recover but of course we can find positions where this is
arbitrarily difficult for UCT. But you must understand that as you
get more difficult,  it takes more human expertise also.   So it's not
fair to fixate on just what the computers limitations are. I believe
humans are very very far away from perfect play themselves and "god"
could construct problems that would also give us fits. 

When go computers get into the several Dan strength level,   you will
start to see that they can do some things far better than humans but
will still have some obvious weaknesses that seem silly to us.   But
the only thing that matters is whether they can win. That's the
"bottom line" as the old timers like to say.

> Here are pointers to pictures of some real-life problems on the 19x19 board; 
these happened in actual play at the recent Oza 2008 West tournament.
>
> Black to play and capture 5 stones: 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158560976542534642
>
> What is the status of the white stones in the corner? 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158566963726945906
>
> White to play; can the black group at the bottom middle be killed? 
http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158572701803253954
>
> What is the status of the black group at the top? 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158572860717043922
>
> Black to play: who wins the capturing race between the large black dragon and 
the white group in the bottom right / center ? 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158579148549165890

Re: [computer-go] Go rating math information

2008-01-31 Thread Gian-Carlo Pascutto

Andy wrote:

CrazyStone hasn't played since the initial spike to 1k in December.  The 
movement of the chart afterwards is "rating drift". 


Ok. For me this is actually GOOD news :)

--
GCP
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Go rating math information

2008-01-31 Thread Andy
Sorry, the KGS formula uses a constant k which is different from the
K-factor in Elo.
P(A wins) = 1 / ( 1 + exp(k*(RankB-RankA)) )

This would be equivalent to changing the constant 400 in:
P(A wins) = 1 / ( 1 + 10^((Rb-Ra)/400) )

EGF has a similar scheme except of course they use different letters for
equivalent constants.  So this varying of k is what accounts for the fact
that upsets are more likely for weak kyu players than for dan players.
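
For anyone who wants to check the arithmetic, here is a small Python sketch
(my own illustration, just restating the formulas above): it inverts the
standard Elo expectation formula to get the Elo gap that corresponds to a
given upset probability.

import math

def elo_equivalent(p_upset):
    # Elo difference D such that the weaker player wins with probability p:
    # p = 1 / (1 + 10^(D/400))  =>  D = 400 * log10((1 - p) / p)
    return 400.0 * math.log10((1.0 - p_upset) / p_upset)

print(round(elo_equivalent(0.072)))   # 444, i.e. the ~440 Elo quoted above
print(round(elo_equivalent(0.020)))   # 676, the 2d vs 5d figure above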

- Andy


On Jan 31, 2008 12:37 PM, Don Dailey <[EMAIL PROTECTED]> wrote:

> ELO ratings don't have to be absolute, just self consistent.   So if you
> beat someone 7.2% of the time,  that means you are about 440 ELO
> stronger than him.
>
> However, I don't understand what the K-factor has to do with anything.
> scaling it up or down doesn't change anything.  It's common practice to
> make the rating of strong players change more slowly as the result of a
> win or loss but that's not relevant here.
>
> The findings below indicate that differences between dan players is
> greater than the difference between kyu players. So you could not
> assign a fixed ELO per rank but it would have to progressively get
> higher as the players get stronger.
>
> I don't know how David figures 1000 ELO,  but I would expect the
> difference to be much larger than that for 19x19 go. I don't believe
> they are yet very close to 1 Dan.
>
> - Don
>
>
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Go rating math information

2008-01-31 Thread Don Dailey
That's a crazy-looking graph!  It looks like CrazyStone was close to
1d last spring, then a version change (perhaps) dropped it back to 2k.   

Then in Dec (a new version?) it jumped suddenly to 1d and then
gradually increased to 1.5d.


- Don




Gian-Carlo Pascutto wrote:
> Don Dailey wrote:
>
>> I don't know how David figures 1000 ELO,  but I would expect the
>> difference to be much larger than that for 19x19 go. I don't believe
>> they are yet very close to 1 Dan.  
>
> http://www.gokgs.com/graphPage.jsp?user=CrazyStone
>
> You're right. They're closer to 2 Dan.
>
> :)
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Go rating math information

2008-01-31 Thread Jason House
I was shocked for a second, until I checked Crazy Stone's playing record.  Its
rating shot up after it stopped playing!  It hasn't played a single rated
game on KGS since Dec 2, 2007.

On Jan 31, 2008 1:49 PM, Gian-Carlo Pascutto <[EMAIL PROTECTED]> wrote:

> Don Dailey wrote:
>
> > I don't know how David figures 1000 ELO,  but I would expect the
> > difference to be much larger than that for 19x19 go. I don't believe
> > they are yet very close to 1 Dan.
>
> http://www.gokgs.com/graphPage.jsp?user=CrazyStone
>
> You're right. They're closer to 2 Dan.
>
> :)
>
> --
> GCP
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Go rating math information

2008-01-31 Thread Andy
CrazyStone hasn't played since the initial spike to 1k in December.  The
movement of the chart afterwards is "rating drift".

On Jan 31, 2008 12:49 PM, Gian-Carlo Pascutto <[EMAIL PROTECTED]> wrote:

> Don Dailey wrote:
>
> > I don't know how David figures 1000 ELO,  but I would expect the
> > difference to be much larger than that for 19x19 go. I don't believe
> > they are yet very close to 1 Dan.
>
> http://www.gokgs.com/graphPage.jsp?user=CrazyStone
>
> You're right. They're closer to 2 Dan.
>
> :)
>
> --
> GCP
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Go rating math information

2008-01-31 Thread Gian-Carlo Pascutto

Don Dailey wrote:


I don't know how David figures 1000 ELO,  but I would expect the
difference to be much larger than that for 19x19 go. I don't believe
they are yet very close to 1 Dan.  


http://www.gokgs.com/graphPage.jsp?user=CrazyStone

You're right. They're closer to 2 Dan.

:)

--
GCP
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Go knowledge and UCT

2008-01-31 Thread Gian-Carlo Pascutto

David Fotland wrote:


So, can the strong 19x19 programs please tell us your playout rates?  I
expect the higher the rank, the fewer playouts per second.  I'm not
interested in 9x9 data, since I think much less go knowledge is needed to
play 9x9.  With your playout rate, please include the machine, number of
CPUs, and how you measured it (from an empty board, or averaged over a whole
game, or ...).


On a 1.7 GHz Pentium M, Leela 0.3.7 does 1350 playouts per second in the 
opening position.


--
GCP
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] Go rating math information

2008-01-31 Thread David Fotland
I'm not saying anything about absolute ELO ratings :)  The data shows that
for humans, a 7d is about 1000 ELO stronger than a 3k (interpolating a
little from the numbers below).  On KGS the top programs are about 3 kyu.
So they are about 1000 ELO below 7d.

 

David

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jason House
Sent: Thursday, January 31, 2008 10:24 AM
To: computer-go
Subject: Re: [computer-go] Go rating math information

 

How do you compute that?  I think the ELO values given are the rating
differences that correspond to the particular upset (win) rate.  They're not
absolute ELO values.  A truly absolute value is also tough to define.  CGOS
picked its value somewhat arbitrarily.

On Jan 31, 2008 12:54 PM, David Fotland <[EMAIL PROTECTED]> wrote:

This implies that the top UCT programs are still over 1000 ELO points from
the top human amateurs.

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andy
Sent: Thursday, January 31, 2008 9:32 AM
To: computer-go
Subject: [computer-go] Go rating math information

 

There were some questions about the effective ELO difference of two players
3 ranks apart.  Here are some links to information about go rating formulas,
and some statistics:

http://senseis.xmp.net/?KGSRatingMath
http://gemma.ujf.cas.cz/~cieply/GO/gor.html
 
http://gemma.ujf.cas.cz/~cieply/GO/statev.html
 

Both KGS and EGF scale the "k" factor according to the player's strength.
For weaker players the probability of an upset is greater.

According to KGS formula:
8k vs 5k: 7.2% chance of upset (~440 Elo)
2d vs 5d: 2.0% chance of upset (~676 Elo)

According to EGF even game statistics:
Generally for weaker kyu players the chance of upset is around 30-40%
(~80-~150 ELO), for stronger players it goes down:
2d vs 5d: 11.5% chance of upset (~350 Elo)
3d vs 6d:  7.8% chance of upset (~432 Elo)
4d vs 7d:  3.3% chance of upset (~590 Elo)


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Go knowledge and UCT - PLEASE DO NOT REAPLY WITH ANY WORD ABOUT SCALING!

2008-01-31 Thread David Fotland
Please can someone answer my question about playout rates?  If you must make
some comment about scaling or nakade or anything similar, please change the
title and start a new thread.

David

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:computer-go-
> [EMAIL PROTECTED] On Behalf Of David Fotland
> Sent: Thursday, January 31, 2008 8:48 AM
> To: 'computer-go'
> Subject: [computer-go] Go knowledge and UCT
> 
> UCT with light playouts that just avoid filling eyes is scalable, but
> much
> weaker than the strongest programs at 19x19 go.
> 
> The strong programs have incorporated significant go knowledge, to
> direct
> the search to promising lines (usually local), to order moves to try,
> and to
> prune unpromising moves (please no scalability flames - I know they can
> unprune and stay scalable).
> 
> I suspect that the stronger programs have more go knowledge.  From
> published
> descriptions it looks like CrazyStone has the most pattern knowledge,
> and it
> is on top of the 19x19 ladder.
> 
> I suspect that programs with more go knowledge have slower playouts.
> 
> So, can the strong 19x19 programs please tell us your playout rates?  I
> expect the higher the rank, the fewer playouts per second.  I'm not
> interested in 9x9 data, since I think much less go knowledge is needed
> to
> play 9x9.  With your playout rate, please include the machine, number
> of
> CPUs, and how you measured it (from an empty board, or averaged over a
> whole
> game, or ...).
> 
> David
> 
> 
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] UCT and solving life and death

2008-01-31 Thread David Fotland
Since you hijacked my thread, I'm changing the title and injecting some data
:)

I tried to be very clear that I didn't want that thread to become another
scalability flame fest.

Here is a high level mogo game, level 15 vs level 16, that hinges on a life
and death problem that mogo gets wrong over and over again.

I added some comments in the sgf file.  White lets a big group die, then
both sides play as if they think it's alive, then finally black gives the game
away and loses.

It's clear mogo has a long way to go when a life/death problem includes a
big eye.

David

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:computer-go-
> [EMAIL PROTECTED] On Behalf Of Don Dailey
> Sent: Thursday, January 31, 2008 10:26 AM
> To: computer-go
> Cc: Joe Cepiel; Chris Hayashida
> Subject: Re: [computer-go] Go knowledge and UCT
> 
> 
> 
> terry mcintyre wrote:
> >
> > I may have misunderstood, so please clarify a point.
> >
> > Let's say the game hinges on solving a life-and-death problem - if
> you find the right move, you win the game; if not, you lose. Many such
> problems - as found in real games - are extremely order-dependent;
> there is exactly one right way to begin the sequence; all other
> attempts fail. It is not unusual for a sequence to be ten, twenty or
> thirty moves long, and to lead to an absolutely provable result no
> matter how the opponent may twist and turn.
> >
> > Does "when the depth is great enough they are still considered" mean
> that an unlikely move would not be considered for move A at the top of
> the tree, but would be considered for move E or F, perhaps? Or have I
> misunderstood?
> >
> The top of the tree is moves near the root since the tree is normally
> displayed upside down in computer science.So moves closer to the
> top
> of the tree are more likely to be considered.   The vast majority of
> time is spent near the leaf nodes, and so it makes sense to prune more
> aggressively near leaf nodes.Eventually, leaf nodes become nodes
> closer to the root as the tree expands beyond them.
> 
> So if the first move of the sequence is hard to find but the rest are
> reasonably normal moves,  the situation is not so bad.An example of
> this might be a critical move to a 1 point eye where the program uses
> progressive pruning and doesn't even consider the move for a while.
> In such a case the problem might still be solved in a reasonable amount
> of time once some "more likely" moves are first exhausted.
> 
> If the problem is such that many "unusual" moves must be discovered,
> moves that are normally bad patterns for instance,  then a UCT program
> would truly have a difficult time with this position.   In theory it
> HAS
> to eventually recover but of course we can find positions where this is
> arbitrarily difficult for UCT. But you must understand that as you
> get more difficult,  it takes more human expertise also.   So it's not
> fair to fixate on just what the computers limitations are. I
> believe
> humans are very very far away from perfect play themselves and "god"
> could construct problems that would also give us fits.
> 
> When go computers get into the several Dan strength level,   you will
> start to see that they can do some things far better than humans but
> will still have some obvious weaknesses that seem silly to us.
> But
> the only thing that matters is whether they can win. That's the
> "bottom line" as the old timers like to say.
> 
> > Here are pointers to pictures of some real-life problems on the 19x19
> board; these happened in actual play at the recent Oza 2008 West
> tournament.
> >
> > Black to play and capture 5 stones:
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158560976
> 542534642
> >
> > What is the status of the white stones in the corner?
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158566963
> 726945906
> >
> > White to play; can the black group at the bottom middle be killed?
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158572701
> 803253954
> >
> > What is the status of the black group at the top?
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158572860
> 717043922
> >
> > Black to play: who wins the capturing race between the large black
> dragon and the white group in the bottom right / center ?
> >
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158579148
> 549165890
> >
> > The players in these games were all 8 or 9 kyu AGA at the time. ( One
> has been promoted to 4 kyu due to his 6-0 score. Way to go, Jerry! )
> >
> > As for me, I got half of these problems right during the course of
> play.
> >
> >
> >
> >
> >
> ___
> _

Re: [computer-go] Go rating math information

2008-01-31 Thread Don Dailey
ELO ratings don't have to be absolute, just self-consistent.   So if you
beat someone only 7.2% of the time,  that means he is about 440 ELO
stronger than you.

However, I don't understand what the K-factor has to do with anything.  
scaling it up or down doesn't change anything.  It's common practice to
make the rating of strong players change more slowly as the result of a
win or loss but that's not relevant here.

The findings below indicate that differences between dan players are
greater than the differences between kyu players. So you could not
assign a fixed ELO per rank but it would have to progressively get
higher as the players get stronger.

I don't know how David figures 1000 ELO,  but I would expect the
difference to be much larger than that for 19x19 go. I don't believe
they are yet very close to 1 Dan.  

- Don

  

Jason House wrote:
> How do you compute that?  I think the ELO values given are the rating
> differences that correspond to the particular upset (win) rate. 
> They're not absolute ELO values.  A truly absolute value is also tough
> to define.  CGOS picked its value somewhat arbitrarily.
>
> On Jan 31, 2008 12:54 PM, David Fotland <[EMAIL PROTECTED]
> > wrote:
>
> This implies that the top UCT programs are still over 1000 ELO
> points from the top human amateurs.
>
>  
>
> *From:* [EMAIL PROTECTED]
> 
> [mailto:[EMAIL PROTECTED]
> ] *On Behalf Of *Andy
> *Sent:* Thursday, January 31, 2008 9:32 AM
> *To:* computer-go
> *Subject:* [computer-go] Go rating math information
>
>  
>
> There were some questions about the effective ELO difference of
> two players 3 ranks apart.  Here are some links to information
> about go rating formulas, and some statistics:
>
> http://senseis.xmp.net/?KGSRatingMath
> http://gemma.ujf.cas.cz/~cieply/GO/gor.html
> 
> http://gemma.ujf.cas.cz/~cieply/GO/statev.html
> 
>
> Both KGS and EGF scale the "k" factor according to the player's
> strength.  For weaker players the probability of an upset is greater.
>
> According to KGS formula:
> 8k vs 5k: 7.2% chance of upset (~440 Elo)
> 2d vs 5d: 2.0% chance of upset (~676 Elo)
>
> According to EGF even game statistics:
> Generally for weaker kyu players the chance of upset is around
> 30-40% (~80-~150 ELO), for stronger players it goes down:
> 2d vs 5d: 11.5% chance of upset (~350 Elo)
> 3d vs 6d:  7.8% chance of upset (~432 Elo)
> 4d vs 7d:  3.3% chance of upset (~590 Elo)
>
>
> ___
> computer-go mailing list
> computer-go@computer-go.org 
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>
> 
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Go knowledge and UCT

2008-01-31 Thread Don Dailey


terry mcintyre wrote:
>
> I may have misunderstood, so please clarify a point.
>
> Let's say the game hinges on solving a life-and-death problem - if you find 
> the right move, you win the game; if not, you lose. Many such problems - as 
> found in real games - are extremely order-dependent; there is exactly one 
> right way to begin the sequence; all other attempts fail. It is not unusual 
> for a sequence to be ten, twenty or thirty moves long, and to lead to an 
> absolutely provable result no matter how the opponent may twist and turn.
>
> Does "when the depth is great enough they are still considered" mean that an 
> unlikely move would not be considered for move A at the top of the tree, but 
> would be considered for move E or F, perhaps? Or have I misunderstood?
>   
The top of the tree is moves near the root since the tree is normally
displayed upside down in computer science.  So moves closer to the top
of the tree are more likely to be considered.   The vast majority of
time is spent near the leaf nodes, and so it makes sense to prune more
aggressively near leaf nodes.  Eventually, leaf nodes become nodes
closer to the root as the tree expands beyond them.

So if the first move of the sequence is hard to find but the rest are
reasonably normal moves,  the situation is not so bad.  An example of
this might be a critical move to a 1 point eye where the program uses
progressive pruning and doesn't even consider the move for a while.   
In such a case the problem might still be solved in a reasonable amount
of time once some "more likely" moves are first exhausted.  

If the problem is such that many "unusual" moves must be discovered, 
moves that are normally bad patterns for instance,  then a UCT program
would truly have a difficult time with this position.   In theory it HAS
to eventually recover but of course we can find positions where this is
arbitrarily difficult for UCT. But you must understand that as the
problems get more difficult,  it takes more human expertise also.   So it's not
fair to fixate on just what the computer's limitations are. I believe
humans are very very far away from perfect play themselves and "god"
could construct problems that would also give us fits. 

When go computers reach the several Dan strength level,   you will
start to see that they can do some things far better than humans but
will still have some obvious weaknesses that seem silly to us.   But
the only thing that matters is whether they can win. That's the
"bottom line" as the old timers like to say.

> Here are pointers to pictures of some real-life problems on the 19x19 board; 
> these happened in actual play at the recent Oza 2008 West tournament.
>
> Black to play and capture 5 stones: 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158560976542534642
>
> What is the status of the white stones in the corner? 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158566963726945906
>
> White to play; can the black group at the bottom middle be killed? 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158572701803253954
>
> What is the status of the black group at the top? 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158572860717043922
>
> Black to play: who wins the capturing race between the large black dragon and 
> the white group in the bottom right / center ? 
> http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158579148549165890
>
> The players in these games were all 8 or 9 kyu AGA at the time. ( One has 
> been promoted to 4 kyu due to his 6-0 score. Way to go, Jerry! )
>
> As for me, I got half of these problems right during the course of play. 
>
>
>
>
>   
> 
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Go rating math information

2008-01-31 Thread Jason House
How do you compute that?  I think the ELO values given are the rating
differences that correspond to the particular upset (win) rate.  They're not
absolute ELO values.  A truly absolute value is also tough to define.  CGOS
picked its value somewhat arbitrarily.

On Jan 31, 2008 12:54 PM, David Fotland <[EMAIL PROTECTED]> wrote:

>  This implies that the top UCT programs are still over 1000 ELO points
> from the top human amateurs.
>
>
>
> *From:* [EMAIL PROTECTED] [mailto:
> [EMAIL PROTECTED] *On Behalf Of *Andy
> *Sent:* Thursday, January 31, 2008 9:32 AM
> *To:* computer-go
> *Subject:* [computer-go] Go rating math information
>
>
>
> There were some questions about the effective ELO difference of two
> players 3 ranks apart.  Here are some links to information about go rating
> formulas, and some statistics:
>
> http://senseis.xmp.net/?KGSRatingMath
> http://gemma.ujf.cas.cz/~cieply/GO/gor.html
> http://gemma.ujf.cas.cz/~cieply/GO/statev.html
>
> Both KGS and EGF scale the "k" factor according to the player's strength.
> For weaker players the probability of an upset is greater.
>
> According to KGS formula:
> 8k vs 5k: 7.2% chance of upset (~440 Elo)
> 2d vs 5d: 2.0% chance of upset (~676 Elo)
>
> According to EGF even game statistics:
> Generally for weaker kyu players the chance of upset is around 30-40%
> (~80-~150 ELO), for stronger players it goes down:
> 2d vs 5d: 11.5% chance of upset (~350 Elo)
> 3d vs 6d:  7.8% chance of upset (~432 Elo)
> 4d vs 7d:  3.3% chance of upset (~590 Elo)
>
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Go rating math information

2008-01-31 Thread David Fotland
This implies that the top UCT programs are still over 1000 ELO points from
the top human amateurs.

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andy
Sent: Thursday, January 31, 2008 9:32 AM
To: computer-go
Subject: [computer-go] Go rating math information

 

There were some questions about the effective ELO difference of two players
3 ranks apart.  Here are some links to information about go rating formulas,
and some statistics:

 
http://senseis.xmp.net/?KGSRatingMath
 
http://gemma.ujf.cas.cz/~cieply/GO/gor.html
 
http://gemma.ujf.cas.cz/~cieply/GO/statev.html

Both KGS and EGF scale the "k" factor according to the player's strength.
For weaker players the probability of an upset is greater.

According to KGS formula:
8k vs 5k: 7.2% chance of upset (~440 Elo)
2d vs 5d: 2.0% chance of upset (~676 Elo)

According to EGF even game statistics:
Generally for weaker kyu players the chance of upset is around 30-40%
(~80-~150 ELO), for stronger players it goes down:
2d vs 5d: 11.5% chance of upset (~350 Elo)
3d vs 6d:  7.8% chance of upset (~432 Elo)
4d vs 7d:  3.3% chance of upset (~590 Elo)



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Go knowledge and UCT

2008-01-31 Thread terry mcintyre
> From: Don Dailey <[EMAIL PROTECTED]>
 
> Yes, basically the good programs "prune" in a temporary sense.   We call
> this progressive pruning because moves are identified which are extremely
> unlikely to be good, but when the depth is great enough they are still
> considered.


I may have misunderstood, so please clarify a point.

Let's say the game hinges on solving a life-and-death problem - if you find the 
right move, you win the game; if not, you lose. Many such problems - as found 
in real games - are extremely order-dependent; there is exactly one right way 
to begin the sequence; all other attempts fail. It is not unusual for a 
sequence to be ten, twenty or thirty moves long, and to lead to an absolutely 
provable result no matter how the opponent may twist and turn.

Does "when the depth is great enough they are still considered" mean that an 
unlikely move would not be considered for move A at the top of the tree, but 
would be considered for move E or F, perhaps? Or have I misunderstood?

Here are pointers to pictures of some real-life problems on the 19x19 board; 
these happened in actual play at the recent Oza 2008 West tournament.

Black to play and capture 5 stones: 
http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158560976542534642

What is the status of the white stones in the corner? 
http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158566963726945906

White to play; can the black group at the bottom middle be killed? 
http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158572701803253954

What is the status of the black group at the top? 
http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158572860717043922

Black to play: who wins the capturing race between the large black dragon and 
the white group in the bottom right / center ? 
http://picasaweb.google.com/terry.mcintyre/OzaWest2008/photo#5158579148549165890

The players in these games were all 8 or 9 kyu AGA at the time. ( One has been 
promoted to 4 kyu due to his 6-0 score. Way to go, Jerry! )

As for me, I got half of these problems right during the course of play. 




  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study

2008-01-31 Thread David Doshay

These are G5 Macs, so if we get a binary it needs to be appropriate.
We can do the compiling if you don't want to, but you may not wish
to deliver us your code, and in that case I can make you an account
so you can compile it and then delete the source if you wish.

The cluster will be available in about 10 days for the study, but we
always keep one CPU available for compiling, so that can be done
at most any time.

Whichever you prefer, we can take this off-line for the details.

Cheers,
David



On 30, Jan 2008, at 9:24 AM, Don Dailey wrote:


Hi Olivier,

Yes, that would be great.   Please do. Also,  is there a Mac  
version

of this?   We have the possibility of using a huge cluster of Mac
machines if we have a working binary. We could probably get you a
temporary account to build such a thing if you don't already have it.

- Don


Olivier Teytaud wrote:

I can provide a new release with double instead of float.
(unless the other mogo-people reading this mailing-list do not agree
for this; Sylvain, no problem for you ?).


I don't know exactly when it begins to do bad moves. However, I know
that
after several hours, the estimated winning rate converges to 1 or 0,
with
crazy principal variations, and the cause is low resolution of  
single

floats. In this study, it should not be a big factor of unscalability
given
the number of simulations.
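
A tiny illustration of the single-float issue (my own toy example, nothing to
do with MoGo's actual code): above 2^24 a 32-bit float can no longer
represent the next integer, so simulation counters stop incrementing, and
once enough playouts have accumulated, small updates to a winning rate
already close to 0 or 1 stop registering.

import struct

def f32(x):
    # Round a Python float to the nearest IEEE-754 single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

count = f32(2.0 ** 24)                   # 16,777,216 playouts so far
print(f32(count + 1) == count)           # True: the next playout is lost

mean = f32(0.999)                        # a winning-rate estimate near 1.0
step = f32(-0.999 / 5.0e7)               # contribution of one more loss
print(f32(mean + step) == mean)          # True: the loss no longer registers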

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Go rating math information

2008-01-31 Thread Andy
There were some questions about the effective ELO difference of two players
3 ranks apart.  Here are some links to information about go rating formulas,
and some statistics:

http://senseis.xmp.net/?KGSRatingMath
http://gemma.ujf.cas.cz/~cieply/GO/gor.html
http://gemma.ujf.cas.cz/~cieply/GO/statev.html

Both KGS and EGF scale the "k" factor according to the player's strength.
For weaker players the probability of an upset is greater.

According to KGS formula:
8k vs 5k: 7.2% chance of upset (~440 Elo)
2d vs 5d: 2.0% chance of upset (~676 Elo)

According to EGF even game statistics:
Generally for weaker kyu players the chance of upset is around 30-40%
(~80-~150 ELO), for stronger players it goes down:
2d vs 5d: 11.5% chance of upset (~350 Elo)
3d vs 6d:  7.8% chance of upset (~432 Elo)
4d vs 7d:  3.3% chance of upset (~590 Elo)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Go knowledge and UCT

2008-01-31 Thread Don Dailey


David Fotland wrote:
> UCT with light playouts that just avoid filling eyes is scalable, but much
> weaker than the strongest programs at 19x19 go.
>
> The strong programs have incorporated significant go knowledge, to direct
> the search to promising lines (usually local), to order moves to try, and to
> prune unpromising moves (please no scalability flames - I know they can
> unprune and stay scalable).
>   
Yes, basically the good programs "prune" in a temporary sense.   We call
this progressive pruning because moves are identified which are
extremely unlikely to be good,  but when the depth is great enough they
are still considered.  

Any good global search is based on this principle,  determining what to
look at and what NOT to look at and yet never permanently discarding
something (unless it can be eliminated admissibly.)
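
To make "temporary pruning" concrete, here is a toy Python sketch of the idea
(my own illustration, not any particular program's code): moves are ordered
by some knowledge-based prior, only the first k are searched at a node, and k
grows with the node's visit count, so even an ugly-looking move gets admitted
once the node has been visited often enough.

import math

def candidate_moves(moves_by_prior, visits, base=3):
    # Progressive widening/unpruning: search only the top-k moves, where k
    # grows slowly with the number of visits to this node.
    k = base + int(math.log(visits + 1, 2))
    return moves_by_prior[:k]

moves = ["move%d" % i for i in range(10)]    # worst-looking move last
for visits in (0, 10, 100, 1000, 10000):
    print(visits, candidate_moves(moves, visits))

The constants are arbitrary; the point is only that nothing is ever discarded
permanently.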

- Don


> I suspect that the stronger programs have more go knowledge.  From published
> descriptions it looks like CrazyStone has the most pattern knowledge, and it
> is on top of the 19x19 ladder.
>
> I suspect that programs with more go knowledge have slower playouts.
>
> So, can the strong 19x19 programs please tell us your playout rates?  I
> expect the higher the rank, the fewer playouts per second.  I'm not
> interested in 9x9 data, since I think much less go knowledge is needed to
> play 9x9.  With your playout rate, please include the machine, number of
> CPUs, and how you measured it (from an empty board, or averaged over a whole
> game, or ...).
>
> David
>
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Go knowledge and UCT

2008-01-31 Thread David Fotland
UCT with light playouts that just avoid filling eyes is scalable, but much
weaker than the strongest programs at 19x19 go.

The strong programs have incorporated significant go knowledge, to direct
the search to promising lines (usually local), to order moves to try, and to
prune unpromising moves (please no scalability flames - I know they can
unprune and stay scalable).

I suspect that the stronger programs have more go knowledge.  From published
descriptions it looks like CrazyStone has the most pattern knowledge, and it
is on top of the 19x19 ladder.

I suspect that programs with more go knowledge have slower playouts.

So, can the strong 19x19 programs please tell us your playout rates?  I
expect the higher the rank, the fewer playouts per second.  I'm not
interested in 9x9 data, since I think much less go knowledge is needed to
play 9x9.  With your playout rate, please include the machine, number of
CPUs, and how you measured it (from an empty board, or averaged over a whole
game, or ...).

David


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 9x9 Study

2008-01-31 Thread Don Dailey
I'll put together a few games and send them to you.

- Don



David Fotland wrote:
> I'm only 3 dan but I wouldn't mind taking a look.  I suspect that there
> might be some systematic issue causing the rolloff at high levels.
>
> David
>
>   
>> -Original Message-
>> From: [EMAIL PROTECTED] [mailto:computer-go-
>> [EMAIL PROTECTED] On Behalf Of Don Dailey
>> Sent: Thursday, January 31, 2008 8:33 AM
>> To: computer-go
>> Subject: Re: [computer-go] 9x9 Study
>>
>> I would like to see a very strong players analysis of some of the games
>> of Mogo at the high levels in the study,   but I am very leery of
>> subjective human analysis. Even though I would like to see it,  I
>> would take what I heard with a grain of salt.
>>
>> I hate to keep bringing this up,  but the "objective" analysis by chess
>> grandmasters of computer games played a couple of decades ago were
>> full
>> of nonsense, ego and bias.Humans cannot objectively  judge playing
>> ability by looking at individual moves of games.They can only
>> comment on the quality of moves as judged by their own playing
>> ability.They tend to draw sweeping conclusions based on one or two
>> moves they don't like.
>>
>> And it's even more dodgy with the UCT style of not maximizing total
>> territory,  just winning probabilities.   A really strong player is
>> liable to be very critical of those cases where the UCT programs don't
>> defend an easily defendable group because the game is over anyway.
>>
>> - Don
>>
>>
>>
>> terry mcintyre wrote:
>> 
>>> I can well believe that a thorough implementation of even a flawed
>>>   
>> knowledge of the game of Go will lead to fairly strong play.
>> 
>>> For instance, given enough time, just knowing enough to keep score
>>>   
>> and remove groups which are deprived of liberties would be enough to
>> prune the more egregious mistakes made by beginners - and sometimes
>> even by dan level players; somewhere there is a youtube video of a pro
>> who put himself in atari. I once read the commentary of a game by pro
>> players; one was asked why he played at a certain move. He answered "I
>> was hallucinating. I saw something which wasn't there."
>> 
>>> I observe that Mogo levels 17 and 18, with approximately 100 games
>>>   
>> each, are doing about as well as Mogo 16, with about 500 games. It may
>> be too soon for this to be statistically significant?
>> 
>>> It would be interesting to examine the games, and determine what the
>>>   
>> distribution of scores is like, and whether any patterns are being
>> established, any preferred avenues of play; whether any weaknesses
>> remain, or whether they are being discovered and ameliorated at higher
>> levels of play.
>> 
>>> Gnugo has a regression test suite of various problems; it might be
>>>   
>> useful to do something with game records, wherever blunders are
>> discovered.
>> 
>>>
>>>
>>>
>>>
>>>   
>> ___
>> _
>> 
>> 
>>> ___
>>> computer-go mailing list
>>> computer-go@computer-go.org
>>> http://www.computer-go.org/mailman/listinfo/computer-go/
>>>
>>>
>>>   
>> ___
>> computer-go mailing list
>> computer-go@computer-go.org
>> http://www.computer-go.org/mailman/listinfo/computer-go/
>> 
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] 9x9 Study

2008-01-31 Thread David Fotland
I'm only 3 dan but I wouldn't mind taking a look.  I suspect that there
might be some systematic issue causing the rolloff at high levels.

David

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:computer-go-
> [EMAIL PROTECTED] On Behalf Of Don Dailey
> Sent: Thursday, January 31, 2008 8:33 AM
> To: computer-go
> Subject: Re: [computer-go] 9x9 Study
> 
> I would like to see a very strong players analysis of some of the games
> of Mogo at the high levels in the study,   but I am very leery of
> subjective human analysis. Even though I would like to see it,  I
> would take what I heard with a grain of salt.
> 
> I hate to keep bringing this up,  but the "objective" analysis by chess
> grandmasters of computer games played a couple of decades ago were
> full
> of nonsense, ego and bias.Humans cannot objectively  judge playing
> ability by looking at individual moves of games.They can only
> comment on the quality of moves as judged by their own playing
> ability.They tend to draw sweeping conclusions based on one or two
> moves they don't like.
> 
> And it's even more dodgy with the UCT style of not maximizing total
> territory,  just winning probabilities.   A really strong player is
> liable to be very critical of those cases where the UCT programs don't
> defend an easily defendable group because the game is over anyway.
> 
> - Don
> 
> 
> 
> terry mcintyre wrote:
> > I can well believe that a thorough implementation of even a flawed
> knowledge of the game of Go will lead to fairly strong play.
> >
> > For instance, given enough time, just knowing enough to keep score
> and remove groups which are deprived of liberties would be enough to
> prune the more egregious mistakes made by beginners - and sometimes
> even by dan level players; somewhere there is a youtube video of a pro
> who put himself in atari. I once read the commentary of a game by pro
> players; one was asked why he played at a certain move. He answered "I
> was hallucinating. I saw something which wasn't there."
> >
> > I observe that Mogo levels 17 and 18, with approximately 100 games
> each, are doing about as well as Mogo 16, with about 500 games. It may
> be too soon for this to be statistically significant?
> >
> > It would be interesting to examine the games, and determine what the
> distribution of scores is like, and whether any patterns are being
> established, any preferred avenues of play; whether any weaknesses
> remain, or whether they are being discovered and ameliorated at higher
> levels of play.
> >
> > Gnugo has a regression test suite of various problems; it might be
> useful to do something with game records, wherever blunders are
> discovered.
> >
> >
> >
> >
> >
> >
> ___
> _
> > ___
> > computer-go mailing list
> > computer-go@computer-go.org
> > http://www.computer-go.org/mailman/listinfo/computer-go/
> >
> >
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 9x9 Study

2008-01-31 Thread Don Dailey
I would like to see a very strong player's analysis of some of the games
of Mogo at the high levels in the study,   but I am very leery of
subjective human analysis. Even though I would like to see it,  I
would take what I heard with a grain of salt. 

I hate to keep bringing this up,  but the "objective" analysis by chess
grandmasters of computer games played a couple of decades ago were full
of nonsense, ego and bias.  Humans cannot objectively judge playing
ability by looking at individual moves of games.  They can only
comment on the quality of moves as judged by their own playing
ability.  They tend to draw sweeping conclusions based on one or two
moves they don't like.

And it's even more dodgy with the UCT style of not maximizing total
territory,  just winning probabilities.   A really strong player is
liable to be very critical of those cases where the UCT programs don't
defend an easily defendable group because the game is over anyway.

- Don



terry mcintyre wrote:
> I can well believe that a thorough implementation of even a flawed knowledge 
> of the game of Go will lead to fairly strong play.
>
> For instance, given enough time, just knowing enough to keep score and remove 
> groups which are deprived of liberties would be enough to prune the more 
> egregious mistakes made by beginners - and sometimes even by dan level 
> players; somewhere there is a youtube video of a pro who put himself in 
> atari. I once read the commentary of a game by pro players; one was asked why 
> he played at a certain move. He answered "I was hallucinating. I saw 
> something which wasn't there."
>
> I observe that Mogo levels 17 and 18, with approximately 100 games each, are 
> doing about as well as Mogo 16, with about 500 games. It may be too soon for 
> this to be statistically significant?
>
> It would be interesting to examine the games, and determine what the 
> distribution of scores is like, and whether any patterns are being 
> established, any preferred avenues of play; whether any weaknesses remain, or 
> whether they are being discovered and ameliorated at higher levels of play.
>
> Gnugo has a regression test suite of various problems; it might be useful to 
> do something with game records, wherever blunders are discovered. 
>
>
>
>
>
>   
> 
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 9x9 Study

2008-01-31 Thread terry mcintyre
I can well believe that a thorough implementation of even a flawed knowledge of 
the game of Go will lead to fairly strong play.

For instance, given enough time, just knowing enough to keep score and remove 
groups which are deprived of liberties would be enough to prune the more 
egregious mistakes made by beginners - and sometimes even by dan level players; 
somewhere there is a youtube video of a pro who put himself in atari. I once 
read the commentary of a game by pro players; one was asked why he played at a 
certain move. He answered "I was hallucinating. I saw something which wasn't 
there."

I observe that Mogo levels 17 and 18, with approximately 100 games each, are 
doing about as well as Mogo 16, with about 500 games. It may be too soon for 
this to be statistically significant?

It would be interesting to examine the games, and determine what the 
distribution of scores is like, and whether any patterns are being established, 
any preferred avenues of play; whether any weaknesses remain, or whether they 
are being discovered and ameliorated at higher levels of play.

Gnugo has a regression test suite of various problems; it might be useful to do 
something with game records, wherever blunders are discovered. 





  

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-31 Thread Don Dailey

>> You probably don't understand how UCT works.   UCT balances exploration
>> with exploitation.   The UCT tree WILL explore B1, but will explore it
>> with low frequency.That is unless the tree actually throws out 1
>> point eye moves (in which case it is not properly scalable and broken in
>> some sense.)
>> 
>
>
> It was my understanding that most UCT programs would not consider b1, since
> they use the same move-generation for the MC playouts as for the UCT tree,
> and that forbids filling your own eyes. "Broken in some sense", as you say,
> although probably playing a bit stronger for it.
>
> If the move is considered at all, I have no problems believing that UCT will
> eventually find it. That much I understand of UCT.
>
> Sorry if I confused practical implementations and the abstract. As to
> publishing my findings, I need to make some real ones first, and then be sure
> of them. I have some ideas I am pursuing, but things go slowly when I only
> have some of my spare time for this project. When I do, it may be on a web
> page, or maybe just on this list - I am not in the game to publish academic
> papers. More to learn things myself, and if possible to add my small
> contribution to a field I find interesting.
>
>   
I suspect that many UCT programs do actually prune some moves in the
tree such as 1 point eyes,  but that of course depends on the
implementation details for each program.My own program does this
simply because it seems pragmatic to do at practical playing levels  and
I fully understand that I give up scalability at the higher levels.   Of
course I assume that the levels I would benefit from are too high to be
practical.I could very well be mistaken about this and it's a
possible explanation for why FatMan peaked out in the study (latest
results show that it's still improving but very slowly.)   I will
have to say I was lazy and didn't test this.Of course the eye rule
or something like it must be applied in the play-outs, but in the tree
to be "fully" scalable you must not prune permanently.

I think most of us are interested in producing the strongest practical
playing program so it's very easy to confuse, as you say, the practical
from the abstract.  

I think what will eventually happen is that as programs improve,  the
weaknesses we observe will get corrected.   There is no law that forbids
us from fixing issues such as slow recognition of relatively basic
nakade positions and of course I simply make the point that in a truly
scalable program you are guaranteed a general strength increase as you
go deeper, including handling of these kinds of positions.  Every
problem eventually goes away but I make no claim that it will happen as
fast as we wish it would.  I remind everyone that even in chess, 
it's possible to find or construct positions that make programs look
like fools, but it's another thing entirely to be able to beat these
programs.   

Should someone find some clever solution to the nakade problem,   it's
no guarantee that it will actually make the program stronger, as someone
recently pointed out. As Gian-Carlo pointed out, what seems like a
serious failing to us may not represent a serious failing in some
idealistic sense (it may not have the impact on play WE think it should).

I can say that I have known chess players that were stronger than myself
who were totally deficient in areas of play that astounded me, yet they
could get results.So what we think is important, again as Gian-Carlo
points out,  may not matter as much as we think.  The Chess programs
of a few years ago were remarkable examples of that - they were
incredibly naive and laughable at one point in the early days and from
our twisted point of view it was unbelievable because they were playing
a few hundred ELO higher than you would expect.

- Don


> - Heikki
>
>
>   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] CGOS result tables

2008-01-31 Thread Jason House
Please go to http://www.sourceforge.net/projects/cgos and submit a feature
request.

On Jan 30, 2008 7:00 PM, Michael Williams <[EMAIL PROTECTED]>
wrote:

> I was not talking about the study.  I was talking about the main CGOS
> servers.
>
> Don Dailey wrote:
> > That information is in the autotester,  but it wouldn't be accurate on
> > the main page since many computers of various types and with various
> > loads are being used in the study.
> >
> > I suppose one could argue that the total conglomerate average would be
> > reasonably accurate according to the law of averages.
> >
> > But I just noticed that I'm not sending that data over - I would have to
> > modify the infrastructure to deal with that by sending new kits - but
> > it's possible to do.If anyone has dropped out  of the study I would
> > have to get them to update again - probably a lot of hassle.
> >
> > Perhaps I will make the next test handle this.
> >
> > - Don
> >
> >
> > Michael Williams wrote:
> >> Any chance of adding a new column to the CGOS result tables to show
> >> the average amount of time used per game?
> >> ___
> >> computer-go mailing list
> >> computer-go@computer-go.org
> >> http://www.computer-go.org/mailman/listinfo/computer-go/
> >>
> > ___
> > computer-go mailing list
> > computer-go@computer-go.org
> > http://www.computer-go.org/mailman/listinfo/computer-go/
> >
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 19x19 Study

2008-01-31 Thread Don Dailey
We want a version simply for the study - it may not make a performance
difference and will probably hurt the performance for a given time
level,   so I would suggest it not be the primary version. 

- Don


Sylvain Gelly wrote:
> No problem for me. I did not want to multiply the number of versions
> not to confuse people. With the double version, don't forget it will
> increase the memory footprint for a given number of nodes.
>
> Sylvain
>
> 2008/1/30, Olivier Teytaud <[EMAIL PROTECTED]
> >:
>
> I can provide a new release with double instead of float.
> (unless the other mogo-people reading this mailing-list do not
> agree for
> this; Sylvain, no problem for you ?).
>
> > I don't know exactly when it begins to do bad moves. However, I
> know that
> > after several hours, the estimated winning rate converges to 1
> or 0, with
> > crazy principal variations, and the cause is low resolution of
> single
> > floats. In this study, it should not be a big factor of
> unscalability given
> > the number of simulations.
> ___
> computer-go mailing list
> computer-go@computer-go.org 
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
>
> 
>
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study

2008-01-31 Thread Sylvain Gelly
No problem for me. I did not want to multiply the number of versions so as not to
confuse people. With the double version, don't forget it will increase the
memory footprint for a given number of nodes.

Sylvain

2008/1/30, Olivier Teytaud <[EMAIL PROTECTED]>:
>
> I can provide a new release with double instead of float.
> (unless the other mogo-people reading this mailing-list do not agree for
> this; Sylvain, no problem for you ?).
>
> > I don't know exactly when it begins to do bad moves. However, I know
> that
> > after several hours, the estimated winning rate converges to 1 or 0,
> with
> > crazy principal variations, and the cause is low resolution of single
> > floats. In this study, it should not be a big factor of unscalability
> given
> > the number of simulations.
> ___
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-31 Thread Heikki Levanto
On Wed, Jan 30, 2008 at 04:35:18PM -0500, Don Dailey wrote:
> Heikki Levanto wrote:
>
> > On Wed, Jan 30, 2008 at 03:23:35PM -0500, Don Dailey wrote:
> >   
> >> Having said that,  I am interested in this.  Is there something that
> >> totally prevents the program from EVER seeing the best move?  
> >> 
> >
> > Someone, I think it was Gunnar, pointed out that something like this:
> >
> > 5 | # # # # # # 
> > 4 | + + + + + # 
> > 3 | O O O O + # 
> > 2 | # # + O + # 
> > 1 | # + # O + # 
> >   -------------
> >     a b c d e f
> >
> > Here black (#) must play at b1 to kill white (O). If white gets to move
> > first, he can live with c2, and later making two eyes by capturing at b1.
> >
> 
> You are totally incorrect about this. First of all, saying that "no
> amount of UCT-tree bashing will discover this move"  invalidates all the
> research and subsequent proofs done by researchers.   You may want
> publish your own findings on this and see how well it flies.
>
> You probably don't understand how UCT works.   UCT balances exploration
> with exploitation.   The UCT tree WILL explore B1, but will explore it
> with low frequency.That is unless the tree actually throws out 1
> point eye moves (in which case it is not properly scalable and broken in
> some sense.)


It was my understanding that most UCT programs would not consider b1, since
they use the same move-generation for the MC playouts as for the UCT tree,
and that forbids filling your own eyes. "Broken in some sense", as you say,
although probably playing a bit stronger for it.
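
For concreteness, here is a small Python sketch of the kind of cheap eye test
I mean (my own guess at the heuristic, not code taken from any program),
applied to the quoted position.  b1 passes the test as a black eye, so a move
generator that refuses to fill its own eyes will simply never propose the
killing move there:

EMPTY, BLACK, WHITE = '.', '#', 'O'

# Rows of the quoted corner, top (row 5) down to bottom (row 1).
ROWS = ["######",
        "+++++#",
        "OOOO+#",
        "##+O+#",
        "#+#O+#"]

def at(col, row):
    # 0-based column (a=0 .. f=5), 1-based row; off-board points return None.
    if not (0 <= col < 6 and 1 <= row <= 5):
        return None
    ch = ROWS[5 - row][col]
    return EMPTY if ch == '+' else ch

def looks_like_eye(col, row, color):
    # The usual cheap test: an empty point whose orthogonal neighbours are all
    # `color`, with no hostile diagonal on the edge and at most one hostile
    # diagonal in the interior.  It cannot tell a real eye from a vital point.
    if at(col, row) != EMPTY:
        return False
    orth = [at(col + 1, row), at(col - 1, row), at(col, row + 1), at(col, row - 1)]
    if any(n not in (color, None) for n in orth):
        return False
    diag = [at(col + 1, row + 1), at(col - 1, row + 1),
            at(col + 1, row - 1), at(col - 1, row - 1)]
    hostile = sum(1 for n in diag if n not in (color, EMPTY, None))
    on_edge = None in orth or None in diag
    return hostile == 0 if on_edge else hostile <= 1

print(looks_like_eye(1, 1, BLACK))   # True: b1 is vetoed as an "own eye" fill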

If the move is considered at all, I have no problems believing that UCT will
eventually find it. That much I understand of UCT.

Sorry if I confused practical implementations and the abstract. As to
publishing my findings, I need to make some real ones first, and then be sure
of them. I have some ideas I am pursuing, but things go slowly when I only
have some of my spare time for this project. When I do, it may be on a web
page, or maybe just on this list - I am not in the game to publish academic
papers. More to learn things myself, and if possible to add my small
contribution to a field I find interesting.

- Heikki


-- 
Heikki Levanto   "In Murphy We Turst" heikki (at) lsd (dot) dk

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/