Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-31 Thread Heikki Levanto
On Wed, Jan 30, 2008 at 04:35:18PM -0500, Don Dailey wrote:
 Heikki Levanto wrote:

  On Wed, Jan 30, 2008 at 03:23:35PM -0500, Don Dailey wrote:

  Having said that,  I am interested in this.  Is there something that
  totally prevents the program from EVER seeing the best move?  
  
 
  Someone, I think it was Gunnar, pointed out that something like this:
 
  5 | # # # # # # 
  4 | + + + + + # 
  3 | O O O O + # 
  2 | # # + O + # 
  1 | # + # O + # 
    -------------
      a b c d e f
 
  Here black (#) must play at b1 to kill white (O). If white gets to move
  first, he can live with c2, and later making two eyes by capturing at b1.
 
 
 You are totally incorrect about this. First of all, saying that no
 amount of UCT-tree bashing will discover this move  invalidates all the
 research and subsequent proofs done by researchers.   You may want to
 publish your own findings on this and see how well it flies.

 You probably don't understand how UCT works.   UCT balances exploration
 with exploitation.   The UCT tree WILL explore B1, but will explore it
 with low frequency.That is unless the tree actually throws out 1
 point eye moves (in which case it is not properly scalable and broken in
 some sense.)


It was my understanding that most UCT programs would not consider b1, since
they use the same move-generation for the MC playouts as for the UCT tree,
and that forbids filling your own eyes. Broken in some sense, as you say,
although probably playing a bit stronger for it.

If the move is considered at all, I have no problems believing that UCT will
eventually find it. That much I understand of UCT.
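To make the exploration/exploitation point concrete, here is a minimal sketch of the UCB1 rule that UCT selection is built on; the move names and win/visit counts are invented for illustration and are not taken from any program discussed here. The exploration bonus grows as a move is neglected, so a bad-looking move like b1 keeps getting occasional visits as long as it is generated at all:

```python
import math

def ucb1_select(children, total_visits, c=1.4):
    """Pick the child maximizing mean value plus an exploration bonus.
    Unvisited children get priority, so no generated move is starved."""
    best, best_score = None, -1.0
    for move, (wins, visits) in children.items():
        if visits == 0:
            return move  # always try each candidate at least once
        score = wins / visits + c * math.sqrt(math.log(total_visits) / visits)
        if score > best_score:
            best, best_score = move, score
    return best

# A bad-looking move keeps a nonzero, slowly growing exploration bonus,
# so as the parent's visit count rises, UCB1 eventually revisits it.
children = {"b1": (1, 10), "pass": (60, 100)}
pick = ucb1_select(children, 110)
```

The flip side, as discussed above, is that a move filtered out of move generation entirely never enters this loop, no matter how many playouts are run.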

Sorry if I confused practical implementations and the abstract. As to
publishing my findings, I need to make some real ones first, and then be sure
of them. I have some ideas I am pursuing, but things go slowly when I only
have some of my spare time for this project. When I do, it may be on a web
page, or maybe just on this list - I am not in the game to publish academic
papers. More to learn things myself, and if possible to add my small
contribution to a field I find interesting.

- Heikki


-- 
Heikki Levanto   In Murphy We Turst heikki (at) lsd (dot) dk

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-31 Thread Don Dailey

 You probably don't understand how UCT works.   UCT balances exploration
 with exploitation.   The UCT tree WILL explore B1, but will explore it
 with low frequency.That is unless the tree actually throws out 1
 point eye moves (in which case it is not properly scalable and broken in
 some sense.)
 


 It was my understanding that most UCT programs would not consider b1, since
 they use the same move-generation for the MC playouts as for the UCT tree,
 and that forbids filling your own eyes. Broken in some sense, as you say,
 although probably playing a bit stronger for it.

 If the move is considered at all, I have no problems believing that UCT will
 eventually find it. That much I understand of UCT.

 Sorry if I confused practical implementations and the abstract. As to
 publishing my findings, I need to make some real ones first, and then be sure
 of them. I have some ideas I am pursuing, but things go slowly when I only
 have some of my spare time for this project. When I do, it may be on a web
 page, or maybe just on this list - I am not in the game to publish academic
 papers. More to learn things myself, and if possible to add my small
 contribution to a field I find interesting.

   
I suspect that many UCT programs do actually prune some moves in the
tree such as 1 point eyes,  but that of course depends on the
implementation details for each program.My own program does this
simply because it seems pragmatic to do at practical playing levels  and
I fully understand that I give up scalability at the higher levels.   Of
course I assume that the levels I would benefit from are too high to be
practical.I could very well be mistaken about this and it's a
possible explanation for why FatMan peaked out in the study (latest
results show that it's still improving but very slowly.)   I will
have to say I was lazy and didn't test this.Of course the eye rule
or something like it must be applied in the play-outs, but in the tree
to be fully scalable you must not prune permanently.
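Don's distinction (filter eye-filling moves in the play-outs, but never prune them permanently from the tree) can be sketched roughly as follows. The board representation, the crude eye test, and the function names are all assumptions for illustration, not any particular program's code:

```python
def neighbors(p, size):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < size and 0 <= y + dy < size]

def looks_like_own_eye(board, p, color, size):
    """Crude eye test used only to filter PLAYOUT moves: every on-board
    neighbor is a friendly stone. It deliberately ignores diagonals, so
    it can be fooled by false eyes -- which is exactly why a scalable
    program must not apply it as a permanent prune in the UCT tree."""
    return all(board.get(n) == color for n in neighbors(p, size))

def playout_candidates(board, empties, color, size):
    # filter self-eye-filling moves in the random play-outs only
    return [p for p in empties
            if not looks_like_own_eye(board, p, color, size)]
```

A tree search that wants full scalability would still admit such moves with low prior weight instead of deleting them outright.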

I think most of us are interested in producing the strongest practical
playing program so it's very easy to confuse, as you say, the practical
from the abstract.  

I think what will eventually happen is that as programs improve,  the
weakness we observe will get corrected.   There is no law that forbids
us from fixing issues such as slow recognition of relatively basic
nakade positions and of course I simply make the point that in a truly
scalable program you are guaranteed a general strength increase as you
go deeper, including handling of these kinds of positions.Every
problem eventually goes away but I make no claim that it will happen as
fast as we wish it would.I remind everyone that even in chess, 
it's possible to find or construct positions that make programs look
like fools, but it's another thing entirely to be able to beat these
programs.   

Should someone find some clever solution to the nakade problem,   it's
no guarantee that it will actually make the program stronger, as someone
recently pointed out. As Gian-Carlo pointed out, what seems like a
serious failing to us may not represent a serious failing in some
idealistic sense (it may not have the impact on play WE think it should have.)

I can say that I have known chess players that were stronger than myself
who were totally deficient in areas of play that astounded me, yet they
could get results.So what we think is important, again as Gian-Carlo
points out,  may not matter as much as we think.  The Chess programs
of a few years ago were remarkable examples of that - they were
incredibly naive and laughable at one point in the early days and from
our twisted point of view it was unbelievable because they were playing
a few hundred ELO higher than you would expect.

- Don


 - Heikki


   


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Jacques Basaldúa

Dave Hillis wrote:

 I've noticed this in games on KGS; a lot of people lose games
 with generous time limits because they, rashly, try to keep up
 with my dumb but very fast bot and make blunders.

What Don says about humans scaling applies to humans making
an effort to use the time they have, but when we play on KGS
after a hard work day, we (I guess a lot of people like myself,
including my opponents and the players I watch) play for
pleasure. We avoid too-fast settings because they introduce
unnecessary pressure, and we hate too-slow ones because they make
the game endless. We play _independently of the remaining time_.
Most moves fast, and from time to time we ponder 10 to 20 seconds.

That's why KGS time settings for humans have to be taken with a
grain of salt.


About Don's arguments on self testing:

I would agree at 100% if it wasn't for the known limitations:
Nakade, not filling own eyes, etc. Because the program is blind
to them it is blind in both senses: it does not consider those
moves when defending, but it does not consider them when attacking
either. Your programs behave as if they were converging to perfect
play in a game whose rules forbid those moves. But these moves are
perfectly legal! At weak levels, there are more important things
to care about, but as the level rises there is a point at which
understanding or not understanding these situations makes
the difference. A program that understood these situations,
but had some other small weakness, could have a strong impact
on the ratings. Perhaps Mogo12 and Mogo16 would not be so
different in their chances of beating that program as they
are in beating each other.


Jacques.



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey
Dave,

I really thought about mentioning that in my original post because it
does affect the ability of human players. In fact one technique I
use when I'm losing badly in chess is to start playing instantly. I
have actually salvaged a few games that way - the opponent starts
playing fast and now there is a chance for a blunder. 

It's well known that when your opponent is in time pressure, the
stupidest thing you can do is try to play fast - essentially giving up
your time advantage and turning it into an even contest. 

But I didn't mention this in my original post because mature players
won't fall for this.   I would implement this by just forcing the move
to take at least 10 seconds.

- Don


[EMAIL PROTECTED] wrote:

  From: Don Dailey [EMAIL PROTECTED]
  ...
   Rémi Coulom wrote:
   ...
   Instead of playing UCT bot vs UCT bot, I am thinking about running a
   scaling experiment against humans on KGS. I'll probably start with 2k,
   8k, 16k, and 32k playouts.

  That would be a great experiment.   There is only 1 issue here and
  that's time control.   I would suggest the test is more meaningful if
  you use the same time-control for all play-out levels, even if Crazy
  Stone plays really fast.This is because the ELO curve for humans is
  also based on thinking time. 

  If you set the time control at just the rate the program needs to use
  all its time, you might very well find the program plays better at
  fast time controls, and the result would be meaningless even as a rough
  measurement of ELO strength.


 In addition to having all versions of the program use the same time
 control, I think it would be best if they all made their moves at the
 same rate. When humans play against a bot playing at a fast tempo,
 they tend to speed up themselves regardless of the time limits. The
 human's pondering is also a factor.

 I've noticed this in games on KGS; a lot of people lose games with
 generous time limits because they, rashly, try to keep up with my dumb
 but very fast bot and make blunders.

 - Dave Hillis


 


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey


Jacques Basaldúa wrote:
 Dave Hillis wrote:

  I've noticed this in games on KGS; a lot of people lose games
  with generous time limits because they, rashly, try to keep up
  with my dumb but very fast bot and make blunders.

 What Don says about humans scaling applies to humans making
 an effort to use the time they have, but when we play on KGS
 after a hard work day, we (I guess a lot of people like myself,
 including my opponents and the players I watch) play for
 pleasure. We avoid too fast settings, because it introduces
 unnecessary pressure, and we hate too slow because it makes
 the game endless. We play _independently of the remaining time_.
 Most moves fast, and from time to time we ponder 10 to 20 seconds.

 That's why KGS time settings for humans have to be taken with a
 grain of salt.

That's why there should be something at stake.   I assumed that the
games played are rated games on KGS. Is that not so?   

If I were going to do a really serious experiment and had the funds,  I
would make the games rated,  and I would further motivate the players
with prize money. A modest amount for playing and a larger amount for
a win. This would motivate players to play the really long
time-controls and play them seriously.   It doesn't take much money
to motivate players,   it could be 5 or 10 dollars.  Just the fact that
there is money involved at all will get their ego and pride working.

 About Don's arguments on self testing:

 I would agree at 100% if it wasn't for the known limitations:
 Nakade, not filling own eyes, etc. Because the program is blind
 to them it is blind in both senses: it does not consider those
 moves when defending, but it does not consider them when attacking
 either. Your programs behave as if they were converging to perfect
 play in a game whose rules forbid those moves. But these moves are
 perfectly legal! At weak levels, there are more important things
 to care about, but as the level rises there is a point at which
 understanding or not understanding these situations makes
 the difference. A program that understood these situations,
 but had some other small weakness could have strong impact
 in the ratings. Perhaps, Mogo12 and Mogo16 would not be so
 different in their chances of beating that program as they
 are in beating each other.
Please note that I was talking about a program that is properly and
correctly scalable so I think we are in 100% agreement after all.The
current MC programs, as you point out, are not fully admissible in
this sense.  

However, at the levels we are testing,  I don't believe this is
affecting the ratings (or self-play effect we are talking about) in a
significant way, but I could be wrong since I am no expert on playing Go.

How often is this observed?   Perhaps at some point we could compile a
list of games that represent serious upsets and see how often this would
have been a factor. Probably a more accurate way is to put in
programs that understand these things and see if they crank the ratings
down.My guess is that they will only slightly decrease the ratings
of the top programs.

- Don



 Jacques.



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread steve uurtamo
 I would agree at 100% if it wasn't for the known limitations:
 Nakade, not filling own eyes, etc. Because the program is blind
 to them it is blind in both senses: it does not consider those
 moves when defending, but it does not consider them when attacking
 either.

this is well said.

one problem with testing this is that there aren't a lot of good examples
of programs that can scale as easily but which are known to not have
these disadvantages.  it's not an issue of time, really -- anything that
can scale as well as mogo would have been worth testing, even if it took 2x as
long to run, simply because it'd be nice for mogo to have some alternate
competition.  and if it really did avoid these particular faults (nakade in the
corner, not filling own eyes, correct seki knowledge, etc.), it'd be
interesting to see when the two programs crossed over, i.e. at what
ELO one started to dominate the other.  that would give a rough idea about
the strength you'd need to be in order to take advantage of these flaws.

my guess is that anyone at the 1k level can generate these situations
with some regularity on a 9x9 board, and even more easily on a bigger board.
making them game-altering, however, might take a much stronger player, or
a much bigger board.  i don't mean because bigger boards are harder for
programs to read, i literally mean simply because there is more room on
the board.

s.



  



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread terry mcintyre
I am, sadly, in the 9 kyu AGA range, yet can regularly create situations which 
Mogo cannot read on a 19x19 board. Harder to do on a 9x9 board, but I have done 
it.

Don asks how significant a jump of 3 kyu is. On a 19x19 board, one with a 3 kyu 
advantage can give a 3 stone handicap to the weaker player, and still win half 
the games. For an even game, a 3 kyu difference usually translates to about 30 
points. Humans don't usually keep track of the winning percentage for even 
games among disparate players, unfortunately.

This is modulo experience with handicap vs even games. I have a lot of 
experience with using handicap stones to my advantage; many players do not. 
Conversely, even games give me trouble; I am often behind by 20 points coming 
out of the even-game fuseki, and must make up the difference during the midgame.

Handicap stones give a formidable advantage on the smaller 9x9 board; they are 
used for teaching games where there is a great disparity of skill.

I am concerned that the current study is, as Jacques has so ably described, a 
study of a restricted game where nakade and certain other moves are considered 
to be illegal; this restricted game approaches the game of Go, but the programs 
have certain blind spots which humans can and do take advantage of. These 
aren't computer-specific blind spots; humans train on life-and-death problems 
in order to gain an advantage over other humans also. 

Terry McIntyre [EMAIL PROTECTED] 
“Wherever is found what is called a paternal government, there is found state 
education. It has been discovered that the best way to insure implicit 
obedience is to commence tyranny in the nursery.”
 
Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]




  



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey


 I am concerned that the current study is, as Jacques has so ably described, a 
 study of a restricted game where nakade and certain other moves are 
 considered to be illegal; this restricted game approaches the game of Go, but 
 the programs have certain blind spots which humans can and do take advantage 
 of. These aren't computer-specific blind spots; humans train on 
 life-and-death problems in order to gain an advantage over other humans also. 
   
This is good news and nothing to worry about. You are basically
saying mogo has a bug, and if this bug is fixed then we can expect even
better scalability. So any success here can be viewed as a lower
bound on its actual rating. 

If a nakade fixed version of mogo (that is truly scalable) was in the
study,  how much higher would it be in your estimation?

- Don



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Gian-Carlo Pascutto

Don Dailey wrote:


If a nakade fixed version of mogo (that is truly scalable) was in the
study,  how much higher would it be in your estimation?


You do realize that you are asking how much perfect life and death 
knowledge is worth?


--
GCP


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey

I changed bayeselo to use the prior command as Rémi suggested I could do.

It raised the ELO rating of the highest rated well established player by
about 60 ELO!

I set prior to 0.1 

  http://cgos.boardspace.net/study/

- Don
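As a rough, hedged illustration of why the prior setting moves the numbers this much: bayeselo's prior behaves like a few virtual drawn games between each pair of players, which pulls extreme winning rates toward 50%. The sketch below is only the two-player intuition (the real tool does a maximum-likelihood fit over all pairings at once), with a draw counted as half a win for each side:

```python
import math

def elo_diff(wins, losses, virtual_draws=3):
    """Approximate Elo gap implied by a head-to-head record when the
    pair starts with `virtual_draws` drawn games (bayeselo-style prior).
    Each virtual draw adds half a win and one game to the record."""
    w = wins + virtual_draws / 2
    n = wins + losses + virtual_draws
    p = w / n
    return 400 * math.log10(p / (1 - p))

# A 20-0 record: with the default prior of 3 virtual draws the estimate
# stays finite; shrinking the prior to 0.1 stretches the gap considerably.
d3 = elo_diff(20, 0, 3)
d01 = elo_diff(20, 0, 0.1)
```

With a lopsided record, shrinking the prior from 3 virtual draws toward 0.1 stretches the implied gap by hundreds of Elo, which is consistent with a small prior raising the rating of a dominant, well-established player.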



Rémi Coulom wrote:
 Don Dailey wrote:
 They seem under-rated to me also.   Bayeselo pushes the ratings together
 because that is apparently a valid initial assumption.   With enough
 games I believe that effect goes away.

 I could test that theory with some work. Unless there is a way to
 turn that off in bayeselo (I don't see it) I could rate them with my own
 program.

 Perhaps I will do that test.

 - Don
 The factor that pushes ratings together is the prior virtual draws
 between opponents. You can remove or reduce this factor with the 
 prior command. (before the mm command, you can run prior 0 or
 prior 0.1). This command indicates the number of virtual draws. If I
 remember correctly, the default is 3. You may get convergence problem
 if you set the prior to 0 and one player has 100% wins.

 The effect of the prior should vanish as the number of games grows.
 But if the winning rate is close to 100%, it may take a lot of games
 before the effect of these 3 virtual draws becomes small. It is not
 possible to reasonably measure rating differences when the winning
 rate is close to 100% anyway.

 Instead of playing UCT bot vs UCT bot, I am thinking about running a
 scaling experiment against humans on KGS. I'll probably start with 2k,
 8k, 16k, and 32k playouts.

 Rémi


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey
Is nakade actually a problem in mogo?   Are there positions it could
never solve, or is it merely a general weakness?

I thought the search corrected such problems eventually.

- Don


Gian-Carlo Pascutto wrote:
 Don Dailey wrote:

 If a nakade fixed version of mogo (that is truly scalable) was in the
 study,  how much higher would it be in your estimation?

 You do realize that you are asking how much perfect life and death
 knowledge is worth?



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey
I must not understand the problem. My program has no trouble with
nakade unless you are talking about some special case position. My
program immediately places the stone on the magic square to protect its
2 eyes. I can't believe mogo doesn't do this, it would be very weak
if it didn't.

Do you guys have a special definition of nakade?  

- Don




terry mcintyre wrote:
 We could test this: find some nakade problems in the games, crank up the 
 number of simulations, and see if Mogo finds the crucial moves.

 There's the question of how long eventually is. 

 I
  

 - Original Message 
 From: Don Dailey [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Wednesday, January 30, 2008 11:10:01 AM
 Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study


 Is nakade actually a problem in mogo? Are there positions it could
 never solve, or is it merely a general weakness?

 I thought the search corrected such problems eventually.

 - Don

 Gian-Carlo Pascutto wrote:
  Don Dailey wrote:
  If a nakade fixed version of mogo (that is truly scalable) was in the
  study, how much higher would it be in your estimation?

  You do realize that you are asking how much perfect life and death
  knowledge is worth?






   
 


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread terry mcintyre
We could test this: find some nakade problems in the games, crank up the number 
of simulations, and see if Mogo finds the crucial moves.

There's the question of how long eventually is. 

I
 

- Original Message 
From: Don Dailey [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Wednesday, January 30, 2008 11:10:01 AM
Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study


Is nakade actually a problem in mogo? Are there positions it could
never solve, or is it merely a general weakness?

I thought the search corrected such problems eventually.

- Don

Gian-Carlo Pascutto wrote:
 Don Dailey wrote:
 If a nakade fixed version of mogo (that is truly scalable) was in the
 study, how much higher would it be in your estimation?

 You do realize that you are asking how much perfect life and death
 knowledge is worth?






  



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Gian-Carlo Pascutto

Don Dailey wrote:

I must not understand the problem. My program has no trouble with
nakade unless you are talking about some special case position.My
program immediately places the stone on the magic square to protect it's
2 eyes.   


Can your program identify sekis? Nice examples in attachment.


I can't believe mogo doesn't do this, it would be very weak
if it didn't.


That's just an assumption shaped by a non-objective human bias.

--
GCP


[Attachment: llSeki.sgf (x-go-sgf)]
[Attachment: seki.sgf (x-go-sgf)]

Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey


Gian-Carlo Pascutto wrote:
 Don Dailey wrote:
 I must not understand the problem. My program has no trouble with
 nakade unless you are talking about some special case position.My
 program immediately places the stone on the magic square to protect it's
 2 eyes.   

 Can your program identify sekis? Nice examples in attachement.

Yes,  the tree generates pass moves and with 2 passes the game is scored
without play-outs.  Even if there is a playable move,  it will not play
it if it leads to a loss and a pass doesn't.   

I'm not claiming every possible situation is handled correctly however,
because the play-outs are simplistic and there are end of game cases
that the play-outs themselves could get wrong every single time. But
basic nakade is something even my pure MC player handles correctly in
the basic case.
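The pass handling described above can be sketched like this; the Position stub and the scoring hook are invented for illustration (they stand in for whatever board code and counting rule, e.g. Tromp-Taylor, a real program uses):

```python
class Position:
    """Minimal stand-in for a real board: a move list, a consecutive-pass
    counter, and a precomputed score for the terminal case."""
    def __init__(self, moves, passes=0, points=0.0):
        self._moves = moves
        self.consecutive_passes = passes
        self._points = points
    def legal_moves(self):
        return list(self._moves)
    def score(self):
        return self._points  # e.g. Tromp-Taylor counting in a real program

def legal_tree_moves(position):
    """Tree-level move generation: all legal board moves plus pass."""
    return position.legal_moves() + ["pass"]

def terminal_score(position):
    """After two consecutive passes the game ends and is scored directly,
    with no play-out -- so a seki is evaluated by counting, and a losing
    'playable' move can be rejected in favor of a pass."""
    if position.consecutive_passes >= 2:
        return position.score()
    return None
```

Because pass is always in the tree's move list, the search can prefer it whenever every board move loses points, which is the seki behavior described above.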

 I can't believe mogo doesn't do this, it would be very weak
 if it didn't.

 That's just an assumption shaped by a non-objective human bias.

So are you saying that if mogo had this position:

| # # # # # #
| O O O O O #
| + + + + O #
  a b c d e f

That mogo would not know to move to nakade point c1 with either color?

- Don


 



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey
Is it just my email client, or does Terry's post have one word per line
when quoting others?

- Don


terry mcintyre wrote:
 Someone recently posted a 19x19 example. Mogo failed to defend its position. 
  

 - Original Message 
 From: Don Dailey [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Wednesday, January 30, 2008 11:22:16 AM
 Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study


 I must not understand the problem. My program has no trouble with
 nakade unless you are talking about some special case position. My
 program immediately places the stone on the magic square to protect its
 2 eyes. I can't believe mogo doesn't do this, it would be very weak
 if it didn't.

 Do you guys have a special definition of nakade?

 - Don

 terry mcintyre wrote:
  We could test this: find some nakade problems in the games, crank up
  the number of simulations, and see if Mogo finds the crucial moves.

  There's the question of how long eventually is.

  - Original Message 
  From: Don Dailey [EMAIL PROTECTED]
  To: computer-go computer-go@computer-go.org
  Sent: Wednesday, January 30, 2008 11:10:01 AM
  Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

  Is nakade actually a problem in mogo? Are there positions it could
  never solve, or is it merely a general weakness?

  I thought the search corrected such problems eventually.

  - Don

  Gian-Carlo Pascutto wrote:
   Don Dailey wrote:
   If a nakade fixed version of mogo (that is truly scalable) was in the
   study, how much higher would it be in your estimation?

   You do realize that you are asking how much perfect life and death
   knowledge is worth?


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Jason House
On Jan 30, 2008 2:48 PM, Don Dailey [EMAIL PROTECTED] wrote:

 So are you saying that if mogo had this position:

 | # # # # # #
 | O O O O O #
 | + + + + O #
  a b c d e

 That mogo would not know to move to nakade point c1 with either color?


That's not nakade...  Even if it were one shorter, I'd expect nearly all MC
bots to get it right.  I think the problems come with big eyes and
throw-ins.  Big eyes require many moves to be correctly played for a kill.
I can easily imagine a playout policy (especially one that avoids self-atari
plays) that fails to read a big eye nakade correctly.





 - Don


  
 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Jason House
You're not crazy.  Gmail shows it that way too.

On Jan 30, 2008 2:49 PM, Don Dailey [EMAIL PROTECTED] wrote:

 Is it just my email client, or does Terry's post have one word per line
 when quoting others?

 - Don


 terry mcintyre wrote:
  Someone recently posted a 19x19 example. Mogo failed to defend its
 position.
 
  Terry McIntyre [EMAIL PROTECTED]
  Wherever is found what is called a paternal government, there is found
 state education. It has been discovered that the best way to insure implicit
 obedience is to commence tyranny in the nursery.
 
  Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]
 
  - Original Message 
  From: Don Dailey [EMAIL PROTECTED]
  To: computer-go computer-go@computer-go.org
  Sent: Wednesday, January 30, 2008 11:22:16 AM
  Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS
 study
 
 
  I must not understand the problem.
  
  My program has no trouble with nakade unless you are talking about some
  special case position.
  
  My program immediately places the stone on the magic square to protect
  its 2 eyes.
  
  I can't believe mogo doesn't do this, it would be very weak if it didn't.
  
  Do you guys have a special definition of nakade?
  
  - Don
  
  
  terry mcintyre wrote:
  
  We could test this: find some nakade problems in the games, crank up the
  number of simulations, and see if Mogo finds the crucial moves.
  
  There's the question of how long eventually is.
  
  I
  
  Terry McIntyre [EMAIL PROTECTED]
  
  Wherever is found what is called a paternal government, there is found
  state education. It has been discovered that the best way to insure
  implicit obedience is to commence tyranny in the nursery.
  
  Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]
  
  - Original Message 
  From: Don Dailey [EMAIL PROTECTED]
  To: computer-go computer-go@computer-go.org
  Sent: Wednesday, January 30, 2008 11:10:01 AM
  Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study
  
  Is nakade actually a problem in mogo?
  
  Are there positions it could never solve, or is it merely a general
  weakness? I thought the search corrected such problems eventually.
  
  - Don
  
  Gian-Carlo Pascutto wrote:
   Don Dailey wrote:
   If a nakade fixed version of mogo (that is truly scalable) was in the
   study, how much higher would it be in your estimation?
  
   You do realize that you are asking how much perfect life and death
   knowledge is worth?
 
 
 
 
 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread terry mcintyre
Someone recently posted a 19x19 example. Mogo failed to defend its position. 
 
Terry McIntyre [EMAIL PROTECTED] 
“Wherever is found what is called a paternal government, there is found state 
education. It has been discovered that the best way to insure implicit 
obedience is to commence tyranny in the nursery.”
 
Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]

- Original Message 
From: Don Dailey [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Wednesday, January 30, 2008 11:22:16 AM
Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study


I must not understand the problem.

My program has no trouble with nakade unless you are talking about some
special case position.

My program immediately places the stone on the magic square to protect
its 2 eyes.

I can't believe mogo doesn't do this, it would be very weak if it didn't.

Do you guys have a special definition of nakade?

- Don


terry mcintyre wrote:
 We could test this: find some nakade problems in the games, crank up the
 number of simulations, and see if Mogo finds the crucial moves.

 There's the question of how long eventually is.

 I

 Terry McIntyre [EMAIL PROTECTED]
 “Wherever is found what is called a paternal government, there is found
 state education. It has been discovered that the best way to insure
 implicit obedience is to commence tyranny in the nursery.”

 Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]

 - Original Message 
 From: Don Dailey [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Wednesday, January 30, 2008 11:10:01 AM
 Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

 Is nakade actually a problem in mogo?

 Are there positions it could never solve, or is it merely a general
 weakness? I thought the search corrected such problems eventually.

 - Don

 Gian-Carlo Pascutto wrote:
  Don Dailey wrote:
  If a nakade fixed version of mogo (that is truly scalable) was in the
  study, how much higher would it be in your estimation?

  You do realize that you are asking how much perfect life and death
  knowledge is worth?






  
  
  
 

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey
According to Sensei's Library,  nakade is:

 

It refers to a situation in which a group has a single large
internal, enclosed space that can be made into two eyes by the right
move--or prevented from doing so by an enemy move.

Several examples are shown where there are exactly 3 points.   My
example shows 4 empty points in a big eye, but they have even bigger
examples.   

So I think this is nakade.

- Don


Jason House wrote:


 On Jan 30, 2008 2:48 PM, Don Dailey [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

 So are you saying that if mogo had this position:

 | # # # # # #
 | O O O O O #
 | + + + + O #
  a b c d e

 That mogo would not know to move to nakade point c1 with either color?


 That's not nakade...  Even if it was one shorter, I'd expect nearly
 all MC bots to get it right.  I think the problems come with big eyes
 and throw-ins.  Big eyes require many moves to be correctly played for
  a kill.  I can easily imagine a playout policy (especially with
 avoiding self-atari plays) that fails to read a big eye nakade correctly.

  



 - Don


 
 
 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Gian-Carlo Pascutto

Don Dailey wrote:


So I think this is nakade.


Yes. Leela 0.2.x would get it wrong [1].

[1] Not eternally, but it would still take unreasonably long.

--
GCP
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey
Does mogo have a play-out rule that says, don't move into self-atari?  
If so, then I can see how the play-out would miss this.

But the tree search would not miss this.I still don't see the
problem.   I can see how a play-out strategy would delay the
understanding of positions,   but that's not relevant - all play-out
strategies introduce some bias.    Uniformly random play-outs delay the
understanding of positions too, for instance.
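Don's point about play-out bias can be made concrete. The sketch below is illustrative only, not MoGo's actual policy; the function names (`is_self_atari`, `playout_moves`) and the simplified rules are assumptions. It shows a move filter that refuses self-atari and one-point-eye fills, which is exactly the kind of heuristic that can hide a killing move (like b1 earlier in the thread) from every single play-out:

```python
# Minimal sketch of a heuristic play-out move filter -- illustrative only,
# not MoGo's actual code. Board: dict mapping (x, y) -> 'b', 'w', or None.

SIZE = 5

def neighbors(p):
    x, y = p
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

def group_and_liberties(board, p):
    """Flood-fill the group containing p; return (stones, liberties)."""
    color = board[p]
    stones, libs, frontier = {p}, set(), [p]
    while frontier:
        q = frontier.pop()
        for n in neighbors(q):
            if board[n] is None:
                libs.add(n)
            elif board[n] == color and n not in stones:
                stones.add(n)
                frontier.append(n)
    return stones, libs

def is_self_atari(board, p, color):
    """Would playing at p leave our own group with exactly one liberty?
    (Captures are ignored for brevity; a real policy must handle them.)"""
    board[p] = color
    _, libs = group_and_liberties(board, p)
    board[p] = None
    return len(libs) == 1

def is_eye_fill(board, p, color):
    """Crude one-point-eye test: every neighbor is a friendly stone."""
    return all(board[n] == color for n in neighbors(p))

def playout_moves(board, color):
    """Empty points the play-out policy is willing to try."""
    empties = [p for p, c in board.items() if c is None]
    return [p for p in empties
            if not is_self_atari(board, p, color)
            and not is_eye_fill(board, p, color)]
```

If the tree uses the same move generator as the play-outs (as Heikki suggested many programs do), such a move is never even a candidate in the tree, which is the one scenario where no amount of search can recover it.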

- Don


Gian-Carlo Pascutto wrote:
 Don Dailey wrote:

 Yes,  the tree generates pass moves and with 2 passes the game is scored
 without play-outs.  

 How do you detect dead groups after 2 passes? Static analysis? All is
 alive/CGOS?

 I can't believe mogo doesn't do this, it would be very weak
 if it didn't.
 That's just an assumption shaped by a non-objective human bias.

 So are you saying that if mogo had this position:

 | # # # # # #
 | O O O O O #
 | + + + + O #   a b c d e

 That mogo would not know to move to nakade point c1 with either color?

 I was referring to your it would be very weak, not to what MoGo does
 or does not do. I don't know exactly what MoGo does. I do know you can
 not know the above and not be very weak. You can also not know about
 ladders and not be very weak. Many people seem to think this is
 completely unfathomable, and I was surprised you made the same mistake.
 I think it has something to do with both things being the first things
 a human player learns. Because he thinks it's basic, he concludes
 anybody not knowing it is weak. But strength just doesn't work that way.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Jason House
While bigger examples exist, 4 in a line (with both ends enclosed) is not
nakade because the two center points are miai (b and c in your example).  It
requires two moves (both b and c) to reduce your example to a single eye.
Because of that, it is not nakade.

A comprehensive list of nakade shapes (with a complete border and no enemy
stones inside the eye to start with) is given at
http://senseis.xmp.net/?UnsettledEyeshapes  The marked black stones indicate the
vital point: the single move needed to make or prevent two eyes.  Larger
nakade shapes usually involve systematic reduction of eyes to ever-smaller
nakade shapes until a final settled dead position is reached.  Filling of
nakade can be very order-specific, and a successful kill that requires a very
long sequence of stones can easily be messed up in random play.

On Jan 30, 2008 3:02 PM, Don Dailey [EMAIL PROTECTED] wrote:

 According to Sensei's Library,  nakade is:



It refers to a situation in which a group has a single large
internal, enclosed space that can be made into two eyes by the right
move--or prevented from doing so by an enemy move.

 Several examples are shown that where there are exactly 3 points.   My
 example shows 4 empty points in a big eye but they have even bigger
 examples.

 So I think this is nakade.

 - Don



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Alain Baeckeroot
Le mercredi 30 janvier 2008, David Fotland a écrit :
 3 kyu at this level is a lot for a person.  I've known club players who never
 got better than 9k, and people who study and play may still take a year or
 more to make this much improvement.
 
 Many club players stall somewhere between 7k and 4k and never get any
 better.
 

Agreed 100%
One example from a serious young guy (14 years old) in my club.
I think he is clever (outside of go too), and serious, and he
works on go with 2 friends from his classroom; they discovered go together
last year. They also read books, and once a month or so a 3d
player comes to teach them... 
http://www.gokgs.com/graphPage.jsp?user=minoru

I wish I could improve as fast as they are doing, and they are
already stronger than some adults who have played for years and don't
improve anymore.

Alain

 
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:computer-go-
  [EMAIL PROTECTED] On Behalf Of Don Dailey
  Sent: Tuesday, January 29, 2008 7:18 PM
  To: computer-go
  Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS
  study
  
  I wish I knew how that translates to win expectancy (ELO rating.)Is
  3 kyu at this level a pretty significant improvement?
  
  - Don
  
  
  
  Hiroshi Yamashita wrote:
   Instead of playing UCT bot vs UCT bot, I am thinking about running a
   scaling experiment against humans on KGS. I'll probably start with
   2k, 8k, 16k, and 32k playouts.
  
   I have a result on KGS.
  
   AyaMC  6k (5.9k) 16po
  http://www.gokgs.com/graphPage.jsp?user=AyaMC
   AyaMC2 9k (8.4k)  1po
  http://www.gokgs.com/graphPage.jsp?user=AyaMC2
  
   16po ... 2po x8 core (8sec/move on Xeon 2.66GHz)
   1po ...  5000po x2 core (2sec/move on Opteron 2.4GHz)
  
   (5.9k) and (8.4k) are from the graph.
  
   AyaMC2 has played 97 games in a day on average. (2sec/move)
   I changed program 01/19/2008, but it is not stable yet.
   On this condition, 7 days or more will be needed for stable rating.
  
   Hiroshi Yamashita
  
  
   ___
   computer-go mailing list
   computer-go@computer-go.org
   http://www.computer-go.org/mailman/listinfo/computer-go/
  
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
 
 



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey
It would get it eventually, which means this doesn't inhibit scalability.  

I don't expect every aspect of a program to improve at the same rate -
but if a program is properly scalable, you can expect that it doesn't
regress with extra time.   It only moves forward, gets stronger with
more thinking time.You might complain about a glaring
weakness,   but even that weakness doesn't get worse, it gets better.   
Some aspects of its play may improve more quickly than others, in our
perception.  

Having said that,  I am interested in this.  Is there something that
totally prevents the program from EVER seeing the best move?I don't
mean something that takes a long time,  I mean something that has the
theoretical property that it's impossible to ever find the best move,
even given eternity?

I believe this is possible based on an interaction of the pass rules and
the eye rule in the play-outs,  but I'm not a very strong go player so I
would have to think about it.  
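Whether a move can be missed forever comes down to whether it is ever generated at all. A toy UCB1 bandit (illustrative; the payoff numbers and c = 1.4 are arbitrary assumptions, not any program's tuning) shows why plain UCT cannot permanently miss a legal move: the logarithmic exploration bonus keeps growing while a child goes unvisited, so every unpruned candidate is tried infinitely often. Only a hard filter, such as a rule that never generates one-point-eye moves, can exclude the best move for eternity:

```python
import math
import random

# Toy UCB1 bandit. Arm 0 is actually best but can look bad early; the
# logarithmic bonus guarantees its index eventually exceeds any fixed
# estimate, so an unpruned move is revisited forever.

def pull(arm, rng):
    # Hypothetical win rates: arm 0 wins 60% of playouts, arm 1 wins 50%.
    return 1.0 if rng.random() < (0.6 if arm == 0 else 0.5) else 0.0

def ucb1_index(total_reward, visits, parent_visits, c=1.4):
    if visits == 0:
        return float('inf')  # unvisited children are always tried first
    return total_reward / visits + c * math.sqrt(
        math.log(parent_visits) / visits)

def run(n_pulls, seed=1):
    rng = random.Random(seed)
    visits, reward = [0, 0], [0.0, 0.0]
    for t in range(1, n_pulls + 1):
        arm = max((0, 1), key=lambda a: ucb1_index(reward[a], visits[a], t))
        reward[arm] += pull(arm, rng)
        visits[arm] += 1
    return visits
```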

- Don




Gian-Carlo Pascutto wrote:
 Don Dailey wrote:

 So I think this is nakade.

 Yes. Leela 0.2.x would get it wrong [1].

 [1] Not eternally, but it would still take unreasonably long.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey


Vlad Dumitrescu wrote:
 Hi Don,

 On Jan 30, 2008 9:02 PM, Don Dailey [EMAIL PROTECTED] wrote:
   
 According to Sensei's Library,  nakade is:
 It refers to a situation in which a group has a single large
 internal, enclosed space that can be made into two eyes by the right
 move--or prevented from doing so by an enemy move.

 Several examples are shown that where there are exactly 3 points.   My
 example shows 4 empty points in a big eye but they have even bigger
 examples.

 So I think this is nakade.
 
Yes, my mistake.   I should have constructed a 3 point eye.

- Don


 3 points make a nakade, but 4 points in a line don't: c1 is answered
 with d1 and d1 with c1.

 regards,
 Vlad
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Gian-Carlo Pascutto

Don Dailey wrote:


Yes,  the tree generates pass moves and with 2 passes the game is scored
without play-outs.  


How do you detect dead groups after 2 passes? Static analysis? All is 
alive/CGOS?
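For reference, the simplest of these conventions -- all stones alive, area scoring, as used on CGOS -- fits in a few lines. This is a generic Tromp-Taylor-style sketch, not any particular engine's code: each color scores its stones plus the empty regions that touch only that color.

```python
# All-stones-alive area scoring (Tromp-Taylor style). Board: list of
# strings with 'b', 'w', and '.' for empty.

def area_score(board):
    """Return (black_points, white_points), assuming every stone is alive."""
    rows, cols = len(board), len(board[0])

    def neighbors(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc

    score = {'b': 0, 'w': 0}
    seen = set()
    for r in range(rows):
        for c in range(cols):
            if board[r][c] in score:
                score[board[r][c]] += 1
            elif (r, c) not in seen:
                # Flood-fill an empty region; it counts for a color only
                # if it borders stones of that color exclusively.
                region, borders, frontier = {(r, c)}, set(), [(r, c)]
                while frontier:
                    p = frontier.pop()
                    for q in neighbors(*p):
                        if board[q[0]][q[1]] == '.':
                            if q not in region:
                                region.add(q)
                                frontier.append(q)
                        else:
                            borders.add(board[q[0]][q[1]])
                seen |= region
                if borders == {'b'}:
                    score['b'] += len(region)
                elif borders == {'w'}:
                    score['w'] += len(region)
    return score['b'], score['w']
```

Under this convention no dead-group analysis is needed at all; the burden falls on the search to actually capture dead stones before passing.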



I can't believe mogo doesn't do this, it would be very weak
if it didn't.

That's just an assumption shaped by a non-objective human bias.


So are you saying that if mogo had this position:

| # # # # # #
| O O O O O #
| + + + + O # 
  a b c d e


That mogo would not know to move to nakade point c1 with either color?


I was referring to your it would be very weak, not to what MoGo does 
or does not do. I don't know exactly what MoGo does. I do know you can 
not know the above and not be very weak. You can also not know about 
ladders and not be very weak. Many people seem to think this is 
completely unfathomable, and I was surprised you made the same mistake.
I think it has something to do with both things being the first things a 
human player learns. Because he thinks it's basic, he concludes anybody 
not knowing it is weak. But strength just doesn't work that way.


--
GCP
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Vlad Dumitrescu
Hi Don,

On Jan 30, 2008 9:02 PM, Don Dailey [EMAIL PROTECTED] wrote:
 According to Sensei's Library,  nakade is:
 It refers to a situation in which a group has a single large
 internal, enclosed space that can be made into two eyes by the right
 move--or prevented from doing so by an enemy move.

 Several examples are shown that where there are exactly 3 points.   My
 example shows 4 empty points in a big eye but they have even bigger
 examples.

 So I think this is nakade.

3 points make a nakade, but 4 points in a line don't: c1 is answered
with d1 and d1 with c1.

regards,
Vlad
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread terry mcintyre
There are other shapes which are known to be dead. For example, four points in 
a square shape make one eye, not two. If the defender plays one point, trying 
to make two eyes, the opponent plays the diagonally opposite point, which is 
the center of three; the group dies.  http://senseis.xmp.net/?SquaredFour

The rectangular eight in the corner is an interesting position.  
http://senseis.xmp.net/?RectangularEightInTheCorner -- I first discovered this 
when studying a pro 9x9 game. I asked myself why a pro would place an extra 
stone inside the rectangular eight - surely such a large eyespace could produce 
two eyes? See the www page for the explanation. Failing to defend would have 
cost seven points, losing the game. As Jason has said, the sequence of play is 
intricate; if random playouts fail to discover the proper order of play,  the 
evaluation will be incorrect. The evaluation also depends critically on whether 
the shape has external liberties, or not. When the last outside liberty is 
taken, defense is crucial; prior to that, the same defensive move is probably 
inefficient. Playing one's own external liberties can also convert one's shape 
from defensible to dead - as many players have discovered, to their great 
chagrin.
 
Capturing races can get still more intricate. 
http://senseis.xmp.net/?CapturingRace

I recommend the book by Richard Hunter, which details some of the analytical 
issues. In my own games, I sometimes use the properties of big eyes to gain 
enough liberties to win capturing races.  
http://senseis.xmp.net/?CountingLibertiesAndWinningCapturingRaces -- Hunter 
observes that shodans get some of these problems wrong, especially the counting 
of big eyes, which provide more liberties than one would guess.
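The big-eye liberty counts Hunter teaches follow a simple pattern (three points = 3 liberties, four = 5, five = 8, six = 12, seven = 17), which fits the closed form (n-1)(n-2)/2 + 2 for n >= 3. A small helper for semeai counting (the function name is made up for illustration):

```python
# Liberty counts for big eyes as taught in Hunter's book; for n >= 3 they
# fit the closed form (n - 1)(n - 2) / 2 + 2.

def big_eye_liberties(n):
    """Liberties a group gets from an n-point big eye (nakade shape):
    3 -> 3, 4 -> 5, 5 -> 8, 6 -> 12, 7 -> 17."""
    if n <= 2:
        return n  # one- and two-point eyes give only n liberties
    return (n - 1) * (n - 2) // 2 + 2
```

In a capturing race you would compare this figure, plus outside liberties, against the opponent's liberty count -- which is exactly where, as Hunter observes, even shodans miscount.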


Terry McIntyre [EMAIL PROTECTED] 
“Wherever is found what is called a paternal government, there is found state 
education. It has been discovered that the best way to insure implicit 
obedience is to commence tyranny in the nursery.”
 
Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]

- Original Message 
From: Jason House [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Wednesday, January 30, 2008 12:15:04 PM
Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study


While bigger examples exist, 4 in a line (with both ends enclosed) is not 
nakade because the two center points are miai (b and c in your example).  It 
requires two moves (both b and c) to reduce your example to a single eye.  
Because of that, it is not nakade.


A comprehensive list of nakade shapes (with a complete border and no enemy 
stones inside the eye to start with) is given at 
http://senseis.xmp.net/?UnsettledEyeshapes  The marked black stones indicate the 
vital point: the single move needed to make or prevent two eyes.  Larger 
nakade shapes usually involve systematic reduction of eyes to ever-smaller 
nakade shapes until a final settled dead position is reached.  Filling of 
nakade can be very order-specific, and a successful kill that requires a very 
long sequence of stones can easily be messed up in random play.



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread terry mcintyre
I think yahoo changed something. I first saw this on Steve's posts, and he also 
uses Yahoo. I haven't changed anything on my preferences, so blame Yahoo for 
tinkering with their software.
 
Terry McIntyre [EMAIL PROTECTED] 
“Wherever is found what is called a paternal government, there is found state 
education. It has been discovered that the best way to insure implicit 
obedience is to commence tyranny in the nursery.”
 
Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]

- Original Message 
From: Jason House [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Wednesday, January 30, 2008 11:53:57 AM
Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study


You're not crazy.  Gmail shows it that way too.

On Jan 30, 2008 2:49 PM, Don Dailey [EMAIL PROTECTED] wrote:

Is it just my email client, or does Terry's post have one word per line
when quoting others?

- Don








  

Be a better friend, newshound, and 
know-it-all with Yahoo! Mobile.  Try it now.  
http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Christoph Birk

On Tue, 29 Jan 2008, Don Dailey wrote:

I wish I knew how that translates to win expectancy (ELO rating.)Is
3 kyu at this level a pretty significant improvement?


in the order of 90%

Christoph
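Christoph's figure maps to an Elo gap via the standard logistic model (the generic formula, not bayeselo's internal prior handling): a 90% win expectancy corresponds to roughly a 382-point rating difference.

```python
import math

# Standard logistic Elo model: win expectancy p maps to a rating gap of
# 400 * log10(p / (1 - p)), and back again.

def elo_gap(win_expectancy):
    """Rating difference implied by a win expectancy (0 < p < 1)."""
    p = win_expectancy
    return 400.0 * math.log10(p / (1.0 - p))

def win_expectancy(elo_diff):
    """Inverse: expected score against an opponent elo_diff below you."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))
```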

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey


Alain Baeckeroot wrote:
 Le mercredi 30 janvier 2008, David Fotland a écrit :
   
  3 kyu at this level is a lot for a person.  I've known club players who never
 got better than 9k, and people who study and play may still take a year or
 more to make this much improvement.

 Many club players stall somewhere between 7k and 4k and never get any
 better.

 

 Agreed 100%
 One example from a serious young guy (14 years old) in my club.
 I think he is clever (outside of go too), and serious, and he
 works on go with 2 friends from his classroom; they discovered go together
 last year. They also read books, and once a month or so a 3d
 player comes to teach them... 
 http://www.gokgs.com/graphPage.jsp?user=minoru

 I wish I could improve as fast as they are doing, and they are
 already stronger than some adults who have played for years and don't
 improve anymore.
   
I think hard work is WAY more important than raw talent.   When I was
young I watched old-timers in the chess club who never made it beyond
about 1600-1700 ELO and they played chess their whole lives. You can
play fun games forever and not improve much. I passed those guys
in less than 1 year even though I was not any more intelligent or
talented than they were.    All I did was spend a few hours of very
intense, focused study.    Not casual reading, but hard-work study.    I
doubt this was more than half a work week of total time,  but it was
super quality time. If I had done 3 or 4 hours a week of this for a
year or two, I would be at least a weak master, and I'm sure they would
have been too with serious work.    I never went beyond the low 1900's
because I didn't put in any work.   

I believe you COULD improve as fast as that young guy you are talking
about,   but you would need to do serious study.   Not reading some books
while watching television,  but putting yourself in a quiet room and
being totally focused.    A 3 dan teacher would help enormously.

Bobby Fischer, (who recently died) was considered enormously talented
and the best ever in his day.   But a secret about him is that he
probably studied chess more than any other human alive.   Not casual
study,  but with an intense super-focused obsession.   

There are anecdotes of really strong players who didn't have to work at
it,  but they are anecdotes, not realities.    You can be sure that any
incredibly strong player put in the time.   Some I'm sure had to put in
more than others,  but none of them are strangers to focused intense study.

- Don






 Alain

   
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:computer-go-
 [EMAIL PROTECTED] On Behalf Of Don Dailey
 Sent: Tuesday, January 29, 2008 7:18 PM
 To: computer-go
 Subject: Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS
 study

 I wish I knew how that translates to win expectancy (ELO rating.)   Is
 3 kyu at this level a pretty significant improvement?

 - Don



 Hiroshi Yamashita wrote:
   
 Instead of playing UCT bot vs UCT bot, I am thinking about running a
 scaling experiment against humans on KGS. I'll probably start with
 2k, 8k, 16k, and 32k playouts.
   
 I have a result on KGS.

 AyaMC  6k (5.9k) 16po
 
 http://www.gokgs.com/graphPage.jsp?user=AyaMC
   
 AyaMC2 9k (8.4k)  1po
 
 http://www.gokgs.com/graphPage.jsp?user=AyaMC2
   
 16po ... 2po x8 core (8sec/move on Xeon 2.66GHz)
 1po ...  5000po x2 core (2sec/move on Opteron 2.4GHz)

 (5.9k) and (8.4k) are from the graph.

 AyaMC2 has played 97 games in a day on average. (2sec/move)
 I changed program 01/19/2008, but it is not stable yet.
 On this condition, 7 days or more will be needed for stable rating.

 Hiroshi Yamashita


 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

 
   


 




   


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Jason House
On Jan 30, 2008 3:51 PM, terry mcintyre [EMAIL PROTECTED] wrote:

 There are other shapes which are known to be dead. For example, four
 points in a square shape make one eye, not two. If the defender plays one
 point, trying to make two eyes, the opponent plays the diagonally opposite
 point, which is the center of three; the group dies.
 http://senseis.xmp.net/?SquaredFour


"Known to be dead" is different from nakade, which requires an opponent
action in order to make the group dead.  It's good to see that you tried
to post a much more comprehensive list of living and dead shapes.



 The rectangular eight in the corner is an interesting position.
 http://senseis.xmp.net/?RectangularEightInTheCorner -- I first discovered
 this when studying a pro 9x9 game. I asked myself why a pro would place an
 extra stone inside the rectangular eight - surely such a large eyespace
 could produce two eyes? See the www page for the explanation. Failing to
 defend would have cost seven points, losing the game. As Jason has said, the
 sequence of play is intricate; if random playouts fail to discover the
 proper order of play,  the evaluation will be incorrect. The evaluation also
 depends critically on whether the shape has external liberties, or not. When
 the last outside liberty is taken, defense is crucial; prior to that, the
 same defensive move is probably inefficient. Playing one's own external
 liberties can also convert one's shape from defensible to dead - as many
 players have discovered, to their great chagrin.

 Capturing races can get still more intricate.
 http://senseis.xmp.net/?CapturingRace

 I recommend the book by Richard Hunter, which details some of the
 analytical issues. In my own games, I sometimes use the properties of big
 eyes to gain enough liberties to win capturing races.
 http://senseis.xmp.net/?CountingLibertiesAndWinningCapturingRaces --
 Hunter observes that shodans get some of these problems wrong, especially
 the counting of big eyes, which provide more liberties than one would
 guess.



That book is excellent.  I read it as a player and hope one day to use it as
a computer go coder.  I've also recommended it to other computer go authors
for the same reason.

Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Heikki Levanto
On Wed, Jan 30, 2008 at 03:23:35PM -0500, Don Dailey wrote:
 Having said that,  I am interested in this.  Is there something that
 totally prevents the program from EVER seeing the best move?I don't
 mean something that takes a long time,  I mean something that has the
 theoretical property that it's impossible to ever find the best move,
 even given eternity?

Someone, I think it was Gunnar, pointed out that something like this:

5 | # # # # # # 
4 | + + + + + # 
3 | O O O O + # 
2 | # # + O + # 
1 | # + # O + # 
  -
a b c d e f

Here black (#) must play at b1 to kill white (O). If white gets to move
first, he can live with c2, and later making two eyes by capturing at b1.

Depending on the definitions, b1 can be seen as an 'eyelike' point, and will
not be considered in any playouts. No amount of UCT-tree bashing will make
the program play it. 

 In random playouts, it is 50-50 who first gets to c2. But it does not matter,
as white lives in any case (at least as long as he has some outside
liberties, I think). 

As I mentioned earlier, it is possible to get around that by allowing even
eye-filling suicide moves in the UCT tree, even if not allowing them in the
playouts. Even then, the UCT tree has to extend to the point where this kind
of situation can occur, before the program can see it. 
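To make the "eyelike" definition above concrete, here is a minimal sketch of the test many playout policies use. The board representation and the exact rule are my own illustration (real programs differ in the details), but the flavor is the same. In the diagram above, b1 passes this test for black, which is exactly why a policy that hard-prunes eye-like points will never generate the killing move:

```python
# Sketch of the "eye-like point" test used by many playout policies to
# avoid filling their own eyes.  Points are (col, row) tuples on a
# size x size board; `stones` maps occupied points to 'b' or 'w'.
# This is one common conservative variant, not any particular program's code.

def neighbors(pt, size):
    x, y = pt
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < size and 0 <= y + dy < size]

def diagonals(pt, size):
    x, y = pt
    return [(x + dx, y + dy) for dx, dy in ((1, 1), (1, -1), (-1, 1), (-1, -1))
            if 0 <= x + dx < size and 0 <= y + dy < size]

def is_eyelike(stones, pt, color, size):
    # Every orthogonal neighbor must be a friendly stone (off-board
    # neighbors are simply absent, so the edge does not count against us).
    if any(stones.get(n) != color for n in neighbors(pt, size)):
        return False
    # False-eye check: in the center, at most one enemy diagonal is
    # tolerated; on the edge or in a corner, none.
    diag = diagonals(pt, size)
    enemy = sum(1 for d in diag if stones.get(d) not in (color, None))
    limit = 1 if len(diag) == 4 else 0
    return enemy <= limit
```

With the diagram's position, b1 has only black neighbors and no enemy diagonals, so `is_eyelike` returns True for black there; a playout policy using this test (and a tree reusing the same move generator) would never try b1.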




 - Heikki

-- 
Heikki Levanto   In Murphy We Turst heikki (at) lsd (dot) dk



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey


Heikki Levanto wrote:
 On Wed, Jan 30, 2008 at 03:23:35PM -0500, Don Dailey wrote:
   
 Having said that,  I am interested in this.  Is there something that
 totally prevents the program from EVER seeing the best move?I don't
 mean something that takes a long time,  I mean something that has the
 theoretical property that it's impossible to ever find the best move,
 even given eternity?
 

 Someone, I think it was Gunnar, pointed out that something like this:

 5 | # # # # # # 
 4 | + + + + + # 
 3 | O O O O + # 
 2 | # # + O + # 
 1 | # + # O + # 
   -
 a b c d e f

 Here black (#) must play at b1 to kill white (O). If white gets to move
 first, he can live with c2, and later making two eyes by capturing at b1.

 Depending on the definitions, b1 can be seen as an 'eyelike' point, and will
 not be considered in any playouts. No amount of UCT-tree bashing will make
 the program play it. 
   
You are totally incorrect about this. First of all, saying that no
amount of UCT-tree bashing will discover this move invalidates all the
research and subsequent proofs done by researchers.   You may want to
publish your own findings on this and see how well they fly.

You probably don't understand how UCT works.   UCT balances exploration
with exploitation.   The UCT tree WILL explore B1, but will explore it
with low frequency.   That is, unless the tree actually throws out
1-point eye moves (in which case it is not properly scalable, and broken
in some sense.)

There are two cases.   In the first case, it doesn't matter whether
black plays the right move here or not; black has a won game even if he
lets white live with c2.

The other case is that black MUST kill the white group to win the
game.   UCT will eventually discover the game is lost in all other
lines,  and start giving more attention to b1.   UCT will do this as a
natural consequence of the way it works.  

What you guys don't seem to grok is that no matter what game you want
to consider, it's possible to construct arbitrarily difficult
positions.   There is nothing special about go in this sense except that
it's easier to find these positions than for many other games.   But so
what?   I'll grant you that some positions are more difficult than
others.   I'll grant you that you can construct positions that are
difficult for computers but easy for humans.   You can do this in chess
too.   You can do this in any non-trivial game, but it is a lousy
argument against scalability.

If you can show me a proof, or a position that can never be solved, then
you at best have an argument that this could hold the program back from
reaching perfect play, but you can only guess about where the ceiling is.

Based on what Gian-Carlo Pascutto says, strength doesn't work that
way: a program can be remarkably strong even though it has weaknesses
that might seem unfathomable to us.   It's certainly not the case
that because someone is weaker than you in some area, you MUST be the
stronger player.   They could be so superior in many other ways that you
might as well resign before you even start.

There is a story about a chess grandmaster who didn't understand one of
the fine points of castling.   This is a very common move in chess and
it's almost unfathomable that he didn't know the rules.   Somehow, this
did not prevent his rise to the grandmaster levels.   In the right
position a beginner who knew the rules could outplay him.But do you
think there is any realistic chance that a beginner would beat him in a
serious game?

- Don





Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Jason House
On Jan 30, 2008 4:35 PM, Don Dailey [EMAIL PROTECTED] wrote:



 Heikki Levanto wrote:
  On Wed, Jan 30, 2008 at 03:23:35PM -0500, Don Dailey wrote:
 
  Having said that,  I am interested in this.  Is there something that
  totally prevents the program from EVER seeing the best move?I don't
  mean something that takes a long time,  I mean something that has the
  theoretical property that it's impossible to ever find the best move,
  even given eternity?
 
 
  Someone, I think it was Gunnar, pointed out that something like this:
 
  5 | # # # # # #
  4 | + + + + + #
  3 | O O O O + #
  2 | # # + O + #
  1 | # + # O + #
-
  a b c d e f
 
  Here black (#) must play at b1 to kill white (O). If white gets to move
  first, he can live with c2, and later making two eyes by capturing at
 b1.
 
  Depending on the definitions, b1 can be seen as an 'eyelike' point, and
 will
  not be considered in any playouts. No amount of UCT-tree bashing will
 make
  the program play it.
 
 You are totally incorrect about this. First of all, saying that no
 amount of UCT-tree bashing will discover this move  invalidates all the
 research and subsequent proofs done by researchers.   You may want
 publish your own findings on this and see how well it flies.


Actually, I think you're totally incorrect.  (Please try to be kinder when
responding to others?)

Regardless of the exact example, _if_ pruning rules exclude a move, then an
engine will never play it.  That means that for that situation, it is not
scalable.  That may be a big "if", but it will definitely affect some bot
implementations.  Progressive widening and soft-pruning rules probably get
around this kind of limitation.
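A sketch of the distinction being drawn here: hard pruning drops a move forever, while progressive widening only delays it, admitting the k best-ranked moves where k grows with the node's visit count. The logarithmic schedule below is illustrative, not any specific program's:

```python
import math

def widened_moves(ranked_moves, node_visits, base=8):
    """Progressive widening: search only the k best-ranked moves, where k
    grows (here logarithmically) with the node's visit count.  Every legal
    move is eventually admitted, so nothing is pruned forever."""
    k = base + int(math.log2(node_visits + 1))
    return ranked_moves[:k]
```

Under a schedule like this, scalability is preserved: a move ranked dead last by the heuristics still enters the tree once the node has been visited enough times.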

Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey


 Regardless of the exact example, _if_ pruning rules exclude a move,
 then an engine will never play it.  That means that for that
 situation, they're not scalable.  That may be a big if but will
 definitely affect some bot implementations.  Progressive widening and
 soft-pruning rules probably get around this kind of limitation.
You didn't read what I said.  I explicitly mentioned that you cannot
throw out moves in the UCT part of the search and expect it to find the
right move.

- Don



 



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Gian-Carlo Pascutto

Don Dailey wrote:

I am concerned that the current study is, as Jacques has so ably
described, a study of a restricted game where nakade and certain
other moves are considered to be illegal; this restricted game
approaches the game of Go, but the programs have certain blind
spots which humans can and do take advantage of. These aren't
computer-specific blind spots; humans train on life-and-death
problems in order to gain an advantage over other humans also.

This is good news and nothing to worry about.You are basically 
saying mogo has a bug, and if this bug is fixed then we can expect

even better scalability. So any success here can be viewed as a
 lower bound on its actual rating.

If a nakade fixed version of mogo (that is truly scalable) was in the
 study,  how much higher would it be in your estimation?


I wanted to come back here because in the heat of the discussion it's
easy to forget what you are actually discussing.

I think you wanted to make the point that it's possible to fix MoGo so
that it considers all moves in the UCT tree, and that this scales to
perfect play.

This in turn means that the scaling results are to be considered a
lower bound.

One thing I want to point out is that fixing MoGo in the sense
described could mean that its curve is a lot lower.

The question is whether the curve would have a different steepness. For
sure it cannot actually flatten out at the same point!


--
GCP


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Don Dailey
I agree with this completely.   If fixing this problem was just a simple
matter of course, then I'm sure the mogo team would have done so very
quickly.The cure could be worse than the disease in this case.

But I think what we forget is that this discussion has been hijacked in
a sense, because it became about a specific  implementation,  not
general sound principles.  I don't really care about the exact
implementation details of each and every program or whether a program
took some shortcuts that makes it play stronger now but not at some
point out in the distant future.   

This is rather like a Monty Python sketch, where someone is talking
seriously about one thing, and someone else enters with some
unimportant irrelevancy and throws the whole discussion into something
silly.

I started by always saying "properly scalable", and someone starts
talking about a totally different thing: programs that throw out moves
in the search.   That's well and good and you can talk about that, but
isn't that a different discussion?   I could make this even sillier
and say, what if the operator drops dead before the game is over?
The scalability study didn't consider THAT factor.   Ha ha, got you on
that one!   I know that's pretty silly, but it illustrates the
frustration of this conversation for me.

- Don




Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Michael Williams
Don, welcome to my battle last week (or was it the week before?).  It was
the exact same discussion.  I don't know if people are assuming that a
typical UCT reference implementation does not consider all moves, or if
they just don't understand the difference between a playout policy and a
tree-node expansion policy.  But I got frustrated, too.  I'm glad I wasn't
checking computer-go emails during the day today.








RE: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread David Fotland
 
 I believe you COULD improve as fast as that young guy you are talking
 about,   but you would need to do serious study.   Not read some books
 while watching television,  but putting yourself in a quiet room and
 being totally focused.A 3 dan teacher would help enormously.
 

Agreed.  It took me a couple of years of casual play while writing a go
program to get to 4 kyu, but I didn't get stronger until I started serious
study, every day, solving life and death problems, and playing through
professional games.  It took a year or 2 of this to get to 1 dan, then I
stalled again and couldn't improve.  I took weekly lessons from a pro for 2
years to improve to 3 dan.  Surprisingly, even though I very rarely play,
when I do play, I still get 3 dan results, so there was some deep learning
that doesn't go away.

I was handicapped because I was 22 when I started.  I think people who start
younger improve much faster.

It also helps to get good instruction right from the start to avoid learning
bad habits that are hard to unlearn later.

David




Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Hideki Kato
Don Dailey: [EMAIL PROTECTED]:

 About Don's arguments on self testing:

 I would agree at 100% if it wasn't for the known limitations:
 Nakade, not filling own eyes, etc. Because the program is blind
 to them it is blind in both senses: it does not consider those
 moves when defending, but it does not consider them when attacking
 either. Your programs behave as if they were converging to perfect
 play in a game whose rules forbid those moves. But these moves are
 perfectly legal! At weak levels, there are more important things
 to care about, but as the level rises there is a point at which
 understanding or not understanding these situations makes
 the difference. A program that understood these situations,
 but had some other small weakness could have strong impact
 in the ratings. Perhaps, Mogo12 and Mogo16 would not be so
 different in their chances of beating that program as they
 are in beating each other.
Please note that I was talking about a program that is properly and
correctly scalable so I think we are in 100% agreement after all.The
current MC programs, as you point out, are not fully admissible in
this sense.  

However, at the levels we are testing,  I don't believe this is
affecting the ratings (or self-play effect we are talking about) in a
significant way, but I could be wrong since I am no expert on playing Go.

How often is this observed?   Perhaps at some point we could compile a
list of games that represent serious upsets and see how often this would
have been a factor. Probably a more accurate way is to put in
programs that understand these things and see if they crank the ratings
down.My guess is that they will only slightly decrease the ratings
of the top programs.

See HandTalk's winning rates on CGOS 9x9
(http://cgos.boardspace.net/9x9/cross/handtalk.html).
He won against MoGo at 60%, but his rating is about 200 ELO behind
it.  This probably happened because he knows MoGo's weak point,
misunderstanding of life and death in corners (including nakade), very
well, at least from the games I've watched.

-Hideki

- Don



 Jacques.

--
[EMAIL PROTECTED] (Kato)


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Michael Williams

Are you kidding?  That's based on only 10 games.







Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Darren Cook
 ...
 That mogo would not know to move to nakade point c1 with either color?

MoGo tends to get confused in nakade positions when there are still
external liberties. Here is my report on this, with a couple of examples:
 http://computer-go.org/pipermail/computer-go/2007-October/011327.html

If I've understood correctly, it is a symptom of weighting good moves in
the playouts: the moves needed to understand the nakade require putting
your own stones in atari, which is regarded as bad, so MoGo only gets to
explore those lines sufficiently when they appear at a low depth in the
tree.
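A toy illustration of this effect, assuming a weighted playout policy (the weights are made up): moves classified as self-atari keep so little probability mass that nakade lines are almost never sampled in the playouts themselves, and must instead be discovered by the tree search.

```python
import random

def sample_playout_move(candidates, is_self_atari, rng=random):
    """Sample one playout move.  Self-atari moves keep a tiny nonzero
    weight (they remain *possible*), but are drawn very rarely -- which
    is why nakade sequences rarely show up in the random playouts."""
    weights = [0.01 if is_self_atari(m) else 1.0 for m in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

With three normal moves and one self-atari, the self-atari move is sampled roughly 0.3% of the time instead of 25%, so playout statistics through nakade positions are systematically biased.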

Darren



Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-30 Thread Petri Pitkanen
2008/1/30, Don Dailey [EMAIL PROTECTED]:
 It would get it eventually, which means this doesn't inhibit scalability.


 Having said that,  I am interested in this.  Is there something that
 totally prevents the program from EVER seeing the best move?I don't
 mean something that takes a long time,  I mean something that has the
  theoretical property that it's impossible to ever find the best move,
 even given eternity?

 - Don

 Gian-Carlo Pascutto wrote:
  Don Dailey wrote:
 
  So I think this is nakade.
 
  Yes. Leela 0.2.x would get it wrong [1].
 
  [1] Not eternally, but it would still take unreasonably long.
 

Eternity is a long time, so I think a UCT program would eventually find
it. But compared to a proof-number or alpha-beta searcher, it would need
at least 100-fold the CPU usage. To estimate how far UCT sees, let's take
1,000,000 simulations. In a typical middlegame position with no prior
pruning there are about 200 possible moves, and about 200 possible
answers to each. Since UCT will spend at least one simulation on every
first-level move, and probably attempt the same at the second level,
there is not much left for a third ply, is there? Especially considering
that something like 400,000 of those would be prior simulations from
which the tree expansion starts. And it is quite unlikely that the
nakade key point would have a strong UCT value anyway.
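The budget argument above can be made concrete in a few lines (the 200-move branching factor and 1,000,000-simulation budget are the illustrative numbers from the paragraph, not measurements):

```python
# How many plies can a fixed simulation budget cover exhaustively,
# at one simulation per node and ~200 legal moves per ply?
def exhaustive_plies(simulations, branching):
    nodes, ply = 1, 0
    while nodes * branching <= simulations:
        nodes *= branching
        ply += 1
    return ply

print(exhaustive_plies(1_000_000, 200))  # -> 2: barely two full plies
```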

A 5-point nakade is something that shows up frequently, and it takes
more than 3-ply lookahead to read out (or a simple pattern matcher plus
the knowledge that the surrounding group has more eyes). An even worse
situation for a UCT program would be a 5-point nakade in a close semeai
with an outside group. Without heuristics and the like, it would never
get it right. If my memory serves, AyaMC was using tactical search to
prune moves from UCT?

I have first-hand experience with this. It wasn't anything complex like
nakade, but simply a situation where Crazy Stone thought it had a 60%
chance of winning for about 20 moves when it had exactly a 0% chance.
The dead group had three liberties and the outside group had about 5, so
the UCT tree never got deep enough to see that the group was dead as a
doornail. Nice feature, by the way, that this CrazyStone rankbot on KGS
lets you ask for its estimate in chat.

Yes, eventually it will get deep enough. But I think some sort of
tactical analysis will get there first. Mixing the two is just somewhat
difficult.
-- 
Petri Pitkänen
e-mail: [EMAIL PROTECTED]


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread Rémi Coulom

Don Dailey wrote:

They seem under-rated to me also.   Bayeselo pushes the ratings together
because that is apparently a valid initial assumption.   With enough
games I believe that effect goes away.

I could test that theory with some work.Unless there is a way to
turn that off in bayelo (I don't see it) I could rate them with my own
program.

Perhaps I will do that test.

- Don
The factor that pushes ratings together is the prior virtual draws 
between opponents. You can remove or reduce this factor with the  
prior command. (before the mm command, you can run prior 0 or 
prior 0.1). This command indicates the number of virtual draws. If I 
remember correctly, the default is 3. You may get convergence problem if 
you set the prior to 0 and one player has 100% wins.


The effect of the prior should vanish as the number of games grows. But 
if the winning rate is close to 100%, it may take a lot of games before 
the effect of these 3 virtual draws becomes small. It is not possible to 
reasonably measure rating differences when the winning rate is close to 
100% anyway.
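A rough illustration of the compression effect: treating each virtual draw as half a win plus half a loss for both sides (a simplification; bayeselo's actual Bayesian estimator is more involved) makes the prior's pull visible at lopsided scores:

```python
import math

def elo_diff(wins, losses, virtual_draws=3):
    """Simplified Elo gap estimate with prior virtual draws folded in.

    Each virtual draw contributes half a point to each side, shrinking
    extreme winning rates toward 50% and thus pulling ratings together.
    """
    w = wins + virtual_draws / 2
    n = wins + losses + virtual_draws
    p = w / n
    return 400 * math.log10(p / (1 - p))

# At 19 wins to 1 loss, the default-like prior of 3 yields roughly
# 365 Elo versus roughly 512 Elo with prior 0 -- a large compression
# that only fades as the game count grows.
```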


Instead of playing UCT bot vs UCT bot, I am thinking about running a 
scaling experiment against humans on KGS. I'll probably start with 2k, 
8k, 16k, and 32k playouts.


Rémi


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread Hiroshi Yamashita
Instead of playing UCT bot vs UCT bot, I am thinking about running a 
scaling experiment against humans on KGS. I'll probably start with 2k, 
8k, 16k, and 32k playouts.


I have a result on KGS.

AyaMC  6k (5.9k) 16po http://www.gokgs.com/graphPage.jsp?user=AyaMC
AyaMC2 9k (8.4k)  1po http://www.gokgs.com/graphPage.jsp?user=AyaMC2

16po ... 2po x8 core (8sec/move on Xeon 2.66GHz)
1po ...  5000po x2 core (2sec/move on Opteron 2.4GHz)

(5.9k) and (8.4k) are from the graph.

AyaMC2 has played 97 games a day on average (2 sec/move).
I changed the program on 01/19/2008, but the rating is not stable yet.
Under these conditions, 7 days or more will be needed for a stable rating.

Hiroshi Yamashita




Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread Don Dailey


Rémi Coulom wrote:
 Don Dailey wrote:
 They seem under-rated to me also.   Bayeselo pushes the ratings together
 because that is apparently a valid initial assumption.   With enough
 games I believe that effect goes away.

 I could test that theory with some work.Unless there is a way to
 turn that off in bayelo (I don't see it) I could rate them with my own
 program.

 Perhaps I will do that test.

 - Don
 The factor that pushes ratings together is the prior virtual draws
 between opponents. You can remove or reduce this factor with the 
 prior command. (before the mm command, you can run prior 0 or
 prior 0.1). This command indicates the number of virtual draws. If I
 remember correctly, the default is 3. You may get convergence problem
 if you set the prior to 0 and one player has 100% wins.
Oh, I didn't know that.   I will change it so that it uses something
like 0.5.
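Following Rémi's description, the change is just a matter of issuing the prior command before the mm command; a sketch of the session fragment (only these two commands are confirmed by the thread):

```
prior 0.5
mm
```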


 The effect of the prior should vanish as the number of games grows.
 But if the winning rate is close to 100%, it may take a lot of games
 before the effect of these 3 virtual draws becomes small. It is not
 possible to reasonably measure rating differences when the winning
 rate is close to 100% anyway.

 Instead of playing UCT bot vs UCT bot, I am thinking about running a
 scaling experiment against humans on KGS. I'll probably start with 2k,
 8k, 16k, and 32k playouts.
That would be a great experiment.   There is only one issue here, and
that's time control.   I would suggest the test is more meaningful if
you use the same time control for all play-out levels, even if Crazy
Stone plays really fast.   This is because the ELO curve for humans is
also based on thinking time.

If you set the time control to just the rate the program needs to use
all its time, you might very well find the program plays better at
fast time controls, and the result would be meaningless even as a rough
measurement of ELO strength.

- Don


 Rémi


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread Don Dailey
What are the time controls for the games?

- Don


Hiroshi Yamashita wrote:
 Instead of playing UCT bot vs UCT bot, I am thinking about running a
 scaling experiment against humans on KGS. I'll probably start with
 2k, 8k, 16k, and 32k playouts.

 I have a result on KGS.

 AyaMC  6k (5.9k) 16po http://www.gokgs.com/graphPage.jsp?user=AyaMC
 AyaMC2 9k (8.4k)  1po http://www.gokgs.com/graphPage.jsp?user=AyaMC2

 16po ... 2po x8 core (8sec/move on Xeon 2.66GHz)
 1po ...  5000po x2 core (2sec/move on Opteron 2.4GHz)

 (5.9k) and (8.4k) are from the graph.

 AyaMC2 has played 97 games in a day on average. (2sec/move)
 I changed program 01/19/2008, but it is not stable yet.
 On this condition, 7 days or more will be needed for stable rating.

 Hiroshi Yamashita




Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread Hiroshi Yamashita

What are the time controls for the games?


Both are 10 minutes + 30 seconds byo-yomi.

Hiroshi Yamashita




Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread Michael Williams
I don't feel like searching for it right now, but not too long ago someone posted a link to a chart that gave the winrates and equivalent rankings for different 
rating systems.
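For the Elo side of that conversion, the standard logistic model is easy to sketch; the 100-Elo-per-KGS-rank figure in the comment is purely an assumed illustration, not the chart's actual mapping:

```python
def win_expectancy(elo_diff):
    """Expected score for the stronger player under the logistic Elo model."""
    return 1 / (1 + 10 ** (-elo_diff / 400))

# If one kyu rank were worth roughly 100 Elo (an assumption for
# illustration only), a 3-kyu improvement would be about 300 Elo,
# i.e. roughly an 85% win expectancy against the old version.
```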



Don Dailey wrote:

I wish I knew how that translates to win expectancy (ELO rating).   Is
3 kyu at this level a pretty significant improvement? 


- Don



Hiroshi Yamashita wrote:

Instead of playing UCT bot vs UCT bot, I am thinking about running a
scaling experiment against humans on KGS. I'll probably start with
2k, 8k, 16k, and 32k playouts.

I have a result on KGS.

AyaMC  6k (5.9k) 16po http://www.gokgs.com/graphPage.jsp?user=AyaMC
AyaMC2 9k (8.4k)  1po http://www.gokgs.com/graphPage.jsp?user=AyaMC2

16po ... 2po x8 core (8sec/move on Xeon 2.66GHz)
1po ...  5000po x2 core (2sec/move on Opteron 2.4GHz)

(5.9k) and (8.4k) are from the graph.

AyaMC2 has played 97 games in a day on average. (2sec/move)
I changed program 01/19/2008, but it is not stable yet.
On this condition, 7 days or more will be needed for stable rating.

Hiroshi Yamashita




Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread Don Dailey


Hiroshi Yamashita wrote:
 What are the time controls for the games?

 Both are 10 minutes + 30 seconds byo-yomi.

 Hiroshi Yamashita

Good.   I think that is a good way to test.

- Don







Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread Don Dailey
We can say with absolute statistical certainty that humans, when playing
chess, improve steadily with each doubling of time.   This is not a
hunch, guess, or theory; it's verified by the FACT that we know exactly
how much computers improve with extra time, and we also know for sure
that humans play BETTER relative to computers as you add time to the
clock, and this holds EVEN up to correspondence chess.   So humans
have a similar ELO curve to the graph in our study, only theirs is even
better than the computers'.   Again I emphasize, this is not speculation
but clear fact.

That being the case, we have to take care how we construct any
experiment involving human vs. computer play in Go.   Go is a
different game, so we don't know if the same rule holds; it could even
be just the opposite: perhaps computers play better relative to humans
with more thinking time.   But the point is that we should not make
any assumptions about this.

I guess what I'm saying is that if anyone does such an experiment,  they
must publish the exact conditions of rated  games, otherwise the test is
completely meaningless.

Also, I suggest that such a test is more useful if you keep something
constant, such as the time control or the number of play-outs.   Of
course, if you keep the number of play-outs constant, you are testing the
scalability of human players!   I believe it's more interesting to
test the scalability of go programs, but at some point we should try to
understand both, like we do in chess.

Another useful test is to get solid ratings for scalable go programs
playing at several different levels, perhaps starting at 1 second per
move and moving up to one or more minutes per move.   Set the time control
and let the computer manage its time.   See if giving the human more
time makes him play worse against computers.

Studies against humans are necessarily messy.   Humans have good and
bad days, don't play consistently, and vary significantly in their
ability to beat computers.   So it will be more difficult to get
evidence on this, but it's worthwhile.   I hope Rémi decides to do
this study.

- Don



Rémi Coulom wrote:
 Don Dailey wrote:
 They seem under-rated to me also.   Bayeselo pushes the ratings together
 because that is apparently a valid initial assumption.   With enough
 games I believe that effect goes away.

 I could test that theory with some work.Unless there is a way to
 turn that off in bayelo (I don't see it) I could rate them with my own
 program.

 Perhaps I will do that test.

 - Don
 The factor that pushes ratings together is the prior virtual draws
 between opponents. You can remove or reduce this factor with the 
 prior command. (before the mm command, you can run prior 0 or
 prior 0.1). This command indicates the number of virtual draws. If I
 remember correctly, the default is 3. You may get convergence problem
 if you set the prior to 0 and one player has 100% wins.

 The effect of the prior should vanish as the number of games grows.
 But if the winning rate is close to 100%, it may take a lot of games
 before the effect of these 3 virtual draws becomes small. It is not
 possible to reasonably measure rating differences when the winning
 rate is close to 100% anyway.

 Instead of playing UCT bot vs UCT bot, I am thinking about running a
 scaling experiment against humans on KGS. I'll probably start with 2k,
 8k, 16k, and 32k playouts.

 Rémi


Re: [computer-go] 19x19 Study - prior in bayeselo, and KGS study

2008-01-29 Thread dhillismail

 From: Don Dailey [EMAIL PROTECTED]
 ...
  Rémi Coulom wrote:
  ...
  Instead of playing UCT bot vs UCT bot, I am thinking about running a
  scaling experiment against humans on KGS. I'll probably start with 2k,
  8k, 16k, and 32k playouts.

 That would be a great experiment.   There is only 1 issue here and
 that's time control.   I would suggest the test is more meaningful if
 you use the same time-control for all play-out levels, even if Crazy
 Stone plays really fast.    This is because the ELO curve for humans is
 also based on thinking time.  

 If you set the time control at just the rate the program needs to use
 all it's time,    you might very well find the program plays better at
 fast time controls, it would be meaningless even as a rough measurement
 of ELO strength.


In addition to having all versions of the program use the same time control, I 
think it would be best if they all made their moves at the same rate. When 
humans play against a bot playing at a fast tempo, they tend to speed up 
themselves regardless of the time limits. The human's pondering is also a 
factor.

I've noticed this in games on KGS; a lot of people lose games with generous 
time limits because they rashly try to keep up with my dumb but very fast bot 
and make blunders.

- Dave Hillis



