Re: [computer-go] Re: mogo beats pro!

2008-08-10 Thread David Doshay
Yeah, I am really on a roll ... first I am misquoted as saying it is  
going to be all over for humans in go very soon, and then they say  
I wrote GNU Go.


Sigh ...

I guess that now I need to expect requests for the next release of GNU  
Go source, or Windows versions, or whatever.


Cheers,
David



On 9, Aug 2008, at 9:34 PM, terry mcintyre wrote:

I was present; David Doshay said that in ten years, it would be  
reasonable to expect computers to play even games with pros.


Reporters tend to be a bit sloppy at times. In the Oregonian, David  
is reported as the author of Gnugo -- I've heard his spiel dozens  
of times, and he has never said anything remotely in the same  
ballpark as that. The fault was not his, in any way, shape or form.


Terry McIntyre [EMAIL PROTECTED]


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: mogo beats pro!

2008-08-10 Thread Ray Tayek

At 01:50 AM 8/10/2008, you wrote:

Yeah, I am really on a roll ... ...

On 9, Aug 2008, at 9:34 PM, terry mcintyre wrote:


I was present; David Doshay said that in ten years, it would be
reasonable to expect computers to play even games with pros.


david d, do you *really* think that they will play even with pros?

i am guessing more like amateur 1-dan.

thanks

---
vice-chair http://ocjug.org/




[computer-go] no anchor on 13x13 cgos since yesterday

2008-08-10 Thread John Fan


Re: [computer-go] Re: mogo beats pro!

2008-08-10 Thread David Doshay

Yes, for the first time I do think that on the 10 year time scale
computers will play against pros on an even basis. I am not
ready to predict that they will routinely beat the best of the
pros.

They play (or rather it played) at amateur 1-dan now ... that is
what just happened.

Cheers,
David



On 10, Aug 2008, at 4:15 AM, Ray Tayek wrote:


At 01:50 AM 8/10/2008, you wrote:

Yeah, I am really on a roll ... ...

On 9, Aug 2008, at 9:34 PM, terry mcintyre wrote:


I was present; David Doshay said that in ten years, it would be
reasonable to expect computers to play even games with pros.


david d, do you *really* think that they will play even with pros?

i am guessing more like amateur 1-dan.

thanks

---
vice-chair http://ocjug.org/




Re: [computer-go] Re: mogo beats pro!

2008-08-10 Thread Mark Boon
I'm sure you can find quotes from 'experts' claiming really wild
things on just about any subject.

I think reaching 1-dan in computer Go was generally thought to be
attainable with today's hardware, but that it would still take
considerable work. I don't think MoGo's recent success suddenly
invalidates that view to a great extent; it just means a good step
towards it has been made.

Let's put things in context a little bit. Here's a link from 1998:
http://www.computer-go.info/events/ing/1998/archive/gen.html

There you'll find a report of Handtalk beating young pros with 11
stones as part of the Ing challenge. If you asked me, I wouldn't say
Mr. Kim is more than two stones stronger than those young pros.
Possibly the difference is less than two stones. (It also reports
Handtalk beating a 3-dan, which I take with a grain of salt.)

So although I think this match was a good milestone, I don't see it
as if 9 stones of progress have been made in just a few years. 3-5
stones in ten years on hardware many thousands of times as powerful is
a bit closer to the truth.

What I think makes people optimistic about the (near) future is the
fact that MoGo is scalable whereas Handtalk and equivalent programs
are not. I'm sure a lot more speculation about this subject will take
place. But this was just one game, and only time will show what really
has been achieved so far.

Mark


[computer-go] the more important news from the Go congress

2008-08-10 Thread David Doshay
While the mogo game and result are in the newspaper and keeping all of
us talking, there was another piece of progress in computer Go that
took place at the US Go Congress that I think says more about the
state of computer go than the 9-stone handicap win.


The day before the mogo match there was a meeting with a number of AGA
officials that Peter Drake and I attended. After much spirited,
passionate, and strongly opinionated discussion, it was decided
that the AGA will develop a plan to formally rate computer programs.
The AGA feels that it has the gold standard in rating systems; prior
to this point all games against computer programs were explicitly not
rated, and thus programs could not get a rating.


It is clear to me that the AGA is not going to drag its feet on this,
and we will be able to get reliable ratings within a year.
Before folks start rants about KGS ratings, let's make clear that while
those are interesting, the ease of creating a new user name to either
inflate or deflate one's rating, and the ease of abandoning accounts, are
very real issues that lead the AGA to shrug off KGS ratings at this time.


The exact details of the system are not yet specified, but I have been  
assured by those with the power to make it happen that one year from  
now we will have made the first important step towards the acceptance  
that programs can play Go: they will have realistic and confirmed  
ratings. This is clearly an important step towards more widespread  
acceptance of programs as serious players.


Cheers,
David





[computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread David Fotland
First, thank you very much, Don, for giving us a reliable 19x19 server.

Please consider increasing the time a program stays on the list until it
ages off.  I guess you drop programs from the ratings page after some time
that depends on the number of games they have played.  Since 19x19 games
take four times as long, it seems you should allow four times as much time
to age off the list, for the same number of games.  I like seeing the top
programs' results a little longer.

It would be nice if a program could get into position more quickly.  Since
the games take longer, it can take several days to climb up from the initial
1200 to 2000, especially if there is an early loss.  Does it make sense to
set the initial k value a little higher, or to set the initial rating to
1500 instead of 1200?
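A toy illustration of the trade-off David raises. The fixed-strength opponent pool and all the numbers below are my own assumptions for the sketch, not CGOS's actual update rule:

```python
# Toy Elo model: how long it takes a program whose true strength is
# 2000 to climb from its initial rating, for two (start, K) choices.
# The fixed-rating opponent pool and all constants are illustrative
# assumptions, not CGOS's actual parameters.

def expected(r_a, r_b):
    """Expected score of A vs. B under the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def games_to_reach(start, true_strength, target, k, pool=1800):
    """Count games until the rating climbs to `target`, updating each
    game with the long-run expected score against the fixed pool."""
    r, games = start, 0
    while r < target and games < 10000:
        score = expected(true_strength, pool)  # average result per game
        r += k * (score - expected(r, pool))   # standard Elo update
        games += 1
    return games

print(games_to_reach(1200, 2000, 1950, k=16))  # slow climb from 1200
print(games_to_reach(1500, 2000, 1950, k=32))  # higher start and K: far fewer games
```

Under this toy model, the higher starting rating and larger K together cut the climb substantially; whether that is desirable on a live server depends on how noisy the early results are.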

-David




Re: [computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread Jason House
I like the idea of using Bayesian Elo ratings instead. They should
adapt better and faster, and would give better rank confidence than the
current k factor. For example, kartofel would have kept a low
confidence.


Sent from my iPhone

On Aug 10, 2008, at 11:51 AM, David Fotland [EMAIL PROTECTED] 
games.com wrote:


First, thank you very much, Don, for giving us a reliable 19x19  
server.


Please consider increasing the time a program stays on the list  
until it
ages off.  I guess you drop programs from the ratings page after  
some time
that depends on the number of games they have played.  Since 19x19  
games
take 4 times longs, it seems you should allow four times as much  
time to age
off the list, for the same number of games.  I like seeing the top  
program's

results a little longer.

It would be nice if a program can get into position more quickly.   
Since the
games take longer, it can take several days to climb up from the  
initial
1200 to 2000, especially if there is an early loos.  Does it make  
sense to
set the initial k value a little higher, or to set the initial  
rating to

1500 instead of 1200?

-David




[computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Robert Waite
> > > The MCTS technique appears to be extremely scalable.  The theoretical
> > > papers about it claim that it scales up to perfect play in theory.
> >
> > We agree here that this is not true of course.
>
> No, I think we disagree this time my friend!
> Monte Carlo of course by itself is not scalable.  But when combined with
> tree search such as UCT, it is equivalent to a mini-max search with a
> high-quality evaluation at leaf nodes.  It's scalable because the
> longer it searches, the more it acts like a proper mini-max search.

I am not really sure what you guys meant by the interchange above.. and I
will admit that I am not nearly as deep into the problem as probably anyone
on this list.. but it seems doubtful that they have proven some sort of
scalable algorithm that will continue to give moves closer and closer to
perfect play with higher and higher confidence. I could believe that they
have a proof that the algorithm is scalable in computing power... but not
necessarily that it is scalable against the problem of beating a human.

Firstly.. I don't know if there is necessarily perfect play. If you mean
play good enough to beat all humans... I guess you could consider that
perfect in a sense.

It just seems that given the ridiculously large number of possible games...
and the relative weakness of our computing machines... we will be taking
an extremely small sample size with our current models of computing. There
may be some breakthrough... but it seems that as long as Moore's law
holds... we will continue to be taking a very small sample. There are some
heuristics being used to help whittle down the bad moves... like checking the 8
spots around a stone... but is it really proven that this algorithm will
scale in a practical sense? As in... if we took Mogo in its current form
and threw all the hardware that humanity can make in the next 20 years at
it... has that alone been shown to be enough to beat humans? It seems that we
don't understand all of the parameters of human play to answer a question
like that.

I am playing the Devil's advocate here.. because I do have a feeling that
the human computer with regards to go is not mystical and that we have a
weakness that can be exploited by computers. But I just feel that there is
no rigorous proof yet about the power of scalability of computation... vs.
the ability to beat a human in go.
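For readers following this thread who haven't read the papers: the "UCT addition" being discussed is essentially the UCB1 rule applied at each tree node. A minimal sketch of the selection step — the data layout and exploration constant here are illustrative choices of mine, not MoGo's:

```python
import math

def ucb1_select(children, exploration=1.4):
    """UCB1 child selection as used in UCT: observed mean value plus an
    exploration bonus that shrinks as a move is sampled more often.
    `children` is a list of {'wins': int, 'visits': int} dicts; the
    layout and the exploration constant are illustrative assumptions."""
    total = sum(c['visits'] for c in children)
    best, best_score = None, float('-inf')
    for c in children:
        if c['visits'] == 0:
            return c  # try every move at least once
        score = (c['wins'] / c['visits']
                 + exploration * math.sqrt(math.log(total) / c['visits']))
        if score > best_score:
            best, best_score = c, score
    return best

# A move with a worse observed mean but far fewer visits can still be
# selected; this bonus term is what keeps rarely-sampled branches from
# being starved entirely:
kids = [{'wins': 55, 'visits': 100}, {'wins': 4, 'visits': 8}]
print(ucb1_select(kids) is kids[1])  # -> True
```

This is why the tree part of MC/UCT does not depend on random playouts happening to visit every line: the selection rule itself forces under-sampled moves back into consideration.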

On Sat, Aug 9, 2008 at 2:54 PM, Robert Waite [EMAIL PROTECTED] wrote:

 I'm curious what you guys think about the scalability of monte carlo with
 UCT. Let's say we took a cluster like that which was used for the Mogo vs.
 Kim game. Then lets say we made 128 of these clusters and connected them
 together efficiently. Putting aside implementation and latency issues...
 what kind of stones-strength increase would you imagine?

 It's a pretty arbitrary guess... but do you think one stone improvement... or
 that this would alone be enough to beat a pro even?

 I am wondering because there could be a weakness or limit in MC w/ UCT. I
 am only now learning about the UCT addition... but there are vast numbers of
 possible games that are never visited during the monte carlo simulations.
 The random stone simulations are pretty random aren't they? I am reading
 some of the papers on the UCT addition... and that does seem to show certain
 branches to be better and worth more time. Pro players may have a
 counter-strategy that might come out as Mogo is tested at higher levels of
 play. Perhaps there will be a need to combine MCwUCT with heuristics or more
 knowledge based play. Going the route of heuristics seems unpleasant and the
 promise of using a more computational method would be great. However... if
 MC techniques alone have a diminishing return... the route of heuristics
 might come back (or perhaps a whole new paradigm for game algorithms).

 I am still secretly on the side of human go beating the machine... but the
 recent match really changed my view on the topic and really showed the value of
 statistical analysis. I am just wondering about what kind of roadblocks
 might show up for the monte carlo techniques.


Re: [computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Dan Andersson

Robert Waite wrote:

> > > > The MCTS technique appears to be extremely scalable.  The theoretical
> > > > papers about it claim that it scales up to perfect play in theory.
> > >
> > > We agree here that this is not true of course.
> >
> > No, I think we disagree this time my friend!
> > Monte Carlo of course by itself is not scalable.  But when combined with
> > tree search such as UCT, it is equivalent to a mini-max search with a
> > high-quality evaluation at leaf nodes.  It's scalable because the
> > longer it searches, the more it acts like a proper mini-max search.
>
> I am not really sure what you guys meant by the interchange above... and I
> will admit that I am not nearly as deep into the problem as probably anyone
> on this list... but it seems doubtful that they have proven some sort of
> scalable algorithm that will continue to give moves closer and closer to
> perfect play with higher and higher confidence. I could believe that they
> have a proof that the algorithm is scalable in computing power... but not
> necessarily that it is scalable against the problem of beating a human.

MC/UCT is provably scalable up to perfect play.

/Dan Andersson





[computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Robert Waite
 MC/UCT is provably scalable up to perfect play.

Really? Could you send me a link to the paper? I think we must have a
different definition for some word. Perfect play? Are you saying that we
have proven that the 19x19 go board has a perfect path for black? I did not
realize we knew so much about the problem.

If it is proven to be scalable up to perfect play... then that would imply
that we have some notion of how far away we are from perfect play. Or at the
very least.. that we know what the graph of perfect play looks like. Maybe I
am way behind in the theory.. but this seems incredible to me.



Re: [computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread Don Dailey
On Sun, 2008-08-10 at 14:15 -0400, Don Dailey wrote:
 I will also modify the server so that losses by anchors don't count.  

Whoops, what I meant is that losses on TIME won't count.  They will still
count if the opponent loses on time, but not if the anchor does.

- Don




Re: [computer-go] mogo beats pro!

2008-08-10 Thread Bob Hearn

David Doshay wrote:

As an aside, the pro in question won the US Open, so comments about  
him being a weak pro seem inappropriate. I spoke with him a number  
of times, and I firmly believe that he took the match as seriously  
as any other public exhibition of his skill that involves handicap  
stones for the opponent. He has an open mind about computer Go,  
unlike some other pro players I talked to here at the congress.


Kim did state before the match that in his opinion computers would  
never be as strong as the best humans. I don't believe he was asked  
afterward whether he had any reason to change that opinion.


After the banquet last night, I was talking to Peter Drake when Kim
walked up and started asking questions about how MoGo played go. Peter
explained very well, but I'm not sure Kim completely understood.


BTW, David, I also pointed out to Chris Garlock that you'd been  
misquoted, shortly after the story went up on the AGA website, but he  
didn't reply.



Also BTW, let me introduce myself to the list and ask a question. I'm
a 2D go player, and also an AI researcher affiliated with Dartmouth. I
did my Ph.D. at MIT on games and puzzles. However, I never seriously
worked on computer go, because I was always convinced go was
AI-complete -- that we would have strong go programs when we had
general-purpose AI, and not before. Mostly my current work is on
general-purpose AI heavily inspired by neuroscience.


However, with the advent of the Monte Carlo programs I'm about ready  
to change my mind. I'm tempted to try to work in the area and see  
whether I can contribute anything.



Now, my question. Sorry if this has already been beaten to death here.  
After the match, one of the MoGo programmers mentioned that doubling  
the computation led to a 63% win rate against the baseline version,  
and that so far this scaling seemed to continue as computation power  
increased.


So -- quick back-of-the-envelope calculation, tell me where I am  
wrong. 63% win rate = about half a stone advantage in go. So we need  
4x processing power to increase by a stone. At the current rate of  
Moore's law, that's about 4 years. Kim estimated that the game with  
MoGo would be hard at 8 stones. That suggests that in 32 years a  
supercomputer comparable to the one that played in this match would be  
as strong as Kim.
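The back-of-the-envelope arithmetic above can be spelled out directly, using only the assumptions already stated in this paragraph (63% per doubling ≈ half a stone, roughly one hardware doubling every two years, an 8-stone gap):

```python
# Spelling out the estimate from the paragraph above, under its own
# stated assumptions:
#   one compute doubling  ~= 63% win rate ~= half a stone,
#   so two doublings per stone; 8 stones to close;
#   ~2 years per hardware doubling (Moore's law, circa 2008).
doublings_per_stone = 2
stones_to_gain = 8
years_per_doubling = 2

years = doublings_per_stone * stones_to_gain * years_per_doubling
print(years)  # -> 32
```

Every term in this product is, of course, only as good as the assumption behind it; the two caveats in the next paragraph (scaling may not hold indefinitely, and the algorithms themselves will improve) pull in opposite directions.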


This calculation is optimistic in assuming that you can meaningfully  
scale the 63% win rate indefinitely, especially when measuring  
strength against other opponents, and not a weaker version of itself.  
It's also pessimistic in assuming there will be no improvement in the  
Monte Carlo technique.


But still, 32 years seems like a surprisingly long time, much longer  
than the 10 years that seems intuitively reasonable. Naively, it would  
seem that improvements in the Monte Carlo algorithms could gain some  
small number of stones in strength for fixed computation, but that  
would just shrink the 32 years by maybe a decade.


How do others feel about this?

I guess I should also go on record as believing that if it really does  
take 32 years, we *will* have general-purpose AI before then.



Bob Hearn


Re: [computer-go] mogo beats pro!

2008-08-10 Thread Don Dailey
On Sun, 2008-08-10 at 11:37 -0700, Bob Hearn wrote:
 Now, my question. Sorry if this has already been beaten to death here.
 After the match, one of the MoGo programmers mentioned that doubling
 the computation led to a 63% win rate against the baseline version,
 and that so far this scaling seemed to continue as computation power
 increased. 
 
 So -- quick back-of-the-envelope calculation, tell me where I am
 wrong. 63% win rate = about half a stone advantage in go. So we need
 4x processing power to increase by a stone. At the current rate of
 Moore's law, that's about 4 years. Kim estimated that the game with
 MoGo would be hard at 8 stones. That suggests that in 32 years a
 supercomputer comparable to the one that played in this match would be
 as strong as Kim. 

 This calculation is optimistic in assuming that you can meaningfully
 scale the 63% win rate indefinitely, especially when measuring
 strength against other opponents, and not a weaker version of itself.
 It's also pessimistic in assuming there will be no improvement in the
 Monte Carlo technique. 
 
 But still, 32 years seems like a surprisingly long time, much longer
 than the 10 years that seems intuitively reasonable. Naively, it would
 seem that improvements in the Monte Carlo algorithms could gain some
 small number of stones in strength for fixed computation, but that
 would just shrink the 32 years by maybe a decade. 
 
 How do others feel about this?  

10 years in my opinion is not reasonable.  20 years would be a better
estimate.  We are probably looking at 20 - 30 years for a desktop player
of this strength.  

And I assume that the MCTS will continue to be refined and improved.  

Another factor is that Kim could easily be off by a stone or two in
either direction - but since he won 2 fast games I would guess that once
he got used to playing Mogo he could win consistently with 8 stones -
but this is only my guess of course.

My estimate may sound pessimistic to some, but the same wild exuberance
happened in chess with the famous Levy bet.  Ten years later Levy beat
the computer, winning the bet, and ten more years later he won again.  And
Levy was not a grandmaster; he was an international master.

- Don


  


 



Re: [computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Don Dailey
It's the tree search part where everything is happening.  Eventually,
enough of the tree is explored to find a win or prove a loss.

- Don


On Sun, 2008-08-10 at 20:11 +0100, Raymond Wold wrote:
 Dan Andersson wrote:
  No more incredible than that Mini-Max and Alpha-Beta will generate 
  perfect play given enough resources. In the worst case MC/UCT will build 
  a Mini-Max tree solving the game.
 
 I've not looked very well at how UCT works, but surely this depends on 
 the randomness of the MC? What if by some bizarre coincidence the source 
 of random bits gives all 1's? What if it's pseudo-random, and interferes 
 with itself so some paths are never investigated?
 
 Or are you stipulating not just a good random source, but one which 
 sooner or later enumerates all possibilities?


[computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Robert Waite
Hmm.. I dunno.. I think there are a lot of ideas floating around but some
miscommunications.

So the aim is to devise a computer that will beat the strongest human
players of go.

I hear that Monte-Carlo with UCT is proven to be scalable to perfect play.
It seems that this is essentially saying that as the sample size for this
technique grows to infinity... you will approach the accuracy of an algorithm
that has solved go (in the sense that 5x5 was solved)... kind of like
creating the entire game tree. That this curve approaches perfect play as
you increase the samples to infinity. Same goes for drawing out the entire
game tree. It just seems that MCwUCT is a lot easier.

This however says nothing about the rate at which it approaches perfect
play as you increase the sample size. I didn't see anything in the papers I
have read about this. Which brings us to what our aim is... and that is to
beat human players at go. Nothing has been proven yet about practical
scalability... which is what we would like. Scalable in the sense of
approaching infinity alone does not prove that the problem is not intractable.
And the Mogo devs said that a double-strength version beats the baseline
63% of the time. As they mentioned... ideally... this would mean about 30
years. But there could be a point of diminishing returns as it relates to
beating a human.

When you say that is it proven scalable to perfect play... it is like saying
that we know that if you create every possible game... and have a database
that can access it well... you can get to perfect play. This does not help
us actually do it.

Re: [computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Don Dailey
On Sun, 2008-08-10 at 15:19 -0400, Robert Waite wrote:
 
 Hmm.. I dunno.. I think there are a lot of ideas floating around but
 some miscommunications.
 
 So the aim is to devise a computer that will beat the strongest human
 players of go.
 
 I hear that Monte-Carlo with UCT is proven to be scalable to perfect
 play. It seems that this is essentially saying... that as the sample
 size for this technique grows to infinity.. you will approach the
 accuracy of an algorithm that has solved go (in the sense that 5x5 was
 solved)... kind of like creating the entire game tree. That this curve
 approaches perfect play as you increase the samples to infinity. Same
 goes for drawing out the entire game tree. It just seems that MCwUCT
 is a lot easier.
 
 This however speaks nothing about the rate at which it approaches
 perfect play as you increase the sample size. I didn't see anything in
 the papers I have read about this. Which brings us to what our aim
 is.. and that is to beat human players at go. Nothing has been proven
 yet about practical scalability... which is what we would like.

I don't know how you can say that.  The empirical evidence is
overwhelming that this is scalable in a practical way, but more
importantly it's been PROVEN to be scalable.  If you throw the word
"practical" in there then you are no longer talking the language of
mathematics, theory and proofs, so please don't ask for a proof of
"practical scalability"; it makes no sense. 



  Scalable in the sense of approaching infinity alone does not prove
 that it is not intractable. 

MC will never solve the big board in a practical sense.  We are of
course talking about the issue of scalability in a practical game
improving sense.  

You can't have it both ways - the tool we use to predict performance is
an understanding of its scalability properties.   That has been proven,
but you want to say it's not relevant and that we have no way to
estimate its chances against humans.   But we DO have a way to get a
rough estimate.   I think many just don't want to believe that it's
possible to beat a human - their brains cannot get used to the idea that
it will probably happen within their lifetime.  


 And it was said that the Mogo devs said that a double-strength version
 beats the other one with 63%. As they mentioned... ideally... this
 would mean about 30 years. But there could be a point of diminishing
 returns as it relates to beating a human.

There clearly are diminishing returns, even at very weak levels, but you
probably cannot measure it.  I think it's very likely that the
diminishing returns curve will be very very gradual for a long time to
come, well beyond the point of achieving the top human levels.

I believe for many this is a matter of incredulity.  It was like this in
chess, we could not accept the possibility that computers could really
get that good.   And in GO we are so far away still that our brains have
to make up reasons why it can't happen.   When our realities don't match
our belief systems,  we balk.

But you can look at it another way - humans are not that good at the
game.   Compared to you and me, they may seem like gods, but they are
fallible and weak in the perfect-game sense.  

Grandmasters in chess, in many ways fell from grace when it became
necessary to play odds matches with computers in order for them to have
a chance.   When I was a boy reading chess books I practically worshiped
these immortal masters of chess.   The computers now expose their errors
all the time; they are just humans and are no different from you and me,
they just play a few hundred ELO stronger.  If you take them off the
pedestal, you can think more rationally about it.

- Don





 
 When you say that is it proven scalable to perfect play... it is like
 saying that we know that if you create every possible game... and have
 a database that can access it well... you can get to perfect play.
 This does not help us actually do it.
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



[computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Robert Waite
 I don't know how you can say that.  The empirical evidence is
 overwhelming that this is scalable in a practical way but more
 importantly it's been PROVEN to be scalable.  If you throw the word
 practical in there then you are no longer talking the language of
 mathematics, theory and proofs so please don't ask for a proof of
 practical scalability, it makes no sense.

I would really like to see what paper you are referring to. Do you mean
Bandit based Monte-Carlo Planning? Please post the name of the paper which
you are referring to. I do not think that the empirical evidence is
overwhelming that it is scalable in a practical way for the problem of
beating a human.

PROVEN to be scalable? Big deal... isn't an algorithm that does an
exhaustive search provably scalable in the same sense? The fact that
it is proven to be scalable as the sample size increases to infinity does
not help the cause. The only thing that helps is the rate at which it
approaches perfect play. How is this different from exhaustive search with
regards to being proven to be scalable? Exhaustive search is scalable in
that I could give it all the memory and time it wanted, and it would
need only a finite amount of memory and a finite amount of time.

Complexity theory is based on math and it does address practicality. By
using the word practical.. I am not jumping into mysticism. I feel that the
proof that you offer does not help us in a practical sense, at least in a
rigorous mathematically proven way.

 We are of
 course talking about the issue of scalability in a practical game
 improving sense.


Okay.. so where is the paper that correlates the speed at which MCwUCT
approaches perfect play with the ability to play a human? They seem
unrelated as of yet.

I think it's very likely that the
diminishing returns curve will be very very gradual for a long time to
come, well beyond the point of achieving the top human levels.

This is conjecture... and it does not relate with MC methods being proven to
be scalable. It's a gut feeling.. just like many feelings I have about go.

 When our realities don't match
 our belief systems,  we balk.

 If you take them off the
 pedestal, you can think more rationally about it.


I don't think that represents my feelings on the subject. My gut feeling
before the match was the learning machines and further advanced in AI would
be needed to solve the problem. This was from a sense of the potential
intractability of go. I could very well be wrong.

It's obvious that you could recreate a brain since it is made of a finite
amount of matter. So I have no mystical attachments to the game of go. I
just think we have not proven yet that number crunching methods are viable
alone. A more heuristic approach could still be needed. Mogo does use MC
methods to play.. but it does have a few heuristics to help judge important
trees. Will number crunching methods be enough alone... or will there be a
need for much stronger heuristics to trim the tree? I don't think we know
yet.

Re: [computer-go] the more important news from the Go congress

2008-08-10 Thread Stuart A. Yeates
It is great to see computer players taking another step towards being
first-class citizens of the go-playing world.

cheers
stuart

On Mon, Aug 11, 2008 at 3:37 AM, David Doshay [EMAIL PROTECTED] wrote:
 While the mogo game and result is in the newspaper and keeping all of us
 talking, there was another piece of progress in computer Go that took place
 at the US Go congress that I think says more about the state of computer go
 than the 9-stone handicap win.

 The day before the mogo match there was a meeting with a number of AGA
 officials that Peter Drake and I attended. After much spirited, passionate,
 and strongly opinionated discussion, it has been decided that the AGA will
 develop a plan to formally rate computer programs. The AGA feels that it has
 the gold standard in rating systems, and previous to this point all games
 against computer programs were explicitly not rated, and thus programs could
 not get a rating.

 It is clear to me that the AGA is not going to drag its feet on this, and we
 will be able to get reliable ratings before a year from now. Before folks
 start rants about KGS ratings, let's make clear that while those are
 interesting, the ease of making a new user name to either inflate or deflate
 one's rating, and the ease of abandonment are very real issues that lead the
 AGA to shrug off KGS ratings at this time.

 The exact details of the system are not yet specified, but I have been
 assured by those with the power to make it happen that one year from now we
 will have made the first important step towards the acceptance that programs
 can play Go: they will have realistic and confirmed ratings. This is clearly
 an important step towards more widespread acceptance of programs as serious
 players.

 Cheers,
 David



 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



Re: [computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread steve uurtamo
one more thing -- you may want to keep anchors from playing
one another.  at least, i seem to recall that i saw two anchors
playing one another.  it can't (by definition) affect anyone's ratings,
so... probably pointless for them to do so, right?

s.
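The rule suggested above is a one-line filter in the pairing loop. A hypothetical sketch (the real CGOS scheduler's code isn't shown here; the data layout below is invented for illustration):

```python
def can_pair(a, b):
    """Skip anchor-vs-anchor games: an anchor's rating is fixed, so a
    game between two anchors cannot move any rating."""
    return not (a["is_anchor"] and b["is_anchor"])

players = [
    {"name": "anchor-a", "is_anchor": True},
    {"name": "bot-x",    "is_anchor": False},
    {"name": "anchor-b", "is_anchor": True},
]
# Enumerate all candidate pairings, dropping anchor-vs-anchor:
pairs = [(a["name"], b["name"])
         for i, a in enumerate(players)
         for b in players[i + 1:]
         if can_pair(a, b)]
print(pairs)  # anchor-vs-anchor pairing is excluded
```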


On Sun, Aug 10, 2008 at 11:27 AM, Don Dailey [EMAIL PROTECTED] wrote:
 On Sun, 2008-08-10 at 14:15 -0400, Don Dailey wrote:
 I will also modify the server so that losses by anchors don't count.

 Woops,  what I mean is losses on TIME won't count.  They will still
 count if the opponent loses but not if the anchor loses.

 - Don


 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



Re: [computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Andy
On Sun, Aug 10, 2008 at 3:46 PM, Robert Waite [EMAIL PROTECTED]wrote:

 Okay.. so where is the paper that correlates the speed at which MCwUCT
 approaches perfect play with the ability to play a human? They seem
 unrelated as of yet.


The closest I've seen are these two studies Don made:

http://cgos.boardspace.net/study/

http://cgos.boardspace.net/study/13/

Re: [computer-go] mogo beats pro!

2008-08-10 Thread steve uurtamo
your calculation is for mogo to beat kim, according to kim and the
mogo team's estimates.

i think that a better thing to measure would be for a computer program
to be able to regularly beat amateurs of any rank without handicap.
i.e. to effectively be at the pro level.

for one thing, this is easier to measure, and for another, it relies much
less on mogo staying the same, kim being correct, or some other
professional being much better against computer players, for instance.
it just requires some machine connected to KGS to be able to attain,
say, 9d, and keep it for a month or so.

s.


Re: [computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Don Dailey
Robert,

Do you know what Occam's razor is?   

Einstein originally believed that the universe was static.   When his
equations didn't allow this, he invented the cosmological constant,
which he later considered one of his biggest blunders.

If we are going to continue to discuss this,  then if you have a theory
that has no existing evidence yet (such as sudden diminishing returns),
then YOU present evidence.  This is called the burden of proof.   If you
accuse someone of some crime,  it isn't up to them to prove you wrong,
it is up to you to prove your assertion.

You might believe that it's up to me to prove that there isn't a sudden
diminishing returns effect, but you have it backwards.  It would be
pretty unreasonable to continue to throw assertions at me that I have to
prove - YOU are the one that needs to prove your assertions.  

I think there are little green men living about 20 miles down below the
surface of the earth.   Prove me wrong!   See, you can't, so there!   


The facts are that Mogo is strong,  it's CLEARLY stronger on big Iron
and they went to a lot of trouble getting big Iron based on that fact.
We also observe that the good programs do better against both humans and
other programs.   Do you have something we don't know about that is more
than a hunch?


- Don




On Sun, 2008-08-10 at 16:46 -0400, Robert Waite wrote:
  I don't know how you can say that.  The empirical evidence is
  overwhelming that this is scalable in a practical way but more
  importantly it's been PROVEN to be scalable.  If you throw the word
 
  practical in there then you are no longer talking the language of
  mathematics, theory and proofs so please don't ask for a proof of
  practical scalability, it makes no sense. 
 
 I would really like to see what paper you are referring to. Do you
 mean Bandit based Monte-Carlo Planning? Please post the name of the
 paper which you are referring to. I do not think that the empirical
 evidence is overwhelming that it is scalable in a practical way for
 the problem of beating a human.
 
 PROVEN to be scalable? Big deal... isn't an algorithm that does an
 exhaustive search provable to be scalable in the same sense? The fact
 that it is proven to be scalable as the sample size increases to
 infinity does not help the cause. The only thing that helps is the
 rate at which it approaches perfect play. How is this different from
 exhaustive search with regards to being proven to be scalable?
 Exhaustive search is scalable in that I could give it all the memory
 and time it wanted. And it would approach a finite amount of memory
 and a finite amount of time.
 
 Complexity theory is based on math and it does address practicality.
 By using the word practical.. I am not jumping into mysticism. I feel
 that the proof that you offer does not help us in a practical sense,
 at least in a rigorous mathematically proven way.
 
  We are of
  course talking about the issue of scalability in a practical game
  improving sense.  
 
 Okay.. so where is the paper that correlates the speed at which MCwUCT
 approaches perfect play with the ability to play a human? They seem
 unrelated as of yet.
 
 I think it's very likely that the
 diminishing returns curve will be very very gradual for a long time to
 come, well beyond the point of achieving the top human levels.
 This is conjecture... and it does not relate with MC methods being
 proven to be scalable. It's a gut feeling.. just like many feelings I
 have about go.
 
  When our realities don't match
  our belief systems,  we balk.
  If you take them off the
  pedestal, you can think more rationally about it.
 
 I don't think that represents my feelings on the subject. My gut
 feeling before the match was the learning machines and further
 advanced in AI would be needed to solve the problem. This was from a
 sense of the potential intractability of go. I could very well be
 wrong.
 
 It's obvious that you could recreate a brain since it is made of a
 finite amount of matter. So I have no mystical attachments to the game
 of go. I just think we have not proven yet that number crunching
 methods are viable alone. A more heuristic approach could still be
 needed. Mogo does use MC methods to play.. but it does have a few
 heuristics to help judge important trees. Will number crunching
 methods be enough alone... or will there be a need for much stronger
 heuristics to trim the tree? I don't think we know yet.
 
 
 
 
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



Re: [computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread Don Dailey
On Sun, 2008-08-10 at 14:35 -0700, steve uurtamo wrote:
 one more thing -- you may want to keep anchors from playing
 one another.  at least, i seem to recall that i saw two anchors
 playing one another.  it can't (by definition) affect anyone's ratings,
 so... probably pointless for them to do so, right?

Yes, that is something I have been planning to do for a long time but
haven't bothered since previously we rarely had more than 1 anchor at a
time.

- Don


 
 s.
 
 
 On Sun, Aug 10, 2008 at 11:27 AM, Don Dailey [EMAIL PROTECTED] wrote:
  On Sun, 2008-08-10 at 14:15 -0400, Don Dailey wrote:
  I will also modify the server so that losses by anchors don't count.
 
  Woops,  what I mean is losses on TIME won't count.  They will still
  count if the opponent loses but not if the anchor loses.
 
  - Don
 
 
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 



Re: [computer-go] Re: mogo beats pro!

2008-08-10 Thread steve uurtamo
again, not true.

there are an infinite number of complexity classes beyond
P that do not require infinite space or infinite time.

exptime would just take exponential time instead of polynomial
time, and pspace would just be able to reuse its available
polynomial space (and thus use at worst exponential time).

there are no problems that would take infinite time or infinite
space.  there are problems that cannot be solved no matter
how much space or time you give a computer, but that's a
different matter altogether, and go isn't one of those problems.

s.


On Fri, Aug 8, 2008 at 6:53 PM, Robert Waite [EMAIL PROTECTED] wrote:
 Besides... solving a
 pspace-complete problem would require infinite memory... isn't that
 correct?

 nope.

 I flipped memory and time there. If pspace-complete is not in p, then it
 will be a big problem trying to solve it without infinite time. That doesn't
 seem like an ideal situation for solving it.


 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



Re: [computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread steve uurtamo
david, is mfgo-12-0805-2c really over 400 ELO better
than mfgo-11, as cgos seems to suggest?  or is mfgo11
still rising up into place?

thanks,

s.


On Sun, Aug 10, 2008 at 8:51 AM, David Fotland [EMAIL PROTECTED] wrote:
 First, thank you very much, Don, for giving us a reliable 19x19 server.

 Please consider increasing the time a program stays on the list until it
 ages off.  I guess you drop programs from the ratings page after some time
 that depends on the number of games they have played.  Since 19x19 games
 take 4 times as long, it seems you should allow four times as much time to age
 off the list, for the same number of games.  I like seeing the top program's
 results a little longer.

 It would be nice if a program can get into position more quickly.  Since the
 games take longer, it can take several days to climb up from the initial
 1200 to 2000, especially if there is an early loss. Does it make sense to
 set the initial k value a little higher, or to set the initial rating to
 1500 instead of 1200?

 -David
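The K-factor question above can be made concrete with the standard Elo update. A rough sketch under stated assumptions (the program's true strength is 2000 and every opponent is rated 2000, so its long-run average score per game is 0.5):

```python
def expected(r_a, r_b):
    """Logistic expected score for a player rated r_a against r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def games_to_reach(start, target, opp_rating, k):
    """Games needed to climb, crediting the average result (0.5) per game.

    Each game: rating += k * (actual - expected); here actual is the
    long-run average 0.5 against an equal-strength field.
    """
    r, games = start, 0
    while r < target:
        r += k * (0.5 - expected(r, opp_rating))
        games += 1
    return games

print(games_to_reach(1200, 1900, 2000, 16))  # slow climb with small K
print(games_to_reach(1200, 1900, 2000, 32))  # larger K converges in fewer games
```

Starting at 1500 instead of 1200, or raising the provisional K, both cut the number of games before the rating is in the right neighborhood, at the cost of noisier early ratings.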


 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



[computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Robert Waite
Well... I think I have hunches just as you do. And I think we both express
our hunches on here.

Diminishing returns is not really my theory.. I am just looking at
alternative ways of viewing the datapoints. Let's say you have two computers
and both of them focus only on solving local situations. At first they both
play around the same level. Then you scale one of them in some way and mark
the trend. We now know that one of them scales a certain way when solving
local situations. If you then take that computer and put it against a
person.. the person is no longer just thinking about the local situation.
They are thinking about strategy, they might make certain moves that test
the computer's goals.. you have a whole different situation.

The little green men reference looks like a dangerous use of Occam's Razor.
In this case... someone says there are little green men under his house and
shows me a bunch of datapoints to suggest it. I don't have any datapoints
against it so my point is automatically invalidated? It seems there is a
little more detail to Occam's Razor.

When I saw "proven to be scalable", my first thought was that it was proven
to be somehow practically scalable in order to beat a human or perhaps to
solve the game. You even mentioned that God would draw with the computer.
That kind of scalability seems related to solving the game, not to beating a
human. I saw no evidence that it has a practical scalability to solve the
game. And this seems like the kind of problem that could be intractable.
Practicality is an issue.

Now the topic has moved to "scalable to beat a human" and I disagree with the
interpretation of the data. We are both interpreting data. Your data doesn't
count as a theory.. where you reduced my theory to one that has no data. We
are both interpreting the same data. Diminishing returns was just an example
of something that could be a roadblock. I was questioning how this
necessarily scales to humans. It seems more data is needed from MC-programs
vs. humans to make a rigorous theory of scalability. So far.. the only
scalability that seems proven is a case for solving the game... not beating
humans. There is some point between that would most likely in my opinion
lead to humans being beaten.. some amount of calculation before you solved
it.. but the shape of this curve is something I am unsure of. It doesn't
seem that unreasonable to question if there is a practical scalability.

[computer-go] Re: mogo beats pro!

2008-08-10 Thread Robert Waite
 there are no problems that would take infinite time or infinite
 space.  there are problems that cannot be solved no matter
 how much space or time you give a computer, but that's a
 different matter altogether, and go isn't one of those problems.

How do you know what class go belongs in?

Re: [computer-go] Re: mogo beats pro!

2008-08-10 Thread Darren Cook
 How do you know what [complexity] class go belongs in?

Hi Robert,
If these topics interest you, you probably want to start by reading the
papers [1] about the complexity of go. Then if you still disagree take
up a specific point with the paper authors.

Both minimax and UCT solve go simply because they eventually make an
exhaustive tree. You are quite right in pointing out that these results
are theoretical, not practical, as we are unlikely ever to have the
computing power or memory to make an exhaustive tree even for 9x9 go.

Don, the Mogo team, and others, have collected good evidence that MCTS
(i.e. UCT) algorithms scale very nicely at more practical hardware levels.

On the other hand if you remake Don's scaling graph [2] with absolute
instead of logarithmic axes then it is no longer straightish, but
instead (looks like) it is flattening out. I.e. the improvement is not
linear with computing power; you have to try harder to get each
additional rank of improvement.
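The point about the axes can be checked numerically: constant Elo per doubling means strength is linear in log(playouts) but flattens against raw playouts. A toy model for illustration, not fitted to the actual CGOS study data:

```python
import math

ELO_PER_DOUBLING = 90  # rough figure implied by ~63% self-play win rate

def elo(playouts, base_playouts=1000, base_elo=1800):
    """Toy model: constant Elo gain per doubling of playouts."""
    return base_elo + ELO_PER_DOUBLING * math.log2(playouts / base_playouts)

# Equal *absolute* increments of compute buy less and less improvement:
for n in (1000, 11000, 21000, 31000):
    print(n, round(elo(n)))
```

On a log-x plot this is a straight line; on a linear-x plot the same curve visibly flattens, which is all the "diminishing returns in absolute compute" observation amounts to.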

(I know Don understands this, and his point with that whole experiment
was just to prove his claim that more cpu cycles always help.)

Darren

[1]: Looks like this provides a starting point of papers to read:
  http://senseis.xmp.net/?ComplexityOfGo

[2]: (Sorry, cannot find the URL at the moment.)

-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://darrendev.blogspot.com/ (blog on php, flash, i18n, linux, ...)


[computer-go] Complexity of Go

2008-08-10 Thread Bob Hearn

(Sorry if this is a duplicate; the first posting didn't show up.)


To clarify (or maybe not) the status of the computational complexity  
of NxN go:


Go with Japanese rules is EXPTIME-complete (Robson, 1983).

Go with superko is only known to be PSPACE-hard, and EXPSPACE-easy  
(Robson, 1984). That is, both the upper and the lower bounds in  
Robson's EXPTIME-completeness proof fail. It is rather surprising that  
neither of those bounds has been tightened in the ~25 years since the  
last relevant papers.


Slightly confusing the issue is a proof in Papadimitriou's book that  
go is PSPACE-complete (Papadimitriou, 1993). However, the problem he  
considers is actually a modification of go which prevents  
exponentially long games, thus putting the problem trivially in  
PSPACE. Unfortunately this proof has been cited by others in claims  
that go is PSPACE-complete.


But fascinating as these complexity issues are (I've spent a fair
amount of time trying to prove that go with superko is
EXPSPACE-complete), I don't think they are really relevant to computer go.


Bob Hearn



References:

J. M. Robson. The complexity of Go. In Proceedings of the IFIP 9th World
Computer Congress on Information Processing, pages 413–417, 1983.

J. M. Robson. Combinatorial games with exponential space complete decision
problems. In Proceedings of the Mathematical Foundations of Computer Science
1984, pages 498–506, London, UK, 1984. Springer-Verlag.

Christos H. Papadimitriou. Computational Complexity. Addison Wesley,  
1993.



Re: [computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread Don Dailey
Here are the current ratings, using bayeselo, of programs on the 19x19
server.   I have a script in place so that I can update this at will, and
I may run this every few hours or so, probably starting tomorrow.  

  http://cgos.boardspace.net/19x19/bayes_19x19.html

- Don
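For anyone curious what bayeselo-style fitting does with the crosstable, the core is a Bradley-Terry maximum-likelihood fit. A minimal MM-iteration sketch (no draws and no prior, unlike Rémi Coulom's actual bayeselo tool):

```python
import math

def bradley_terry_elo(wins, iters=200):
    """Fit Bradley-Terry strengths by minorization-maximization.

    wins[i][j] = number of games player i won against player j.
    Returns Elo ratings normalized so the average rating is 0.
    """
    n = len(wins)
    gamma = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins by player i
            denom = sum((wins[i][j] + wins[j][i]) / (gamma[i] + gamma[j])
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom else gamma[i])
        gamma = new
    elos = [400 * math.log10(g) for g in gamma]
    mean = sum(elos) / len(elos)
    return [e - mean for e in elos]

# Two players, A beats B in 75 games out of 100:
ratings = bradley_terry_elo([[0, 75], [25, 0]])
print([round(r) for r in ratings])  # → [95, -95]  (gap = 400*log10(3) ≈ 191)
```

With sparse crosstables like CGOS's, a prior (virtual draws) keeps unbeaten or winless entries from running off to infinity; that is one of the refinements the real tool adds.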




On Sun, 2008-08-10 at 08:51 -0700, David Fotland wrote:
 First, thank you very much, Don, for giving us a reliable 19x19 server.
 
 Please consider increasing the time a program stays on the list until it
 ages off.  I guess you drop programs from the ratings page after some time
 that depends on the number of games they have played.  Since 19x19 games
  take 4 times as long, it seems you should allow four times as much time to age
 off the list, for the same number of games.  I like seeing the top program's
 results a little longer.  
 
 It would be nice if a program can get into position more quickly.  Since the
 games take longer, it can take several days to climb up from the initial
  1200 to 2000, especially if there is an early loss.  Does it make sense to
 set the initial k value a little higher, or to set the initial rating to
 1500 instead of 1200?
 
 -David
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



RE: [computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread David Fotland
Thanks.  This is more like what I would expect.  About 80 elo points between
mfgo 1 cpu and 2 cpu (like other programs), and many faces 11 a little
higher rated.

Does anyone know whose program is rz-74?  I'm trying to catch mogo,
crazystone, and leela by September, and I'm curious if there will be other
strong competition.

David

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:computer-go-
 [EMAIL PROTECTED] On Behalf Of Don Dailey
 Sent: Sunday, August 10, 2008 7:02 PM
 To: computer-go
 Subject: Re: [computer-go] Some cgos 19x19 suggestions
 
 Here are the current ratings using bayeselo of program on the 19x19
 server.   I have a script in place so that I can update this at will
 and
 I may run this every few  hours or so, probably starting tomorrow.
 
   http://cgos.boardspace.net/19x19/bayes_19x19.html
 
 - Don
 
 
 
 
 On Sun, 2008-08-10 at 08:51 -0700, David Fotland wrote:
  First, thank you very much, Don, for giving us a reliable 19x19
 server.
 
  Please consider increasing the time a program stays on the list until
 it
  ages off.  I guess you drop programs from the ratings page after some
 time
  that depends on the number of games they have played.  Since 19x19
 games
  take 4 times as long, it seems you should allow four times as much time
 to age
  off the list, for the same number of games.  I like seeing the top
 program's
  results a little longer.
 
  It would be nice if a program can get into position more quickly.
 Since the
  games take longer, it can take several days to climb up from the
 initial
  1200 to 2000, especially if there is an early loss.  Does it make
 sense to
  set the initial k value a little higher, or to set the initial rating
 to
  1500 instead of 1200?
 
  -David
 
 
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



Re: [computer-go] Some cgos 19x19 suggestions

2008-08-10 Thread Don Dailey
You will notice that no program's rating carries a great deal of confidence.
Even Gnugo, the anchor, is plus or minus 50 ELO.  

I combined ALL Gnugo anchors into 1 entity for rating purposes.  It
appears on the charts as Gnugo-3.7.10-a0  and does not have a
cross-table entry.  

I also did a one-time removal of all games lost on time for the 19x19
server.  I purged everything, including the SGF files for those games.
Most of them had no moves played, or perhaps only 1 move.  

- Don


On Sun, 2008-08-10 at 22:01 -0400, Don Dailey wrote:
 Here are the current ratings using bayeselo of program on the 19x19
 server.   I have a script in place so that I can update this at will and
 I may run this every few  hours or so, probably starting tomorrow.  
 
   http://cgos.boardspace.net/19x19/bayes_19x19.html
 
 - Don
 
 
 
 
 On Sun, 2008-08-10 at 08:51 -0700, David Fotland wrote:
  First, thank you very much, Don, for giving us a reliable 19x19 server.
  
  Please consider increasing the time a program stays on the list until it
  ages off.  I guess you drop programs from the ratings page after some time
  that depends on the number of games they have played.  Since 19x19 games
  take 4 times as long, it seems you should allow four times as much time to age
  off the list, for the same number of games.  I like seeing the top program's
  results a little longer.  
  
  It would be nice if a program can get into position more quickly.  Since the
  games take longer, it can take several days to climb up from the initial
  1200 to 2000, especially if there is an early loss.  Does it make sense to
  set the initial k value a little higher, or to set the initial rating to
  1500 instead of 1200?
  
  -David
  
  
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/



Re: [computer-go] Re: Strength of Monte-Carlo w/ UCT...

2008-08-10 Thread Christoph Birk


On Aug 10, 2008, at 1:46 PM, Robert Waite wrote:
Exhaustive search is scalable in that I could give it all the  
memory and time it wanted. And it would approach a finite amount of  
memory and a finite amount of time.


Yes, but exhaustive search does not improve your player by 63% (e.g.)
for a doubling in CPU time.
This part was done in an empirical scalability study. Please check the
archives of the list.

In the (infinite) limit minimax + evaluation-function would find the
perfect move too, but UCT/MC already find good moves before the limit.

Christoph
