[computer-go] 2007 Cotsen Open wants your program to enter

2007-09-21 Thread Mark Schreiber
David, thanks for the report.
Mark

On Thu Sep 20 14:06:53 PDT 2007 David Doshay wrote:

 SlugGo entered the first year as a 9 kyu and won 1 game. One other game
 was clearly won on the board (by more than 100 points), but the opponent
 was clever enough to start playing very complicated moves. I could see
 they were not going to work, but they took SlugGo a long time to answer,
 and SlugGo lost on time. After that we put in time management code.

 The second year I made a silly mistake: the graphical front end (GoBan)
 was mistakenly set to announce atari. The announcement was something
 that the multi-processor message-passing code did not understand, so
 SlugGo lost every game by crashing in response to the first atari of
 the game. I figured this out minutes after the last game. SlugGo had
 been entered as a 10 kyu the second year. I felt horrible accepting
 the cash prize for best program, but it did help offset the $1250 cost
 of a handicap van that I had to rent to transport the cluster to Los
 Angeles.

 The Cotsen Open is 5 games. I think the bracket ran from 8 to 12 kyu. I
 did not do anything to narrow my guess of the rating.

 I will look for the games.

 Cheers,
 David
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Random move selection (was) Crazystone patterns

2007-09-21 Thread dhillismail

Good information. Thanks.



- Dave Hillis


-Original Message-
From: Jacques Basaldúa [EMAIL PROTECTED]
To: computer-go@computer-go.org
Sent: Fri, 21 Sep 2007 7:44 am
Subject: [computer-go] Random move selection (was) Crazystone patterns



Dave Hillis wrote: 
 
 Suppose I can generate scores for all of the moves quickly enough. I still 
 face the problem of quickly choosing a move biased by the scores. 
 Tournament selection is fast, but that is a function of relative ranking of 
 the scores, not the values of the scores. Roulette wheel selection gives me 
 an answer, but it is slow slow slow, the way I implement it anyway. Can 
 anybody describe a good way to do this? 
 
We discussed that on this list earlier in the summer, when I was 
implementing it. I proposed a ticket-based lottery, but that, of course, 
restricts the differences to small values. It can be implemented using a 
linked list so that each extra ticket allocation costs only a few clock 
cycles (I don't remember exactly how many, but fewer than 10 asm 
instructions for sure). 
 
My final version uses 2 ticket denominations, HI and LO, where 
 
1 HI = 32 LO 
 
The default (when the pattern is not in the database) is 1 HI. 
 
The score goes from 1 (= 1 LO) to 1024 (= 32 HI). If you round the scores 
in the database to avoid values such as 927 (= 28 HI, 31 LO), rounding to 
928 (= 29 HI) instead, you get a nice dynamic range from default/32 to 
default*32 without too many tickets to allocate. 
 
Jacques. 
 
P.S. Search the threads (about May-June 2007), because other good ideas 
were proposed: binary trees, etc. 
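 
To make that concrete, here is a minimal sketch in C of a two-tier ticket 
lottery along these lines. It is not Jacques' linked-list implementation 
(the flat walk below is O(candidates), which is exactly what Dave found 
slow); it only illustrates the 1 HI = 32 LO bookkeeping, and all names 
here are hypothetical. 
 
/* Two-tier (HI/LO) ticket lottery sketch. Not the linked-list
 * implementation described above; illustrative names only. */
#include <stdlib.h>

#define LO_PER_HI 32

typedef struct {
    int move;     /* board point */
    int tickets;  /* rounded score, in LO units (e.g. 928 = 29 HI) */
} Candidate;

/* Round a raw score in LO units so that anything >= 1 HI becomes a
 * whole number of HI tickets: 927 -> 928 (= 29 HI). Scores below one
 * HI (1..31 LO) stay as-is, keeping the default/32..default*32 range
 * with few tickets to allocate. */
static int round_score(int lo)
{
    if (lo < LO_PER_HI)
        return lo;
    return ((lo + LO_PER_HI / 2) / LO_PER_HI) * LO_PER_HI;
}

/* Draw one candidate with probability proportional to its tickets
 * (modulo bias and RAND_MAX limits ignored in this sketch). */
static int pick_move(const Candidate *c, int n)
{
    long total = 0;
    long r;
    int i;

    for (i = 0; i < n; i++)
        total += c[i].tickets;
    r = rand() % total;              /* uniform ticket index */
    for (i = 0; i < n; i++) {
        if (r < c[i].tickets)
            return c[i].move;
        r -= c[i].tickets;
    }
    return c[n - 1].move;            /* not reached */
}
 
The binary-tree ideas mentioned above replace that linear walk with an 
O(log n) descent over partial sums. 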
 
___ 
computer-go mailing list 
[EMAIL PROTECTED]
http://www.computer-go.org/mailman/listinfo/computer-go/ 



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Jason House
I guess it really depends on what the point of the test is.  I'm trying to
understand the performance gap between my AMAF bot(s) and Don's AMAF bots.
For comparison, here's the ratings and # of simulations:

ELO
 1434 - ControlBoy      - 5000 simulations per move
 1398 - SuperDog        - 2000 simulations per move
 1059 - ReadyFreddy     -  256 simulations per move
  763 - Dodo            -   64 simulations per move
  600 - all my AMAF     - 5000-25000 simulations per move
  300 - ego110_allfirst - ???

Looking at the cross table with ReadyFreddy running (which is doing 5% of
the work that my bots are), the results are 0/14, 0/20, 0/24, and 0/10.
Even with the small samples, I'm quite certain that the performance of my
bot is way worse than any of Don's.

I'm not particularly concerned if alternate eye method #1 is marginally
better than #2 (or vice versa).  I'm reasonably confident that their
performance is similar and that their performance is better than my original
method.

I'm content for now to find out the major causes of performance gaps and
then revisit what is truly the best combo when I get around to doing quality
coding of features instead of quick hacks for testing.  Currently, both the
random move selection strategy and the game scoring strategy have come under
question.

On 9/20/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 I'm going to echo Cenny's comment. Small samples like this can be very
 misleading. For this kind of test, I usually give each algorithm 5000
 playouts per move and let them play 1000 games on my computer. It takes
 about a day and a half.

 - Dave Hillis


 -Original Message-
 From: Cenny Wenner [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Tue, 18 Sep 2007 3:33 pm
 Subject: Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

 By the data in your upper table, the results would need to hold their mean
 for 40 times as many trials before you even get a significant*
 difference between #1 and #2.
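 
 As a rough illustration of the sample-size problem (generic arithmetic,
 not Cenny's exact calculation), the standard error of a measured win
 rate shrinks only like 1/sqrt(n):
 
/* Binomial standard error of a win-rate estimate. Compile with -lm. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 17.0 / 20.0;                /* e.g. 17 wins in 20 games */
    int    n = 20;
    double se = sqrt(p * (1.0 - p) / n);   /* binomial standard error */

    printf("win rate %.2f +/- %.2f (approx. 95%% CI)\n", p, 1.96 * se);
    /* prints: win rate 0.85 +/- 0.16 -- far too wide to separate the
     * 85% and 89% results quoted below; halving the error bar takes
     * four times as many games */
    return 0;
}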

 Which are the two methods you used?

 On 9/18/07, Jason House [EMAIL PROTECTED] wrote:
  original eye method = 407 ELO
  alt eye method #1   = 583 ELO
  alt eye method #2   = 518 ELO
 
  While both alternate methods are probably better than the original, I'm not
  convinced there's a significant difference between the two alternate
  methods.  The cross-tables for both are fairly close and could be luck of
  the draw (or even depend on which weak bots were on at the time).  I put
  raw numbers below.  Since I made one other change when doing the alt eye
  method, I should rerun the original with that other change as well (how I
  end random playouts and score them, to allow for other eye definitions).
 
  While I think the alternate eye definitions helped, I don't think they
  accounted for more than 100-200 ELO.
 
  vs ego110_allfirst
  orig = 33/46 = 71%
  #1   = 17/20 = 85%
  #2   = 16/18 = 89%
 
  vs gotraxx-1.4.2a
  orig = N/A
  #1   = 2/8  = 25%
  #2   = 3/19 = 16%
 
 
  On 9/17/07, Jason House [EMAIL PROTECTED]  wrote:
  
  
  
   On 9/17/07, Don Dailey [EMAIL PROTECTED] wrote:
Another way to test this, to see if this is your problem,  is for ME to
implement YOUR eye definition and see if/how much it hurts AnchorMan.
   
I'm pretty much swamped with work today - but I may give this a try at
some point.
   
  
   I'd be interested in seeing that.  It looks like my first hack at an
  alternate eye implementation bought my AMAF version about 150 ELO (not
  tested with anything else).  Of course, what I did isn't what others are
  using.  I'll do another alteye version either today or tomorrow.  It may
  be possible that some of my 150 was because I changed the lengths of the
  random playouts.
  
 
 
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 


 --
 Cenny Wenner
 ___
 computer-go mailing list
 [EMAIL PROTECTED]
 http://www.computer-go.org/mailman/listinfo/computer-go/


 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Don Dailey

Jason,

I noticed from several emails that you are probably doing a lot of
little things differently and assuming they make no difference.  For
instance, you still haven't tried the exact same eye rule we are using, so
you can't really say with complete confidence that there is no difference.

If you really want to get to the bottom of this, you should not assume
anything or make approximations that you believe shouldn't make much
difference, even if you are right in most cases.   There could be one
clearly wrong thing you are doing, or it could be many little things
that each cost you a little.

I am curious myself what the difference is and I'm willing to help you
figure it out but we have to minimize the ambiguity.

I was going to suggest the random number generator next, but for these
simple bots there doesn't seem to be a great deal of sensitivity to the
quality of the random number generator if it's reasonable - at least for
a few games.

Ogo (which is almost the same as AnchorMan) has a poor quality RNG and
if you play a few hundred games you will discover lots of repeated
results.   With a good quality generator I have never seen a repeated
game.   So it could be a minor factor.

One thing you mentioned earlier that bothers me is something about when
you end the random simulations.  AnchorMan has a limit, but it's very
conservative - a game is rarely ended early and I would say 99.9% of
them get played to the bitter end.

Are you cheating here?   I suggest you make the program as identical to
mine as you can - within reason.   If you are doing little things wrong
they accumulate.   I learned this from computer chess.  Many
improvements are worth 5-20 ELO and you can't even measure them without
playing thousands of games - and yet if you put a few of them together
it can put your program in another class.

On my first marketable chess program, my partner and I obsessed every
day over little tiny speedups - most of them less than 5%.  We found 2
or 3 of these every day for weeks, it seems.  But we kept finding them,
and when we were finished we had a program about 300% faster than when
we started, and it had a reputation at the time as a really fast chess
program.

This works with strength improvements too.   So you might have one big
wrong thing, but you may have several little ones.

So instead of continuing to explore eye rules and experiment, as much
fun as that is in its own right, it's not a productive way to find the
performance problem.  Start with the eye rule every program is using
with great success, and if you want to find something better LATER, more
power to you.

If you want my help, I will try to make a very generic version of
AnchorMan that doesn't have any enhancements - just the clearly stated
basics, which I believe will probably play around 1300-1400 ELO.

By the way, AnchorMan on the new server seems to be weaker than on the
old server - so that might explain some of the difference.


- - Don



Jason House wrote:
 I guess it really depends on what the point of the test is.  I'm trying
 to understand the performance gap between my AMAF bot(s) and Don's AMAF
 bots.  For comparison, here's the ratings and # of simulations:
 
 ELO
  1434 - ControlBoy      - 5000 simulations per move
  1398 - SuperDog        - 2000 simulations per move
  1059 - ReadyFreddy     -  256 simulations per move
   763 - Dodo            -   64 simulations per move
   600 - all my AMAF     - 5000-25000 simulations per move
   300 - ego110_allfirst - ???
 
 Looking at the cross table with ReadyFreddy running (which is doing 5% of
 the work that my bots are), the results are 0/14, 0/20, 0/24, and 0/10.
 Even with the small samples, I'm quite certain that the performance of
 my bot is way worse than any of Don's.
 
 I'm not particularly concerned if alternate eye method #1 is marginally
 better than #2 (or vice versa).  I'm reasonably confident that their
 performance is similar and that their performance is better than my
 original method.
 
 I'm content for now to find out the major causes of performance gaps and
 then revisit what is truly the best combo when I get around to doing
 quality coding of features instead of quick hacks for testing. 
 Currently, both the random move selection strategy and the game scoring
 strategy have come under question.
 
 On 9/20/07, [EMAIL PROTECTED] wrote:
 
 I'm going to echo Cenny's comment. Small samples like this can be
 very misleading. For this kind of test, I usually give each
 algorithm 5000 playouts per move and let them play 1000 games on my
 computer. It takes about a day and a half.
 
 - Dave Hillis
 
 
 
 -Original Message-
 From: Cenny Wenner [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
  

Re: [computer-go] Re: [Housebot-developers] ReadyFreddy on CGOS

2007-09-21 Thread Christoph Birk

On Thu, 20 Sep 2007, Jason House wrote:


Christoph Birk wrote:

On Wed, 19 Sep 2007, Jason House wrote:
My logic behind stopping at the first pass is that it's highly unlikely to
form life in the void from captured stones. Since capturing the stones
would increase the length of the game and isn't very likely to change the
outcome of the game ...


But how do you score the game if there are still stones to capture?
Do you assume everything is alive?


No.  Stones in atari are considered dead.  All stones in atari will be owned 
by the side that passed.  It works for almost all cases, but I realize now 
there are some situations with ko that could be scored wrong...


And your program WILL find these!

Christoph
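
For concreteness, the scoring shortcut Jason describes can be sketched
like this (hypothetical helpers, not the actual HouseBot code): after
the ending pass, any group left in atari is scored as captured by the
side that passed.

/* Sketch of scoring with atari-dead stones. count_liberties() and
 * territory_owner() are hypothetical helpers, not real HouseBot calls. */
enum { EMPTY, BLACK, WHITE };

extern int count_liberties(const int *board, int p);
extern int territory_owner(const int *board, int p);

/* Which colour gets point p when the playout ended with a pass by
 * 'passer'? Groups in atari count as dead, credited to the passing
 * side; everything else is assumed alive. */
int owner_after_pass(const int *board, int p, int passer)
{
    if (board[p] == EMPTY)
        return territory_owner(board, p);  /* flood fill empty region */
    if (count_liberties(board, p) == 1)
        return passer;                     /* in atari: counted as dead */
    return board[p];                       /* alive where it stands */
}

The ko trouble Christoph predicts lives in that middle branch: a group
can sit in atari and still be alive.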

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Christoph Birk

On Fri, 21 Sep 2007, Jason House wrote:

I guess it really depends on what the point of the test is.  I'm trying to
understand the performance gap between my AMAF bot(s) and Don's AMAF bots.
For comparison, here's the ratings and # of simulations:

ELO
 1434 - ControlBoy      - 5000 simulations per move
 1398 - SuperDog        - 2000 simulations per move
 1059 - ReadyFreddy     -  256 simulations per move
  763 - Dodo            -   64 simulations per move
  600 - all my AMAF     - 5000-25000 simulations per move
  300 - ego110_allfirst - ???


It might be hard to compare your AMAF bots with Don's, since he
uses quite a few tricks to improve their performance. I suggest
you compare with the plain-vanilla programs I keep for comparison
on CGOS:

 myCtest-10k (ELO ~1050)
 myCtest-50k (ELO ~1350)

They do just 10k (50k) pure random simulations per move. Your AMAF bots
should be at least that good if they have no significant bugs,
correct?
If you are interested I can run them 24/7 on CGOS (currently they
only play once per week, to keep them on the list).

Christoph

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] problems w/ 19x19 cgos server

2007-09-21 Thread terry mcintyre
Two problems with the 19x19 server.

1) When I tried to click on a game on the 19x19 server, I got a 404 Not
Found error. The same process works
on the 9x9 links.

Example of broken link from the Standings page:

http://cgos.boardspace.net/19x19/SGF/2007/09/16/26970.sgf

2) My copy of GnuGo cannot connect to the 19x19 server. This happens
periodically.
 
Terry McIntyre [EMAIL PROTECTED]
They mean to govern well; but they mean to govern. They promise to be kind 
masters; but they mean to be masters. -- Daniel Webster




  

 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Jason House
On 9/21/07, Don Dailey [EMAIL PROTECTED] wrote:


 Jason,

 I noticed from several emails that you are probably doing a lot of
 little things differently and assuming they make no difference.



This thread has certainly helped highlight them.  I now have a list of
things and a crude ordering of what may be affecting the quality of the
simulations and the results.  I plan to experiment with changes to the
various things... but I don't seem to get more than 20 minutes at a time
to dedicate to the experimentation.



For
 instance, you still haven't tried the exact same eye rule we are using, so
 you can't really say with complete confidence that there is no difference.



Huh?  I thought my eye alternative #2 was the exact eye rule.  A center
point is considered an eye if all 4 neighbors are friendly stones and no
more than one diagonal is an enemy stone.  An edge (or corner) point is an
eye if all three (both) neighbors are friendly stones and none of the
diagonals are enemy stones.

I realize that alternative #1 wasn't correct, but I thought #2 was.  Please
let me know if that's an incorrect assumption.
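
In code, that rule is roughly the following (a hypothetical padded-board
representation, not HouseBot's actual data structures):

/* Eye rule sketch: border points hold OFF_BOARD; neigh[p][0..3] and
 * diag[p][0..3] are precomputed point indices for point p. */
enum { EMPTY, BLACK, WHITE, OFF_BOARD };

int is_eye(const int *board, int p, int colour,
           const int neigh[][4], const int diag[][4])
{
    int enemy = (colour == BLACK) ? WHITE : BLACK;
    int i, off_diags = 0, enemy_diags = 0;

    /* every on-board orthogonal neighbour must be a friendly stone */
    for (i = 0; i < 4; i++) {
        int q = neigh[p][i];
        if (board[q] != OFF_BOARD && board[q] != colour)
            return 0;
    }

    for (i = 0; i < 4; i++) {
        int q = diag[p][i];
        if (board[q] == OFF_BOARD)
            off_diags++;
        else if (board[q] == enemy)
            enemy_diags++;
    }

    /* centre: at most one enemy diagonal; edge/corner: none at all */
    return off_diags == 0 ? enemy_diags <= 1 : enemy_diags == 0;
}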



If you really want to get to the bottom of this, you should not assume
 anything or make approximations that you believe shouldn't make much
 difference, even if you are right in most cases.   There could be one
 clearly wrong thing you are doing, or it could be many little things
 that each cost you a little.



I'm systematically experimenting with these things.  So far, this has
covered the eyes and whether a random game ends after one pass or two
passes in a row.

housebot-xxx-amaf - original eye rule, one pass ends the game
hb-amaf-alteye    - alternate eye rule #1, two passes end the game
hb-amaf-alteye2   - alternate eye rule #2, two passes end the game
hb-amaf-alt       - original eye rule, two passes end the game

I still have yet to experiment with random move generation.  I'm 100%
confident that this is what the Effective Go Library does, but I don't
consider that evidence that it's the most correct method (only the
fastest).


I am curious myself what the difference is and I'm willing to help you
 figure it out but we have to minimize the ambiguity.



I appreciate the help.  When we're all done, I'll take a crack at writing a
few pages describing the experiments and the outcomes.


I was going to suggest the random number generator next, but for these
 simple bots there doesn't seem to be a great deal of sensitivity to the
 quality of the random number generator if it's reasonable - at least for
 a few games.



Of all things, I would suspect my PRNG the least.  It's an open-source
Mersenne Twister implementation (Copyright (C) 1997 - 2002, Makoto
Matsumoto and Takuji Nishimura).  My understanding is that it's considered
a really good random number generator.  I'll likely try alternatives to
how random moves are selected based on the random number generator.
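
One selection detail worth double-checking (a generic illustration, not
a known HouseBot bug): drawing a bounded random index with a plain
modulo is slightly biased toward small values; rejection sampling
removes the bias.

/* Unbiased bounded draw from any 32-bit generator. rng_next() is a
 * hypothetical stand-in for, e.g., MT19937's next-word function. */
#include <stdint.h>

extern uint32_t rng_next(void);

uint32_t uniform_below(uint32_t n)
{
    /* largest multiple of n representable in 32 bits */
    uint32_t limit = UINT32_MAX - (UINT32_MAX % n);
    uint32_t r;

    do {
        r = rng_next();
    } while (r >= limit);      /* reject the biased tail */
    return r % n;
}

For playout-sized n the bias is tiny, which is consistent with Don's
point that these simple bots are not very sensitive to RNG quality.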



Ogo (which is almost the same as AnchorMan) has a poor quality RNG and
 if you play a few hundred games you will discover lots of repeated
 results.   With a good quality generator I have never seen a repeated
 game.   So it could be a minor factor.

 One thing you mentioned earlier that bothers me is something about when
 you end the random simulations.  AnchorMan has a limit, but it's very
 conservative - a game is rarely ended early and I would say 99.9% of
 them get played to the bitter end.



I don't use a mercy rule, and I have tried playing games out to the bitter
end (until neither side has a legal non-eye-filling move).  I assume this
would resolve your concerns.
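
A playout loop of that sort looks roughly like this (play_random_move()
is a hypothetical helper that picks a uniformly random legal move which
does not fill the mover's own single-point eye, returning PASS when none
exists; compare the is_eye() sketch earlier in the thread):

/* "Bitter end" playout sketch: alternate random non-eye-filling legal
 * moves until both sides pass in a row. */
enum { PASS = -1, BLACK_SIDE = 0, WHITE_SIDE = 1 };

extern int play_random_move(int *board, int side);  /* hypothetical */

void random_playout(int *board)
{
    int side = BLACK_SIDE;
    int passes = 0;

    while (passes < 2) {
        int mv = play_random_move(board, side);
        passes = (mv == PASS) ? passes + 1 : 0;
        side = 1 - side;                    /* alternate colours */
    }
    /* with eye-filling forbidden, surviving groups keep their eyes, so
     * a simple stones-plus-territory count scores the final position */
}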


Are you cheating here?   I suggest you make the program as identical to
 mine as you can - within reason.



I agree and I'm slowly trying to do that.


If you are doing little things wrong
 they accumulate.   I learned this from computer chess.  Many
 improvements are worth 5-20 ELO and you can't even measure them without
 playing thousands of games - and yet if you put a few of them together
 it can put your program in another class.



I don't disagree with what you're saying.  At 20 ELO per fix, 800 ELO is
tough to overcome.  I'd hope I don't have that many things wrong with a
relatively pure Monte Carlo program ;)  I'm hoping to find at least one
really big flaw...  something that'd put it close to ReadyFreddy.
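
For scale, the standard logistic ELO model gives the weaker side's
expected score at a given rating gap:

/* Expected score under the logistic ELO model. Compile with -lm. */
#include <math.h>
#include <stdio.h>

static double expected_score(double elo_gap)
{
    return 1.0 / (1.0 + pow(10.0, elo_gap / 400.0));
}

int main(void)
{
    printf("20 ELO gap:  %.3f\n", expected_score(20));    /* ~0.471 */
    printf("800 ELO gap: %.3f\n", expected_score(800));   /* ~0.010 */
    return 0;
}

So a 20 ELO fix is barely measurable in a short match, while an 800 ELO
gap means winning about one game in a hundred.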

I want to try a breadth of changes to see if I can find something akin to
a magic bullet.  As I go forward, I will also try various combos of the
implemented hacks and see how they do.  It's easy enough to put up a new
combo and see how it does.  One small problem is that I can only run two
versions reliably, and rankings at my bots' current level seem to
fluctuate a lot depending on which bots are currently running.



On my first marketable chess program, my partner and I obsessed every
 day over little tiny speedups - most of them less than 5%.  We found 2
 or 3 of these every day for weeks, it seems.  But we kept finding them,
 and when 

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Jason House
On 9/21/07, Christoph Birk [EMAIL PROTECTED] wrote:

 It might be hard to compare your AMAF bots with Don's, since he
 uses quite a few tricks to improve their performance. I suggest
 you compare with the plain-vanilla programs I keep for comparison
 on CGOS:

   myCtest-10k (ELO ~1050)
   myCtest-50k (ELO ~1350)

 They do just 10k (50k) pure random simulations per move. Your AMAF bots
 should be at least that good if they have no significant bugs,
 correct?
 If you are interested I can run them 24/7 on CGOS (currently they
 only play once per week, to keep them on the list).



Are you using AMAF, UCT, or something else?  If it's no trouble to you, it
would be nice to see them running online while all of this AMAF stuff is
going on.  I find it interesting that your 10k and 50k bots have wildly
different performance given what Don has indicated.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [Housebot-developers] [computer-go] ReadyFreddy on CGOS

2007-09-21 Thread Christoph Birk

On Fri, 21 Sep 2007, Jason House wrote:

Are you using AMAF, UCT, or something else?


Nothing at all. Really pure random playouts.
I am working on an AMAF version for comparison ...


If it's no trouble to you, it
would be nice to see them running online while all of this AMAF stuff is
going on.


OK, I'll run them continuously.


I find it interesting that your 10k and 50k bots have wildly
different performance given what Don has indicated.


I think he was referring to his AnchorMan. The heavier the
playout, the smaller the improvement from more playouts, I guess.
I tried a 250k version, but that only increased the rating by
about 100 ELO.

Christoph
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] problems w/ 19x19 cgos server

2007-09-21 Thread dhillismail
If the 19x19 CGOS is going to be retired due to lack of interest, I wonder if
there would be interest in trying out an ultra-blitz version for a while: games
as fast as the comm links would permit. (Game storage would be an issue; maybe
they just wouldn't get stored.) It could be a limited-time trial. It might push
the front runners out of their comfort zones and tempt some of those on the
sidelines to join in. What do people think?

- Dave Hillis


-Original Message-
From: Don Dailey [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Fri, 21 Sep 2007 1:25 pm
Subject: Re: [computer-go] problems w/ 19x19 cgos server




I had to take the 19x19 server down due to disk space limits on the
boardspace server.   Almost nobody has been using it anyway.  I will
also be archiving the 9x9 games on another site soon.

If someone wants to host it on a publicly visible Unix machine, I would
consider relocating the server if there is enough interest.

I'm wondering if a 13x13 server would be more popular.

David Doshay is going to host a site to archive games and I will put all
the 19x19 games that have been played on the 19x19 server there for
future reference.

- - Don





terry mcintyre wrote:
 Two problems with the 19x19 server.
 
 1) when I tried to click on a game for the 19x19 server, I got a 404 not
 found error. The same process works
 on the 9x9 links.
 
 Example of broken link from the Standings page:
 
 http://cgos.boardspace.net/19x19/SGF/2007/09/16/26970.sgf
 
 2) my copy of GnuGo cannot connect to the 19x19 server. This happens
 periodically.
  
 Terry McIntyre [EMAIL PROTECTED]
 They mean to govern well; but they mean to govern. They promise to be
 kind masters; but they mean to be masters. -- Daniel Webster
 
 
 
 
 
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] problems w/ 19x19 cgos server

2007-09-21 Thread Jason House
I'd only be interested in 19x19 games with enough time for reasonable
games.  I'm ok with slow games.  My biggest problem is that my bots are
simply too immature for 19x19.

On 9/21/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 If the 19x19 CGOS is going to be retired due to lack of interest, I wonder
 if there would be interest in trying out an ultra-blitz version for a while:
 games as fast as the comm links would permit. (Game storage would be an
 issue; maybe they just wouldn't get stored.) It could be a limited-time
 trial. It might push the front runners out of their comfort zones and tempt
 some of those on the sidelines to join in. What do people think?

 - Dave Hillis
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] problems w/ 19x19 cgos server

2007-09-21 Thread alain Baeckeroot
On Friday, 21 September 2007 at 21:35, [EMAIL PROTECTED] wrote:
 If the 19x19 CGOS is going to be retired due to lack of interest,
 I wonder if there would be interest in trying out an ultra-blitz
 version for a while: games as fast as the comm links would permit.
 (Game storage would be an issue; maybe they just wouldn't get stored.)
 It could be a limited-time trial. It might push the front runners out
 of their comfort zones and tempt some of those on the sidelines to
 join in. What do people think?
 
 - Dave Hillis
 
With rather short time settings, GnuGo can be the same strength as MoGo.
I guess something like 10-15 minutes is right for 19x19.

Maybe a good idea would be to find the time setting where these 2 bots have
a 50% win rate against each other. That would be of great interest for other
programs.

Otherwise I bet GnuGo will crush all programs at ultra-blitz (at level 0,
GnuGo loses only 2 kyu on KGS compared to level 10).

Alain.
 
 -Original Message-
 From: Don Dailey [EMAIL PROTECTED]
 To: computer-go computer-go@computer-go.org
 Sent: Fri, 21 Sep 2007 1:25 pm
 Subject: Re: [computer-go] problems w/ 19x19 cgos server
 
 
 
 I had to take the 19x19 server down due to disk space limits on the
 boardspace server.   Almost nobody has been using it anyway.  I will
 also be archiving the 9x9 games on another site soon.
 
 If someone wants to host it on a publicly visible Unix machine, I would
 consider relocating the server if there is enough interest.
 
 I'm wondering if a 13x13 server would be more popular.
 
 David Doshay is going to host a site to archive games and I will put all
 the 19x19 games that have been played on the 19x19 server there for
 future reference.
 
 - Don
 
 
 
 
 
 terry mcintyre wrote:
  Two problems with the 19x19 server.
 
  1) when I tried to click on a game for the 19x19 server, I got a 404 not
  found error. The same process works
  on the 9x9 links.
 
  Example of broken link from the Standings page:
 
  http://cgos.boardspace.net/19x19/SGF/2007/09/16/26970.sgf
 
  2) my copy of GnuGo cannot connect to the 19x19 server. This happens
  periodically.
 
  Terry McIntyre [EMAIL PROTECTED]
  They mean to govern well; but they mean to govern. They promise to be
  kind masters; but they mean to be masters. -- Daniel Webster
 
 
  
 
 
  
 
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/
 
 
 
 
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/