Re: [computer-go] Re: Open source real time Go server

2010-01-19 Thread Christian Nentwich
Steve,

I would agree with you that writing a good score estimator is extremely
difficult, probably as difficult as writing a computer player.

However, your argument of equivalence (if I have understood it correctly) does
not follow. Just because you can score any position does not mean you can
therefore play well. If you could score all Go positions on the board, you
still couldn't enumerate them all, or follow all branches in the tree.

Personally, I'd also rather see the score estimator abolished. The comments
it provokes from players in games (normally below 2d or so) are atrocious.

Christian


On Tue, Jan 19, 2010 at 5:11 PM, steve uurtamo uurt...@gmail.com wrote:

 sorry, i should have been more clear.

 an SE can't be any smarter than a computer player, because it could
 otherwise easily simulate a computer player, as described.  would it
 be slower?  yes, by a constant factor that is bounded by the
 board size.  this simulation could be completely parallelized, however,
 so if anyone thinks that i'm wrong, they should build such an SE, and
 we can easily put together enough boxes to beat all existing computer
 players.

 i point this out because a pet peeve of mine is people who complain
 about the SE and don't realize that it's at least as difficult, if
 not *more* difficult, to build than a computer player.

 s.

 On Tue, Jan 19, 2010 at 12:20 AM, Michael Williams
 michaelwilliam...@gmail.com wrote:
  Your point is obvious but that's a horrible proof since there is usually
  more than one legal move from which to choose (which means it takes more
  time).
 
  steve uurtamo wrote:
 
  As for other things we'd like to see improved, we could build a list. My
  pet peeve is the KGS score estimator, which is often wildly wrong.
 
  an SE can't be any smarter than a computer player that runs in the
  amount of time that you're willing to wait for the SE to calculate*.
  so don't expect much.  ever.  recall that the SE runs locally in your
  client.
 
  s.
 
  * proof: if it were, then it would make a better computer player by
  just evaluating its score estimate at all legal board points and
  choosing the maximum at each move.
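steve's footnote reduction is mechanical enough to sketch in a few lines. In this hypothetical sketch, `play` and `estimate_score` stand in for an engine's move generator and score estimator (assumed interfaces, not any real program's API); the toy usage at the bottom replaces Go positions with integers purely to show the wiring.

```python
def player_from_estimator(position, legal_moves, play, estimate_score):
    """Turn a score estimator into a player: estimate the score after
    every legal move and pick the best. This costs one estimator call
    per legal point, the constant-factor slowdown (bounded by the board
    size) that the argument above mentions."""
    return max(legal_moves, key=lambda m: estimate_score(play(position, m)))

# Toy wiring check: positions are integers, and the estimator prefers
# positions close to 2, so the chosen "move" should be 2.
best = player_from_estimator(0, [1, 2, 3],
                             play=lambda p, m: p + m,
                             estimate_score=lambda p: -abs(p - 2))
assert best == 2
```

The wrapper is exactly why a score estimator stronger than every computer player cannot exist: it would itself be a comparably strong, if slower, player.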
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/
 
 

[computer-go] CUDA code - can anybody host it?

2009-12-30 Thread Christian Nentwich
All,

the CUDA light playout code I wrote earlier this year and posted about on
this list is lying around dead on my hard disk, and I am not looking to take
it anywhere. It's certainly not production code, as it was written as an
experiment, but I think there is still value in releasing it.

I don't have any particular site I could host it on though. Would anybody
here be prepared to host a ZIP file?

I don't really care what anybody does with it, so I will put it in the
public domain with a simple attribution requirement.

Christian

Re: [computer-go] Kinds of Zobrist hashes

2009-12-09 Thread Christian Nentwich
On Tue, Dec 08, 2009 at 09:30:47PM -0500, Eric Boesch wrote:

  You can mathematically prove the two systems are almost the same, so
  there's no need to test.

   Yes, this was my line of thought, but I wasn't sure whether I was missing
 anything...


If you ever decide to test which is faster, please post the results; I'm
curious about how expensive the branch-prediction miss is when using two
values :-)
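For readers following along, the baseline being compared here is the textbook single-table Zobrist scheme: one random value per (intersection, colour), toggled by XOR. This is a minimal sketch; the 9x9 size, 64-bit width, and table layout are illustrative choices, not anything stated in the thread.

```python
import random

SIZE = 9
rng = random.Random(42)  # fixed seed so the table is reproducible

# One random 64-bit value per (intersection, colour): 0 = black, 1 = white.
ZOBRIST = [[rng.getrandbits(64) for _ in range(2)]
           for _ in range(SIZE * SIZE)]

def toggle(h, point, color):
    """Place or remove a stone. XOR is its own inverse, so the same call
    undoes itself, and the final hash is independent of move order."""
    return h ^ ZOBRIST[point][color]

# Placing and then removing a stone restores the original hash.
assert toggle(toggle(0, 40, 0), 40, 0) == 0
```

The incremental update is a single table lookup and XOR per stone placed or removed, which is why even a branch taken to select between two kinds of values can show up in the timing.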

Christian

Re: [computer-go] CUDA projects for Go?

2009-11-25 Thread Christian Nentwich

Steve,

I was one of the people who posted in the debate - I implemented light 
playouts on CUDA for testing purposes, with one thread per playout 
(rather than one per intersection).


I think one of the main things that I am curious about, and I don't 
think I am the only one, is whether texture memory or other such 
facilities could be used for fast pattern matching. How heavy playouts 
would perform on CUDA is a big unanswered question. Depending on the 
answers you get, it may be possible to run heavy playouts on the CPU, 
but delegate a pattern matching queue to CUDA. No idea - a good field 
for research :)


Christian

Steve Kroon wrote:

Hi.

I hope to have a student for the next month or two who can look into some
computer Go before starting his Masters degree.  He is interested in using
CUDA for his Masters, so I thought it would be nice for him to investigate
the applicability of CUDA for computer Go.

I know there was quite a debate about this not so long ago, and I don't
mean to re-open it.  I was just wondering if anyone has a specific
investigation of limited scope they would like done, or think might be
valuable with respect to CUDA.


I hope to continue further working with Fuego, so if you have suggestions
specific to Fuego, that would be particularly useful.

Thanks,




Re: [computer-go] First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-27 Thread Christian Nentwich


I suspect I am in your camp, Mark, though obviously it would be nice if 
we had measurements on this instead of conjectures.


I will offer some anecdotal evidence concerning humans playing other
humans, from club and tournament playing experience: shorter time limits
amplify the winning probability of the stronger player. Beating somebody
2 stones stronger than you at blitz is much harder than beating them at a
longer time limit; you may find that you need 3 handicap stones. The bigger
the strength difference, the worse it gets. Beating a professional player
at blitz Go is *ferociously* difficult, even with a very high handicap.


Humans are extremely good at recognizing patterns, at whole-board
awareness, and at intuition about influence; deeper reading is of little
use to a human without these skills. It has never really surprised me
that stronger players are that much better at short time limits, given
their greater experience and knowledge. At the extreme end, you have the
beginner: even a beginner with incredible reading ability will still lose
to somebody a few stones stronger, whether the time limit is three hours
or 10 minutes.


Well, some trials with different time limits against computers would be 
nice, I guess :)


Christian


On 26/10/2009 23:14, Mark Boon wrote:

2009/10/26 Don Dailey dailey@gmail.com:
   

Yes, you understood me right.   I disagree with Olivier on this one.   To
me it is self-evident that humans are more scalable than computers because
we have better heuristics.   When that is not true it is usually because the
task is trivial, not because it is hard.

 


Personally I rather think that what makes a human good at certain
tasks is not necessarily a conscious effort, and that doesn't have to
be a trivial task. So then actively thinking longer doesn't help as
much, because you lack control over the thought process. I believe
very much that Go falls into that category, where Chess does not.

Mark

Re: [computer-go] NVidia Fermi and go?

2009-10-23 Thread Christian Nentwich

Darren,

these articles are still somewhat short on detail, so it's hard to tell. 
A lot of the new features listed there won't have any impact on the 
suitability of the GPU for Go, because they do not change the method of 
computation (e.g. doubling floating point precision is irrelevant).


Having said that, the parallel data cache they allude to may be
significant. If this is going to enable the construction of data 
structures such as linked lists, or bring down global memory access time 
significantly, then I believe the performance of playout algorithms on 
the architecture will shoot up.


Christian

On 23/10/2009 09:28, Darren Cook wrote:

I was reading a Linux magazine article [1] saying that the latest NVidia
GPUs [2] solve many of the problems of using them for supercomputing.
There was a thread [3] here in September about running go playouts on
GPUs, where the people who had tried it seemed generally pessimistic. I
just wondered whether this new Fermi GPU solves the issues for go playouts,
or doesn't really make any difference?

Darren

[1]: http://www.linux-mag.com/id/7575
[2]: http://www.nvidia.com/object/io_1254288141829.html
[3]: Starting here:
http://computer-go.org/pipermail/computer-go/2009-September/019422.html

   




Re: [computer-go] Great Wall Opening by Bruce Wilcox

2009-10-20 Thread Christian Nentwich


Go has been played long enough, and the proposed great wall opening is
simple enough, that it should be valid to argue that if it were a good
opening, it would be played more often.


Here are some openings that have been found to lead to high winning 
percentages in real games:

  - The Shusaku opening in the old pre-komi days
  - The Chinese opening (still holds one of the highest winning 
percentages with Black, of all openings, if I remember correctly)

  - Mini-chinese
  - Kobayashi opening

One could try and plonk down those openings and see whether the engine
gets a significantly better result. I would conjecture that current
engines are not strong enough to use them correctly, and it won't make
any difference where you place the first three stones, as long as they are
reasonably distributed and not on the second line.


I may extend my conjecture to amateur players under about 3 kyu :)

Christian


On 20/10/2009 06:56, Mark Boon wrote:


On Oct 19, 2009, at 7:04 PM, Petri Pitkanen wrote:

Not really a computer Go issue, but I do not think that the great wall is
superior even when completed. It is not too bad, but it needs a definite
strategy from the wall's owner, i.e. building side moyos using the wall
as a roof and hoping that the other guy gets nervous and jumps in. So
being patient is a pretty good defence against it.




Even when completed I think it's inferior. But that doesn't mean you 
can take it lightly. There's a psychological component to it that 
makes it easy for the opponent to make a mistake. Also, White may have 
some advantage by having more experience with the strategy.


But I agree with the pro that if you disrupt it by preventing 
completion with the last move it really turns disadvantageous for Black.


Mark



Re: [computer-go] Long superko example

2009-10-05 Thread Christian Nentwich

Brian,

this is interesting.

The sequence is as interesting as the superko, really, because it's the
sort of strange sequence that would only occur to monte carlo programs.
When black plays B9, a human white player would answer H4, which gives
the biggest win. If he's feeling obliging, he might answer A5. What he
certainly would never play is A9. As for black, it looks like he should
resign instead of playing B9. I'm very curious what black thought the win
rate was before B9.


Christian

On 04/10/2009 22:21, Brian Sheppard wrote:

   1 2 3 4 5 6 7 8 9
A O O O X - X X O -
B - O O X X O O - -
C O - O X X - O O O
D - O X X O O O X O
E O O O O X X O X X
F X O O X X X X X -
G X X O O X O X - X
H - X X - O O O X -
J X X - O - X X - X

X: B9 (hole-of-three)
O: A9 (atari)
X: B8 (capture)
O: A8 (atari)
X: A9 (capture)
O: A8 (capture)



Re: [computer-go] CUDA and GPU Performance

2009-09-10 Thread Christian Nentwich

Rene,

you're absolutely right, it's completely fishy! But don't worry, your
work is not in vain :) I noticed this morning, when I read your mail,
that I had included the 9x9 results in my original mail instead of
19x19! Indeed, for 19x19 the results are even worse. Here's a complete
rundown:


- 9x9 CPU: 47,000 playouts per core per second
- 9x9 GPU: 170,000 playouts per second

- 19x19 CPU: 9,800 playouts per core per second
- 19x19 GPU: 11,000 playouts per second

I did mention in another mail that the performance difference for 9x9 
should be larger, I think. What I didn't realise was that I had reported 
the 9x9 numbers by mistake!


Additional statistics:
 - Processor occupancy for 19x19 was 6% instead of 9%
 - Branch divergence was less than half a percent. It was 2% for 9x9. 
This is perhaps because of the larger board size causing more moves to 
fall onto empty intersections, or fewer moves requiring merges/captures.


Christian



René van de Veerdonk wrote:

Christian,

Would you care to provide some more detail on your implementation for 
the playouts? Your results are very impressive. At 19x19 Go using 
bit-boards, your implementation is roughly 7x as fast as the bitboard 
implementation I presented just a few weeks back, and also outperforms 
libEgo by about a factor of two.


René

On Wed, Sep 9, 2009 at 2:57 PM, Christian Nentwich
christ...@modeltwozero.com wrote:


Mark,

let me try to add some more context to answer your questions. When
I say in my conclusion that it's not worth it, I mean it's not
worth using the GPU to run playout algorithms of the sort that are
in use today. There may be many other algorithms that form part of
Go engines where the GPU can provide an order-of-magnitude
speedup. Still more where the GPU can run in parallel with the CPU
to help.

In my experiments, a CPU core got 47,000 playouts per second and
the GPU 170,000. But:
 - My computer has two cores (so it gets 94,000 playouts with 2
threads)
 - My computer's processor (intel core duo 6600) is 3 years old,
and far from state of the art
 - My graphics card (Geforce 285) on the other hand, is recently
purchased and one of the top graphics cards

That means that my old CPU already gets more than twice the speed
of the GPU. An Intel Nehalem processor would surely beat it, let
alone an 8-core system. Bearing in mind the severe drawbacks of
the GPU - these are not general purpose processors, there is much
you can't do on them - this limits their usefulness with this
algorithm. Compare this speedup to truly highly parallel
algorithms: random number generation, matrix multiplication,
monte-carlo simulation of options (which are highly parallel
because there is no branching and little data); you see speedups
of 10x to 100x over the CPU with those.

The 9% occupancy may be puzzling but there is little that can be
done about that. This, and the talk about threads and blocks would
take a while to explain, because GPUs don't work like general
purpose CPUs. They are SIMD processors meaning that each processor
can run many threads in parallel on different items of data but
only if *all threads are executing the same instruction*. There is
only one instruction decoding stage per processor cycle. If any
"if" statements or loops diverge, threads will be serialised until
they join again. The 9% occupancy is a function of the amount of
data needed to perform the task, and the branch divergence (caused
by the playouts being different). There is little that can be done
about it other than use a completely different algorithm.

If you google "CUDA block threads" you will find out more. In
short, the GPU runs like a grid cluster. In each block, 64 threads
run in parallel, conceptually. On the actual hardware, in each
processor 16 threads from one block will execute followed by 16
from another (half-warps). If any threads are blocked (memory
reads costs ~400 cycles!) then threads from another block are
scheduled instead. So the answer is: yes, there are 64 * 80
threads conceptually but they're not always scheduled at the same
time.

Comments on specific questions below.

If parallelism is what you're looking for, why not have one
thread per
move candidate? Use that to collect AMAF statistics. 16Kb is
not a lot
to work with, so the statistics may have to be shared.
 


One thread per move candidate is feasible with the architecture I
used, since every thread has its own board. I have not implemented
AMAF, so I cannot comment on the statistics bit, but the output
of your algorithm is typically not in the 16k shared memory
anyway. You'd write that to global memory (1GB). Would uniform
random playouts be good enough for this, though?

Re: [computer-go] CUDA implementation of the per-intersection GPGPU approach

2009-09-10 Thread Christian Nentwich

Petr,

wow, I didn't expect to see so much experimentation being performed! 
It's great that you have taken the time to implement this approach, 
because this now shows people both alternatives for implementing a 
playout algorithm on the GPU.


I strongly suspect the low performance in the per-intersection case is 
down to two reasons - please let me know what you think:
 - A big proportion of moves place one stone on an empty intersection. 
In this case 80 of your 81 threads are asleep doing nothing. This means 
GPU time is wasted.
 - In quite a few other cases, we get to the old problem with Go: that 
the intersections are not independent. Thus when you process captures or 
merges, you possibly have to lock 80 of 81 threads.


You can't do much about the first. Did you find a solution for the 
second? I was thinking for a while about whether it would be possible to 
come up with a highly parallel algorithm for captures/merges that 
requires no lock. That's quite hard so I concentrated on my own 
experiment instead.


I agree with you that your approach could be very performant in pattern 
matching scenarios. In fact, your work somewhat strengthens my 
conclusion that doing just straight-forward playouts on the GPU is not 
the way to go.


Christian


Petr Baudis wrote:

  I have written a quick CUDA implementation of the per-intersection
GPGPU apprach; Christian's e-mail finally made me polish it up to
a somewhat presentable form.

  In this implementation, each GPU thread maintains a single
intersection.

  The implementation uses 9x9 board (10x11 internally); expanding to
19x19 would probably mean either moving some data to global memory
or splitting the playout of single board to multiple blocks or waiting
for GPUs with larger shared memory. ;-)

  The speed is unfortunately very disappointing; on GTX260, I see
17,300 playouts per second, with 50 playouts running in parallel.
There are several buts:

  - With some further obvious optimizations, I'm thinking it should be
possible to get it to at least 22,000 playouts per second easily.

  - Bit boards are an obvious thing to try, but I'm quite doubtful
they will be an improvement

  - The playouts are uniformly random now, but they have been
implemented in a way to make heavier playouts easy; precise liberty
count is maintained and it should be easy to add pattern matching;
the move selection can deal with arbitrary probabilities for
different intersections (I wonder, what are the standard
high-performance numbers for heavier playouts with pattern matching?)

  - Christian is playing 2650 games in parallel; I'm playing only 50
games in parallel, in the meantime the CPU can pick next games to
play from the tree

  - My code already accounts for transferring board images from CPU to
the GPU (different one for each playout) and getting scores back

  - Christian's GPU seems quite better than mine

  - Apparently I still suck at CUDA optimizations; this has been my
first real small CUDA project, it would be awesome if someone more
experienced could look at the code and suggest obvious improvements

  Still, I'm pretty unhappy with the slowness; I wonder how Christian
achieved such a high speed. One problem with my approach is that I have
to make use of a lot of atomic instrinsics (some of them could be worked
around) and __syncthreads() all the time to ensure that all threads have
consistent board image at each stage.

  I think with pattern matching, the per-intersection approach could
shine more, since I can easily match patterns everywhere on the board
at once. Still, I wonder if it will become at least on par with good
CPU implementations.

On Wed, Sep 09, 2009 at 04:54:23PM +0100, Christian Nentwich wrote:
  

In other words: no tree search is involved, and this is the lightest
possible playout. The raw numbers are as follows:
 - CPU Search: 47,000 playouts per CPU core per second, on an Intel
6600 Core-2 Duo
 - GPU Search: 170,000 playouts per second, on an NVidia Geforce 285 card



  I still find this quite astonishing; since you consider this
experiment a failure anyway, would you mind publishing the source code?
:-)

  






--

Christian Nentwich

Director, Model Two Zero Ltd.
+44-(0)7747-061302
http://www.modeltwozero.com



[computer-go] batoo

2009-08-28 Thread Christian Nentwich


I suppose it is worth bringing this new variant to people's attention. 
What will become of it, who knows, it may fade quickly:


http://www.godiscussions.com/forum/showthread.php?t=7176

Christian


Re: [computer-go] Double/Triple Ko situation

2009-08-06 Thread Christian Nentwich


This is probably the best route. Either this, or get rid of the rule.
The rule cannot be shown to be correct in general: it may work for most
life and death problems, but it can be wrong in semeai. You may get a nice
ELO increase, but you are still actively building a wrong rule into the
program.


Check out the most difficult Go problem, problem 120 from the Igo
Hatsuyoron, for example, where black keeps trying to push white to capture
his group, which gets bigger and bigger... but surprisingly, white loses
if he does!
if he does!


http://senseis.xmp.net/?MostDifficultProblemEver
http://www.harryfearnley.com/go/trmdpe/hats-120-2009.html

Christian


Michael Williams wrote:
You should set the limit to whatever yields the highest ELO in YOUR 
program.


Harry Fearnley wrote:

Darren Cook wrote:

The largest nakade shape is the rabbity six. My wild guess would be to
outlaw self-atari for groups of 7+ stones.


The fun thing about computer go is how hard it is to make hard and fast
rules:
http://senseis.xmp.net/?BiggestKnownEyeSpaceForWhichThereIsANakade

Outlawing self-atari of 18+ stones is probably okay, but not quite so
useful :-).


  http://www.dgob.de/dgoz/trmdpe/

It shows a position where not defending 20 stones in atari is best.  This
position is easily modified to give a position where self
atari is best.  Clearly this, as well as the 18-stone
nakade, is pathological and will _never_ occur in a real
game ...  :-)

I would guess that there will be a fair number of self-atari
of up to 11 stones (especially on the edge, or in the corner,
and where there are cutting points) where the self-atari
would be the best move.
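The heuristic being debated can be sketched as a playout move filter. The 7-stone cutoff follows Darren's rabbity-six reasoning above; `group_size_after` and `liberties_after` are hypothetical engine helpers standing in for internals, not a real API.

```python
SELF_ATARI_LIMIT = 7  # Darren's suggested cutoff (largest nakade is six)

def allowed_in_playout(move, group_size_after, liberties_after):
    """Veto self-atari of SELF_ATARI_LIMIT or more of our own stones."""
    if (liberties_after(move) == 1
            and group_size_after(move) >= SELF_ATARI_LIMIT):
        # Almost always a blunder, but not always: see the 18-stone
        # nakade and the 20-stone example linked in this thread.
        return False
    return True
```

As the counterexamples in this thread show, any fixed cutoff is a trade-off between playout quality and correctness, not a rule of Go.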


Harry

-+-+-+-+-+-+
ha...@goban.demon.co.uk38 Henley St, Oxford, OX4 1ES, UK
http://www.goban.demon.co.ukTel: +44 1865 248775
http://harryfearnley.com   *** NEW site ***

Oxford Go Club:http://www.britgo.org/clubs/oxford_c.html


Re: [computer-go] Time for another seki

2009-07-27 Thread Christian Nentwich

Brian,

reading this ASCII can be difficult, so I might be missing something,
but why would white play F5 after E1 in your primary variation, instead
of playing atari at B2 to prevent the seki? Would this lose on points?
(I couldn't quite tell without knowing the prisoner count.)


I don't see how black can force a seki after C3 as long as the E1 
liberty is open in the initial position.


Christian

Brian Sheppard wrote:

Another month, another seki:

  1 2 3 4 5 6 7 8 9
A - - - X - O O X - 
B - - O O O O X X - 
C O O - O X X O O - 
D O X O X X - X O X 
E - X X X X X X X X 
F X - X - - O X O O 
G - X - X X O O O - 
H - - - X O - - O - 
J - - - X O O O - - 
O to play.


Pebbles rated this position as a 93% win, with over 1 trials.
It seemed that way to me, too. O fills C3, and then gets either
F5 or E1 to make 37 points and win by 0.5 with komi.

The actual variation: C3, A2, A3, E1, F5, B1 and now O cannot win.

  1 2 3 4 5 6 7 8 9
A - X O X - O O X - 
B X - O O O O X X - 
C O O O O X X O O - 
D O X O X X - X O X 
E X X X X X X X X X 
F X - X - O O X O O 
G - X - X X O O O - 
H - - - X O - - O - 
J - - - X O O O - - 
O to play.


The upper left is seki. I didn't see that coming. If O responds to
A2 with B2 then we get a different seki:

  1 2 3 4 5 6 7 8 9
A - X X X - O O X - 
B - O O O O O X X - 
C O O O O X X O O - 
D O X O X X - X O X 
E X X X X X X X X X 
F X - X - O O X O O 
G - X - X X O O O - 
H - - - X O - - O - 
J - - - X O O O - - 


With enough search effort, Pebbles will see this as a loss for O.
It takes about 6 trials for the score to drop below 50%.
Seeing the loss is delayed by forcing exchanges such as A9/B9.

Best,
Brian



Re: [computer-go] Re: Dynamic komi in commercial programs

2009-07-12 Thread Christian Nentwich
Ingo,

are you sure you already want to bet on one particular technique? :)

I don't believe a score optimisation algorithm like UCT works all that
well when behind. I am pretty sure that human players do *not* choose
between the top three moves if their values are 40%, 39% and 38%. They
will start looking for other moves. I am also not sure the dynamic
komi will help, if all it ends up doing is shifting the percentages.

Criteria for moves for humans when playing behind may look something
like this (subjective assessment):
  1. Am I certain of the score after the move, and does it lead to a
loss? Discard it, there is no point - unless it is a waiting sequence
to prepare for another move, or a move that answers a sente play.
  2. Does the move create uncertainty in the score? One does not play
for certainty when behind, but for confusion. If I cannot see the
result, my opponent probably can't either.
  3. Is it a meltdown move (i.e. if it goes wrong, I will definitely
lose the game)? If so, I might not want to play it unless I am very
far behind (more than 10 points, and in the middle game)

I wonder if anybody has looked at altering the evaluation function
when behind? This is, of course, very difficult without providing a
point of attack or discontinuity in its playing style. Perhaps moves
with a higher standard deviation could be chosen once in a while?

Whatever the case may be, I agree that things definitely need to be
tried out, against strong players, perhaps on KGS. I gave Zen, an
excellent program, 2 stones last night, and sure enough it melted
down spectacularly once it was behind :)

Christian
p.s. I believe handicap games need to be treated differently from
situations where one is behind in even games - in the former, one can
wait for the opponent's mistakes; in the second, one needs to be
proactive.
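The virtual-komi arithmetic being debated in this thread can be made concrete with a small sketch. The numbers (a 10-point residual deficit, a 30-point cap) mirror the forty-points-behind example quoted in the thread, but are otherwise arbitrary assumptions, not any program's actual scheme.

```python
def dynamic_komi(estimated_margin, keep_behind=10.0, max_shift=30.0):
    """Virtual komi offset that hides part of a large deficit.

    estimated_margin: current score estimate for our side, in points
    (negative means we are behind). The engine then evaluates playouts
    as if it were only about `keep_behind` points behind, so it plays
    aggressively without turning kamikaze.
    """
    if estimated_margin >= -keep_behind:
        return 0.0  # ahead or only slightly behind: play the real game
    return min(-estimated_margin - keep_behind, max_shift)

# Behind by 40 points: shift komi by 30, i.e. play as if 10 behind.
assert dynamic_komi(-40.0) == 30.0
```

Whether shifting the target like this actually changes move choice for the better, rather than just shifting the percentages as suggested above, is exactly the open question of the thread.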


Ingo Althöfer wrote:
 Don Dailey wrote:
 I think we should open up to other ideas, not
 just dynamic komi modification.   In fact that
 has not proved to be a very fruitful technique
 and I don't understand the fascination with it.

 I was not clear enough in the original posting.
 My main point is the following: Currently the
 programmers are developing program, and customers
 are playing with them.

 But we should try to bring creative customers inside
 the process of development. Dynamic komi might be
 one good early try for this approach.

 You write that [DyKo] "has not proved to be a very
 fruitful technique", but you are right only concerning
 the rather narrow community of programmers.
 (Some) creative customers have lots of time to test strange
 things and settings and are willing to do so, provided the
 programs are sufficiently test-friendly. Concerning dynamic
 komi I would bet 1:1 that their findings might lead to a
 breakthrough.

 Ingo.


Re: [computer-go] Re: Dynamic komi in commercial programs

2009-07-12 Thread Christian Nentwich
Don, others,

are there publications about this? If people have tried it out, are
there any threads on this list that somebody remembers where results
are posted? I have not been able to find  any. It would be interesting
to see.

Christian



2009/7/12 Don Dailey dailey@gmail.com:


 On Sun, Jul 12, 2009 at 8:59 AM, Benjamin Teuber benjamin.teu...@web.de
 wrote:

  You just hit the nail on the head.   Dynamic komi does not encourage a
  program to overplay the position.   Since you are starting from a losing
  position you HAVE to overplay a bit.   You have to attack when it is
  futile.

  That depends on the komi - if you're behind by forty points and set
  the virtual komi to 30, you play as if you'd be 10 behind, which would
  be aggressive, but not kamikaze.

 This is exactly what people do, so I don't see your point.

 It's not up to me to prove anything.   It's up to you.

 Several of us have tried variations of this idea of dynamic komi adjustment,
 which seems like a very good premise.  This SHOULD help the play.    But the
 fact of the matter is that none of us (so far) has made it work.   If the
 observations do not fit the premise, at some point we should actually
 scrutinize the premise - even if the premise seems logical to us.

 I think the ones who still cling to this idea have not actually tried
 implementing it.    Have you tried?    If you have, why are you still
 talking about it and not showing us something?


 - Don






 Benjamin




Re: [computer-go] Really basic question

2009-07-07 Thread Christian Nentwich

Fred,

others have already indicated that you are missing the tree part of 
Monte Carlo Tree Search, but there is something else to add.


If you run uniform random playouts on an empty 9x9 board, let's say a 
million of them for each point, you will see a probability distribution 
emerging that roughly shows the centre 3x3 or 4x4 points as having a 
high probability of winning (around 50%, depending on komi), and you 
will see the edges and first line having a much lower probability.


This is not going to help you choose between the points in the centre. 
Every time you run the simulation, a different point will be selected as 
best - because the state space is inadequately explored, as you say - 
but you will still be able to choose a good move.


On 19x19, the space is so inadequately explored that running a uniform 
Monte Carlo simulation on an empty board is useless (i.e. you will get 
completely different results every time you run it) and further 
heuristics either during playouts or during the tree search phase are 
needed.
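The answer to Fred's question about the first move is that it is not chosen at random at all: each candidate is evaluated by many playouts, and only the moves *inside* a playout are random. A control-flow sketch (my illustration, with a stubbed playout, not any engine's code):

```python
import random

def playout(board, move):
    """Stub playout: a real engine would play uniformly random legal
    moves to the end of the game and score the final position. Here it
    returns a coin flip, purely to show the control flow."""
    return random.random() < 0.5

def best_first_move(board, legal_moves, n_playouts=1000):
    """Flat Monte Carlo move selection: the move itself is not chosen
    at random - only the continuations inside each playout are."""
    win_rate = {}
    for mv in legal_moves:
        wins = sum(playout(board, mv) for _ in range(n_playouts))
        win_rate[mv] = wins / n_playouts
    return max(win_rate, key=win_rate.get)
```

The point is that a playout estimates an *expected result* for a move; no attempt is made to enumerate the game tree, which is why a million playouts can be informative despite the astronomical number of possible games.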


Christian


Fred Hapgood wrote:

I have a really basic question about how MC works in the context of Go.

Suppose the problem is to make the first move in a game, and suppose we
have accepted as a constraint that we will abstain from just copying
some joseki out of a book -- we are going to use MC to figure out the
first move de novo. We turn on the software and it begins to play out
games. My question is: how does the software pick its first move?  Does
it move entirely at random? Sometimes it sounds as if the way MC works is by
picking each move at random, from the first to the last, for a million
games or so. The trouble is that the number of possible Go games is so
large that a million games would not even begin to explore the
possibilities.  It is hard to imagine anything useful emerging from
examining such a small number. So I'm guessing that the moves are not
chosen at random.  But even if you reduce the possibilities to two
options per move, which would be pretty impressive, you'd still run out
of your million games in only twenty moves, after which you would be
back to picking at random again.

What am I missing??  





http://www.BostonScienceAndEngineeringLectures.com
http://www.pobox.com/~fhapgood

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

  



--

Christian Nentwich

Director, Model Two Zero Ltd.
+44-(0)7747-061302
http://www.modeltwozero.com

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Experimentation (was: Really basic question)

2009-07-07 Thread Christian Nentwich


The problem, I think, is to find a good tradeoff between heaviness and 
speed. In my test of Valkyria vs Fuego, Valkyria is superior when the 
number of playouts is the same. But Fuego can play 5 times more 
playouts per second on the same hardware, which results in Fuego being 
slightly stronger than Valkyria at the moment.


Indeed - this makes me wonder why I keep seeing papers where different 
versions of algorithms are compared with the same number of playouts, 
rather than under the same time limit.


What is the motivation for this? I cannot conceive of any good reason for 
running an experiment this way, so I would be interested in opinions. It 
seems to me that making algorithms heavier and then demonstrating that 
they are stronger with the same number of playouts misses the point - 
why would one not run an experiment under the same time conditions instead?


Christian
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Experimentation

2009-07-07 Thread Christian Nentwich

Darren Cook wrote:

seems to me that making algorithms heavier and then demonstrating that
they are stronger with the same number of playouts misses the point -
why would one not run an experiment under the same time conditions instead?



* The heavier algorithm might be unoptimized;
  
That is probably a good pragmatic point, but that does not make the 
experiment any more valid. You are already designing the algorithm so 
that it performs better at the same number of playouts, by trading off 
speed. I don't think it is then valid to also use that as proof that it 
performs better - the experiment is slanted against the lighter, faster 
algorithm.


Unfortunately, "it is unoptimized", while usually a good argument, 
interferes with one of the main dimensions of the experiment, which is 
speed.



* The heavier algorithm might be easier to parallelize (or put into
hardware);
  
That seems unintuitive, but I confess I have no experience whatsoever in 
that respect.

* The scaling behaviour might be different. E.g. if fuego and Valkyria
are both run with 10 times more playouts the win rate might change. Just
to dismiss an algorithm that loses at time limits that happen to suit
rapid testing on today's hardware could mean we miss out on the ideal
algorithm for tomorrow's hardware. (*)
  
I am not sure I follow this argument. How do you intend to prove that, 
unless you run the algorithms with 10 times more playouts? In that case, 
I would still argue that you should run them with X times longer time 
limits, not with 10 times more playouts, unless you can assume (with 
proof or good evidence, so you can set differential numbers of playouts 
between the two) that tomorrow's hardware will favour one algorithm more 
than the other.



* By analyzing why the win rate of the super-heavy algorithm is better
might give other people ideas for lighter but still effective playouts.

  
This I can accept. In general, I do think it is interesting to see the 
win rate at the same number of playouts as background analysis, but I 
don't see it as a convincing evidence of advancement over other algorithms.


Christian
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Experimentation

2009-07-07 Thread Christian Nentwich

Don,

I see where this argument comes from, but you are not taking it to its 
conclusion. Unless you can, in the end, show that your algorithm can 
outperform another one with the same time limit, you have not proved 
that it is an advance. That is why tournaments are played with time 
limits, not limits on the number of playouts. Time is an important 
dimension of the game of Go.


Here is why: if I follow your argument to its conclusion, I can come up 
with an algorithm that spends five minutes per move looking through 
databases of moves, performing simulations, and god knows what, and show 
a 20% improvement over an algorithm that thinks for 5 seconds. That is 
an interesting intermediate result. But would you then allow me to 
write "obviously it will be just as quick as soon as we've done a bit of 
optimisation" in the epilogue, and accept that? What if there is no way 
to optimise it? What if the old algorithm will always scale better with 
the same time limit, on faster hardware? I guess you might let the 
argument pass under tight conditions: the other program is a version 
of the same program, and architecturally similar; and no algorithms 
beyond linear complexity per move were added, or something of the sort.


You mix "proof" and "evidence" a few times in the same paragraph; the 
sorting algorithm is probably not a good comparison. With a sorting 
algorithm, you would first prove its big O and omega complexities, maybe 
look at average cases too. I don't see a lot of people trying to prove 
that their algorithms are improvements in Go, because it is generally 
too complex; so evidence is all we get.


Christian


Don Dailey wrote:



On Tue, Jul 7, 2009 at 6:31 AM, Christian Nentwich 
christ...@modeltwozero.com mailto:christ...@modeltwozero.com wrote:



The problem, I think, is to find a good tradeoff between heaviness
and speed. In my test of Valkyria vs Fuego, Valkyria is superior
when the number of playouts is the same. But Fuego can play 5
times more playouts per second on the same hardware, which results
in Fuego being slightly stronger than Valkyria at the moment.

Indeed - this makes me wonder why I keep seeing papers where
different versions of algorithms are compared with the same number
of playouts, rather than under the same time limit.


Because this is the right testing methodology to use.   The first 
thing you want to know is if the core idea is good.   This is because 
you will never know if you implemented it in the fastest possible 
way. But once you know that the idea gives you better results with 
the same number of playouts  you have identified something about it 
that is superior - then you can go from there.


There are two aspects that you are concerned about with tests like 
this.  The first and most important thing is the theoretical soundness 
of the idea or approach being used.The second is the engineering 
issues, which are really quite open ended and tough to nail down 
accurately.   Not only that, but you can kill 2 birds with one stone - 
if the theory is not sound, then there is no need to pursue the 
engineering aspect. 

There is probably no great crime in doing it your way if your only 
goal is to produce a strong program,  but if your test fails you don't 
really know if it failed because the idea is stupid or if your 
implementation is unduly slow.   

If you are writing a paper you certainly do not want to claim results 
based solely on just your particular implementation of an idea that 
might be bad anyway.   There is nothing wrong with presenting an 
engineering paper about an interesting way to implement something, but 
even then it would be a pretty silly paper if you did not present at 
least some analysis on why someone would WANT to implement this thing 
i.e. it is a commonly used thing (a new sorting algorithm)  or has 
some practical application that you can identify. 




What is the motivation in this? I cannot conceive of any good
reason for running an experiment this way, so I would be
interested in opinions. It seems to me that making algorithms
heavier and then demonstrating that they are stronger with the
same number of playouts misses the point - why would one not run
an experiment under the same time conditions instead?


As I said,  when you are writing a paper you have to prove that 
something is theoretically sound, at least empirically.   If you are 
writing an engineering paper you might present an interesting 
implementation of some idea, but it should be an idea that has first 
been shown to be interesting in some way. For instance a new 
faster sorting algorithm is interesting. But you certainly don't 
want to present evidence that YOUR IMPLEMENTATION of YOUR IDEA is no 
good when you have not even attempted to establish whether the idea 
itself is viable. 

 
- Don




 




Christian
___
computer-go mailing list

Re: [computer-go] Experimentation

2009-07-07 Thread Christian Nentwich

Magnus,

along the lines of the argument I am trying to make: did you try your 
experiments with time limits from 30 seconds per game to five minutes 
per game (say), rather than playouts? Using gogui-twogtp this is 
relatively easy to achieve.
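For reference, a fixed-time match along those lines could be launched roughly as follows (flags quoted from memory and therefore only indicative - check `gogui-twogtp -help` for the exact options in your version; the engine commands are placeholders):

```shell
gogui-twogtp \
  -black "fuego" -white "valkyria" \
  -size 9 -komi 7.5 -games 500 -alternate \
  -time 5m -sgffile fuego-vs-valkyria -auto
```

Repeating the run with times of 30s, 1m, 2m and so on gives the strength comparison under equal time rather than equal playouts.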


I am obviously not associated with Fuego, but I guess it is reasonable 
to assume that Fuego's architecture was not designed to operate at 
limits like 2, 8 or 32 simulations in the same way Valkyria was. It is 
an interesting study in its own right for scalability purposes; but to 
go on to infer strength from it would seem like comparing apples and 
oranges.


Christian


Magnus Persson wrote:

Quoting Darren Cook dar...@dcook.org:


* The scaling behavior might be different. E.g. if Fuego and Valkyria
are both run with 10 times more playouts the win rate might change. Just
to dismiss an algorithm that loses at time limits that happen to suit
rapid testing on today's hardware could mean we miss out on the ideal
algorithm for tomorrow's hardware. (*)


I just happened to have experimental data on exactly this topic. This 
is Valkyria vs Fuego where I scale the number of playouts (Sims) x4 in 
each row.


Sims    Winrate  Err  N    WR     EloDiff
2       99.2     0.4  500  0.992  837
8       98.2     0.6  500  0.982  696
32      94.2     1    500  0.942  484
128     88.8     1.4  500  0.888  360
512     82       1.7  500  0.82   263
2048    83.2     1.7  499  0.832  278
8192    81.3     1.7  497  0.813  255
32768   75.5     3.6  139  0.755  196
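The EloDiff column appears consistent with the standard logistic Elo relation between win rate and rating difference; as a sketch (my reconstruction, not Magnus's script):

```python
import math

def elo_diff(win_rate):
    """Elo difference implied by a win rate, using the standard
    logistic model: p = 1 / (1 + 10 ** (-d / 400))."""
    return -400 * math.log10(1 / win_rate - 1)

# e.g. a 0.992 win rate corresponds to roughly an 837 Elo difference
```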

The data shows clearly that the 0.3.2 version of Fuego I use probably 
plays really bad moves with a high frequency in the playouts. With 
more playouts a lot of these blunders can be avoided, I guess, and the 
win rate goes down from 99% towards 80%. The question here is whether it 
goes asymptotically towards 80%, or perhaps 50%, with more simulations. 
Unfortunately I cannot continue this plot because I run out of memory 
and it takes ages to finish the games.


So the question is then: is there a fixed gain from the heavy playouts 
beyond 512 simulations, or does the effect of the heavy playouts become 
less and less important with larger tree sizes?


Note also that this is not only a matter of having heavy playouts or 
not. There is also a difference in tree search, since Valkyria and 
Fuego probably search their trees differently, and it could be that 
Valkyria searches deep trees inefficiently.

Maybe I should run a similar test against a light version of Valkyria 
to control for the search algorithm.


-Magnus
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/




--

Christian Nentwich

Director, Model Two Zero Ltd.
+44-(0)7747-061302
http://www.modeltwozero.com

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] any AMD performance experience?

2009-07-05 Thread Christian Nentwich

Hi all,

do any of you have performance figures for multi-processor AMD 
(opteron/shanghai) systems with your engines? Any single-processor?


There's a good offer here on AMD-based servers to get the second 
processor for free, so I am thinking of pouncing on it. I'm curious 
about how the better memory performance of the multi-processor AMD 
system impacts Go engines, versus the worse integer and branch 
prediction performance.


thanks,
Christian
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Scoring - step function or sigmoid function?

2009-07-01 Thread Christian Nentwich

Darren,

this sounds like a good insight, but only if a very large number of 
playouts have been performed. By contrast, the original poster writes:


 But in the opening, where the scoring
leaves are 300 moves away from the root, surely a putative half point
win doesn't translate to a significant advantage, where as a 100

This I don't buy. If the scoring leaves are 300 moves away, any random 
playout is way too unreliable to take the score into account. You might 
as well generate a score randomly. It could be a 100 point win on the 
first and 100 point loss on the second. In that case, it will be much 
safer to use Fuego's approach of slightly modifying the playout score 
from [0.0,1.0] to [0.0+s,1.0-s] where s depends on the size of the win 
relative to the board size.
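One plausible parametrisation of that [0.0+s, 1.0-s] idea (my sketch of the description above, with made-up constants - not Fuego's code - assuming a larger margin should pull the value slightly further from 0.5):

```python
def playout_value(black_margin, board_points=81, eps=0.1):
    """Map a playout's final margin to a value in [s, 1 - s]:
    a bare half-point win scores just above 1 - eps, a huge win
    approaches 1.0, and losses mirror this near 0.0."""
    s = eps * (1.0 - min(abs(black_margin) / board_points, 1.0))
    return 1.0 - s if black_margin > 0 else s
```

The step function still dominates the value, so the search keeps maximising the probability of winning, with only a mild tiebreak toward larger margins.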


It is also worth bearing in mind - again, only if the state space was 
only very superficially searched - that winning by large margins can 
entail taking large risks. Human players do that only when behind and 
otherwise actively seek the safer route.


Christian


On 01/07/2009 04:23, Darren Cook wrote:

It seems to be surprisingly difficult to outperform the step function
  when it comes to mc scoring. I know that many surprises await the mc
adventurer, but completely discarding the final margin of victory
just can't be optimal. ...
an mc program, holding on to a half point victory in the endgame,  is
a thing of beauty and terror. But in the opening, where the scoring
leaves are 300 moves away from the root, surely a putative half point
win doesn't translate to a significant advantage, where as a 100
point win would.
 


I had a breakthrough in my understanding of why it is surprisingly
difficult to outperform the step function when analyzing some 9x9 games
with Mogo and ManyFaces. Let's see if I can extract that insight into
words...

I observed that in many situations I could map the winning percentage to
the final score. E.g.
   50-55%: 0.5pt
   55-60%: 1.5pt
   60-65%: 2.5pt
   etc.

It wasn't as clear cut as that. In fact what I was actually noticing was
if I made a 1pt error the winning percentage for the opponent often
jumped by, say, 5%.

Thinking about why... In a given board position moves can be grouped
into sets: the set of correct moves, the set of 1pt mistakes, 2pt
mistakes, etc. Let's assume each side has roughly the same number of
moves each in each of these groupings.

If black is winning by 0.5pt with perfect play, then mistakes by each
side balance out and we get a winning percentage of just over 50%. If he
is winning by 1.5pt then he has breathing space and can make an extra
mistake. Or in other words, at a certain move he can play any of the
moves in the "correct moves" set, or any of the moves in the "1pt
mistakes" set, and still win. So he wins more of the playouts. Say 55%.
If he is winning by 2.5pts then he can make one 2pt mistakes or two 1pt
mistakes (more than the opponent) and still win, so he wins more
playouts, 60% perhaps. And so on.

My conclusion was that the winning percentage is more than just an
estimate of how likely the player is to win. It is in fact a crude
estimator of the final score.
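The mistake-set argument is easy to check with a toy simulation (my construction, assuming independent 1pt mistakes per side, not anything from the games analyzed above):

```python
import random

def win_rate(margin, n_moves=60, n_games=10000, p_mistake=0.3):
    """Both sides make independent 1pt mistakes; Black starts `margin`
    points ahead under perfect play. Returns Black's win rate, which
    grows with the margin - so the winning percentage behaves like a
    crude estimator of the final score."""
    wins = 0
    for _ in range(n_games):
        score = margin
        for _ in range(n_moves):
            score -= random.random() < p_mistake  # Black slips a point
            score += random.random() < p_mistake  # White slips a point
        wins += score > 0
    return wins / n_games
```

Under these assumptions a 0.5pt lead yields a win rate just over 50%, while larger leads climb steadily toward 100%, matching the 5%-per-point jumps observed.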

Going back to your original comment, when choosing between move A that
leads to a 0.5pt win, and move B that leads to a 100pt win, you should
be seeing move B has a higher winning percentage.

Darren

   


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Complicated seki with Ko

2009-07-01 Thread Christian Nentwich


|- - - - - - -
|. * * o . . .
|* . * o * . *
|o . o o * . .
|o * o . * . .
|o o o * . . .
|. * * * . . .
|. . . . . . .
|. * . . . . .

Black to play and kill :)

Christian


On 01/07/2009 17:41, Magnus Persson wrote:
In this case one needs to check that after the two stones are captured 
the capturing single stone can be recaptured bringing us back to where 
we started. If it is a big eye there is no recapture.


-Magnus

Quoting Brian Sheppard sheppar...@aol.com:


For black I think I prune this kind of two stone suicide
always no matter what the situation is (exception is ko). These
prunings are probably wrong in some extremely rare cases.


How can you tell the difference between this kind of two-stone
self-atari, and a self-atari of two stones within an opponent's
big eye, which could be necessary for life and death?

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/









Re: [computer-go] Re: fuego strength

2009-06-24 Thread Christian Nentwich

Don,

you might have your work cut out if you try to control inflation 
directly, that can turn into a black art very quickly. Multiple anchors 
would be preferable. An offline, X * 1000 game playoff between gnugo and 
another candidate anchor would be enough to fix their rating difference. 
If their head-to-head results drift apart during continuous play, the 
anchor rating could be tweaked.


I'm not sure if the worries voiced on this list about anchors are not 
somewhat overdone. Other bots, with improvements, may do just as well 
against an old version of Fuego as the full Fuego does, we don't know. 
Maybe they would do better than new versions of Fuego. All this reliance 
on gnugo introduces bias, too, and after all the anchor player is not a 
single control variable that determines the destiny of the server. 
Players will still play many different opponents. If Fuego keeps beating 
the anchor player but losing to everybody else, it still won't get a 
higher rank.


For me, gnugo as an anchor is fine, as I am still experimenting around a 
low ELO level. For authors of strong programs: I am quite surprised that 
you are not insisting on a much more highly rated anchor. I remember 
when KGS was anchored in the kyu ranks, many years ago. I found myself 7 
dan one day, until somebody intervened and reanchored the server. The 
territory far above a single anchor player is unsafe.


Christian



On 24/06/2009 05:28, Don Dailey wrote:
From what I have discovered so far, there is no compelling reason to 
change anchors.   What I really was hoping we could do is UPGRADE the 
anchor, since many programs are now far stronger than 1800.


Fuego is pretty strong, but not when it plays at the same CPU 
intensity as gnugo.   I went up to 5000 simulations and the match is 
fairly close and the time is about the same.Going from 3000 to 
5000 was quite a remarkable jump in strength and no doubt we could run 
at 10,000 and have substantial superiority - but that's not really 
what I had in mind.


So I think I agree with all the comments I have received so far - and 
my own observations and testing, there is no compelling reasons to 
change.


Now if fuego was substantially stronger using less resources, I would 
be more eager to change after carefully calibrating the difference,  
but that is not the case, at least not at 19x19.


There is another way to keep ratings stable and that is to monitor key 
players over time and build a deflation/inflation mechanism into the 
server to keep it in tune.For instance if there were no anchors,   
the server could monitor gnugo and if it were to gradually drop in 
rating, I could make minor adjustments to the ratings of winners and 
losers to compensate gradually over time.  For example the winner 
could get 1% more ELO and the loser could lose 1% less ELO when in 
inflation mode and just the opposite when in deflation mode.   In this 
way I could feed points into the rating pool, or gradually extract 
them as needed.   I don't plan to do this, but there is more than one 
way to skin a cat.
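The drift-correction mechanism described here could look something like this (a sketch with invented constants, not CGOS code):

```python
def update_ratings(r_winner, r_loser, k=16, drift=0):
    """Standard Elo update; drift=+1 feeds ~1% extra points into the
    pool (inflation mode), drift=-1 extracts ~1% (deflation mode)."""
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected)
    return (r_winner + delta * (1 + 0.01 * drift),
            r_loser - delta * (1 - 0.01 * drift))
```

With drift=0 the pool's total rating is conserved; the asymmetric gain/loss in the other modes slowly injects or removes points as the monitored reference player drifts.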


If we use more than one player as anchors,  I would still pick one 
player as the standard, and periodically adjust the other anchors 
based on their global performance rating - since they will all tend to 
drift around relative to each other and I would not want to make any 
assumptions about what the other anchors should be. We cannot just 
say gnugo is 1800, fuego is 2000, etc because we don't really know the 
exact difference between the 2.   But we could refine this over time.


- Don





On Tue, Jun 23, 2009 at 11:34 PM, David Fotland 
fotl...@smart-games.com mailto:fotl...@smart-games.com wrote:


I'd also prefer to use gnugo as an anchor.  Since fuego is under
development, new versions will be playing with an older version of
itself. Fuego will win more often against its old version.  I don't
care about it distorting Fuego's rating, but it will distort the
rating system.  If new fuego is on with few other programs it will
gain rating points; then when other programs come on, new fuego will
give those points to the other programs as its rating drops.  The
effect will be to make the rating system less stable, so it's hard to
use cgos to evaluate new versions of programs to see if they are
stronger.

I think it's best to use an anchor that's not under active
development.  I
like gnugo since there is lots of published results against it,
and it is
not changing rapidly.  Also it has a different style than the
monte carlo
programs, so it's more likely to expose bugs in the monte carlo
programs.

David

 -Original Message-
 From: computer-go-boun...@computer-go.org
 [mailto:computer-go-boun...@computer-go.org] On Behalf Of Hideki Kato
 Sent: Tuesday, June 23, 2009 5:15 

[computer-go] cgos client (python version) has moved

2009-06-23 Thread Christian Nentwich

All,

FYI, the Python version of the CGOS client has moved into the main CGOS 
sourceforge area:


http://cgos.sourceforge.net/client-python/

There is also a new release (0.3.0) out that supports multiple engines.

Christian
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] cgos client (python version) has moved

2009-06-23 Thread Christian Nentwich

Hi Don,

thank you, that is very useful. Definitely food for thought. I am 
probably going to have time to take a look at this tomorrow. If you have 
a developer copy of the server running on a private port, and can mail 
me that off-list, I'd like to give it a try.


A few questions:
  - What is a good way of referring to this new version of CGOS? CGOS 2?
  - What happens if the old server is sent e2? Does it drop the 
connection?
  - Probably more of a user question, but what is the default venue 
exactly? You said 9x9, I guess that's 9-300-7.5, as it is now?


And unrelated:
  - I've sometimes wanted to see who is currently playing on the 
server. Will this be possible (e.g. through a web page that gets 
updated every few minutes)?


Christian

On 23/06/2009 14:29, Don Dailey wrote:

Christian (and anyone else interested) in the new CGOS protocol:

There are 2 minor change to the protocol.   I'm hoping to do a test 
drive today as the new server is functional, though not 100% complete.


Here are the two changes:

First of all, when the server asks for the protocol,  you can continue 
to use the e1 protocol and it will default to a default venue (which 
will be 9x9 for now.)   In this way the old clients will continue to 
work.


But if you specify the new 'e2' protocol, such as:  'e2 cgosPython 0.2 
beta'  you will next get something like this:


venues 9-300-7.5 11-420-7.5 13-540-7.5 15-660-7.5 17-780-7.5 19-900-7.5

This is a list of available venues and could be anything the server is 
configured for.  Each venue is separated by a space.   A hyphen 
separates the 3 elements within a venue which are boardsize, time in 
seconds, and komi.   So 15-660-7.5 is:


boardsize: 15x15
time control: 11:00 (11 minutes, zero seconds)
komi: 7.5
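A client could split these venue strings like so (a sketch; `parse_venue` is my name for an illustrative helper, not part of the protocol or the existing client):

```python
def parse_venue(venue):
    """'15-660-7.5' -> (15, 660, 7.5): boardsize, seconds, komi."""
    size, seconds, komi = venue.split("-")
    return int(size), int(seconds), float(komi)

# The server's venues line, minus the leading keyword:
line = "venues 9-300-7.5 11-420-7.5 13-540-7.5 15-660-7.5"
venues = [parse_venue(v) for v in line.split()[1:]]
```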

You may want to provide some mechanism for informing the user of the 
software which venues are available, even if it is as simple as just 
displaying it.If the user tries to configure a venue that is not 
available the connection will abort.   You can also respond with a 
'quit'  if the client  notices that the user did not specify an 
available venue.   Of course all of that can be handled any way the 
client sees fit to do or you can just ignore the issue and require the 
user to configure a proper venue (the available venues will be posted 
on the web page of course.)


The proper response is ONE of those protocols, such as  13-540-7.5

After this, everything remains the same as in the old protocol and the 
server will next get the username and password as usual.


The second (optional) change is this:

If you respond to the protocol message with something like this:

 t2 cgosNoGo 0.1 alpha - Java engine client for NoGo

The client will be placed in test mode.  Nothing else changes about 
the protocol,  it is the same as e2 but this triggers the server to 
play test games with the client which will not be rated or 
recorded.    You will then receive the venues message which you must 
respond to just as in the e2 protocol.


Another minor change, which does not directly affect the protocol, is 
that the server will abort any connection that seems unduly delayed.   
For instance if the client does not respond to messages after a few 
seconds (such as gameover or venues) then the server will abort the 
connection. This is to avoid a needless buildup of bogus connections.




FYI, here is a list of clients CGOS has seen for the e1 protocol.  I'm 
leaving out the clients I have provided:


e1 CGOS soket engine client 1.3 windows7 by WangManDong
e1 CGOS pike client 0.9 by Gunnar Farneback
e1 CGOS tcl engine client 1.2 [platform] by WangManDong
e1 MROZTEST 1
e1 NaruGoTest
e1 Tesuji Software CGOS Client
e1 cgosNoGo 0.1 alpha - Java engine client for NoGo
e1 cgosNoGo 0.1 alpha - Java engine client for Nogo
e1 cgosPython 0.1 beta
e1 cgosPython 0.2 beta
e1 cgosPython 0.3.0 beta
e1 joe
e1 myCtest
e1 test client
e1 testing
v1 0.1 NaruGo client for CGOS.
v1 NaruGoTest

- Don





On Tue, Jun 23, 2009 at 8:20 AM, Christian Nentwich 
christ...@modeltwozero.com mailto:christ...@modeltwozero.com wrote:


All,

FYI, the Python version of the CGOS client has moved into the main
CGOS sourceforge area:

http://cgos.sourceforge.net/client-python/

There is also a new release (0.3.0) out that supports multiple
engines.

Christian
___
computer-go mailing list
computer-go@computer-go.org mailto:computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/





Re: [computer-go] Paper: Beta Distribution

2009-06-23 Thread Christian Nentwich

Peter,

I tried to reproduce this, so I gave it a whirl, and the win rate 
against UCB1-TUNED with a first-move priority of 1.1 (as in MoGo) was 
only 33%. That was using uniform random playouts.
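For concreteness, the baseline selection rule is, as I understand it, along these lines (a sketch from the published UCB1-TUNED formula with MoGo-style first-play urgency; not the paper's code):

```python
import math

def ucb1_tuned(mean, visits, parent_visits, fpu=1.1):
    """UCB1-TUNED value for a move with 0/1 playout results; unvisited
    moves get the first-play-urgency constant instead."""
    if visits == 0:
        return fpu
    variance = mean - mean * mean  # Bernoulli variance estimate
    v = variance + math.sqrt(2 * math.log(parent_visits) / visits)
    return mean + math.sqrt(math.log(parent_visits) / visits * min(0.25, v))

def select(moves):
    """moves: list of (mean, visits); pick the index maximising the bound."""
    n = sum(v for _, v in moves) or 1
    return max(range(len(moves)), key=lambda i: ucb1_tuned(*moves[i], n))
```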


What was the playout policy you used for this?

Christian

On 18/06/2009 21:04, Peter Drake wrote:

An improvement on the UCB/UCT formula:

Stogin, J., Chen, Y.-P., Drake, P., and Pellegrino, S. (2009) “The 
Beta Distribution in the UCB Algorithm Applied to Monte-Carlo Go”. In 
Proceedings of the 2009 International Conference on Artificial 
Intelligence, CSREA Press.


http://webdisk.lclark.edu/drake/publications/BetaDistribution.pdf

Peter Drake
http://www.lclark.edu/~drake/



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/





Re: [computer-go] cgos client (python version) has moved

2009-06-23 Thread Christian Nentwich



And unrelated:
  - I've sometimes wanted to see who is currently playing on the
server. Will this be possible (e.g. through a web page that gets
updated every few minutes)?


The web page will be in PHP,  so when you refresh the web page you 
will get the exact and current state of the server.   In other words 
if I log on and 2 seconds later you hit the site,  you will see that I 
am logged on.


That would  be great. Definitely good enough for me.

I am also considering the idea of sending an info message to every client 
when someone logs on or off and perhaps even to broadcast the result 
and pairings of each game.


What do you think of that?   If I do that, then a sophisticated enough 
client could track the state of the server without requiring the user 
to constantly refer to a separate web page.  Of course this does not 
change the protocol since info is already defined as part of the 
protocol,  although to be most useful I would have to standardize the 
exact format of those kinds of messages.


I personally have no requirement for that. I can put it in the client if 
somebody else wants it. I actually considered parsing the current next 
round info messages, but decided against it. I did not want to create a 
bad dependency, and I think I would continue to treat info messages as 
unparsed text.


I would probably steer clear of putting this in the protocol, as it may 
contribute to lag. Also, if somebody writes a badly engineered client 
that reconnects too quickly, all current players would get flooded with 
updates.


Christian
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] cgos clients

2009-06-17 Thread Christian Nentwich

Lukasz,

your client doesn't seem to be displaying all the info for some reason:

 -c specifies the name of the config file - this is MANDATORY
-k specifies a file to create which will stop the client after the current game
-p specifies which player plays the first game
-s displays a sample configuration file to stdout and exits 

If you add multiple engines, the priority fields will be added up, 
normalized to 1.0 and a player will be chosen probabilistically.
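To illustrate the selection rule just described, here is a sketch of priority-weighted engine choice; the function and engine names are illustrative, not the actual client code:

```python
import random

def choose_engine(priorities):
    """Pick an engine name with probability proportional to its priority
    (priorities are normalized implicitly by sampling from their sum)."""
    total = sum(priorities.values())
    r = random.uniform(0, total)
    acc = 0.0
    for name, weight in priorities.items():
        acc += weight
        if r <= acc:
            return name
    return name  # guard against floating-point edge at the top end

# Example: engineA should be chosen about twice as often as engineB.
engines = {"engineA": 2.0, "engineB": 1.0}
picked = choose_engine(engines)
```

Dividing each priority by the total gives the normalized-to-1.0 probabilities; sampling against the running sum avoids having to store them separately.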


Christian
p.s. remember also http://cgos-python.sourceforge.net/, which does not 
have this feature (yet!), but has others :)


On 17/06/2009 00:29, Łukasz Lew wrote:

Hi,

I have a couple of question about cgos client programs.

Why are there two links to 32-bit Linux clients?

The first one is on page
http://cgos.boardspace.net/9x9/
http://cgos.boardspace.net/public/cgos3-x86_32.zip
and is broken, but the page refers to it as version 1.1 compared to
the one on the main page
http://cgos.boardspace.net/
http://cgos.boardspace.net/software/cgosGtp-linux-x86_32.tar.gz
which works but the version is 0.98.

Can I add some more engines to config file without restarting cgos client?

What is the -p PLAYER_FIRST option ?

cgosGtp 0.98 alpha - engine client for CGOS Linux-x86_64 by Don Dailey
Usage: /home/lew/cgos/cgosGtp-linux-x86_64/main.tcl  -c CONFIG_FILE
-k KILL_FILE  -p PLAYER_FIRST -s

What is the priority in config file?

Thanks
Lukasz


Re: [computer-go] opening book structure

2009-06-17 Thread Christian Nentwich


Are you asking about in-memory storage, or on disk?

On disk, you probably want to make sure that it's as easy as possible to 
maintain - that part is up to your taste. Some people like a single SGF file with 
variations. You can do mirroring and transpositions when you load the book.
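A sketch of how that load-time mirroring/transposition handling might work, by reducing each position to a canonical form under the eight board symmetries; the representation and names are illustrative:

```python
def symmetries(stones, size):
    """All 8 board symmetries of a set of (x, y, colour) stones."""
    def rot(s):   # rotate 90 degrees
        return frozenset((y, size - 1 - x, c) for (x, y, c) in s)
    def flip(s):  # reflect along the main diagonal
        return frozenset((y, x, c) for (x, y, c) in s)
    forms = []
    cur = frozenset(stones)
    for _ in range(4):
        forms.append(cur)
        forms.append(flip(cur))
        cur = rot(cur)
    return forms

def canonical(stones, size):
    """A single representative form, so transposed/mirrored positions
    hash to the same opening-book key."""
    return min(symmetries(stones, size), key=sorted)

# Example: a position and its diagonal mirror map to the same book key.
pos = [(2, 3, 'B'), (4, 4, 'W')]
mirrored = [(3, 2, 'B'), (4, 4, 'W')]
key = canonical(pos, 9)
```

Storing the book keyed on `canonical(...)` means the SGF file only needs one copy of each line, and all eight orientations are found at lookup time.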


Christian

Willemien wrote:

I was puzzling over the best way to organise an opening book.

I am writing a program intended to be very strong at 7x7 Go (as a
preliminary to writing a very strong 9x9 program), and am wondering
what the best structure for an opening book is.

Where I am at the moment is:

There are about 4 fundamentally different ways to structure it:

1. A position-move collection (if the position is such, make that move)
2. A position-value collection (this position is good, that position is
bad for the colour who is on the move)
3. A professional game collection (let the program just play like the pros)
4. A game tree (a game with lots of variations)

All have their advantages and disadvantages.

1 is close to what the program uses, but is a long list (difficult to maintain?).
2 has the same problem as 1, maybe even more so, but is easy for the program to use.
3 is reasonably easy to find (for standard sizes: GoGoD etc.), but has
many gaps (what if my opponent doesn't play like Go Seigen?).
4 is easy to use with Go editors (Drago etc.), but has problems with
transpositions etc.

I would like to hear from others what kind of opening book they use and
why, how they maintain it, and whether I am missing important things.





Re: [computer-go] New CGOS - need your thoughts.

2009-06-16 Thread Christian Nentwich
Whatever the eventual decision is - personally I would love a fast-play 
venue as an alternative, with separate rating - please don't worry too 
much about engines with fixed playouts, or engines that cannot handle 
certain time limits.


The GTP client sitting between the engine and server will be able to 
protect the engine, by either keeping it out of games it cannot support 
or issuing it with reconfiguration commands.


Christian


Michael Williams wrote:

I vote for 2 venues, each optional. Separate rating pools are a must.


Łukasz Lew wrote:

Maybe we could agree that 1 day out of 7 each week would be played at
6-times-faster time controls.
The same bots, connections, logins, the same number of games per week.
Different ratings, of course.
This would be a problem only for hardcoded bots with no time control.

The advantage would be that we would see how different algorithms 
(bots) scale.

If the ratings were very similar for most bots, it would mean that
we could test new ideas faster.
We would know which ideas can be tested at fast time controls.

Lukasz

2009/6/16 Don Dailey dailey@gmail.com:
From what I can see, there is resistance to this idea - so what I'm
going to do is provide venues which are standalone, but make it possible
later to add a time control. In other words, for now there will be only
one time control per board size, but the server will be flexible enough
that other venues can be added if it ever gets popular enough that we
have 40 or 50 players always online. But they will be separate venues,
scheduled independently.


- Don


On Tue, Jun 16, 2009 at 8:08 AM, Isaac Deutsch i...@gmx.ch wrote:
I'm voting for 2 time settings: One normal and one fast (so maybe 5 
min

and 1 min on 9x9).






--

Christian Nentwich

Director, Model Two Zero Ltd.
+44-(0)7747-061302
http://www.modeltwozero.com



Re: [computer-go] MCTS, 19x19, hitting a wall?

2009-06-11 Thread Christian Nentwich

Terry,

I don't think the part of the argument looking at hardware is sound. You 
are assuming that computing power is going to continue to provide a 
linear strength increase with every doubling. I think the argument being 
made by a few of the previous posters is that the strength curve is 
showing asymptotic behaviour, and it is very possible that it will tail 
off somewhere soon with the current generation of algorithms.


The 19x19 board, lest anybody forget, is huge: 
http://homepages.cwi.nl/~tromp/go/legal.html. A few gazillion percent of 
added speed is not enough. Faster hardware *will*, however, help us 
execute algorithms that are infeasible now, and I think that is part of 
the argument Don is making.


I have a lot of respect for Olivier and people like Magnus who put all 
this effort into experimenting with heavy playout patterns. 
Unfortunately, it's a bad sign that there is so much work now going into 
pattern tuning for MCTS on 19x19. When we reach a tuning stage like 
that, I get a feeling of déjà vu. That's what all the traditional 
programs started spending time on.


Christian


On 11/06/2009 07:04, terry mcintyre wrote:



From: Don Dailey dailey@gmail.com
 My basic observation is that over the several-year period I have 
been in this forum, I have detected a huge amount of resistance to 
the idea that hardware could have anything to do with computer go 
strength, despite the fact that it keeps proving to be so. The 
resistance is strong enough that we have to explain it away when it 
happens, by saying things like we have hit a wall and it won't happen 
any more, thank goodness.


You overstate the resistance - it's not that anybody is saying 
hardware is irrelevant. In fact, did we not have a recent discussion 
over the merits of two different CPU variations? We've seen a fair 
number of multi-processor entrants at competitions, besides.


The question is: how much does hardware matter? So far, we have one 
data point to work with: David Fotland's excellent Many Faces of Go is 
about one stone stronger when it uses 32 cores instead of 2. That's 
nice to have, but if we extrapolate, a factor of 16 is 4 doublings or 
about 6 years, in terms of Moore's Law. It would take 9*6, 
roughly 55 years, to reach pro-level play.
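For what it's worth, that extrapolation can be checked directly; the figures below assume (roughly) one stone per 16x of hardware, a Moore's-Law doubling every 18 months, and 9 stones to pro level:

```python
import math

# Assumptions: Many Faces gains ~1 stone going from 2 to 32 cores,
# Moore's Law doubles transistor counts every 1.5 years.
cores_ratio = 32 / 2
doublings = math.log2(cores_ratio)     # doublings needed per stone
years_per_stone = doublings * 1.5      # years per stone at Moore's pace
years_to_pro = 9 * years_per_stone     # 9 stones to pro level
print(doublings, years_per_stone, years_to_pro)
```

A factor of 16 is four doublings, i.e. about six years per stone, so the whole climb takes on the order of half a century at this rate.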


We don't have data from Mogo yet, but I wonder if they are seeing 2-3 
stones improvement for their 3200-node version?


The less patient among us may wish to seek algorithmic improvements to 
bridge the gap a bit sooner.


Got to be some reason for bright programmers and mathematicians to 
work on the problem, after all; otherwise we could just wait the decades 
it would take Intel and AMD to deliver 32,768 cores on a single chip - 
or will it be a silicon wafer?


In other fields, algorithmic improvements have led to running-time 
improvements of multiple orders of magnitude. Humans manage to complete 
30-minute games on a 19x19 board, so we do have evidence that the game 
can be played well at such a speedy pace.






[computer-go] Python CGOS Client

2009-06-10 Thread Christian Nentwich

All,

I have released an alternative client for CGOS, written in Python. I had 
some features in there that are useful to me, and I thought maybe others 
could benefit from it.


Some key interesting features:
  - GTP extensions that tell your engine who the opponent is, and what 
the result was (useful if you persist your own SGF)

  - Local SGF persistence
  - Local mirroring to a GTP observer engine of all moves. This lets 
you watch games locally using GoGUI. Nice for the server, and nice 
graphics :-)

  - Usual failover using the CGOS server's move replay
  - Separate logging. The console output has only high-level stuff 
(like what colour your engine is playing, and against who)


Head on over here: http://cgos-python.sourceforge.net/

I would be interested in hearing feedback, and ideas for further 
features. I'm thinking of adding mail notification for very long 
sessions. I run this on Windows 7 (64 bit) and XP (32 bit). If you run 
into issues on Linux (shouldn't.. but...), let me know. Contributors 
also welcome, of course.


Christian
p.s. Don: if you are reading this - this client obeys the TCL client's 
convention of not hammering the server on reconnects (uses 30 seconds + 
random number), and the rest of the protocol.
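The reconnect convention mentioned in the postscript amounts to something like the following; the exact jitter range is an assumption, not taken from either client's source:

```python
import random

def reconnect_delay():
    """Delay before reconnecting: a 30-second base plus random jitter,
    so many restarting clients don't hit the server in lockstep."""
    return 30 + random.uniform(0, 30)
```

Without the random component, every client restarted by the same outage would retry at the same instant and hammer the server in waves.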



Re: [computer-go] Python CGOS Client

2009-06-10 Thread Christian Nentwich

Don,

great. I will be happy also to merge the project into your SVN, as Jason 
suggested, subject to some administrative needs that I will e-mail the 
two of you about, off-list.


I'd be more than happy to help on the client side of the new CGOS. 
Upgrading this Python client to support the new protocol (or both at the 
same time) will not be difficult at all. It will then shield the 
programs behind it, as a good adapter should.


Christian


On 10/06/2009 20:23, Don Dailey wrote:
Awesome! This will be a good thing for CGOS and I will put a link 
to it on my site soon.


I plan to try to stick to the exact same protocol when I do the new 
server. I will almost certainly have to make minor changes to 
accommodate the ability to decide which time controls to play in. In 
order not to break your client I can design it so there is a default 
that doesn't have to be communicated, but I will discuss it with you 
when the time comes.


- Don



Re: [computer-go] Tweak to MCTS selection criterion

2009-06-07 Thread Christian Nentwich


This is kind of interesting. Is anybody measuring their playout 
performance in real time at the moment and performing this sort of 
computation, to check whether overtaking the leading move is mathematically 
impossible?


It's interesting with UCT because of the interplay between the time 
management algorithm and the exploration parameter. Suppose you are 
early in the game, and your time management algorithm says you should be 
spending 10 seconds on a move. After six seconds, because your parameter 
is skewed towards exploitation, you already have 90% more trials on the 
leading move than on anything else, you calculate that it cannot be 
overtaken, and you abort.
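A minimal version of that cannot-be-overtaken check might look like this; the playout rate, time figures, and move names are illustrative:

```python
def can_stop_early(visit_counts, playouts_per_sec, seconds_left):
    """True when no other move can catch the current visit leader,
    even if every remaining playout went to the runner-up."""
    counts = sorted(visit_counts.values(), reverse=True)
    if len(counts) < 2:
        return True
    leader, runner_up = counts[0], counts[1]
    remaining = playouts_per_sec * seconds_left
    return runner_up + remaining < leader
```

With 10,000 playouts per second and 4 seconds left, a 90,000-to-40,000 visit lead is unassailable (40,000 + 40,000 < 90,000), so the search could stop and bank the time.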


Some things come to mind:
- If this behaviour happens consistently, i.e. you end up spending 
too little time on all your moves, is your exploitation parameter 
correct? There is a reason you use a time management algorithm to 
allocate a lot of time in the beginning. You may be doing pointless 
searches.
- Would you rather exploit less in that case, spending your 
allotted time on more exploration, or would you keep 
searching rather than aborting, and reuse the tree for pondering and/or 
your follow-up move?


Given that people spend a lot of time experimenting to find good 
exploitation/exploration parameters, I suspect that the last option 
(obey the time management, continue searching, reuse the tree) is the 
better one.


Christian


Don Dailey wrote:



2009/6/6 dhillism...@netscape.net

I think this is one of those design decisions that nobody takes on
faith. We all wind up testing it both ways and in various
combinations.

An additional advantage of using the number of visits is that
branches at the root become mathematically eliminated and can be
pruned away. It often also allows the search to be stopped early.
It can save a lot of time for forced moves.


Let me see if I understand what you are saying here.

If you have set a goal time for thinking of 10 seconds, and let's say 
6 seconds have elapsed, then it might be possible to stop the 
search early if you do the math and see that it's not possible for any 
other move to have more visits given an additional 4 seconds?

So when there is a position that has only a single clearly best move, 
perhaps a capture that cannot wait or a necessary atari defense, then 
you can probably save as much as (or close to) half the thinking time 
on that move. Is this correct?

So if this understanding is correct, then it still makes sense to apply 
the Pebbles test at this point and check whether another move has 
a higher score before stopping the search. If the move really is 
forced and necessary, then the answer will be no and you can stop 
early. If there is a move that currently appears better but with 
too small a sample, then it seems foolish to stop early. If the 
move's score is the result of just a few lucky playouts, that will 
quickly be revealed and you can still stop early.


- Don



 


- Dave Hillis



-Original Message-
From: Michael Williams michaelwilliam...@gmail.com
To: computer-go computer-go@computer-go.org
Sent: Sat, 6 Jun 2009 5:07 pm
Subject: Re: [computer-go] Tweak to MCTS selection criterion

Another strategy to be considered is to not allow the thinking to
cease until the maximum win rate and the maximum visit count agree
on the same move. Obviously this requires some extra code to make
sure you don't lose on time, etc. 
 
Brian Sheppard wrote:
 When a UCT search is completed, the usual selection criterion is 
choose the move that has the most trials. This is more stable than 
choosing the move that has the highest percentage of wins, since it is 
possible to have an unreliably high percentage if the number of trials 
is small.

 I have a small tweak to that criterion. Pebbles uses choose the move 
that has the most wins. This rule selects the same move as the 
conventional criterion in almost every case. The reason why Pebbles' 
rule is superior is revealed in the cases where the moves differ.

 When Pebbles chooses a different move than the conventional criterion, 
it is because Pebbles' move has more wins in fewer trials. When that 
happens, Pebbles' move would inevitably become the move with the most 
trials if searching were to continue. So there is actually no downside. 
Of course, the upside is minor, too.

 For validation, Pebbles has been using both strategies on CGOS games. 
At present, the conventional selection strategy has won 341/498 = 
68.47%. Pebbles' strategy has won 415/583 = 71.18%. This isn't 
statistically conclusive or anything (0.7 standard deviations; we would 
need 4 to 8 times as many trials for strong
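For concreteness, the selection rules being compared in this thread might be sketched like this; the statistics layout and move names are invented for illustration:

```python
def pick_by_visits(stats):
    """Conventional rule: choose the move with the most trials."""
    return max(stats, key=lambda m: stats[m]["visits"])

def pick_by_wins(stats):
    """Pebbles' rule: choose the move with the most wins. A move can only
    out-win the visit leader with a higher win rate, so when the two
    rules differ, the most-wins move would eventually lead on visits too."""
    return max(stats, key=lambda m: stats[m]["wins"])

stats = {
    "c3": {"visits": 1000, "wins": 600},
    "e5": {"visits": 900,  "wins": 620},  # fewer trials, more wins
}
```

On this example the two rules disagree: the conventional rule picks c3 (more visits), while Pebbles' rule picks e5, whose higher win total implies a higher win rate.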

Re: [computer-go] Problems with CGOS

2009-06-03 Thread Christian Nentwich

Don,

you're probably on the right track with the database. I'm not sure why 
you'd torture yourself with C in this case - the string processing, 
architecting, etc., won't be fun - and KGS does very well with Java.

Christian


Don Dailey wrote:

Hi Remi,

I noticed that when CGOS first came up,  it was very fast but as the 
sqlite3 database gets bigger and bigger, it gets slower.  

So I believe this is a design flaw in CGOS itself.   I wrote CGOS 
without having had any experience writing servers.  

I think I know now how to build a super fast server and I plan to do 
that very soon.  

I would like to do a test where I start a brand new server instance 
from scratch to see what happens. I cannot do that at the moment 
because I am at work and do not get paid to do this on company time, 
but perhaps tonight or tomorrow I can test this.


I can build a new server in a very short time - I plan to do this in C 
and I have all the support routines ready to go - so it's mostly 
piecing things together, not redesigning code from scratch. I think 
the end result will be an impressively efficient server with low 
memory usage; memory is probably a big part of the current problem.


It would  come with a new ajax style web page.  


- Don




On Tue, Jun 2, 2009 at 1:34 PM, Rémi Coulom 
remi.cou...@univ-lille3.fr wrote:


Hi,

I have just connected Crazy Stone to CGOS and noticed a few problems:
- pairings seem to take forever (10 minutes or so)
- there is a lot of lag during games (up to one minute for a move,
which causes losses on time)

I tried the cgosview-linux-x86_32 client, and got a could not
execute error. If I run it with no parameter, it runs fine
(except that default parameter don't connect to any server). If I
run it with parameter, it fails with could not execute. This is
not a big problem, because I kept a copy of an old version that
works very well.

Rémi





Re: [computer-go] Congratulations to Steenvreter!

2009-06-02 Thread Christian Nentwich

Nick,

I am hoping that I can join this at some point, at the lower end of the 
field to start with :)


Is it possible to set a bar at these tournaments? In human McMahon 
tournaments, that very successfully allows a top tier of competition 
while guaranteeing at least some fun for everybody else.


Christian


Nick Wedd wrote:
In message 8cbb1200f1dffd9-cbc-...@mblk-m02.sysops.aol.com, 
dhillism...@netscape.net writes

One factor is that there seems to be a narrow range between too few
entrants and too many. For any given contest, the potential pool
includes an elite few who have a chance at first place and maybe a
couple who have a new or newly improved bot. There is a larger group,
back in the pack, whose last breakthrough was a while ago. For many of
us in that last group, it would be easy enough to enter, but hard to
know if that would help or hinder.


My view is that more entrants, including weaker entrants, help.  I 
used to encourage Aloril to enter his deliberately weak bots, not only 
to fill out the numbers, but to provide suitable opponents for first 
time entrants.


I see a purpose of these events as providing a training ground for 
more significant events.  Some programmers concentrate too much on 
trying to get the bot to play well, rather than on doing basic things 
right.  A bot that plays badly but beats IdiotBot shouldn't be too 
hard to achieve - so if a bot plays well but loses to IdiotBot, it is 
doing something wrong which really ought to be fixed.


Nick





Re: [computer-go] Re: Implications of a CPU vs Memory trend on MCTS

2009-05-13 Thread Christian Nentwich
 
visited before.   I thought I already conceded that point? Didn't 
I already say that this is an idea for the first few moves?


But this idea will save you time that can be spent on later moves, so 
it can actually benefit the moves you make later in the game. But 
more importantly, it can prevent you from being in a losing position by 
move 20 because of a bad move choice on move 5.


- Don







 









Re: [computer-go] Implications of a CPU vs Memory trend on MCTS

2009-05-12 Thread Christian Nentwich
. Here is a thought experiment: if you run the bzip compressor on your 
tree, would you get 10% compression? My guess is more like 95% 
compression.

The nodes in my own program are horribly fat. But I blame Mogo; they 
haven't published how they represent the tree! Joking aside, there is a 
lot of potential here.

Another approach is to play well using a smaller tree size, which means 
using more CPU power on each trial. This is the heavy playouts 
conclusion. I speculate that adding knowledge to the tree search and 
trials will pay off, and can be beneficial even if the program is not 
stronger, because smaller memory usage means that the program has a 
higher limit at longer time controls.

Note that benchmarking programs in lightning games on CGOS can bias our 
decisions in the wrong direction. If we use memory to make our programs 
faster then it looks like a big win. But at tournament time controls 
that choice merely saturates RAM more quickly, at a smaller tree size.

So that's all I have to say. Just something to think about, and if you 
think of a good way to represent the tree, I hope you will return the 
favor.

Best,

Brian

___
computer-go mailing list
computer-go@computer-go.org mailto:computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/





[computer-go] published stats on string merge frequencies, etc

2009-04-29 Thread Christian Nentwich

Hi all,

I've been looking through the archives trying to find out if anybody had 
published information on the following during uniform random playouts 
and/or real games:

 - Frequency of string merges
 - Frequency of placing stones in empty space
 - Average length of strings, etc.

I noticed that quite a few people had experimented; has anybody put 
their stats up anywhere on the web, or in a paper?


regards,
Christian