Re: [computer-go] MoGo and Fuego question.

2009-11-17 Thread Olivier Teytaud

 How to activate patterns?
 Why are they not on by default?

They are on by default, but they are disabled if the pattern files
are not found in the path at run time (you can check this on stderr:
the number of patterns read will be 0 if they were not found).
In 19x19 there is a big impact.

(In the release, there are no patterns.)
Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] MoGo and Fuego question.

2009-11-13 Thread Łukasz Lew
Hi,

Is there a way to set up MoGo so that it has the same playing strength
independently of the CPU and other factors?
For instance, by fixing the number of playouts per move?

The same question goes to the Fuego developers.
Will I achieve this effect with this command?
uct_param_player max_games 10

Thanks,
Lukasz
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo Zones

2009-10-26 Thread Olivier Teytaud
  (Sylvain et al. 2006) describes the use of CFG-based zones in random
 simulations to simulate only the local position and tune the score based
 on a few thousand simulations of the area outside the zone. The idea doesn't
 seem very practical (especially with RAVE, though there seem to be other
 problems as well), but I'm wondering whether MoGo or anyone else is still
 using it, perhaps in a modified form?


Not in that form in Mogo. We have a local tool for heavy playouts which uses
local simulations, but for the moment this version is weaker than the light
playouts - even with a fixed number of simulations.

Best regards,
Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] MoGo Zones

2009-10-24 Thread Petr Baudis
  Hi!

  (Sylvain et al. 2006) describes the use of CFG-based zones in random
simulations to simulate only the local position and tune the score based
on a few thousand simulations of the area outside the zone. The idea doesn't
seem very practical (especially with RAVE, though there seem to be other
problems as well), but I'm wondering whether MoGo or anyone else is still
using it, perhaps in a modified form?

  Thanks,

-- 
Petr Pasky Baudis
A lot of people have my books on their bookshelves.
That's the problem, they need to read them. -- Don Knuth
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo policy: capture stones anywhere?

2009-09-01 Thread Peter Drake

On Aug 31, 2009, at 10:12 PM, terry mcintyre wrote:

If you maintain a list of strings ( connected groups ) of stones  
and their liberty counts - or perhaps the actual liberties - it  
should be fairly quick to find a string with just one liberty.


I'm currently using pseudoliberties, so that might be tricky. As David  
Fotland points out, though, the difference shouldn't be huge.


In any case, if I read the explanation correctly, this happens  
infrequently, if several less-expensive tests fail; the cost would  
be amortized over many trials.



Ah, that helps some. I was testing just this aspect of the policy; if  
I do the escaping and the patterns first, the hit is not as bad  
(although worse than I'd like).


My current implementation is:

- Traverse the board. For each point that is the root of an enemy chain,
  check if that chain is in atari:
  - If it has 4 pseudoliberties, it's trivially not in atari.
  - Otherwise, traverse it looking for liberties, abandoning the search
    if a second liberty is ever found.
  - If only one liberty is found, increase the number of stones captured
    by playing at that liberty by the size of the chain.
- From those moves that capture, choose the one that captures the most
  stones.


Is there a better way?
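
For reference, a minimal sketch of the search described above (hypothetical
names and a plain 1-D board array, not Orego's actual data structures; the
4-pseudoliberty shortcut is omitted):

// Minimal sketch of the capture search described above (hypothetical
// names, not Orego's actual code). For each enemy chain we flood-fill
// once; if it has exactly one liberty, playing there gains the whole
// chain, and we return the point with the largest total gain.
import java.util.*;

public class CaptureSearch {
    static final int SIZE = 9, EMPTY = 0, BLACK = 1, WHITE = 2;

    static int[] neighbors(int p) {
        int r = p / SIZE, c = p % SIZE, k = 0;
        int[] buf = new int[4];
        if (r > 0) buf[k++] = p - SIZE;
        if (r < SIZE - 1) buf[k++] = p + SIZE;
        if (c > 0) buf[k++] = p - 1;
        if (c < SIZE - 1) buf[k++] = p + 1;
        return Arrays.copyOf(buf, k);
    }

    /** Returns the point capturing the most enemy stones, or -1 if there is no capture. */
    static int biggestCapture(int[] board, int enemy) {
        int[] gain = new int[board.length];          // stones captured by playing at each point
        boolean[] seen = new boolean[board.length];
        for (int p = 0; p < board.length; p++) {
            if (board[p] != enemy || seen[p]) continue;
            // Flood-fill the chain containing p, counting distinct liberties.
            // We stop caring about liberties once a second one is found, but
            // keep filling so the whole chain is marked as visited.
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(p);
            seen[p] = true;
            int chainSize = 0, firstLib = -1, libCount = 0;
            while (!stack.isEmpty()) {
                int q = stack.pop();
                chainSize++;
                for (int n : neighbors(q)) {
                    if (board[n] == EMPTY) {
                        if (firstLib == -1) { firstLib = n; libCount = 1; }
                        else if (n != firstLib) libCount = 2;   // second distinct liberty
                    } else if (board[n] == enemy && !seen[n]) {
                        seen[n] = true;
                        stack.push(n);
                    }
                }
            }
            if (libCount == 1) gain[firstLib] += chainSize;     // chain is in atari
        }
        int best = -1;
        for (int p = 0; p < board.length; p++)
            if (gain[p] > 0 && (best == -1 || gain[p] > gain[best])) best = p;
        return best;
    }

    public static void main(String[] args) {
        int[] board = new int[SIZE * SIZE];
        board[0] = WHITE;          // corner stone with only one liberty left...
        board[1] = BLACK;          // ...because black occupies its other neighbor
        System.out.println(biggestCapture(board, WHITE));   // prints 9
    }
}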

Peter Drake
http://www.lclark.edu/~drake/



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] MoGo policy: capture stones anywhere?

2009-09-01 Thread Brian Sheppard
IIRC, Fuego has an urgent rule that captures the last-played stone.
That rule applies before any others. Then the mop-up capture rule
applies just before playing randomly.

Mop-up capture is facilitated by a list of strings in atari. Pebbles
keeps such a list, like Many Faces. Having such a list is not as
important as it used to be, because the rest of Pebbles has gotten heavier. 

I strongly recommend reading the Fuego source code. Their ideas and
implementations might not be best for Orego, and you may have to express
ideas differently because Orego might not have infrastructure that Fuego
has.
Setting these limitations aside, Fuego provides a reference implementation
that has been vetted by world-class AI game programmers and very strong
amateur Go masters, and is known to perform very well.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] MoGo policy: capture stones anywhere?

2009-09-01 Thread David Fotland
I use real liberties.  I think if you want the playouts to do much
computation on groups based on liberty count you should switch to using real
liberties.  I don't think any of the strong programs use pseudoliberties.

 

From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Peter Drake
Sent: Tuesday, September 01, 2009 7:51 AM
To: computer-go
Subject: Re: [computer-go] MoGo policy: capture stones anywhere?

 

On Aug 31, 2009, at 10:12 PM, terry mcintyre wrote:





If you maintain a list of strings ( connected groups ) of stones and their
liberty counts - or perhaps the actual liberties - it should be fairly quick
to find a string with just one liberty.

 

I'm currently using pseudoliberties, so that might be tricky. As David
Fotland points out, though, the difference shouldn't be huge.

 

In any case, if I read the explanation correctly, this happens infrequently,
if several less-expensive tests fail; the cost would be amortized over many
trials.

 

Ah, that helps some. I was testing just this aspect of the policy; if I do
the escaping and the patterns first, the hit is not as bad (although worse
than I'd like).

 

My current implementation is:

 

- Traverse the board. For each point that is the root of an enemy chain,
check if that chain is in atari:

  - If it has 4 pseudoliberties, it's trivially not in atari.

  - Otherwise, traverse it looking for liberties, abandoning the
search if a second liberty is ever found.

  - If only one liberty is found, increase the number of stones
captured by playing at that liberty by the size of the chain.

- From those moves that capture, choose the one that captures the most
stones.

 

Is there a better way?

 

Peter Drake

http://www.lclark.edu/~drake/

 





 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] MoGo policy: capture stones anywhere?

2009-09-01 Thread David Fotland
But note that Fuego is using a GNU license, so if you incorporate any of the
Fuego code into your own app, you will have to make your own app available
under the GNU license and distribute source to your customers.

David

 I strongly recommend reading the Fuego source code. Their ideas and
 implementations might not be best for Orego, and you may have to express
 ideas differently because Orego might not have infrastructure that Fuego
 has.
 Setting these limitations aside, Fuego provides a reference implementation
 that has been vetted by world-class AI game programmers and very strong
 amateur Go masters, and is known to perform very well.
 


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo policy: capture stones anywhere?

2009-09-01 Thread Peter Drake

On Sep 1, 2009, at 8:11 AM, David Fotland wrote:


I don’t think any of the strong programs use pseudoliberties.



Interesting! Can those involved with other strong programs verify this?

My board code is descended from my Java re-implementation of libEGO. I  
tried writing one using real liberties earlier, and it was  
considerably slower in random playouts. Perhaps it's worth the speed  
hit as playouts get heavier.


Incidentally, preliminary experiments indicate that the capture-the- 
largest-chain heuristic (after escaping and matching patterns) does  
improve strength, despite the speed cost. I'll run a larger experiment  
tonight...


Peter Drake
http://www.lclark.edu/~drake/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MoGo policy: capture stones anywhere?

2009-09-01 Thread Magnus Persson
I never tried pseudoliberties in Valkyria. It actually stores arrays
of the liberties in addition to the count. This makes programming
complex algorithms simple, but it is perhaps not the most efficient way.


-Magnus

Quoting Peter Drake dr...@lclark.edu:


On Sep 1, 2009, at 8:11 AM, David Fotland wrote:


I don't think any of the strong programs use pseudoliberties.



Interesting! Can those involved with other strong programs verify this?

My board code is descended from my Java re-implementation of libEGO. I
tried writing one using real liberties earlier, and it was
considerably slower in random playouts. Perhaps it's worth the speed
hit as playouts get heavier.

Incidentally, preliminary experiments indicate that the capture-the-
largest-chain heuristic (after escaping and matching patterns) does
improve strength, despite the speed cost. I'll run a larger experiment
tonight...

Peter Drake
http://www.lclark.edu/~drake/




--
Magnus Persson
Berlin, Germany
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo policy: capture stones anywhere?

2009-09-01 Thread Mark Boon
2009/9/1 Peter Drake dr...@lclark.edu:
 On Sep 1, 2009, at 8:11 AM, David Fotland wrote:

 I don’t think any of the strong programs use pseudoliberties.

 Interesting! Can those involved with other strong programs verify this?
 My board code is descended from my Java re-implementation of libEGO. I tried
 writing one using real liberties earlier, and it was considerably slower in
 random playouts.

I started out by looking at Orego's code when I first tried MC ;-).
Since then I have found that even with very light playouts,
pseudo-liberties are only marginally (a few percent) faster than
keeping actual liberty counts. That performance hit is easily
recovered by having the real number of liberties available at all times
for other parts of the program. The coding is just a bit more work to
make it efficient. You can check the plug-and-go project for a Java
implementation.

Mark
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] MoGo policy: capture stones anywhere?

2009-09-01 Thread Brian Sheppard
 Interesting! Can those involved with other strong programs verify this?

Pebbles isn't a particularly strong program, but using real
liberty counts is better.

I have recently taken Pebbles offline to renovate internals.
I am adding lists of liberties, too. I am convinced that the
richer representation will improve speed, because I have
written ridiculous amounts of code that searches for liberties
in the course of trying to analyze semeais, ladders, and so on.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] MoGo policy: capture stones anywhere?

2009-08-31 Thread Peter Drake
MoGo's playout policy (at one time) is given in section 3.2 of Gelly
et al.'s paper, "Modification of UCT with Patterns in Monte-Carlo Go":


We describe briefly how the improved random mode generates moves. It
first verifies whether the last played move is an Atari; if yes, and if
the stones under Atari can be saved (in the sense that it can be saved
by capturing stones or increasing liberties), it chooses one saving move
randomly; otherwise it looks for interesting moves in the 8 positions
around the last played move and plays one randomly if there is any;
otherwise it looks for the moves capturing stones on the Go board, plays
one if there is any. At last, if still no move is found, it plays one
move randomly on the Go board.

We've implemented much of this, which has made Orego considerably  
stronger. The problem is with this part:


otherwise it looks for the moves capturing stones on the Go board

Does this really mean traversing the entire board looking for  
captures? Doing so seems to create a catastrophic speed hit.
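
For concreteness, a rough sketch of the cascade described in the quoted
paragraph (the helper methods are stubs standing in for real move
generators; names are hypothetical and this is not MoGo's actual code):

// Rough sketch of the playout-move cascade from the quoted paragraph.
// The move-generator helpers are stubs; a real program would implement them.
import java.util.*;

public class PlayoutPolicy {
    private final Random rng = new Random();

    int selectMove(int[] board, int lastMove) {
        // 1. If the last move put one of our chains in atari and it can be
        //    saved (by capturing or by gaining liberties), play a saving move.
        List<Integer> saving = savingMoves(board, lastMove);
        if (!saving.isEmpty()) return pick(saving);
        // 2. Otherwise, look for "interesting" pattern moves among the 8
        //    points around the last move.
        List<Integer> pattern = patternMovesAround(board, lastMove);
        if (!pattern.isEmpty()) return pick(pattern);
        // 3. Otherwise, look for capturing moves anywhere on the board.
        List<Integer> captures = captureMoves(board);
        if (!captures.isEmpty()) return pick(captures);
        // 4. Otherwise, play a uniformly random legal move.
        return pick(legalMoves(board));
    }

    private int pick(List<Integer> moves) { return moves.get(rng.nextInt(moves.size())); }

    // --- Stubs: real implementations would go here. ---
    private List<Integer> savingMoves(int[] b, int last)        { return Collections.emptyList(); }
    private List<Integer> patternMovesAround(int[] b, int last) { return Collections.emptyList(); }
    private List<Integer> captureMoves(int[] b)                 { return Collections.emptyList(); }
    private List<Integer> legalMoves(int[] b) {
        List<Integer> all = new ArrayList<>();
        for (int p = 0; p < b.length; p++) if (b[p] == 0) all.add(p);
        return all;
    }

    public static void main(String[] args) {
        int[] board = new int[81];              // empty 9x9 board, 0 = empty
        System.out.println(new PlayoutPolicy().selectMove(board, 40));
    }
}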


Peter Drake
http://www.lclark.edu/~drake/



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo policy: capture stones anywhere?

2009-08-31 Thread terry mcintyre
If you maintain a list of strings ( connected groups ) of stones and their 
liberty counts - or perhaps the actual liberties - it should be fairly quick to 
find a string with just one liberty.  In any case, if I read the explanation 
correctly, this happens infrequently, if several less-expensive tests fail; the 
cost would be amortized over many trials.

 Terry McIntyre terrymcint...@yahoo.com
And one sad servitude alike denotes
The slave that labours and the slave that votes -- Peter Pindar





From: Peter Drake dr...@lclark.edu
To: Computer Go computer-go@computer-go.org
Sent: Monday, August 31, 2009 9:56:16 PM
Subject: [computer-go] MoGo policy: capture stones anywhere?

MoGo's playout policy (at one time) is given in section 3.2 of Gelly et al's 
paper, Modification of UCT with Patterns in Monte-Carlo Go:

We describe briefly how the improved random mode generates moves. It first 
verifies
whether the last played move is an Atari; if yes, and if the stones under Atari 
can be saved
(in the sense that it can be saved by capturing stones or increasing 
liberties), it chooses one
saving move randomly; otherwise it looks for interesting moves in the 8 
positions around the
last played move and plays one randomly if there is any; otherwise it looks for 
the moves
capturing stones on the Go board, plays one if there is any. At last, if still 
no move is
found, it plays one move randomly on the Go board.

We've implemented much of this, which has made Orego considerably stronger. The 
problem is with this part:

otherwise it looks for the moves capturing stones on the Go board

Does this really mean traversing the entire board looking for captures? Doing 
so seems to create a catastrophic speed hit.

Peter Drake
http://www.lclark.edu/~drake/



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



  ___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] MoGo policy: capture stones anywhere?

2009-08-31 Thread David Fotland
I maintain a list of strings with exactly one liberty.  If I disable the
code that tracks one-liberty groups and generates capture moves, the speed
goes up from 17.7K playouts/s to 18.1K playouts/s, so it's a small
difference, probably not statistically significant.
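
For illustration, a minimal sketch of that kind of bookkeeping (hypothetical
names, not Many Faces' actual code): each chain keeps its set of real
liberties, and chains whose liberty count drops to one go into a set from
which the capture moves can be read off directly.

// Sketch of incrementally tracking chains in atari (hypothetical names).
// Each chain keeps a set of its real liberties; the tracker keeps the set
// of chains whose liberty count is exactly 1.
import java.util.*;

public class AtariTracker {
    static class Chain {
        final Set<Integer> stones = new HashSet<>();
        final Set<Integer> liberties = new HashSet<>();
    }

    private final Set<Chain> inAtari = new HashSet<>();

    /** Call whenever a chain's liberty set has just been updated. */
    void onLibertiesChanged(Chain c) {
        if (c.liberties.size() == 1) inAtari.add(c);
        else inAtari.remove(c);
    }

    /** Capture moves are simply the last liberties of the chains in atari. */
    List<Integer> captureMoves() {
        List<Integer> moves = new ArrayList<>();
        for (Chain c : inAtari) moves.addAll(c.liberties);
        return moves;
    }

    public static void main(String[] args) {
        AtariTracker t = new AtariTracker();
        Chain c = new Chain();
        c.stones.add(40);
        c.liberties.addAll(Arrays.asList(39, 41));   // two liberties: not in atari
        t.onLibertiesChanged(c);
        c.liberties.remove(39);                       // opponent fills one liberty
        t.onLibertiesChanged(c);
        System.out.println(t.captureMoves());         // prints [41]
    }
}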

David

 -Original Message-
 From: computer-go-boun...@computer-go.org [mailto:computer-go-
 boun...@computer-go.org] On Behalf Of Peter Drake
 Sent: Monday, August 31, 2009 9:56 PM
 To: Computer Go
 Subject: [computer-go] MoGo policy: capture stones anywhere?
 
 MoGo's playout policy (at one time) is given in section 3.2 of Gelly
 et al's paper, Modification of UCT with Patterns in Monte-Carlo Go:
 
 We describe briefly how the improved random mode generates moves. It
 first verifies
 whether the last played move is an Atari; if yes, and if the stones
 under Atari can be saved
 (in the sense that it can be saved by capturing stones or increasing
 liberties), it chooses one
 saving move randomly; otherwise it looks for interesting moves in the
 8 positions around the
 last played move and plays one randomly if there is any; otherwise it
 looks for the moves
 capturing stones on the Go board, plays one if there is any. At last,
 if still no move is
 found, it plays one move randomly on the Go board.
 
 We've implemented much of this, which has made Orego considerably
 stronger. The problem is with this part:
 
 otherwise it looks for the moves capturing stones on the Go board
 
 Does this really mean traversing the entire board looking for
 captures? Doing so seems to create a catastrophic speed hit.
 
 Peter Drake
 http://www.lclark.edu/~drake/
 
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo - ManyFaces

2009-08-15 Thread Stefan Kaitschick



 13 games were played and the total score was 8-5 for CzechBot. I wonder
how they would play on even ground. The general game pattern was the
usual wild middlegame wrestling typical of MC, with CzechBot usually
getting a large edge initially (70% winning probability and a seemingly
unshakeable position), but then, in the lost games, making a careless
blunder or two when it got too far ahead, and panicking subsequently.
Overlooking ladders seemed to happen to it several times.



<troll>So maybe CzechBot should put a komi burden on itself when it gets too
optimistic.</troll>


Stefan 


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo - ManyFaces

2009-08-15 Thread Petr Baudis
  Hi!

   Today there was a short discussion about the strongest bot currently
 online on KGS and I got curious whether ManyFaces or CzechBot (bleeding
 edge MoGo) is stronger, so I made it play against ManyFaces.
 
   CzechBot is running as dual-thread pondering MoGo on slightly loaded
 dual-core Athlon64 4800+ with 4G RAM, so perhaps slightly better hardware
 than ManyFaces

  Turns out that ManyFaces is running on a quad-core, so the hardware
difference is bigger than I thought and in favor of ManyFaces.

Petr Pasky Baudis
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] MoGo - ManyFaces

2009-08-14 Thread Petr Baudis
  Hi!

  Today there was a short discussion about the strongest bot currently
online on KGS and I got curious whether ManyFaces or CzechBot (bleeding
edge MoGo) is stronger, so I made it play against ManyFaces.

  CzechBot is running as dual-thread pondering MoGo on slightly loaded
dual-core Athlon64 4800+ with 4G RAM, so perhaps slightly better hardware
than ManyFaces; it was reading slightly above 3kpps. Since CzechBot is
ranked one stone lower than ManyFaces, all the games were played josen:
with ManyFaces white taking no komi. The time was 30:00 SD. The ruleset
was Japanese and CzechBot was assuming Chinese rules, but in practice it
never mattered.

  13 games were played and the total score was 8-5 for CzechBot. I wonder
how they would play on even ground. The general game pattern was the
usual wild middlegame wrestling typical of MC, with CzechBot usually
getting a large edge initially (70% winning probability and a seemingly
unshakeable position), but then, in the lost games, making a careless
blunder or two when it got too far ahead, and panicking subsequently.
Overlooking ladders seemed to happen to it several times.

  This is just an FYI in case anyone is interested in the relative strength
or would find the games interesting.

-- 
Petr Pasky Baudis
A lot of people have my books on their bookshelves.
That's the problem, they need to read them. -- Don Knuth
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] MoGo-4C-Po-100Mnod on CGOS

2009-07-20 Thread Hiroshi Yamashita

MoGo-4C-Po-100Mnod does not remove dead stones, and loses.
http://cgos.boardspace.net/9x9/SGF/2009/07/20/813050.sgf

Please add the --playsAgainstHuman 0 option, like this:

mogo --9 --nbTotalSimulations 1 --nbThread 1 --pondering 1 
--playsAgainstHuman 0

The following bots also don't remove dead stones:

Fuego4C4PlaPo20Mno(2438)
MoGo-4C-Po-100Mnod(2401)
GnuGo-mc-10K-lev11(1818)
GG-500(1646)
Fuego-700node(1508)
Fuego-500node(1400)
Fuego-300node(1248)

Regards,
Hiroshi Yamashita


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo and passing

2009-06-30 Thread Olivier Teytaud

 On http://www.lri.fr/~gelly/MoGo_Download.htm, under the FAQ section,
 I found the bullet point:

 MoGo continues playing after the game is over?: MoGo never consider
 a pass unless you pass first. If you think the game is over, simply
 pass.

 Is this still true? If so, how does MoGo deal with situations where
 the best move is to pass (e.g. seki).


When passing is necessary, mogo passes. So this sentence is not exactly true:
mogo can pass first in extreme cases (very rare cases, against humans).

Best regards,
Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MoGo and passing

2009-06-30 Thread Sylvain Gelly
Hi,

Olivier answered for the new version.
For the downloadable version, I don't remember exactly (almost 2 years
back now...), but I think Mogo will still pass if all the other moves
are clearly losing. So it should somehow understand seki
situations.
If that is correct, the sentence is not completely accurate.
It should rather be:
"MoGo never considers a pass unless you pass first or all other moves
obviously lose the game."

Sylvain

On Wed, Jun 24, 2009 at 10:11 PM, Seth Pellegrinose...@lclark.edu wrote:
 Hello all,

 On http://www.lri.fr/~gelly/MoGo_Download.htm , under the FAQ section,
 I found the bullet point:

 MoGo continues playing after the game is over?: MoGo never consider
 a pass unless you pass first. If you think the game is over, simply
 pass.

 Is this still true? If so, how does MoGo deal with situations where
 the best move is to pass (e.g. seki).

 Thanks,

 Seth Pellegrino
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo and passing

2009-06-30 Thread Sylvain Gelly
Obviously I should read the emails more carefully before answering. Olivier
rightly answered for all versions.

Sorry,
Sylvain

On Tue, Jun 30, 2009 at 7:59 PM, Sylvain Gellysylvain.ge...@m4x.org wrote:
 Hi,

 Olivier answered for the new version.
 On the downloadable version, I don't remember exactly (almost 2 years
 back now...), but I think Mogo will still pass if all the other moves
 are clearly loosing. So it should understand somehow Seki
 situations.
 If that is correct, the sentence is not completely accurate.
 It should more be:
 MoGo never consider a pass unless you pass first or all other moves
 are obviously loosing the game.

 Sylvain

 On Wed, Jun 24, 2009 at 10:11 PM, Seth Pellegrinose...@lclark.edu wrote:
 Hello all,

 On http://www.lri.fr/~gelly/MoGo_Download.htm , under the FAQ section,
 I found the bullet point:

 MoGo continues playing after the game is over?: MoGo never consider
 a pass unless you pass first. If you think the game is over, simply
 pass.

 Is this still true? If so, how does MoGo deal with situations where
 the best move is to pass (e.g. seki).

 Thanks,

 Seth Pellegrino
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Mogo on supercomputer

2009-05-11 Thread Michael Williams

When Mogo runs on the supercomputer with long-ish time limits, how big does the 
search tree get?
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Mogo on supercomputer

2009-05-11 Thread Olivier Teytaud
When Mogo runs on the supercomputer with long-ish time limits, how big does
 the search tree get?


Plotting the depth/number of nodes as a function of the thinking time might
be a good idea... No idea :-( I just remember that changing the number of
visits before adding a new node in the tree changes the length of the
simulated variations without having a strong impact on the level.
Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Mogo MCTS is not UCT ? (GENERALIZED_AMAF)

2008-12-02 Thread Denis fidaali

In the mentioned article it is said that:

"A weakness remains, due to the initial time: at the very beginning, neither AMAF
values nor standard statistics are available"

I have developed a model that aims at healing that part. It gives virtual
simulations out of arbitrary ancestors at a rate of 1/2 for each node you go
through. I called that process GENERALIZED_AMAF. I'm not familiar with
mathematical terms, but I think the idea is dead simple:

AMAF goes like this:
Evaluate the expected result KNOWING that move A has been played.

GENERALIZED_AMAF:
Evaluate the expected result KNOWING that move A (AND) move B (AND) move C
have been played.

--

You can easily build, for example, an AMAF map which records the moves that
have been marked as played by black on one side, and the moves that have been
played by white on the other, together with the result (black win/lose).
Given a node, classical AMAF would then check each of those maps, for each
simulation starting from that node in which the child you are trying to score
has been played.
Generalized AMAF can build statistics out of simulations from arbitrary
ancestors of the considered node. You get roughly (1/2)^n of the ancestor's
simulations as GENERALIZED_AMAF simulations (n being the number of choices
made from the ancestor down to the node you assess).

Suppose that you have 1000 simulations for a root node R.
Then AMAF gives you about 500 simulations for every child;
let's call those children Cn, where n is the id of the child.
Cn also represents the move made from R to reach that child.

Then for each child of Cn, you can get 250 generalized-AMAF simulations:
you consider, from the set of simulations from the root, only those where
Cn has been played, and then aggregate the results in the AMAF fashion for
each son of Cn.

My prototype was scoring 30% wins against GNU Go level 0,
without any tuning, using plain standard light simulations.
It used generalized AMAF as a way to get RAVE values, and then guaranteed
that the most promising node is explored exponentially more
(it used a raw non-deterministic algorithm).
I did not, however, try to compare that win ratio with
the win ratio I would have got out of standard AMAF.
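
For illustration, a minimal sketch of the aggregation step as I read the
description above (hypothetical names and data layout, not the prototype's
actual code): each playout is stored as the set of (colour, point) moves it
contained plus the result, and the generalized-AMAF value of a candidate is
computed from the root playouts that contain every move on the path to its
parent.

// Sketch of generalized-AMAF aggregation (hypothetical names).
import java.util.*;

public class GeneralizedAmaf {
    /** One finished playout: the moves it contained and the result. */
    static class Playout {
        final Set<String> moves = new HashSet<>();   // e.g. "B:40", "W:41"
        boolean blackWins;
    }

    /**
     * AMAF win rate (for black) of 'candidate', restricted to playouts that
     * contain all moves of 'path'. With an empty path this is classical AMAF;
     * with the path to a node it is the generalized form, which uses roughly
     * (1/2)^n of the root's playouts for a node at depth n.
     */
    static double value(List<Playout> playouts, Set<String> path, String candidate) {
        int wins = 0, games = 0;
        for (Playout p : playouts) {
            if (!p.moves.containsAll(path) || !p.moves.contains(candidate)) continue;
            games++;
            if (p.blackWins) wins++;
        }
        return games == 0 ? 0.5 : (double) wins / games;   // fall back to 0.5 with no data
    }

    public static void main(String[] args) {
        List<Playout> sims = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            Playout p = new Playout();
            if (i % 2 == 0) p.moves.add("B:40");     // ~500 playouts contain B:40
            if (i % 4 == 0) p.moves.add("W:41");     // ~250 of those also contain W:41
            p.moves.add("B:60");
            p.blackWins = (i % 3 != 0);
            sims.add(p);
        }
        Set<String> path = new HashSet<>(Arrays.asList("B:40", "W:41"));
        System.out.println(value(sims, path, "B:60"));   // win rate over ~250 playouts
    }
}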

Best regard,
Denis FIDAALI.

 Date: Mon, 1 Dec 2008 21:55:03 +0100
 From: [EMAIL PROTECTED]
 To: computer-go@computer-go.org
 Subject: Re: [computer-go] Mogo MCTS is not UCT ?
 
   I think it's now well known that Mogo doesn't use UCT.
  I realize that i have no idea at all what Mogo do use for
  it's MCTS.
 
 A complicated formula mixing
 (i) patterns (ii) rules (iii) rave values (iv) online statistics
 
 Also we have a little learning (i.e. late parts of simulations
 are evolved based on online statistics and not only the early parts).
 
  I really wonder if there was an article describing
  the new MCTS of mogo somewhere that i missed.
  How is it better than UCT ?
 
 http://www.lri.fr/~teytaud/eg.pdf contains most of the information
 (many other things
 have been tried and kept as they provided small improvements, but
 essentially the
 ideas are in this version)
 
 Best regards,
 Olivier
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

_
Téléphonez gratuitement à tous vos proches avec Windows Live Messenger  !  
Téléchargez-le maintenant !
http://www.windowslive.fr/messenger/1.asp___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo MCTS is not UCT ? (GENERALIZED_AMAF)

2008-12-02 Thread Mark Boon
At first blush, it sounds like a reasonable idea. However, as always
with these things, the proof of the pudding is in the eating. So you'd
have to try it out against a ref-bot to see if it improves things.
You also need to keep an eye on the time used, as it sounds a lot more
CPU intensive than plain AMAF.


As to the quote below: what is meant by the 'very beginning'? If
that's the very beginning of the game, then I suppose an opening book
can be used for the initial statistics. If it means the beginning
of the search process for a move, then I suppose you can start by
using the data generated by the previous search. Most likely it's
very rare that your opponent plays a move that has not been investigated
by your previous search at all.


Another thing you can do is use the opponent's time to do a little
preparation. I haven't spent time looking at 'pondering' yet, but if
it were me I'd start by building a two-ply tree of AMAF values. Say
you expand the tree after N playouts. You do N simulations at the
current level. Then you choose the best node and do N simulations
there. Then at the second-best node, etc. After N*m (m = number of
legal moves) simulations, you have the initial AMAF data for every
possible move your opponent can play. Even with N fairly large, you're
likely to be able to finish this process before your opponent plays
his move. Of course the time saved is very little, as will always be
the case with pondering. The 'weakness' as seen in the article is a
very small one.
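
For concreteness, a minimal sketch of that pondering loop (hypothetical
engine interface, not any real program's API):

// Sketch of the pondering scheme described above. During the opponent's
// time, run N playouts at the root, then N playouts after each candidate
// reply in order of promise, so every possible opponent move already has
// some initial AMAF data when the opponent finally plays.
import java.util.*;
import java.util.function.BooleanSupplier;

public class Ponderer {
    interface Engine {
        List<Integer> legalMoves();            // candidate opponent replies
        double amafValue(int move);            // current AMAF estimate of a reply
        void runPlayouts(int move, int n);     // n playouts after 'move' (-1 = root)
    }

    static void ponder(Engine engine, int n, BooleanSupplier opponentStillThinking) {
        engine.runPlayouts(-1, n);             // N playouts at the current position
        List<Integer> replies = new ArrayList<>(engine.legalMoves());
        // Most promising replies first, by current AMAF value.
        replies.sort((a, b) -> Double.compare(engine.amafValue(b), engine.amafValue(a)));
        for (int reply : replies) {
            if (!opponentStillThinking.getAsBoolean()) return;  // opponent moved: stop
            engine.runPlayouts(reply, n);
        }
    }

    public static void main(String[] args) {
        // Dummy engine that just logs what it is asked to do.
        Engine dummy = new Engine() {
            public List<Integer> legalMoves() { return Arrays.asList(10, 20, 30); }
            public double amafValue(int move) { return move / 100.0; }
            public void runPlayouts(int move, int n) {
                System.out.println(n + " playouts after move " + move);
            }
        };
        ponder(dummy, 1000, () -> true);
    }
}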


Mark


On 2-dec-08, at 06:48, Denis fidaali wrote:


In the mentioned article it is said that :

A weakness remains, due to the initial time: at the very beginning,  
neither AMAF

values nor standard statistics are available


I have developed a model that aim at healing that part. It gives  
virtual_simulations
out of arbitrary ancestor at a rate of 1/2 for each node you go  
through.

I called that process GENERALIZED_AMAF. I'm not familiar
with mathematical terms, but i think the idea is dead simple :

 AMAF goes as this :
Evaluate the expected results KNOWING that move A has been played.

GENERALIZED_AMAF :
Evaluate the expected results KNOWING that move A (AND) move B  
(AND) move C

has been played.

--
You can easily build for example an AMAF_map, which would get both  
the moves
that have been marked as played by black on one part, and the  
moves that has
been played by white in the other part. That is associated with  
the result (black win/lose)
Given a node, classical AMAF would then try to check out every of  
those maps,
for each simulation starting from that node where the child you are  
trying to score has been played.

Generalized AMAF can build statistics out simulation from arbitrary
ancestors from the considered node. You would get (1/2)^n simulations
from the ancestor as GENERALIZED_AMAF_simulations (n the number of  
choice

made from the ancestor to the node you assess).

Suppose that you have 1000 simulations for a root node R.
Then amaf gives you about 500 simulations for every child
let's call them Cn those child, where n is the id of the child.
Cn also represent the move made from R to reach that child.

Then for each child of Cn, you can get 250 generalized_amaf
simulations. you would consider from the set of simulations from  
the root, only those where
Cn has been played, and then aggregate the results in the AMAF  
fashion for each son of Cn.


My prototype was scoring 30% win against Gnu-go lvl 0,
without any tuning using plain standard_light_simulations.
It would use the generalized-amaf
as a way to get RAVE values. Then it would guarantees that
the most promising nodes is explored exponentially more.
(it used a raw non deterministic algorithm)
I did not however tried to compare that win ratio, with
the win ratio i would have got out of standard Amaf.

Best regard,
Denis FIDAALI.

 Date: Mon, 1 Dec 2008 21:55:03 +0100
 From: [EMAIL PROTECTED]
 To: computer-go@computer-go.org
 Subject: Re: [computer-go] Mogo MCTS is not UCT ?

  I think it's now well known that Mogo doesn't use UCT.
  I realize that i have no idea at all what Mogo do use for
  it's MCTS.

 A complicated formula mixing
 (i) patterns (ii) rules (iii) rave values (iv) online statistics

 Also we have a little learning (i.e. late parts of simulations
 are evolved based on online statistics and not only the early  
parts).


  I really wonder if there was an article describing
  the new MCTS of mogo somewhere that i missed.
  How is it better than UCT ?

 http://www.lri.fr/~teytaud/eg.pdf contains most of the information
 (many other things
 have been tried and kept as they provided small improvements, but
 essentially the
 ideas are in this version)

 Best regards,
 Olivier
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Mogo MCTS is not UCT ?

2008-12-01 Thread Denis fidaali


 I think it's now well known that Mogo doesn't use UCT.
I realize that I have no idea at all what Mogo does use for
its MCTS.

There are only two things I dislike about UCT:
- It's slow to compute.
- It's deterministic.

I really wonder whether there is an article describing
the new MCTS of Mogo somewhere that I missed.
How is it better than UCT?

_
Email envoyé avec Windows Live Hotmail. Dites adieux aux spam et virus, passez 
à Hotmail ! C'est gratuit !
http://www.windowslive.fr/hotmail/default.asp___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo MCTS is not UCT ?

2008-12-01 Thread Jason House
On Dec 1, 2008, at 3:38 AM, Denis fidaali [EMAIL PROTECTED]  
wrote:




 I think it's now well known that Mogo doesn't use UCT.
I realize that i have no idea at all what Mogo do use for
it's MCTS.

There are only two things i dislike about UCT :
- It's slow to compute.
- It's deterministic


I really wonder if there was an article describing
the new MCTS of mogo somewhere that i missed.
How is it better than UCT ?


My understanding is that MoGo dropped the upper confidence bound  
portion. That makes it a bit faster, but still deterministic for a  
given set of playout results.


Heuristics and RAVE give a sufficiently good move ordering that less
exploration is needed. IIRC, Valkyria still uses UCT, but has a very
low coefficient on the upper confidence bound term.
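
For illustration, one common way of doing this (a Gelly/Silver-style RAVE
blend with an "equivalence" parameter k, not necessarily MoGo's actual
formula) is to mix each child's own win rate with its RAVE win rate using a
weight that decays as real visits accumulate, and simply pick the child
maximizing the blended value, with no explicit exploration term:

// Sketch of selection by a RAVE-blended value with no explicit UCB term
// (Gelly/Silver-style blend; hypothetical field names).
public class RaveSelection {
    static class Child {
        int visits, wins;          // real playouts through this child
        int raveVisits, raveWins;  // AMAF/RAVE statistics
    }

    /** Blended value: the weight on RAVE decays as real visits accumulate. */
    static double blendedValue(Child c, double k) {
        double beta = Math.sqrt(k / (3.0 * c.visits + k));      // k = equivalence parameter
        double mean = c.visits == 0 ? 0.5 : (double) c.wins / c.visits;
        double rave = c.raveVisits == 0 ? 0.5 : (double) c.raveWins / c.raveVisits;
        return (1.0 - beta) * mean + beta * rave;
    }

    static int select(Child[] children, double k) {
        int best = 0;
        for (int i = 1; i < children.length; i++)
            if (blendedValue(children[i], k) > blendedValue(children[best], k)) best = i;
        return best;
    }

    public static void main(String[] args) {
        Child a = new Child(); a.visits = 10; a.wins = 6; a.raveVisits = 200; a.raveWins = 90;
        Child b = new Child(); b.visits = 2;  b.wins = 2; b.raveVisits = 180; b.raveWins = 120;
        // With few real visits, the RAVE statistics dominate: prints 1 (child b).
        System.out.println(select(new Child[]{a, b}, 1000));
    }
}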







___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Mogo MCTS is not UCT ?

2008-12-01 Thread Denis fidaali


 Let's assume that the UCT formula is

UCTValue(parent, n) = winrate + sqrt(ln(parent.visits) / (5 * n.nodevisits))

(taken from Sensei's Library).

What is the upper confidence bound term? Wouldn't that be
sqrt(ln(parent.visits) / (5 * n.nodevisits))?

I doubt that exploring only the move with the best winrate would
lead to fast enough convergence, even on 9x9. Is that what you meant
by dropping the upper confidence bound term? Otherwise,
what does the formula without the upper confidence bound term look like?



My understanding is that MoGo dropped the upper confidence bound portion.
 That makes it a bit faster, but still deterministic for a given set of playout 
results. 
Heuristics and RAVE give a sufficiently good move ordering that less 
exploration is needed. 
IIRC, Valkyria still uses UCT, but has a very low coefficient on the upper
confidence bound term.
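
For reference, the formula above written out as a child-selection loop (a
minimal sketch, with hypothetical field names):

// Minimal sketch of plain UCT selection using the formula quoted above:
// value = winrate + sqrt(ln(parent.visits) / (5 * child.visits)).
public class UctSelection {
    static class Node {
        int visits, wins;
    }

    static double uctValue(int parentVisits, Node child) {
        if (child.visits == 0) return Double.POSITIVE_INFINITY;  // try unvisited children first
        double winrate = (double) child.wins / child.visits;
        double exploration = Math.sqrt(Math.log(parentVisits) / (5.0 * child.visits));
        return winrate + exploration;
    }

    static int select(Node parent, Node[] children) {
        int best = 0;
        for (int i = 1; i < children.length; i++)
            if (uctValue(parent.visits, children[i]) > uctValue(parent.visits, children[best])) best = i;
        return best;
    }

    public static void main(String[] args) {
        Node parent = new Node(); parent.visits = 1000;
        Node a = new Node(); a.visits = 900; a.wins = 500;    // well explored, ~0.56 winrate
        Node b = new Node(); b.visits = 10;  b.wins = 5;      // barely explored, 0.50 winrate
        System.out.println(select(parent, new Node[]{a, b})); // prints 1: b's exploration bonus wins
    }
}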

_
Email envoyé avec Windows Live Hotmail. Dites adieux aux spam et virus, passez 
à Hotmail ! C'est gratuit !
http://www.windowslive.fr/hotmail/default.asp___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo MCTS is not UCT ?

2008-12-01 Thread Olivier Teytaud
  I think it's now well known that Mogo doesn't use UCT.
 I realize that i have no idea at all what Mogo do use for
 it's MCTS.

A complicated formula mixing
(i) patterns (ii) rules (iii) rave values (iv) online statistics

Also we have a little learning (i.e. late parts of simulations
are evolved based on online statistics and not only the early parts).

 I really wonder if there was an article describing
 the new MCTS of mogo somewhere that i missed.
 How is it better than UCT ?

http://www.lri.fr/~teytaud/eg.pdf contains most of the information
(many other things
have been tried and kept as they provided small improvements, but
essentially the
ideas are in this version)

Best regards,
Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Mogo MCTS is not UCT ?

2008-12-01 Thread Mark Boon


On 1-dec-08, at 18:55, Olivier Teytaud wrote:


 I think it's now well known that Mogo doesn't use UCT.
I realize that i have no idea at all what Mogo do use for
it's MCTS.


A complicated formula mixing
(i) patterns (ii) rules (iii) rave values (iv) online statistics



Isn't that technically still UCT? I mean, you use different inputs and
probably a different formula, but most likely what you do is still
establish an upper bound on the extent to which you trust the win ratio (and
possibly other data) to determine which node to extend next. When
that upper bound is passed, you decide to extend a less promising node
to make sure you don't overlook an unlikely but possibly very good
candidate. It's just that people here have come to associate UCT with
a particular formula, but that formula is not the only way you can
establish an upper confidence bound.


Mark

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Denis fidaali


 Hi.

Is there any theoretical reason for the Mogo opening book being built out of
self-play, rather than by spending time increasing the number of simulations
at the root and, after a while, keeping what seems to be the best move?

 Obviously one could object that increasing the number of simulations
would lead to running out of memory. But then it seems that there are ways
of pruning the less explored branches. Another point is that
it may be far easier to parallelize self-play massively.

 So I really wonder what would be best, both for time efficiency
and for strength: spending a lot of computing power refining the tree,
or using self-play from a position to assess its value?
What was the reason behind the choice of the Mogo team?

_
Inédit ! Des Emoticônes Déjantées! Installez les dans votre Messenger ! 
http://www.ilovemessenger.fr/Emoticones/EmoticonesDejantees.aspx___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Olivier Teytaud

 Is there any theoretical reasons for the Mogo Opening being built out of
 self play, rather than by spending time increasing the number of
 simulations
 at the root, and after a time, keeping what seems to be the best ?



There are practical reasons: our approach can be used with humans or other
programs as opponent as well;
we can benefit from games launched for other purposes than opening-building;
and we can easily parallelize
the algorithm on grids.

No other reason, at least for me - but these reasons are enough, I guess. The
alternative approach is nice,
but it is difficult to use for tens of years of CPU time - whereas using the
preemptable mode of grids, we have access
to huge computational power.

From a more technical point of view, I think that the idea of using the results
of games of strong versions of mogo
is better for avoiding biases in the MC. But it's only a conjecture.
Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Denis fidaali

Thanks a lot for your quick answer.

By conjecture, I suppose you mean that
no experiments have yet been run
to assess this hypothesis?

I think Sylvain (and maybe just about everyone else) has tried
at some point to use a UCT-based bot as a way to
get the simulations done, and then to use those high-level
simulations in another UCT decision tree (or AMAF,
or first-move winning stats).
From what I recall, the results were disappointing.
Don Dailey also tried to build an AMAF layer over a bot
that itself used AMAF for its decisions, with very little
expectation that this would lead to anything worthwhile.
I don't know how hard Sylvain tried at the time.

Yet you have this feeling that using high-level mogo
games as a way to get the simulations done could lead
to interesting results. I have this feeling too.
For example, it is well known that Mogo-style (or CrazyStone-style)
decisions lead to very poor understanding of seki
(and/or semeai?). Wouldn't the use of high-level games
as simulations lead to a better understanding of those
really nasty situations?

 Then again, I guess that if nobody has ever run
any experiments to measure
the efficiency of growing the UCT tree
against using high-level simulations, there must be a reason...
Is it that it is known it would consume too much time and resources?
Is it that knowing the result of this measurement would prove of little value?

If there is a point where high-level simulations really give a stronger
evaluation function, wouldn't it be good to know about it?

Date: Sun, 30 Nov 2008 10:10:14 +0100
From: [EMAIL PROTECTED]
To: computer-go@computer-go.org
Subject: Re: [computer-go] Mogo Opening, Building Strategy ?



Is there any theoretical reasons for the Mogo Opening being built out of

self play, rather than by spending time increasing the number of simulations
at the root, and after a time, keeping what seems to be the best ?





There are practical reasons: our approach can be used with humans or other 
programs as opponent as well;

we can benefit from games launched for other purposes than opening-building; 
and we can easily parallelize

the algorithm on grids.



No other reason, at least for me - but these reasons are enough I guess. The 
alternate approach is nice,

but is difficult to use for tenths of years of CPU - whereas using preemptable 
mode in grids, we can have access

to a huge computational power.



From a more technical point of view, I think that the idea of using results of 
games of strong versions of mogo

is better for avoiding biases in the MC. But it's only a conjecture.

Olivier

 


_
Email envoyé avec Windows Live Hotmail. Dites adieux aux spam et virus, passez 
à Hotmail ! C'est gratuit !
http://www.windowslive.fr/hotmail/default.asp___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Olivier Teytaud


 By conjecture, i suppose you mean that
 no experiments yet has been ran as
 to assess this hypothesis ?


Yes. The other reasons were sufficient :-)


I think Sylvain (and maybe just everyone else) has tried
 at some point to use a UCT decision bot, as a way to
 get the simulation done. Then using those high level
 simulations in an other UCT decision tree (or AMAF,
 or FirstMove wining stats)
 From what i recall, the results were disappointing.


At least, it has been clearly established that
replacing the random player by a stronger player
does not imply that
the Monte-Carlo program built on top of it becomes stronger
(even with a fixed number of simulations).

But it has also been clearly established that building the opening book by
self-play
does work, even though it is roughly the same idea. I guess the reason is
the
difference in strength of the underlying player - an MCTS (Monte-Carlo Tree
Search; I don't
write UCT here, as it is not UCT in mogo) built on top of a perfect player
should
be a perfect player (this is formally obvious). So perhaps with huge
computational
power, this approach (building MCTS on top of MCTS) is consistent.



 For example, it is well known that Mogo-style decision
 (or crazy stone) lead to very poor understanding of seki
 (and/or semeai ?) Would'nt the use of high level game
 as simulation get to better understanding of those
 really nasty situations ?


I hope so, at least for 9x9 and with really huge computational power.
(by the way I'm afraid we have to patch semeai manually)





 Is it that that it is known it would consume to much time and resources ?


I think it is really difficult to do - you have to dump your results
unless you have
a machine available for a very long time without any reboot, and if you want a
huge computational
effort you have to parallelize it - I think this is really hard.

But perhaps trying mogo against a mogo built on top of a mogo
with e.g. 1 s/move would be fine... this is easy to organize. Well, it
requires
some time, and time is always expensive :-)

Best regards,
Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Don Dailey
On Sun, 2008-11-30 at 13:38 +0100, Olivier Teytaud wrote:
 But, it is also clearly established that the building of the opening
 book by self-play
 clearly works, whereas it is roughly the same idea. I guess the reason
 is the 
 difference of strength of the player - a MCTS (Monte-Carlo Tree Search
 - I don't 
 write UCT here as it is not UCT in mogo) built on top of a perfect
 player should
 be a perfect player (this is formally obvious). So perhaps for huge
 computational
 power, this approach (building MCTS on top of MCTS) is consistent. 

I've always had this idea that the best way to build an opening book is
the best way to build a general playing engine.   You are trying to
solve the same exact problem - what is the best move in this position?

- Don



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread terry mcintyre
 From: Don Dailey [EMAIL PROTECTED]
 
 I've always had this idea that the best way to build an opening book is
 the best way to build a general playing engine.   You are trying to
 solve the same exact problem - what is the best move in this position?

When building an opening book, you have the advantage of not playing against 
the clock. In fact, a good opening book ( one which your game-playing engine 
knows how to use ) can shave time during the game itself.

"Knows how to use" can be subtle; many joseki depend on knowing the exact line
of play, and the move choices depend on knowing the exact (not approximate)
results of ladders and semeai. Against better players, approximate answers tend
toward disaster. A book move might work sometimes and not others, and the program
won't know the difference.

I think the opening book and general playing engine solve similar problems: 
what is the best move which can be discovered with finite resources? The 
opening problem must solve an additional side constraint: it must suggest moves 
which can be correctly exploited by the playing engine, which may have less 
time and computational power available. A sufficiently broad and deep book can 
make up for lack of computational resources during the game; such a book needs 
to know the best refutations for each of many possible plays. Joseki and fuseki 
books for humans are full of annotations: move a
is used when the ladder works; otherwise, b is recommended; c is a
mistake, which can be exploited by ... A book for computer programs
might need similar annotations.

Some programs may need different books, depending on whether they have a fast 
or slow clock, one or many processors.


  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Don Dailey
It's true that building an opening book in an automated way can be done
off-line which gives us more resources.   That's really the basis for
this thought that we are trying to solve the same problem.  

As a thought experiment,  imagine some day in the future,  when
computers are 1 thousand times faster.  What would you do with that
extra time to make your program more scalable?You might be able to
use your book building algorithm,  whatever it might be,  to find moves
for you.  If there is a better way, then you would use it instead.  But
if there is a better way why are you not using it for creating book
moves?  

Of course the details of how to do it may not be exactly the same -
because we may be forced to simulate in memory hash tables with disk
space for instance, or to deal with other real-world constraints.
However I think that at least in principle, we should be doing the same
thing.   

MCTS really feels to me like a superb book building algorithm.
Computer Chess books (at least the automated part) are built essentially
by taking millions of games from master play and picking out the ones
that seem to work best.   Those games are like playouts.   The moves
that score the best are played the most.   We have a kind of MCTS here.

- Don



On Sun, 2008-11-30 at 09:15 -0800, terry mcintyre wrote:
  From: Don Dailey [EMAIL PROTECTED]
  
  I've always had this idea that the best way to build an opening book is
  the best way to build a general playing engine.   You are trying to
  solve the same exact problem - what is the best move in this position?
 
 When building an opening book, you have the advantage of not playing against 
 the clock. In fact, a good opening book ( one which your game-playing engine 
 knows how to use ) can shave time during the game itself.
 
 Knows how to use can be subtle; many joseki depend on knowing the exact 
 line of play, the move choices depend on knowing the exact ( not approximate 
 ) results of ladders and semeai. Against better players, approximate answers 
 tend toward disaster. A book move might work sometimes, not others, and the 
 program won't know the difference. 
 
 I think the opening book and general playing engine solve similar 
 problems: what is the best move which can be discovered with finite 
 resources? The opening problem must solve an additional side constraint: it 
 must suggest moves which can be correctly exploited by the playing engine, 
 which may have less time and computational power available. A sufficiently 
 broad and deep book can make up for lack of computational resources during 
 the game; such a book needs to know the best refutations for each of many 
 possible plays. Joseki and fuseki books for humans are full of annotations: 
 move a
 is used when the ladder works; otherwise, b is recommended; c is a
 mistake, which can be exploited by ... A book for computer programs
 might need similar annotations.
 
 Some programs may need different books, depending on whether they have a fast 
 or slow clock, one or many processors.
 
 
   
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread terry mcintyre


- Original Message 
 From: Don Dailey [EMAIL PROTECTED]
 
 MCTS really feels to me like a superb book building algorithm.
 Computer Chess books (at least the automated part) are built essentially
 by taking millions of games from master play and picking out the ones
 that seem to work best.   Those games are like playouts.   The moves
 that score the best are played the most.   We have a kind of MCTS here.


Interesting - the book moves are not generated by random playouts, but by
professional (or highly skilled) players.

In the chess world, what is meant by "picking out the ones that seem to work
best"?

My impression is that computer go programs do not, at this stage in their
evolution, make good use of professional book moves; too much professional
knowledge is actually in the parts of the tree which are almost never played in
pro-pro games - the "how to beat up on mistakes" and "what not to do when
playing against a pro" parts of the tree.


  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Don Dailey
On Sun, 2008-11-30 at 14:33 -0800, terry mcintyre wrote:
 
 - Original Message 
  From: Don Dailey [EMAIL PROTECTED]
  
  MCTS really feels to me like a superb book building algorithm.
  Computer Chess books (at least the automated part) are built essentially
  by taking millions of games from master play and picking out the ones
  that seem to work best.   Those games are like playouts.   The moves
  that score the best are played the most.   We have a kind of MCTS here.
 
 
 Interesting, the book moves are not generated by random playouts, but by 
 professional ( or highly skilled ) players. 

Many chess opening books are created by a statistical analysis of lots
of human games.   Some books are created by hand.  Very tediously.
Even the ones created with the aid of human games are sometimes modified
by hand, at least the top notch books.

But in the context of this discussion we note that books can and are
created solely from databases of top quality games.   You can get a
reasonable book that way.   


 In the Chess world, what is meant by picking out the ones that seem to work 
 best?

What I mean is that you look at the statistics of the moves and base
your opening book on the moves that gave the best results.  You can also
go by the moves that are played the most - with the assumption that if
they are played a lot they must be good.   It is typical to do a
combination of both - if it's played a lot and also scores well, use it.

I think some have tried mini-max too.   It's possible that a move seems
to have great success in general, but not if it's responded to in a
certain way.   It could be that in recent months or years a refutation
has been found, and that a move that used to work really well has been
found to be bad.
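
As a toy illustration of that combination of frequency and score (made-up
numbers and thresholds, not any real book builder):

// Toy illustration: pick book moves from a database by combining how often
// a move was played with how well it scored.
import java.util.*;

public class BookBuilder {
    static class MoveStats {
        final String move;
        final int played, wins;
        MoveStats(String move, int played, int wins) { this.move = move; this.played = played; this.wins = wins; }
    }

    /** Keep moves that are both popular and score well, best score first. */
    static List<String> bookMoves(List<MoveStats> stats, int minPlayed, double minScore) {
        List<MoveStats> kept = new ArrayList<>();
        for (MoveStats s : stats)
            if (s.played >= minPlayed && (double) s.wins / s.played >= minScore) kept.add(s);
        kept.sort((a, b) -> Double.compare((double) b.wins / b.played, (double) a.wins / a.played));
        List<String> moves = new ArrayList<>();
        for (MoveStats s : kept) moves.add(s.move);
        return moves;
    }

    public static void main(String[] args) {
        List<MoveStats> stats = Arrays.asList(
            new MoveStats("e4", 5000, 2700),   // popular and scores 54%
            new MoveStats("d4", 4000, 2100),   // popular, 52.5%
            new MoveStats("g4", 50, 30));      // scores well but too rarely played
        System.out.println(bookMoves(stats, 100, 0.5));   // prints [e4, d4]
    }
}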


 
 My impression is that computer go programs do not, at this stage in their 
 evolution, make good use of professional book moves; too much professional 
 knowledge is actually in the part of the tree which is almost never played in 
 pro-pro games - the how to beat up on mistakes and what not to do when 
 playing against a pro parts of the tree. 

Even in chess, despite the awesome strength of the programs,  human
knowledge of the openings still reigns supreme, although it's now the
case that computers are helping to build opening theory by finding new
moves - in chess these are called theoretical novelties and computers
have produced many of them from what I understand.

- Don


 
   
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Mogo Opening, Building Strategy ?

2008-11-30 Thread Michael Williams

You can read about some such novelties found using Rybka here:  
http://www.rybkachess.com/index.php?auswahl=Rybka+3+book


Don Dailey wrote:

On Sun, 2008-11-30 at 14:33 -0800, terry mcintyre wrote:

- Original Message 

From: Don Dailey [EMAIL PROTECTED]

MCTS really feels to me like a superb book building algorithm.
Computer Chess books (at least the automated part) are built essentially
by taking millions of games from master play and picking out the ones
that seem to work best.   Those games are like playouts.   The moves
that score the best are played the most.   We have a kind of MCTS here.


Interesting, the book moves are not generated by random playouts, but by professional ( or highly skilled ) players. 


Many chess opening books are created by a statistical analysis of lots
of human games.   Some books are created by hand.  Very tediously.
Even the ones created with the aid of human games are sometimes modified
by hand, at least the top notch books.

But in the context of this discussion we note that books can and are
created solely from databases of top quality games.   You can get a
reasonable book that way.   




In the Chess world, what is meant by picking out the ones that seem to work 
best?


What I mean is that you look at the statistics of the moves and base
your opening book on the moves that gave the best results.  You can also
go by the moves that are played the most - with the assumption that if
they are played a lot they must be good.   It is typical to do a
combination of both - if it's played a lot and also scores good, use it.

I think some have tried mini-max too.   It's possible that a move seems
to has great success in general, but not if it's responded to in a
certain way.   It could be that in recent months or years a refutation
has been found, and that a move that used to work really well has been
found to be bad.   



My impression is that computer go programs do not, at this stage in their evolution, make good use of professional book moves; too much professional knowledge is actually in the part of the tree which is almost never played in pro-pro games - the how to beat up on mistakes and what not to do when playing against a pro parts of the tree. 


Even in chess, despite the awesome strength of the programs,  human
knowledge of the openings still reigns supreme, although it's now the
case that computers are helping to build opening theory by finding new
moves - in chess these are called theoretical novelties and computers
have produced many of them from what I understand.

- Don


  
___

computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch (Jason House's paper)

2008-09-27 Thread Jason House



Sent from my iPhone

On Sep 24, 2008, at 5:16 PM, Jason House [EMAIL PROTECTED]  
wrote:


On Sep 24, 2008, at 2:40 PM, Jacques Basaldúa [EMAIL PROTECTED] wr 
ote:




Therefore, the variance of the normal that best approximates the  
distribution of both RAVE and

wins/(wins + losses) is the same n·p·(1-p)


See above, it's slightly different.


If this is true, the variance you are measuring from the samples  
does not contain any information
about the precision of the estimators. If someone understands this  
better, please explain it to

the list.


This will get covered in my next revision. A proper discussion is  
too much to type with my thumb...


My paper writing time is less than I had hoped, so here's a quick and  
dirty answer.


For a fixed win rate, the probabilities of a specific number of wins  
and losses follows the binomial distribution. That distribution keeps  
p (probability of winning) constant and the number of observed wins  
and losses variable.


When trying to reverse this process, the wins and losses are kept  
constant and p varies. Essentially prob(p=x) is proportional to  
(x^wins)(1-x)^losses.


This is a Beta distribution with known mean, mode, variance, etc...  
It's these values which should be used for approximating the win rate  
estimator as a normal distribution. Does that make sense?


I've glossed over a very important detail when reversing. Bayes  
Theorem requires some extra a priori information. My preferred  
handling alters the reversed equation's exponents a bit but the basic  
conclusion (of a beta distribution) is the same.
___

computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch (Jason House's paper)

2008-09-27 Thread Álvaro Begué
On Fri, Sep 26, 2008 at 9:29 AM, Jason House
[EMAIL PROTECTED] wrote:


 Sent from my iPhone

 On Sep 24, 2008, at 5:16 PM, Jason House [EMAIL PROTECTED]
 wrote:

 On Sep 24, 2008, at 2:40 PM, Jacques Basaldúa [EMAIL PROTECTED] wrote:


 Therefore, the variance of the normal that best approximates the
 distribution of both RAVE and
 wins/(wins + losses) is the same n·p·(1-p)

 See above, it's slightly different.


 If this is true, the variance you are measuring from the samples does not
 contain any information
 about the precision of the estimators. If someone understands this
 better, please explain it to
 the list.

 This will get covered in my next revision. A proper discussion is too much
 to type with my thumb...

 My paper writing time is less than I had hoped, so here's a quick and dirty
 answer.

 For a fixed win rate, the probabilities of a specific number of wins and
 losses follows the binomial distribution. That distribution keeps p
 (probability of winning) constant and the number of observed wins and losses
 variable.

 When trying to reverse this process, the wins and losses are kept constant
 and p varies. Essentially prob(p=x) is proportional to (x^wins)(1-x)^losses.

 This is a Beta distribution with known mean, mode, variance, etc... It's
 these values which should be used for approximating the win rate estimator
 as a normal distribution. Does that make sense?

 I've glossed over a very important detail when reversing. Bayes Theorem
 requires some extra a priori information. My preferred handling alters the
 reversed equation's exponents a bit but the basic conclusion (of a beta
 distribution) is the same.

Maybe I can say it a little more precisely. Before we have collected
any data, let's use a uniform prior for p. After we sample the move a
number of times, we obtain w wins and l losses. Bayes's theorem tells
us that the posterior probability distribution is a beta distribution
B(w+1,l+1) (see http://en.wikipedia.org/wiki/Beta_distribution for
details).

An implication of this is that the expected value of p after w wins
and l losses is (w+1)/(w+l+2). This is the same as initializing w=l=1
before you have any information and then using w/(w+l) as your winning
rate, which some people have done intuitively, but it's clear that
it's not just a kludge. I'll use the letter r for the value of the
winning rate.

The estimate of the variance is (w+1)*(l+1)/((w+l+2)^2*(w+l+3)), which
is r*(1-r)/(w+l+3). The simple UCB formula uses an estimate of the
variance that is simply 1/visits, so perhaps one should modify the
formula by multiplying that estimate by r*(1-r), which means that the
variance is smaller in positions that look like clear victories for
one side. I don't know if this makes any difference in practice, but I
doubt it.
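
For concreteness, a small Python sketch of these formulas (plain numerics under
a uniform prior, not any particular program's UCB code; the constant c and the
example counts are arbitrary):

import math

def posterior_stats(wins, losses):
    # Mean and variance of the Beta(wins+1, losses+1) posterior for p.
    n = wins + losses
    mean = (wins + 1.0) / (n + 2)                 # same as w/(w+l) with w=l=1 priors
    var = (wins + 1.0) * (losses + 1) / ((n + 2) ** 2 * (n + 3))
    return mean, var                              # equivalently mean*(1-mean)/(n+3)

def ucb_value(wins, losses, parent_visits, c=1.0):
    # UCB exploration term with the usual 1/visits variance estimate
    # scaled by r*(1-r), the modification suggested above (a sketch only).
    r, _ = posterior_stats(wins, losses)
    visits = max(wins + losses, 1)
    return r + c * math.sqrt(r * (1 - r) * math.log(parent_visits) / visits)

# Example: a move with 30 wins and 10 losses under a parent with 100 visits.
print(posterior_stats(30, 10))    # mean ~= 0.738, variance ~= 0.0045
print(ucb_value(30, 10, 100))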

Álvaro.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch (Jason House's paper)

2008-09-27 Thread Jason House



Sent from my iPhone

On Sep 27, 2008, at 10:14 AM, Álvaro Begué [EMAIL PROTECTED]  
wrote:



On Fri, Sep 26, 2008 at 9:29 AM, Jason House
[EMAIL PROTECTED] wrote:



Sent from my iPhone

On Sep 24, 2008, at 5:16 PM, Jason House  
[EMAIL PROTECTED]

wrote:

On Sep 24, 2008, at 2:40 PM, Jacques Basaldúa [EMAIL PROTECTED] 
 wrote:





Therefore, the variance of the normal that best approximates the
distribution of both RAVE and
wins/(wins + losses) is the same n·p·(1-p)


See above, it's slightly different.


If this is true, the variance you are measuring from the samples  
does not

contain any information
about the precision of the estimators. If someone understands this
better, please explain it to
the list.


This will get covered in my next revision. A proper discussion is  
too much

to type with my thumb...


My paper writing time is less than I hadhiped, so here's a quick  
and dirty

answer.

For a fixed win rate, the probabilities of a specific number of  
wins and

losses follows the binomial distribution. That distribution keeps p
(probability of winning) constant and the number of observed wins  
and losses

variable.

When trying to reverse this process, the wins and losses are kept  
constant
and p varies. Essentially prob(p=x) is proportional to (x^wins)(1- 
x)^losses.


This is a Beta distribution with known mean, mode, variance, etc...  
It's
these values which should be used for approximating the win rate  
estimator

as a normal distion. Does that makes sense?

I've glossed over a very important detail when reversing. Bayes  
Theorem
requires some extra a priori information. My preferred handling  
alters the
reversed equation's exponents a bit but the basic conclusion (of a  
beta

distribution) is the same.


Maybe I can say it a little more precisely. Before we have collected
any data, let's use a uniform prior for p. After we sample the move a
number of times, we obtain w wins and l losses. Bayes's theorem tells
us that the posterior probability distribution is a beta distribution
B(w+1,l+1) (see http://en.wikipedia.org/wiki/Beta_distribution for
details).



When I originally posted about this stuff and modifying the UCB  
formula, the uniform prior was a major sticking point for people. It  
is my preferred handling when no domain knowledge/heuristics are used.





An implication of this is that the expected value of p after w wins
and l losses is (w+1)/(w+l+2). This is the same as initializing w=l=1
before you have any information and then using w/(w+l) as your winning
rate, which some people have done intuitively, but it's clear that
it's not just a kludge. I'll use the letter r for the value of the
winning rate.

The estimate of the variance is (w+1)*(l+1)/((w+l+2)^2*(w+l+3)), which
is r*(1-r)/(w+l+3). The simple UCB formula uses an estimate of the
variance that is simply 1/visits, so perhaps one should modify the
formula by multiplying that estimate by r*(1-r), which means that the
variance is smaller in positions that look like clear victories for
one side. I don't know if this makes any difference in practice, but I
doubt it.



The UCB1-Tuned formula aimed at a similar effect. I know the MoGo team  
abandoned it because they didn't see a strength gain, just added  
complexity. In fact, they don't even use UCB's at all anymore.


Also, note that when initializing wins and losses to 1, with n = wins+losses  
and p = wins/n, the variance becomes p(1-p)/(n+1), which is very close to  
the naive binomial estimate p(1-p)/n with the same win and loss  
initialization.





___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-25 Thread Jason House
On Sep 25, 2008, at 12:12 AM, David Fotland [EMAIL PROTECTED] 
games.com wrote:


This is an interesting idea, but do you have any actual results?  If  
you

implement this kind of rave formula do you get a stronger program?


I don't have enough data to say anything except that it did not have a  
large (100 ELO+) change.


I switched to this method just before the July KGS tournament. Both  
the old and new versions were untuned, but based on parameters I've  
heard on this list. My testing on CGOS is tough to draw conclusions  
from.






David


-Original Message-
From: [EMAIL PROTECTED] [mailto:computer-go-
[EMAIL PROTECTED] On Behalf Of Jason House
Sent: Wednesday, September 24, 2008 4:34 AM
To: computer-go
Subject: Re: [computer-go] MoGo v.s. Kim rematch

On Tue, 2008-09-23 at 18:08 -0300, Douglas Drumond wrote:

Attached is a quick write up of what I was talking about with some

math.


PS: Any tips on cleanup and making it a mini publication would be

appreciated.  I've never published a paper before.  Would this be too
small?



Better add an abstract, but what I missed most was bibliography.


Ask and you shall receive :)
Actually, I spent most of my free time learning Tex/Lyx, so there are
very few changes in this version.  I'm out of time for a while, so I
figured I'd just share what I have so far.





[]'s




Douglas Drumond
-
Computer Engineering
FEEC/IC - Unicamp
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] MoGo v.s. Kim rematch (Jason House's paper)

2008-09-24 Thread Jacques Basaldúa
 The approach of this paper is to treat all win rate estimations as 
 independent estimators with additive white Gaussian noise. 

Have you tested whether that works? (As Łukasz Lew wrote, an experimental 
setup would be useful.) I guess there may be a flaw in your idea, but I am 
not a specialist. I will try to explain it.

If it weren't for the fact that the tree is learning, the probability that 
a playout through a node wins would be constant each time the node is 
visited. This is, of course, a simplification because the tree does learn, 
but, at least between playouts that are not very distant in time, it is 
true. So my argument holds to some (I guess, much) extent. The same applies 
to the RAVE estimator, which is also the result of counting wins (assume 
P(win|that move) = constant) and dividing by some appropriate sample size. 
Therefore, these estimators follow a binomial distribution. It does 
converge to the normal, but with a fundamental caveat: unlike the normal, 
in which mean and variance are independent, here the variance is a 
function of p.

The variance of the binomial = n·p·(1-p) is a _function of p_.

Therefore, the variance of the normal that best approximates the 
distribution of both RAVE and wins/(wins + losses) is the same n·p·(1-p).

If this is true, the variance you are measuring from the samples does not 
contain any information about the precision of the estimators. If someone 
understands this better, please explain it to the list.
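
A quick numeric illustration of this point (standard library only; the true
win rate p and the sample size are arbitrary):

import random, statistics

random.seed(0)
p, n = 0.6, 10000
results = [1 if random.random() < p else 0 for _ in range(n)]
p_hat = sum(results) / n

# For 0/1 outcomes the population variance of the samples is exactly
# p_hat * (1 - p_hat): it is determined by the observed win rate, so
# measuring it adds nothing beyond the win rate itself.
print(statistics.pvariance(results))
print(p_hat * (1 - p_hat))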

Jacques.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] MoGo v.s. Kim rematch

2008-09-24 Thread David Fotland
This is an interesting idea, but do you have any actual results?  If you
implement this kind of rave formula do you get a stronger program?

David

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:computer-go-
 [EMAIL PROTECTED] On Behalf Of Jason House
 Sent: Wednesday, September 24, 2008 4:34 AM
 To: computer-go
 Subject: Re: [computer-go] MoGo v.s. Kim rematch
 
 On Tue, 2008-09-23 at 18:08 -0300, Douglas Drumond wrote:
   Attached is a quick write up of what I was talking about with some
 math.
  
   PS: Any tips on cleanup and making it a mini publication would be
 appreciated.  I've never published a paper before.  Would this be too
 small?
 
 
  Better add an abstract, but what I missed most was bibliography.
 
 Ask and you shall receive :)
 Actually, I spent most of my free time learning Tex/Lyx, so there are
 very few changes in this version.  I'm out of time for a while, so I
 figured I'd just share what I have so far.
 
 
 
 
  []'s
 
 
 
 
  Douglas Drumond
  -
  Computer Engineering
  FEEC/IC - Unicamp
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-23 Thread David Doshay

On 22, Sep 2008, at 10:50 PM, Hideki Kato wrote:



David Doshay: [EMAIL PROTECTED]:

It was 800, just like last time, but the networking had been upgraded
from ethernet to infiniband. Olivier said that this should have  
been a

good improvement because he felt that communication overhead was
significant.


Really previous Huygens used Ethernet?  It's hard to believe...

Hideki


I thought so as well, but Olivier wrote to me:

Begin forwarded message:


From: Olivier Teytaud [EMAIL PROTECTED]
Date: 6, September 2008 2:07:42 AM PDT
To: David Doshay [EMAIL PROTECTED]
Subject: Re: [computer-go] yet a mogo vs human game

Hi David,
...

We will have at least the same number of cores, probably more, and  
we will very likely have a better hardware -
the infiniband network should be available, and this makes a big  
difference.


...
Best regards,
Olivier



So, perhaps Huygens has both and they were not using it last time,
or maybe they brought Huygens up with E-net and then upgraded.

But Mogo did not use it for the Portland exhibition, but did use
infiniband for the rematch.

Cheers,
David
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-23 Thread Hideki Kato
David,

I've found a description that the Infiniband was upgraded from 2 x 4X IB 
(20 Gbps) to 8 x 8X IB (160 Gbps) in June 2008, at the bottom of the 6th 
page of a PDF about the Huygens system:
https://www.os3.nl/_media/2007-2008/courses/inr/week7/sne_20080320_walter.pdf

I guess that is the better hardware Olivier wrote about.

Hideki

David Doshay: [EMAIL PROTECTED]:
On 22, Sep 2008, at 10:50 PM, Hideki Kato wrote:


 David Doshay: [EMAIL PROTECTED]:
 It was 800, just like last time, but the networking had been upgraded
 from ethernet to infiniband. Olivier said that this should have  
 been a
 good improvement because he felt that communication overhead was
 significant.

 Really previous Huygens used Ethernet?  It's hard to believe...

 Hideki

I thought so as well, but Olivier wrote to me:

Begin forwarded message:

 From: Olivier Teytaud [EMAIL PROTECTED]
 Date: 6, September 2008 2:07:42 AM PDT
 To: David Doshay [EMAIL PROTECTED]
 Subject: Re: [computer-go] yet a mogo vs human game

 Hi David,
 ...

 We will have at least the same number of cores, probably more, and  
 we will very likely have a better hardware -
 the infiniband network should be available, and this makes a big  
 difference.

 ...
 Best regards,
 Olivier


So, perhaps Huygens has both and they were not using it last time,
or maybe they brought Huygens up with E-net and then upgraded.

But Mogo did not use it for the Portland exhibition, but did use
infiniband for the rematch.

Cheers,
David
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-23 Thread Jason House
On Mon, Sep 22, 2008 at 1:21 PM, Łukasz Lew [EMAIL PROTECTED] wrote:

 Hi,

 On Mon, Sep 22, 2008 at 17:58, Jason House [EMAIL PROTECTED]
 wrote:
  On Sep 22, 2008, at 7:59 AM, Magnus Persson [EMAIL PROTECTED]
 wrote:
 
  The results of the math are most easilly expressed in terms of inverse
  variance (iv=1/variance)
 
  Combined mean = sum( mean * iv )
  Combined iv = sum( iv )
 
  I'll try to do a real write-up if anyone is interested.

 I am very interested. :)

 Lukasz



Attached is a quick write up of what I was talking about with some math.

PS: Any tips on cleanup and making it a mini publication would be
appreciated.  I've never published a paper before.  Would this be too small?


RAVE.doc
Description: MS-Word document
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MoGo v.s. Kim rematch

2008-09-23 Thread Douglas Drumond
 Attached is a quick write up of what I was talking about with some math.

 PS: Any tips on cleanup and making it a mini publication would be 
 appreciated.  I've never published a paper before.  Would this be too small?


Better add an abstract, but what I missed most was bibliography.


[]'s




Douglas Drumond
-
Computer Engineering
FEEC/IC - Unicamp
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-23 Thread Darren Cook
 It was 800, just like last time, but the networking had been upgraded
 from ethernet to infiniband. Olivier said that this should have been a
 good improvement because he felt that communication overhead was
 significant.

I thought Olivier had previously said there was very little overhead. E.g.:
 http://computer-go.org/pipermail/computer-go/2008-May/015068.html
 http://www.mail-archive.com/computer-go@computer-go.org/msg07953.html

I took this 95% to mean that giving Mogo 760 (800x0.95) times the
thinking time on a single core would be the same strength as Mogo on 800
cores.

Assuming a rule of thumb that doubling the playouts is worth one rank
(*), increasing network speed will surely have very little effect on the
strength?

Darren

*: That is from memory of discussion here; I hope somebody will correct
me with a more accurate thumb.

 We will have at least the same number of cores, probably more, and we
 will very likely have a better hardware -
 the infiniband network should be available, and this makes a big
 difference.


-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-22 Thread Magnus Persson

Quoting Mark Boon [EMAIL PROTECTED]:


Playing out that fake ladder in the first game meant an instant loss.
Surprising. And embarassing. Any information on the number of
processors used?



The interesting question is if there is a silly bug or something more  
sophisticated.


I have struggled with ladders in Valkyria and it is often really hard  
to tell what causes these problems. In Leksand most games on 19x19  
were lost in ways similar to the recent mogo game. I could not find  
an obvious problem with the playouts at least not in terms of an  
easily fixable bug. Note that Valkyria reads out 99% of all ladders  
correctly both in the tree and in the playouts.


What I realized was that AMAF in combination with heavy playouts  
causes some serious biases for some kinds of very bad moves, such that  
AMAF completely misevaluates them.


In the case of ladders, the heavy playouts of Valkyria correctly  
prune playing out ladders for the loser. But sometimes in the  
playouts the ladder is broken, and after that there is a chance that  
the stones escape anyway. This means that almost always when the  
escaping move is played it is a good move! Thus AMAF will assign a  
very good score to this move.


My solution to this was simply to turn off AMAF-eval for all shapes  
commonly misevaluated for ladders. But I think this problem is true  
for many shapes in general. What makes ladders special is that the  
problem repeats itself, and the effect gets stronger, and thus even  
more likely, the larger the ladder gets.


I think a better solution would be to modify AMAF in some way to avoid  
these problems, or perhaps change the playouts in a way to balance the  
problem. Does anyone know something to do about it or have any ideas?
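
For concreteness, one way to express the workaround above (a sketch only; the
node fields and the is_ladder_escape_shape predicate are hypothetical, not
Valkyria's code):

def update_amaf(node, playout_moves, result, is_ladder_escape_shape):
    # playout_moves: sequence of (color, move) pairs played from this node
    # to the end of the playout; result: 1 if the side to move at node won.
    seen = set()
    for color, move in playout_moves:
        if color != node.color_to_move:
            continue                  # AMAF only credits moves by the side to move
        if move in seen:
            continue                  # credit only the first occurrence
        seen.add(move)
        if is_ladder_escape_shape(node, move):
            continue                  # the fix above: no AMAF credit for these shapes
        count_wins = node.amaf.setdefault(move, [0, 0])   # [count, wins]
        count_wins[0] += 1
        count_wins[1] += result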


-Magnus


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-22 Thread Don Dailey
I'm curious about a couple of things in particular.   Is this a bug, and
if it wasn't, how much time would Mogo have needed to play the correct
move?

Of course I'm also interested in ways to solve this with less deep
searches or better play-outs.

- Don
 

On Mon, 2008-09-22 at 13:59 +0200, Magnus Persson wrote:
 Quoting Mark Boon [EMAIL PROTECTED]:
 
  Playing out that fake ladder in the first game meant an instant loss.
  Surprising. And embarassing. Any information on the number of
  processors used?
 
 
 The interesting question is if there is a silly bug or something more  
 sophisticated.
 
 I have struggled with ladders in Valkyria and it is often really hard  
 to tell what causes these problems. In Leksand most games on 19x19  
 where lost in a ways similar to the recent mogo game. I could not find  
 an obvious problem with the playouts at least not in terms of an  
 easily fixable bug. Note that Valkyria reads out 99% of all ladders  
 correctly both in the tree and in the playouts.
 
 What I realized was that AMAF in combination with heavy playouts  
 causes some serious biases, for some kinds of very bad moves such that  
 AMAF completely misevaluate them.
 
 In the case of the ladders the heavy playouts of Valkyria correctly  
 prunes playing out ladders for the loser. But sometimes in the  
 playouts the ladder is broken and after that there is a chance that  
 the stones escape anyway. This means that almost always when the  
 escaping move is played it is a good move! Thus AMAF will assign a  
 very good score to this move
 
 My solutions to this was simply to turn off AMAF-eval for all shapes  
 commonly misevaluated for ladders. But I think this problem is true  
 for many shapes
 in general. What makes ladders special is that the problem repeats it  
 self and the effect get stronger and thus even more likely the larger  
 the ladder gets.
 
 I think a better solution would be to modify AMAF in some way to avoid  
 these problems, or perhaps change the playouts in a way to balance the  
 problem. Does anyone know something to do about it or have any ideas?
 
 -Magnus
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


signature.asc
Description: This is a digitally signed message part
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MoGo v.s. Kim rematch

2008-09-22 Thread terry mcintyre
 Consider this as tentative, since I heard it about 3rd-hand, but I believe the 
number of processors used was 3000.


Congratulations to the Mogo team; good luck improving your program to deal with 
the ladder and life-and-death issues.

Looking forward to further information!

I have always wondered if AMAF is a feature or a bug. There are many situations 
where the order of moves is crucial; A before B wins, B before A loses; ladders 
are a classic example where the ordering of moves is utterly important. AMAF 
seems to assume that order doesn't matter. Of course, there are many other 
positions where this assumption is true; that is why it sometimes yields an 
improvement in processing speed, but it seems risky.

Ladders are also a classic case where two patterns can look very similar, but 
be very different. When you capture a ladder, you are in a very good position. 
But if the stones under attack have just one extra liberty, the position may 
look like a ladder, but your target will escape, and your stones will be full 
of cutting points; the proper evaluation for that position would be much 
harsher. More generally, whenever I see a Monte Carlo program lose, it is 
usually a semeai where being one liberty behind or one ahead makes all the 
difference. We call these capturing races in English for a reason; being 
ahead or behind by one liberty matters a great deal. To make life interesting, 
there are loose ladder constructs where an extra liberty does not help the 
fleeing stones; they still get corralled and captured.

These corner cases are tough, but many games hinge on correctly reading out the 
exact consequences of life-and-death struggles.

Terry McIntyre [EMAIL PROTECTED]


Go is very hard. The more I learn about it, the less I know. -Jie Li, 9 dan

 On Mon, 2008-09-22 at 13:59 +0200, Magnus Persson wrote:
  Quoting Mark Boon :
  
   Playing out that fake ladder in the first game meant an instant loss.
   Surprising. And embarassing. Any information on the number of
   processors used?


  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-22 Thread Don Dailey
I think AMAF is a feature not a bug.   It's only a matter of how
judiciously it's applied.   

Also, almost any evaluation feature in a game playing program is a bug -
meaning it is an imperfect approximation of what you really want.  

Of course it could turn out that AMAF got them in trouble in this game.
The Mogo team will probably analyze the reason for the problem.But
as long as they are playing strong professional players they are going
to have something to debug and analyze!

- Don


 
On Mon, 2008-09-22 at 06:06 -0700, terry mcintyre wrote:
 Consider this as tentative, since I heard it about 3rd-hand, but I believe 
 the number of processors used to have been 3000.
 
 
 Congratulations to the Mogo team; good luck improving your program to deal 
 with the ladder and life-and-death issues.
 
 Looking forward to further information!
 
 I have always wondered if AMAF is a feature or a bug. There are many 
 situations where the order of moves is crucial; A before B wins, B before A 
 loses; ladders are a classic example where the ordering of moves is utterly 
 important. AMAF seems to assume that order doesn't matter. Of course, there 
 are many other positions where this assumption is true; that is why it 
 sometimes yields an improvement in processing speed, but it seems risky.
 
 Ladders are also a classic case where two patterns can look very similar, but 
 be very different. When you capture a ladder, you are in a very good 
 position. But if the stones under attack have just one extra liberty, the 
 position may look like a ladder, but your target will escape, and your 
 stones will be full of cutting points; the proper evaluation for that 
 position would be much harsher. More generally, whenever I see a Monte Carlo 
 program lose, it is usually a semeai where being one liberty behind or one 
 ahead makes all the difference. We call these capturing races in English 
 for a reason; being ahead or behind by one liberty matters a great deal. To 
 make life interesting, there are loose ladder constructs where an extra 
 liberty does not help the fleeing stones; they still get corraled and 
 captured.
 
 These corner cases are tough, but many games hinge on correctly reading out 
 the exact consequences of life-and-death struggles.
 
 Terry McIntyre [EMAIL PROTECTED]
 
 
 Go is very hard. The more I learn about it, the less I know. -Jie Li, 9 dan
 
  On Mon, 2008-09-22 at 13:59 +0200, Magnus Persson wrote:
   Quoting Mark Boon :
   
Playing out that fake ladder in the first game meant an instant loss.
Surprising. And embarassing. Any information on the number of
processors used?
 
 
   
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


signature.asc
Description: This is a digitally signed message part
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] MoGo v.s. Kim rematch

2008-09-22 Thread David Fotland
AMAF certainly helps to do move ordering when there is little other
information.  With good prior heuristics or enough actual playouts, it
should not be weighted very highly.  AMAF finds good moves, but it often
biases heavily for or against moves.  In ManyFaces, AMAF (actually RAVE) is
worth between 5% and 10% wins against gnugo.

I've seen similar ladder problems, and it is not AMAF, it's caused by the
playouts, when they can't read ladders.  It's easy to add various hacks to
prevent playing out simple ladders, but the one in this game had an extra
liberty (if I remember correctly).  A general solution is a little tricky.

David

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:computer-go-
 [EMAIL PROTECTED] On Behalf Of Don Dailey
 Sent: Monday, September 22, 2008 6:23 AM
 To: computer-go
 Subject: Re: [computer-go] MoGo v.s. Kim rematch
 
 I think AMAF is a feature not a bug.   It's only a matter of how
 judiciously it's applied.
 
 Also, almost any evaluation feature in a game playing program is a bug
 - meaning it is an imperfect approximation of what you really want.
 
 Of course it could turn out that AMAF got them in trouble in this game.
 The Mogo team will probably analyze the reason for the problem.But
 as long as they are playing strong professional players they are going
 to have something to debug and analyze!
 
 - Don
 
 
 
 On Mon, 2008-09-22 at 06:06 -0700, terry mcintyre wrote:
  Consider this as tentative, since I heard it about 3rd-hand, but I
 believe the number of processors used to have been 3000.
 
 
  Congratulations to the Mogo team; good luck improving your program to
 deal with the ladder and life-and-death issues.
 
  Looking forward to further information!
 
  I have always wondered if AMAF is a feature or a bug. There are many
 situations where the order of moves is crucial; A before B wins, B
 before A loses; ladders are a classic example where the ordering of
 moves is utterly important. AMAF seems to assume that order doesn't
 matter. Of course, there are many other positions where this assumption
 is true; that is why it sometimes yields an improvement in processing
 speed, but it seems risky.
 
  Ladders are also a classic case where two patterns can look very
 similar, but be very different. When you capture a ladder, you are in a
 very good position. But if the stones under attack have just one extra
 liberty, the position may look like a ladder, but your target will
 escape, and your stones will be full of cutting points; the proper
 evaluation for that position would be much harsher. More generally,
 whenever I see a Monte Carlo program lose, it is usually a semeai where
 being one liberty behind or one ahead makes all the difference. We call
 these capturing races in English for a reason; being ahead or behind
 by one liberty matters a great deal. To make life interesting, there
 are loose ladder constructs where an extra liberty does not help the
 fleeing stones; they still get corraled and captured.
 
  These corner cases are tough, but many games hinge on correctly
 reading out the exact consequences of life-and-death struggles.
 
  Terry McIntyre [EMAIL PROTECTED]
 
 
  Go is very hard. The more I learn about it, the less I know. -Jie
  Li, 9 dan
 
   On Mon, 2008-09-22 at 13:59 +0200, Magnus Persson wrote:
Quoting Mark Boon :
   
 Playing out that fake ladder in the first game meant an instant
 loss.
 Surprising. And embarassing. Any information on the number of
 processors used?
 
 
 
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-22 Thread Jason House
On Sep 22, 2008, at 7:59 AM, Magnus Persson [EMAIL PROTECTED]  
wrote:


In the case of the ladders the heavy playouts of Valkyria correctly  
prunes playing out ladders for the loser. But sometimes in the  
playouts the ladder is broken and after that there is a chance that  
the stones escape anyway. This means that almost always when the  
escaping move is played it is a good move! Thus AMAF will assign a  
very good score to this move


My solutions to this was simply to turn off AMAF-eval for all shapes  
commonly misevaluated for ladders. But I think this problem is true  
for many shapes
in general. What makes ladders special is that the problem repeats  
it self and the effect get stronger and thus even more likely the  
larger the ladder gets.


I think a better solution would be to modify AMAF in some way to  
avoid these problems, or perhaps change the playouts in a way to  
balance the problem. Does anyone know something to do about it or  
have any ideas?


My RAVE formulation includes a per-move parameter for RAVE confidence.  
This allows heuristics to fix situations like above. Sadly, my bot  
isn't mature enough to take advantage yet :(


The concept I used for the derivation is simple. I treat everything as  
gaussian estimators. It's easy to find the max of the distribution. I  
then use the same trick as bayeselo to estimate variance. I then add a  
Gaussian noise term to represent RAVE bias.








The results of the math are most easily expressed in terms of inverse  
variance (iv=1/variance)


Combined mean = sum( mean * iv ) / sum( iv )
Combined iv = sum( iv )

I'll try to do a real write-up if anyone is interested.
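
A minimal Python sketch of that combination (with the combined mean normalized
by the combined inverse variance; the example numbers are arbitrary, and this
is not any particular bot's code):

def combine_estimates(estimates):
    # estimates: list of (mean, variance) pairs for independent Gaussian
    # estimators, e.g. the direct win rate and the RAVE value with its
    # extra bias/noise variance added in.
    total_iv = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / total_iv
    return mean, 1.0 / total_iv

# Example: a direct estimate from few playouts (high variance) combined
# with a RAVE estimate whose variance includes an added bias term.
print(combine_estimates([(0.55, 0.02), (0.70, 0.005 + 0.01)]))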
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-22 Thread David Doshay
It was 800, just like last time, but the networking had been upgraded  
from ethernet to infiniband. Olivier said that this should have been a  
good improvement because he felt that communication overhead was  
significant.


Cheers,
David



On 22, Sep 2008, at 6:06 AM, terry mcintyre wrote:

Consider this as tentative, since I heard it about 3rd-hand, but I  
believe the number of processors used to have been 3000.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-22 Thread Hideki Kato

David Doshay: [EMAIL PROTECTED]:
It was 800, just like last time, but the networking had been upgraded  
from ethernet to infiniband. Olivier said that this should have been a  
good improvement because he felt that communication overhead was  
significant.

Really previous Huygens used Ethernet?  It's hard to believe...

Hideki

Cheers,
David



On 22, Sep 2008, at 6:06 AM, terry mcintyre wrote:

 Consider this as tentative, since I heard it about 3rd-hand, but I  
 believe the number of processors used to have been 3000.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-21 Thread mingwu
Does anyone know the result, or better, the game sgf?


On Sat, Sep 6, 2008 at 6:57 AM, Don Dailey [EMAIL PROTECTED] wrote:

 Great news!   Look forward to seeing it happen.  I hope Mogo has some
 great hardware.

 - Don


 On Fri, 2008-09-05 at 15:54 -0700, David Doshay wrote:
  MoGo and Myungwan Kim will hold an exhibition rematch at the Cotsen
  Open on Saturday September 20. The exhibition will start at about 5pm
  Pacific Daylight time.
 
  As probably known by all on this list, MoGo won the last game, held at
  the US Go Congress in Portland Oregon, when it was given a 9 stone
  handicap and played with a one hour time limit.
 
  At this time the expected handicap will be 7, and it is not clear if
  there will be one game or two. It is not known at this time how many
  cores MoGo will be running on. Mr Kim has asked for MoGo to be given
  90 minutes because he saw how much the increase in time from 15
  minutes to one hour increased its playing strength. Mr. Kim has also
  asked that there be only one or at most 2 blitz games at the start.
 
  The MoGo team also wants to have some 9x9 games, but Mr Kim does not
  feel familiar enough with 9x9 to play those games, but he is searching
  for an alternate strong player. MoGo has some new features for 9x9,
  and the team is anxious to see the newest code in action.
 
 
 
  Cheers,
  David
 
 
 
  ___
  computer-go mailing list
  computer-go@computer-go.org
  http://www.computer-go.org/mailman/listinfo/computer-go/

 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MoGo v.s. Kim rematch

2008-09-21 Thread dhillismail
There is a discussion at godiscussions
http://www.godiscussions.com/forum/showthread.php?t=7154
The sgf's for the two games played are on page 2.

Dave Hillis

-Original Message-
From: mingwu [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; computer-go computer-go@computer-go.org
Sent: Sun, 21 Sep 2008 9:22 pm
Subject: Re: [computer-go] MoGo v.s. Kim rematch



Anyone knows the result, or better the game sgf?



On Sat, Sep 6, 2008 at 6:57 AM, Don Dailey [EMAIL PROTECTED] wrote:

Great news!  Look forward to seeing it happen.  I hope Mogo has some
great hardware.

- Don






On Fri, 2008-09-05 at 15:54 -0700, David Doshay wrote:
 MoGo and Myungwan Kim will hold an exhibition rematch at the Cotsen
 Open on Saturday September 20. The exhibition will start at about 5pm
 Pacific Daylight time.

 As probably known by all on this list, MoGo won the last game, held at
 the US Go Congress in Portland Oregon, when it was given a 9 stone
 handicap and played with a one hour time limit.

 At this time the expected handicap will be 7, and it is not clear if
 there will be one game or two. It is not known at this time how many
 cores MoGo will be running on. Mr Kim has asked for MoGo to be given
 90 minutes because he saw how much the increase in time from 15
 minutes to one hour increased its playing strength. Mr. Kim has also
 asked that there be only one or at most 2 blitz games at the start.

 The MoGo team also wants to have some 9x9 games, but Mr Kim does not
 feel familiar enough with 9x9 to play those games, but he is searching
 for an alternate strong player. MoGo has some new features for 9x9,
 and the team is anxious to see the newest code in action.



 Cheers,
 David



 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/









___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] MoGo v.s. Kim rematch

2008-09-05 Thread David Doshay
MoGo and Myungwan Kim will hold an exhibition rematch at the Cotsen  
Open on Saturday September 20. The exhibition will start at about 5pm  
Pacific Daylight time.


As probably known by all on this list, MoGo won the last game, held at  
the US Go Congress in Portland Oregon, when it was given a 9 stone  
handicap and played with a one hour time limit.


At this time the expected handicap will be 7, and it is not clear if  
there will be one game or two. It is not known at this time how many  
cores MoGo will be running on. Mr Kim has asked for MoGo to be given  
90 minutes because he saw how much the increase in time from 15  
minutes to one hour increased its playing strength. Mr. Kim has also  
asked that there be only one or at most 2 blitz games at the start.


The MoGo team also wants to have some 9x9 games, but Mr Kim does not  
feel familiar enough with 9x9 to play those games, but he is searching  
for an alternate strong player. MoGo has some new features for 9x9,  
and the team is anxious to see the newest code in action.




Cheers,
David



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo v.s. Kim rematch

2008-09-05 Thread Don Dailey
Great news!   Look forward to seeing it happen.  I hope Mogo has some
great hardware.

- Don


On Fri, 2008-09-05 at 15:54 -0700, David Doshay wrote:
 MoGo and Myungwan Kim will hold an exhibition rematch at the Cotsen  
 Open on Saturday September 20. The exhibition will start at about 5pm  
 Pacific Daylight time.
 
 As probably known by all on this list, MoGo won the last game, held at  
 the US Go Congress in Portland Oregon, when it was given a 9 stone  
 handicap and played with a one hour time limit.
 
 At this time the expected handicap will be 7, and it is not clear if  
 there will be one game or two. It is not known at this time how many  
 cores MoGo will be running on. Mr Kim has asked for MoGo to be given  
 90 minutes because he saw how much the increase in time from 15  
 minutes to one hour increased its playing strength. Mr. Kim has also  
 asked that there be only one or at most 2 blitz games at the start.
 
 The MoGo team also wants to have some 9x9 games, but Mr Kim does not  
 feel familiar enough with 9x9 to play those games, but he is searching  
 for an alternate strong player. MoGo has some new features for 9x9,  
 and the team is anxious to see the newest code in action.
 
 
 
 Cheers,
 David
 
 
 
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] MoGo beats pro: The website

2008-08-13 Thread Chaslot G (MICC)
Dear all,

There were details that were unclear about the victory of MoGo. 
Hence I created a website to gather useful information about this game:

http://www.cs.unimaas.nl/g.chaslot/muyungwan-mogo/

Cheers,

Guillaume


 Original Message
From: [EMAIL PROTECTED] on behalf of Sylvain Gelly
Date: Wed 8/13/2008 19:54
To: computer-go
Subject: Re: [computer-go] What was the specific design of the Mogo version which 
beat the pro...
 
C++ on linux (with a port on windows using cygwin libraries for the binary
release)

Sylvain

2008/8/13 steve uurtamo [EMAIL PROTECTED]

  And what language/platform is Mogo written in; C/C++, Java, Assembly,
 PHP,
  etc.?

 This made coffee spray out of my nose (PHP).

 I think that C is most likely, based upon how they parallelized it.  Did
 you
 read the list posting that mentioned (briefly) how they scaled it up?

 s.
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


winmail.dat
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MoGo beats pro: The website

2008-08-13 Thread Jason House
If this is aimed at clearing up ambiguity, you should state which way  
the handicap was given.


Sent from my iPhone

On Aug 13, 2008, at 2:08 PM, Chaslot G (MICC) [EMAIL PROTECTED] 
 wrote:



Dear all,

There were details that were unclear about the victory of MoGo.
Hence I created a website to gather useful information about this  
game:


http://www.cs.unimaas.nl/g.chaslot/muyungwan-mogo/

Cheers,

Guillaume


 Original Message
From: [EMAIL PROTECTED] on behalf of Sylvain Gelly
Date: Wed 8/13/2008 19:54
To: computer-go
Subject: Re: [computer-go] What was the specific design of the Mogo  
version which beat the pro...


C++ on linux (with a port on windows using cygwin libraries for the  
binary

release)

Sylvain

2008/8/13 steve uurtamo [EMAIL PROTECTED]

And what language/platform is Mogo written in; C/C++, Java,  
Assembly,

PHP,

etc.?


This made coffee spray out of my nose (PHP).

I think that C is most likely, based upon how they parallelized  
it.  Did

you
read the list posting that mentioned (briefly) how they scaled it up?

s.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



winmail.dat
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo beats pro: The website

2008-08-13 Thread terry mcintyre
From: Jason House [EMAIL PROTECTED]

 If this is aimed at clearing up ambiguity, you should state which way the 
 handicap was given.

Oops! Now I need to clean off my keyboard! rotflmao!

Mmmm, we already have a hotly-contested estimate that computer programs will 
play pros on an even basis in ten years' time.

Shall we now open the pool for "computer offers pro a 9 stone handicap and 
wins"?

On Aug 13, 2008, at 2:08 PM, Chaslot G (MICC) [EMAIL PROTECTED] 
 wrote:

 Dear all,

 There were details that were unclear about the victory of MoGo.
 Hence I created a website to gather useful information about this  
 game:

 http://www.cs.unimaas.nl/g.chaslot/muyungwan-mogo/

 Cheers,

 Guillaume




  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-11 Thread terry mcintyre
From: Bob Hearn [EMAIL PROTECTED]



Now, my question. Sorry if this has already been beaten to death here. After 
the match, one of the MoGo programmers mentioned that doubling the computation 
led to a 63% win rate against the baseline version, and that so far this 
scaling seemed to continue as computation power increased.

So -- quick back-of-the-envelope calculation, tell me where I am wrong. 63% 
win rate = about half a stone advantage in go. So we need 4x processing power 
to increase by a stone. At the current rate of Moore's law, that's about 4 
years. Kim estimated that the game with MoGo would be hard at 8 stones. That 
suggests that in 32 years a supercomputer comparable to the one that played in 
this match would be as strong as Kim.

This calculation is optimistic in assuming that you can meaningfully scale the 
63% win rate indefinitely, especially when measuring strength against other 
opponents, and not a weaker version of itself. It's also pessimistic in 
assuming there will be no improvement in the Monte Carlo technique.

But still, 32 years seems like a surprisingly long time, much longer than the 
10 years that seems intuitively reasonable. Naively, it would seem that 
improvements in the Monte Carlo algorithms could gain some small number of 
stones in strength for fixed computation, but that would just shrink the 32 
years by maybe a decade.

How do others feel about this? 

I guess I should also go on record as believing that if it really does take 32 
years, we *will* have general-purpose AI before then.

I suspect that Mogo -- good as it is -- is far from being the optimal 
algorithm. In ten years time new methods will emerge which should yield 
considerable improvements.

In addition, the 800-core supercomputer used was not today's state of the 
art; the Mogo team almost obtained a 3000-core supercomputer for this 
exhibition, which would be nearly 4x as large; as Computer Go becomes more 
exciting, we may be able to borrow still more impressive hardware -- current 
state-of-the-art is 65k or even 128k processors. 

Third, the 32 year figure is highly sensitive to one's expectation of Moore's 
Law. A doubling every 18 months would be a quadrupling every 36 months, which 
is three years; this factor alone shrinks the 32 years to 24. We may see a 
faster rate of growth - GPUs have been improving faster than general-purpose 
CPUs, and the coming multicore processors may have more in common with GPUs 
than with previous generations of X86 cores -- we may revert to simpler RISC 
cores, which use less silicon.

In short, reaching the top of the pyramid would be a thousand-fold improvement 
in processing power -- about 4 to the 4th power, or half way to the goal. 
During the same period, the petaflops race and Moore's Law would continue to 
increase the power of the Top 500.  Stir in some algorithmic improvements, and 
we should be within range in something closer to ten, not 32 years. 

If general purpose AI means an AI which can solve every problem at the expert 
level, then it is probably not a prerequisite for solving one problem at an expert 
level. We're not asking for a program which can skillfully play a teaching game 
against a weaker player, as a human pro would, nor are we asking that it be 
able to dance the salsa; it just needs to beat a pro in an even game. 

We have just barely started optimizing the search. What do humans know that 
computers don't? How do pros manage to play well without the ability to examine 
trillions of playouts?


  ___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] mogo beats pro!

2008-08-10 Thread Bob Hearn

David Doshay wrote:

As an aside, the pro in question won the US Open, so comments about  
him being a weak pro seem inappropriate. I spoke with him a number  
of times, and I firmly believe that he took the match as seriously  
as any other public exhibition of his skill that involves handicap  
stones for the opponent. He has an open mind about computer Go,  
unlike some other pro players I talked to here at the congress.


Kim did state before the match that in his opinion computers would  
never be as strong as the best humans. I don't believe he was asked  
afterward whether he had any reason to change that opinion.


After the banquet last night, I was talking to Peter Drake when Kim  
walked up and started asking questions about how MoGo played go. Peter  
explained very well, but I'm not sure he completely understood.


BTW, David, I also pointed out to Chris Garlock that you'd been  
misquoted, shortly after the story went up on the AGA website, but he  
didn't reply.



Also BTW, let me introduce myself to the list and ask a question. I'm  
a 2D go player, also an AI researcher affiliated with Dartmouth. I did  
my Ph.D. at MIT on games and puzzles. However, I never seriously  
worked on computer go, because I was always convinced go was AI- 
complete -- that we would have strong go programs when we had general- 
purpose AI, and not before. Mostly my current work is on general- 
purpose AI heavily inspired by neuroscience.


However, with the advent of the Monte Carlo programs I'm about ready  
to change my mind. I'm tempted to try to work in the area and see  
whether I can contribute anything.



Now, my question. Sorry if this has already been beaten to death here.  
After the match, one of the MoGo programmers mentioned that doubling  
the computation led to a 63% win rate against the baseline version,  
and that so far this scaling seemed to continue as computation power  
increased.


So -- quick back-of-the-envelope calculation, tell me where I am  
wrong. 63% win rate = about half a stone advantage in go. So we need  
4x processing power to increase by a stone. At the current rate of  
Moore's law, that's about 4 years. Kim estimated that the game with  
MoGo would be hard at 8 stones. That suggests that in 32 years a  
supercomputer comparable to the one that played in this match would be  
as strong as Kim.
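
Spelled out as arithmetic (a sketch; the one-doubling = 63% = half-a-stone  
rule of thumb and the Moore's-law doubling time are the assumptions above,  
not measured facts):

def years_to_close_gap(stones, years_per_doubling=2.0, doublings_per_stone=2):
    # One doubling of compute ~ a 63% self-play win rate ~ half a stone,
    # so two doublings (4x) per stone; multiply by the assumed doubling time.
    return stones * doublings_per_stone * years_per_doubling

print(years_to_close_gap(8))                          # 32 years at 2 years per doubling
print(years_to_close_gap(8, years_per_doubling=1.5))  # 24 years at 18 months per doubling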


This calculation is optimistic in assuming that you can meaningfully  
scale the 63% win rate indefinitely, especially when measuring  
strength against other opponents, and not a weaker version of itself.  
It's also pessimistic in assuming there will be no improvement in the  
Monte Carlo technique.


But still, 32 years seems like a surprisingly long time, much longer  
than the 10 years that seems intuitively reasonable. Naively, it would  
seem that improvements in the Monte Carlo algorithms could gain some  
small number of stones in strength for fixed computation, but that  
would just shrink the 32 years by maybe a decade.


How do others feel about this?

I guess I should also go on record as believing that if it really does  
take 32 years, we *will* have general-purpose AI before then.



Bob Hearn

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] mogo beats pro!

2008-08-10 Thread Don Dailey
On Sun, 2008-08-10 at 11:37 -0700, Bob Hearn wrote:
 Now, my question. Sorry if this has already been beaten to death here.
 After the match, one of the MoGo programmers mentioned that doubling
 the computation led to a 63% win rate against the baseline version,
 and that so far this scaling seemed to continue as computation power
 increased. 
 
 So -- quick back-of-the-envelope calculation, tell me where I am
 wrong. 63% win rate = about half a stone advantage in go. So we need
 4x processing power to increase by a stone. At the current rate of
 Moore's law, that's about 4 years. Kim estimated that the game with
 MoGo would be hard at 8 stones. That suggests that in 32 years a
 supercomputer comparable to the one that played in this match would be
 as strong as Kim. 

 This calculation is optimistic in assuming that you can meaningfully
 scale the 63% win rate indefinitely, especially when measuring
 strength against other opponents, and not a weaker version of itself.
 It's also pessimistic in assuming there will be no improvement in the
 Monte Carlo technique. 
 
 But still, 32 years seems like a surprisingly long time, much longer
 than the 10 years that seems intuitively reasonable. Naively, it would
 seem that improvements in the Monte Carlo algorithms could gain some
 small number of stones in strength for fixed computation, but that
 would just shrink the 32 years by maybe a decade. 
 
 How do others feel about this?  

10 years in my opinion is not reasonable.  20 years would be a better
estimate.  We are probably looking at 20 - 30 years for a desktop player
of this strength.  

And I assume that the MCTS will continue to be refined and improved.  

Another factor is that Kim could easily be off by a stone or two in
either direction - but since he won 2 fast games I would guess that once
he got used to playing Mogo he could win consistently with 8 stones -
but this is only my guess of course.

My estimate may sound pessimistic to some, but this same wild exuberance
happened in chess with the famous Levy bet.  10 years later Levy beat
the computer winning the bet and 10 more years later he won again.   And
Levy was not a Grandmaster, he was an international master.  

- Don


  


 
 I guess I should also go on record as believing that if it really does
 take 32 years, we *will* have general-purpose AI before then. 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-10 Thread steve uurtamo
your calculation is for mogo to beat kim, according to kim and the
mogo team's estimates.

i think that a better thing to measure would be for a computer program
to be able to regularly beat amateurs of any rank without handicap.
i.e. to effectively be at the pro level.

for one thing, this is easier to measure, and for another, it relies much
less on mogo staying the same, kim being correct, or some other
professional being much better against computer players, for instance.
it just requires some machine connected to KGS to be able to attain,
say, 9d, and keep it for a month or so.

s.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-09 Thread jonas . kahn

Congratulations to Mogo team!
Twenty years from now, in ``a computer go history''
August 7th 2008: First victory of computer against pro with 9 handicap.

By the way, the surge in strength with the 800 processors, compared to
the quad-core (old) MogoBot, seemed relatively low when set against the
gain from more time (*6 instead of *800). Of course you do not get as
many playouts, and the algorithm becomes different.
But I am curious to know whether you have made studies like self-play
of a (smaller) cluster against a single-node Mogo with more time
(without pondering), to see how more nodes scale against more time?


Et encore félicitations !
Jonas
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-09 Thread terry mcintyre


- Original Message 

 I still have this theory that when the level of the program is in the 
 high-dan reaches, it can take proper advantage of an opening book. Alas, it 
 may be a few years before enough processoring power is routinely available to 
 test this hypothesis. I know that we duffers can always ruin a perfectly good 
 joseki just as soon as we leave the memorized sequence.
Steve:

why would this be the case?

and where would the book come from?

my thinking is that unless mogo created the book itself, playing
games like these, against opponents like these, at time controls
like this one, then it couldn't possibly be helpful.  and even
then it might not be helpful.

As far as we could see, Mogo was essentially re-creating book knowledge the 
hard way - using millions of playouts times many seconds to do so. The opening 
is the same in every game: you start with an empty board or a given number of 
handicap stones; why spend minutes figuring out the best first move, instead of 
precalculating that information? As for where it would come from, observation 
of thousands of pro games would reveal what they do in a variety of standard 
sequences. This information is not useful if the program cannot play at that 
level -- lower-level players often botch the followup to joseki, or choose the 
wrong joseki for the given whole-board situation. But a program which uses 
joseki to guide search could optimize search. 

You can reliably say that in certain situations, when you play move A, even the 
strongest pro is very likely to respond with one of a handful of plays; if this 
knowledge is part of the search strategy, the search is much more efficient. If 
you choose to play some other move, it needs to be demonstrably better than the 
standard replies.

A more efficient opening would enable more time to be spent on the complex 
middle-game situations.
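
To illustrate the idea in the previous paragraph (and only the idea;
this is not how MoGo or any particular program works), a small book of
standard replies could be used to order or restrict the moves a
searcher considers. Everything below, including the position keys and
moves, is hypothetical:

    # Hypothetical sketch: use a small book of standard replies to focus search.
    # The position keys and moves are made up; the point is only that known
    # joseki replies can be tried first, so search effort concentrates on them.
    BOOK = {
        ("corner_pattern_17", "D3"): ["C5", "D5", "F3"],
    }

    def candidate_moves(position_key, last_move, legal_moves):
        """Return legal moves with the book's standard replies first."""
        standard = BOOK.get((position_key, last_move), [])
        preferred = [m for m in standard if m in legal_moves]
        rest = [m for m in legal_moves if m not in preferred]
        return preferred + rest

    # A searcher expanding moves in this order spends most of its effort on
    # the standard replies and only falls back to the full move list later.
    print(candidate_moves("corner_pattern_17", "D3",
                          ["C5", "D5", "F3", "Q16", "K10"]))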









 - Original Message 
 From: Darren Cook [EMAIL PROTECTED]

 I do have to ask -- if 1.7 million playouts per second and an hour of 
 playing time are required to reach this level, ...

 Can Olivier give us more details. A few questions that come to mind: how
 many playouts per *move* was it using in each of the opening, middle
 game and endgame? Was it using a fuseki book, and how many moves did the
 game stay in that book? And once it was out of the book was it all UCT
 (*) search, or were there any joseki libraries, etc.?

 I'd also be interested to hear how inefficient the cluster was (e.g.
 1000 CPUs won't be doing 1000 times the number of playouts, there must
 be some overhead).

 Darren

 *: Sorry, I've forgotten the new term we are supposed to use.



 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-09 Thread Vincent Diepeveen

Congrats to the MoGo team for getting system time at SARA for a match.

The architecture of the power5/power6 system (a power5 system was
installed in July 2007 and has since been upgraded to power6) is based
upon having sufficient RAM and high I/O bandwidth (a specific amount of
I/O bandwidth is provided for each Gflop).


When they opened the machine (I was standing in front of it at SARA),
you could see that a power6 node is basically one big block of metal
that draws an enormous amount of power.
A single rack with a few power blocks draws more power than the entire
storage attached to it (which is a huge wall of hard drives).


Huygens has 8 dual-core processors per node, 16 cores a node in short,
which are very bad for integer workloads.

For most go/chess/checkers type applications a 4.7 GHz power6 core
performs like a 2.4 GHz core2 core. That is because the power6 focuses
on a high number of instructions per cycle for floating-point (double
precision) code; the power processor is not so great for most
integer-heavy programs.

Its IPC (instructions per cycle) is far under 1.0 for most such programs.

For this type of hardware I therefore usually do a PGO run with Diep
that lasts about 24 hours. After that the compiler has more data points
and can optimize the binary better, which gives a huge speed boost.
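
For readers who have not used PGO: the standard gcc two-pass flow looks
roughly like the sketch below (the -fprofile-generate / -fprofile-use
flags are real gcc options; the source file name and the training
command are made-up placeholders, and Diep's actual build is of course
not shown here):

    # Minimal sketch of a profile-guided optimization (PGO) build with gcc,
    # driven from Python. Flags -fprofile-generate / -fprofile-use are real
    # gcc options; file names and the training run are made-up placeholders.
    import subprocess

    SRC = ["engine.c"]   # placeholder source file(s)

    # Pass 1: build an instrumented binary that records execution profiles.
    subprocess.run(["gcc", "-O2", "-fprofile-generate", *SRC,
                    "-o", "engine_prof"], check=True)

    # Training run: exercise the binary on representative positions
    # (this is the part Vincent lets run for ~24 hours).
    subprocess.run(["./engine_prof", "--bench"], check=True)

    # Pass 2: rebuild, letting gcc use the recorded profile data (.gcda files).
    subprocess.run(["gcc", "-O2", "-fprofile-use", *SRC,
                    "-o", "engine_fast"], check=True)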


Each power6 block is connected to the other nodes with a high-speed
network. There are two physical network links, I guess one for I/O and
one for communication (RAM).

So the latency and bandwidth from one node to another are really bad
compared to the huge crunching power of a single node, seen from the
viewpoint of RAM communication. A blocking get takes between 5 and 10
microseconds. That's not so fast.


From a computational viewpoint (instructions per cycle executed), the
power6 is not so fast once you need more crunching power than one block
provides. So for example for my chess program, having one node would be
interesting, as that's a lot faster than a PC. I wouldn't ask for more
than one node though; the hashtable is too important for a program like
Diep. Even though its nps is low compared to other chess programs (it
has the biggest evaluation function of all chess programs), it still
does about 200k nps per core * 16 cores = 3.2 million nps. That's 3.2
million reads and 3.2 million writes per second to/from a shared global
hashtable. Add to that that the network will also deliver 6.4 million
transactions a second from the other nodes, for a grand total of 12.8
million. Suppose we allow this to eat 10% of our system time; then we
need the capacity to handle 12.8 * 10 = 128 million transactions a
second full-time, i.e. under 10 nanoseconds per transaction.


In reality, however, the network is a factor of 1000 slower in latency,
which demonstrates the latency problem to its full extent when one uses
more than one node for game-tree search.
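
Spelling out that budget arithmetic (same numbers as above, which are
of course only rough figures for Diep on this hardware):

    # Rough hashtable-transaction budget, using the numbers quoted above.
    nps_per_core = 200_000
    cores = 16
    local_nps = nps_per_core * cores        # 3.2 million nodes/s on one node

    local_tx = 2 * local_nps                # one read + one write per node searched
    remote_tx = 6_400_000                   # probes arriving from the other nodes
    total_tx = local_tx + remote_tx         # 12.8 million transactions/s

    budget = 0.10                           # let hashtable traffic use 10% of the time
    required_rate = total_tx / budget       # 128 million transactions/s of capacity
    max_latency_ns = 1e9 / required_rate    # i.e. under ~8 ns per transaction

    print(total_tx, required_rate, round(max_latency_ns, 1))
    # -> 12800000 128000000.0 7.8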

When using many processors (hundreds), a global shared hashtable is
extra important, as hashtable cutoffs prematurely terminate millions of
searches that do not need to be done at all.
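
For readers not used to alpha-beta searchers, the mechanism is just a
probe before searching: if any search (possibly on another node) has
already searched this position deeply enough, reuse its result. A
generic sketch, not Diep's code:

    # Generic transposition-table (hashtable) cutoff in a negamax search.
    # An illustration of the mechanism only, not Diep's implementation.
    TABLE = {}   # position key -> (depth_searched, score)

    def search(position, depth, evaluate, generate_moves, make_move, key_of):
        key = key_of(position)
        hit = TABLE.get(key)
        if hit is not None and hit[0] >= depth:
            return hit[1]                 # cutoff: this work was already done
        if depth == 0:
            return evaluate(position)
        best = -float("inf")
        for move in generate_moves(position):
            child = make_move(position, move)
            best = max(best, -search(child, depth - 1, evaluate,
                                     generate_moves, make_move, key_of))
        TABLE[key] = (depth, best)        # publish so other searches can cut
        return best

In Diep's setting the table would of course live in memory visible to
all nodes, which is exactly where the transaction-latency budget above
comes in.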


The necessity of a global shared hashtable was already noted clearly by
Rainer Feldmann a long time ago.


When the hashtable is not shared for the last 10 plies on 460
processors, the loss in search depth for Diep is about 6 to 7 ply at
tournament-level time controls.

The power6 is a magnificent machine for software that can use a lot of
RAM. The power6 at SARA has hundreds of gigabytes of RAM.


If your software can work well across multiple of those nodes, one
should consider buying a big Tesla setup for about $10k. It has some
thousands of cores clocked at 1.x GHz.

You can already get a card to develop on for just 520 euro.

The number of instructions per second Tesla can execute is a lot higher
than what power6 can do for sequential game-tree-search code.

Tesla also easily reaches an IPC of 1.0, higher than power6. Let's use
IPC = 1.4 for a core2 at 2.4 GHz; since a 4.7 GHz power6 core performs
like that core2, it executes about 2.4 * 1.4 = 3.4G instructions per
second, which gives the power6 roughly IPC = 0.75.

Those 800 processors execute:
  800 * 0.75 * 4.7 GHz = 2.8T instructions per second.

That's quite a lot compared to a simple PC, which of course is a
zillion times more efficient thanks to its global shared hashtable.

A single new card from Nvidia, in Tesla form, is:
  240 cores * 1.2 GHz = 0.29T instructions per second.

The Tesla setups combine a bunch of those cards, giving a comparable
total number of instructions per second.


That assumes, of course, that the software isn't using double-precision
floating point: a new Nvidia GPU has only 60 processors on board that
can do double precision, not 240.


I'd estimate the search efficiency of mogo at about the same level as
Deep Blue's.


Deep Blue wasn't 11 teraflops. Not even close.
Its hardware was of course much faster than that figure suggests, in
terms of instructions per cycle.

It averaged 130 million nodes per second. For simplistic programs
nowadays, a single node costs roughly 1000 cycles on a core2.

IPC = 1.4
1000 cycles a node
130 mln nps.

Re: [computer-go] mogo beats pro!

2008-08-09 Thread Ian Osgood


On Aug 9, 2008, at 4:16 AM, terry mcintyre wrote:




- Original Message 

I still have this theory that when the level of the program is in  
the high-dan reaches, it can take proper advantage of an opening  
book. Alas, it may be a few years before enough processoring power  
is routinely available to test this hypothesis. I know that we  
duffers can always ruin a perfectly good joseki just as soon as we  
leave the memorized sequence.

Steve:

why would this be the case?

and where would the book come from?


A thousand years of Go experience? There are many good books on  
fuseki and joseki.  The challenge is encoding that knowledge flexibly  
and using the information appropriately. Compared to earlier  
programs, this is one area where the MC programs have taken a step  
backwards (or rather, a step toward the center).



my thinking is that unless mogo created the book itself, playing
games like these, against opponents like these, at time controls
like this one, then it couldn't possibly be helpful.  and even
then it might not be helpful.


There is an obvious need for adding features to Go programs to make  
them play more like humans. The public won't buy programs that play  
too strangely, even if they are objectively stronger. The challenges  
to MC programs are to:


1. Play more normal-looking fuseki.
2. Play joseki moves when available, and use appropriate joseki for  
the current board situation.

3. Correct seki detection and evaluation.
4. Some sort of sliding komi so programs still play reasonably when  
far ahead or far behind (see the sketch below).
5. Toward the endgame, switch to greedier evaluations that maximize  
points.

6. Pass instead of filling in territory when all dame are filled.
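
On point 4, the usual idea behind sliding (dynamic) komi is to shift
the komi used inside the simulations when the win rate becomes extreme,
so the search still has something to discriminate on. A toy sketch of
one way to do it; the thresholds and step size are arbitrary, and this
is not how any particular program implements it:

    # Toy sliding-komi adjustment: when the program wins (or loses) nearly all
    # of its playouts, nudge the internal komi so moves remain distinguishable.
    # Thresholds and step size are arbitrary illustration values.
    def adjust_komi(current_komi, win_rate, step=0.5, high=0.85, low=0.15):
        if win_rate > high:       # far ahead: demand a bigger margin of itself
            return current_komi + step
        if win_rate < low:        # far behind: settle for a smaller margin
            return current_komi - step
        return current_komi

    komi = 7.5
    for observed in (0.92, 0.95, 0.60):   # win rates seen on successive moves
        komi = adjust_komi(komi, observed)
    print(komi)   # 8.5 after the two "far ahead" adjustments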

As far as we could see, Mogo was essentially re-creating book  
knowledge the hard way - using millions of playouts times many  
seconds to do so. The opening is the same in every game: you start  
with an empty board or a given number of handicap stones; why spend  
minutes figuring out the best first move, instead of precalculating  
that information? As for where it would come from, observation of  
thousands of pro games would reveal what they do in a variety of  
standard sequences. This information is not useful if the program  
cannot play at that level -- lower-level players often botch the  
followup to joseki, or choose the wrong joseki for the given whole- 
board situation. But a program which uses joseki to guide search  
could optimize search.


There have already been programs that have used pro game databases  
for opening moves. Howard Landman's Poka springs to mind.


 You can reliably say that in certain situations, when you play  
move A, even the strongest pro is very likely to respond with one  
of a handful of plays; if this knowledge is part of the search  
strategy, the search is much more efficient. If you choose to play  
some other move, it needs to be demonstrably better than the  
standard replies.


A more efficient opening would enable more time to be spent on the  
complex middle-game situations.


Indeed. That is especially beneficial for scalable search algorithms.  
For example, Orego had a simplistic yet effective fuseki: try to play  
on all the star points for the first nine moves.  That saved quite a  
lot of time and still obtained a reasonable looking opening.
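
For what it's worth, that kind of trivial opening policy fits in a few
lines; the sketch below is purely illustrative and is not Orego's
actual code:

    # Trivial opening policy in the spirit described above: take the empty
    # 19x19 star points for the first nine moves. Illustrative only.
    STAR_POINTS = [(x, y) for x in (3, 9, 15) for y in (3, 9, 15)]  # 0-based

    def opening_move(move_number, occupied):
        """Return a free star point during the opening, else None."""
        if move_number < 9:
            for point in STAR_POINTS:
                if point not in occupied:
                    return point
        return None   # fall through to the normal search

    print(opening_move(0, set()))       # (3, 3)
    print(opening_move(4, {(3, 3)}))    # (3, 9)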


Ian

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-08 Thread David Doshay
Chris may be right with his implication that I talk too much these  
days, but just to keep things honest, the quote below is not exactly  
what I said. I said that others were wondering how much time it will  
be before the programs are beating the pros. My thought was that  
programs have advanced 7 to 9 stones in the last few years, and after  
this match, for the first time I think that programs will likely be  
competing evenly with pros within a decade. I am shocked to be  
thinking this ... I certainly did not think this yesterday.


Cheers,
David



On 7, Aug 2008, at 8:47 PM, terry mcintyre wrote:


This is from the AGA newsletter:

COMPUTER BEATS PRO AT U.S. GO CONGRESS: In a historic achievement,  
the MoGo computer program defeated Myungwan
Kim 8P (l) Thursday afternoon by 1.5 points in a 9-stone game billed  
as

“Humanity’s Last Stand?” “It played really well,” said Kim, who
estimated MoGo’s current strength at “two or maybe three dan,” though
he noted that the program – which used 800 processors, at 4.7 Ghz, 15
Teraflops on a borrowed European supercomputer – “made some 5-dan
moves,” like those in the lower right-hand corner, where Moyogo took
advantage of a mistake by Kim to get an early lead. “I can’t tell you
how amazing this is,” David Doshay -- the SlugGo programmer who
suggested the match -- told the E-Journal after the game.
“I’m shocked at the result. I really didn’t expect the computer to win
in a one-hour game.” Kim easily won two blitz games with 9 stones and
11 stones and minutes and lost one with 12 stones and 15 minutes by  
3.5

points. The games were played live at the U.S. Go Congress, with over
500 watching online on KGS. “I think there’s no chance on nine  
stones,”

Kim told the EJ after the game. “It would even be difficult with eight
stones. MoGo played really well; after getting a lead, every time I
played aggressively, it just played safely, even when it meant
sacrificing some stones. It didn’t try to maximize the win and just
played the most sure way to win. It’s like a machine.” The game
generated a lot of interest and discussion about the game’s tactics  
and
philosophical implications. “Congratulations on making history  
today,” game organizer Peter Drake told both Kim and Olivier  
Teytaud, one of MoGo’s programmers, who participated ina brief  
online chat after the game. At a rare loss for words in a brief
interview with the EJ after the game, Doshay wondered “How much time  
do

we have left? We’ve improved nine stones in just a year and I suspect
the next nine will fall quickly now.”
- reported by Chris Garlock, photo by Brian Allen

Terry McIntyre [EMAIL PROTECTED]




“Wherever is found what is called a paternal government, there is  
found state education. It has been discovered that the best way to  
insure implicit obedience is to commence tyranny in the nursery.”



Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-08 Thread steve uurtamo
 I still have this theory that when the level of the program is in the 
 high-dan reaches, it can take proper advantage of an opening book. Alas, it 
 may be a few years before enough processoring power is routinely available to 
 test this hypothesis. I know that we duffers can always ruin a perfectly good 
 joseki just as soon as we leave the memorized sequence.

why would this be the case?

and where would the book come from?

my thinking is that unless mogo created the book itself, playing
games like these, against opponents like these, at time controls
like this one, then it couldn't possibly be helpful.  and even
then it might not be helpful.

s.




 - Original Message 
 From: Darren Cook [EMAIL PROTECTED]

 I do have to ask -- if 1.7 million playouts per second and an hour of 
 playing time are required to reach this level, ...

 Can Olivier give us more details. A few questions that come to mind: how
 many playouts per *move* was it using in each of the opening, middle
 game and endgame? Was it using a fuseki book, and how many moves did the
 game stay in that book? And once it was out of the book was it all UCT
 (*) search, or were there any joseki libraries, etc.?

 I'd also be interested to hear how inefficient the cluster was (e.g.
 1000 CPUs won't be doing 1000 times the number of playouts, there must
 be some overhead).

 Darren

 *: Sorry, I've forgotten the new term we are supposed to use.



 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-08 Thread Mark Boon

First of all, congratulations to the MoGo team.

As some have remarked already, the difference in level between the  
fast games and the slow games was considerable. I didn't think the  
level of the fast games was anything to boast about. And my opinion  
is more informed than many other observers who don't understand why  
MoGo plays so 'stupidly' when behind. The remarks from the kibitzers  
weren't very flattering.


Where I do differ in opinion from most is the remarks from the pro.  
He played too fast and made a few terrible mistakes at crucial  
points. He said that MoGo winning the lower-right corner was 5-dan  
level play, but I strongly disagree. It was good play, probably
dan-level, but the kicker was that the mistake by the pro was also
almost sub-dan level. It's a very standard situation that should be
very easy for a pro to get right. So I disagree when he says he
wouldn't have played stronger with more time; this was a very big
mistake that was easy to take advantage of.


Then the upper-right corner. I think MoGo played very well there. I  
don't have the game-record to verify, but I had the feeling the pro  
could have done a lot better there too with a little more thinking.  
But I must admit that MoGo took advantage of it very well and I  
thought the play there was probably 5-dan level or more. To me that  
was where MoGo won the game. Although there was a point where I  
thought it might still lose.


This was just one game of course but it's an encouraging result.

Mark

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Mogo beats pro: the hardware

2008-08-08 Thread Chaslot G (MICC)
Dear all,

The machine that was used by MoGo yesterday is the Dutch supercomputer 
Huygens, situated in Amsterdam. Huygens was provided by SARA (www.sara.nl) 
and NCF(http://www.nwo.nl/nwohome.nsf/pages/ACPP_4X6R5C_Eng). Huygens was 
upgraded on August 1 to 60 Teraflops (Peak), so porting MoGo with this short 
notice for the match was a lot of hard work and stress. But the result showed 
it was worth it!

Huygens consists of 104 nodes of 16 dual-core POWER6 processors at 4.7
GHz each (with 128 GB of RAM). MoGo was using 25 nodes, i.e., 800 cores and
nearly 15 Teraflops. By comparison, Deep Blue used only 11 Gigaflops.
The structure of Huygens, with powerful processors and numerous cores per
node, is ideal for MoGo. It would be less efficient to use a supercomputer
with few cores per node, e.g., Blue Gene.

The parallelization was performed using Pthreads and OpenMPI.
On each node, two search trees were built, each one using 32 threads. Thanks
to the SMT technology, it is actually more efficient to use two threads per
core: while one thread is waiting on memory, another thread can use the core.
Every 350 milliseconds, the nodes communicated their trees to one another
using OpenMPI.
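
A very simplified sketch of that periodic exchange, written with
mpi4py. The 350 ms period is taken from the description above; which
statistics are exchanged and how they are merged are made-up stand-ins,
not MoGo's actual scheme:

    # Very simplified sketch of periodic tree-statistics sharing over MPI.
    # The 350 ms period comes from the description above; what is shared and
    # how it is merged are made-up stand-ins, not MoGo's actual scheme.
    import time
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    SHARE_PERIOD = 0.350            # seconds between exchanges
    TRACKED_NODES = 1024            # pretend we track this many tree nodes

    local_visits = np.zeros(TRACKED_NODES, dtype=np.int64)
    local_wins = np.zeros(TRACKED_NODES, dtype=np.int64)

    def run_playouts_for(seconds):
        """Placeholder for the real playout loop updating local statistics."""
        deadline = time.time() + seconds
        while time.time() < deadline:
            i = np.random.randint(TRACKED_NODES)
            local_visits[i] += 1
            local_wins[i] += np.random.randint(2)

    for _ in range(10):             # a few sharing rounds, for illustration
        run_playouts_for(SHARE_PERIOD)
        global_visits = np.zeros_like(local_visits)
        global_wins = np.zeros_like(local_wins)
        # Sum everyone's counts so each node sees the global statistics.
        comm.Allreduce(local_visits, global_visits, op=MPI.SUM)
        comm.Allreduce(local_wins, global_wins, op=MPI.SUM)
        # ...a real program would fold the global counts back into its tree here.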

Finally, we would like to thank all the people who helped us to port the
code, both from the French INRIA/CNRS and the Dutch NCF/SARA.

Cheers, and see you in Beijing!

The Mogo Team: http://www.lri.fr/~teytaud/mogo.html

PS: A nice picture of Huygens, and further information, can be found here: 
www.sara.nl
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

RE: [computer-go] mogo beats pro!

2008-08-08 Thread Jeffrey Greenberg
Wow!  I've been radio silent for some years now, working on other things,
but watching the successes of the new approaches.  What incredible
validation for them...

Fantastic!

Jeffrey Greenberg
www.jeffrey-greenberg.com


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of terry mcintyre
Sent: Thursday, August 07, 2008 8:48 PM
To: computer go
Subject: [computer-go] mogo beats pro!


This is from the AGA newsletter:

COMPUTER BEATS PRO AT U.S. GO CONGRESS: In a historic achievement, the MoGo
computer program defeated Myungwan Kim 8P (l) Thursday afternoon by 1.5
points in a 9-stone game billed as Humanity's Last Stand? It played
really well, said Kim, who estimated MoGo's current strength at two or
maybe three dan, though he noted that the program - which used 800
processors, at 4.7 Ghz, 15 Teraflops on a borrowed European supercomputer -
made some 5-dan moves, like those in the lower right-hand corner, where
Moyogo took advantage of a mistake by Kim to get an early lead. I can't
tell you how amazing this is, David Doshay -- the SlugGo programmer who
suggested the match -- told the E-Journal after the game. I'm shocked at
the result. I really didn't expect the computer to win in a one-hour game.
Kim easily won two blitz games with 9 stones and 11 stones and minutes and
lost one with 12 stones and 15 minutes by 3.5 points. The games were played
live at the U.S. Go Congress, with over 500 watching online on KGS. I think
there's no chance on nine stones, Kim told the EJ after the game. It would
even be difficult with eight stones. MoGo played really well; after getting
a lead, every time I played aggressively, it just played safely, even when
it meant sacrificing some stones. It didn't try to maximize the win and just
played the most sure way to win. It's like a machine. The game generated a
lot of interest and discussion about the game's tactics and philosophical
implications. Congratulations on making history today, game organizer
Peter Drake told both Kim and Olivier Teytaud, one of MoGo's programmers,
who participated ina brief online chat after the game. At a rare loss for
words in a brief interview with the EJ after the game, Doshay wondered How
much time do we have left? We've improved nine stones in just a year and I
suspect the next nine will fall quickly now.
- reported by Chris Garlock, photo by Brian Allen

 Terry McIntyre [EMAIL PROTECTED]




Wherever is found what is called a paternal government, there is found
state education. It has been discovered that the best way to insure implicit
obedience is to commence tyranny in the nursery.


Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]



  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-08 Thread Mark Boon

Thanks for posting the game Eric.

When I look back at it it's obvious to me S1 was much better. After  
the likely sequence of R1, T3, T2, T4, S7, Q1, R7 Black still has a  
serious weakness at N4.


I also still question W's play in the upper-right. I doubt W S15 was  
a good move and think S19 would have given a much better result. Also  
O15 I find questionable, better not play here at all and let Black  
play S17. That way you get the same result as in the game, with the  
difference that Black added a stone at S17.


But I must say that B R16 was a really strong move to make things  
complicated for W. Hats off to MoGo.


Mark


On 8-aug-08, at 11:29, Eric Boesch wrote:


MyungWan-MoGoTiTan-4.sgf


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-08 Thread David Doshay


On 8, Aug 2008, at 7:29 AM, Eric Boesch wrote:

On Fri, Aug 8, 2008 at 8:04 AM, Mark Boon [EMAIL PROTECTED]  
wrote:

First of all, congratulations to the MoGo team.


Ditto!


Absolutely an amazing achievement!



Where I do differ in opinion from most is the remarks from the pro.  
He
played too fast and made a few terrible mistakes at crucial points.  
He said
that MoGo winning the lower-right corner was 5-dan level play but I  
strongly
disagree. It was good play, probably dan-level, but the kicker was  
that the

mistake by the pro was also almost sub dan level.


If it was an outright blunder then it was definitely sub-dan level. I
just don't know if that's the case.


In discussion one person said "So it was an overplay on your part?" His
answer was "With 9 stones I must make overplays." From the nature of
what was said after that, it is clear that Mr Kim knew how to live in the
corner, but chose a variation that he did not think the computer would
answer well enough to seal him into the corner.


Maybe (as an alternative to the misclick theory, which would be my
other top candidate) Myung Wan


I mentioned the misclick theory in the chat and he was emphatic that
there was no misclick.


deliberately tested Mogo for a blunder


He showed us the kind of responses he expected, which he said were
1-Dan level.


after Mogo played a very nice squeeze. Myung Wan was disrespecting his
opponent to even try to see if r1 would work (but maybe he wanted to
see just in case Mogo was that dumb, and he wanted to find out early
in the game so he would know how many points he needed to make
elsewhere), but s1 is not great either because if Myung saves his
bottom chain, then as in the variation starting with move 52 in the
attached SGF, Myung is stuck in the corner after black plays s7. It is
hard to imagine that the lower right side is worth losing the corner,
but maybe the difference is small.

To me, r2 looks very good. What do you stronger players think? (I'm  
only 1 kyu.)
MyungWan-MoGoTiTan-4.sgf
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] mogo beats pro!

2008-08-07 Thread terry mcintyre
This is from the AGA newsletter:

COMPUTER BEATS PRO AT U.S. GO CONGRESS: In a historic achievement, the MoGo 
computer program defeated Myungwan
Kim 8P (l) Thursday afternoon by 1.5 points in a 9-stone game billed as
“Humanity’s Last Stand?” “It played really well,” said Kim, who
estimated MoGo’s current strength at “two or maybe three dan,” though
he noted that the program – which used 800 processors, at 4.7 Ghz, 15
Teraflops on a borrowed European supercomputer – “made some 5-dan
moves,” like those in the lower right-hand corner, where Moyogo took
advantage of a mistake by Kim to get an early lead. “I can’t tell you
how amazing this is,” David Doshay -- the SlugGo programmer who
suggested the match -- told the E-Journal after the game.
“I’m shocked at the result. I really didn’t expect the computer to win
in a one-hour game.” Kim easily won two blitz games with 9 stones and
11 stones and minutes and lost one with 12 stones and 15 minutes by 3.5
points. The games were played live at the U.S. Go Congress, with over
500 watching online on KGS. “I think there’s no chance on nine stones,”
Kim told the EJ after the game. “It would even be difficult with eight
stones. MoGo played really well; after getting a lead, every time I
played aggressively, it just played safely, even when it meant
sacrificing some stones. It didn’t try to maximize the win and just
played the most sure way to win. It’s like a machine.” The game
generated a lot of interest and discussion about the game’s tactics and
philosophical implications. “Congratulations on making history today,” game 
organizer Peter Drake told both Kim and Olivier Teytaud, one of MoGo’s 
programmers, who participated ina brief online chat after the game. At a rare 
loss for words in a brief
interview with the EJ after the game, Doshay wondered “How much time do
we have left? We’ve improved nine stones in just a year and I suspect
the next nine will fall quickly now.”
- reported by Chris Garlock, photo by Brian Allen

 Terry McIntyre [EMAIL PROTECTED]




“Wherever is found what is called a paternal government, there is found state 
education. It has been discovered that the best way to insure implicit 
obedience is to commence tyranny in the nursery.”


Benjamin Disraeli, Speech in the House of Commons [June 15, 1874]




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-07 Thread terry mcintyre
I enjoyed watching this game. Having trouble with KGS at the moment, or I'd 
send a game record.

Having more time makes a very marked improvement in the quality of play, to a 
degree which surprised me. In the first two games, at 10 minutes and something 
between 10 and 15 minutes (Mogo thought it only had 10 minutes until it was 
restarted), Mogo made mistakes which kyu players would avoid. The 15-minute 
game was much better. The 60-minute game, as the pro said, was at 2-3 dan 
level, with a remarkable sequence in the bottom right corner which he says is 
5-dan level. 

 
I think computer clusters will beat pros within a decade; combination of better 
algorithms and cheaper processor power will ensure that.

Many thanks to everyone who made this possible, especially to the team of Mogo 
and whoever donated the European Supercomputer. 

I do have to ask -- if 1.7 million playouts per second and an hour of playing 
time are required to reach this level, is it possible to greatly 
improve the efficiency? Humans surely don't process that much information to 
accomplish a very high level of performance. The Mogo team has done heroic 
work. I understand they are working on adaptive playouts which would better 
utilize information about the board, and presumably gather higher quality 
information with less effort.

All the best to you wonderful people for making this program possible!

Terry McIntyre [EMAIL PROTECTED]


  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-07 Thread terry mcintyre
We had a bit of discussion w/ the Mogo team after the match. ( I am writing 
this from the US Go Congress in Portlland, OR ), and Olivier said that Mogo no 
longer uses a book; it was found to be ineffective in their research. I am 
wondering, of course, if a book would be more effective now that Mogo has such 
impressive power - but unfortunately, the supercomputer was only lent for this 
one match. They can't easily test the hypothesis that a strong program on a 
massive cluster could make effective use of a good opening book. Mogo no longer 
uses UCT.

 Terry McIntyre [EMAIL PROTECTED]



- Original Message 
From: Darren Cook [EMAIL PROTECTED]
To: computer-go computer-go@computer-go.org
Sent: Thursday, August 7, 2008 9:24:00 PM
Subject: Re: [computer-go] mogo beats pro!

Great news. Well done to the Mogo team.

John, if I can just find 3000 CPUs lying around I might actually win our
bet ;-).

 I do have to ask -- if 1.7 million playouts per second are required
 and an hour of playing time are required to reach this level, ...

Can Olivier give us more details. A few questions that come to mind: how
many playouts per *move* was it using in each of the opening, middle
game and endgame? Was it using a fuseki book, and how many moves did the
game stay in that book? And once it was out of the book was it all UCT
(*) search, or were there any joseki libraries, etc.?

I'd also be interested to hear how inefficient the cluster was (e.g.
1000 CPUs won't be doing 1000 times the number of playouts, there must
be some overhead).

Darren

*: Sorry, I've forgotten the new term we are supposed to use.


-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://darrendev.blogspot.com/ (blog on php, flash, i18n, linux, ...)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/



  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-07 Thread terry mcintyre
To answer one other question: we were told that Mogo scales linearly. The 
supercomputer has a very high-bandwidth interconnect. The Mogo team was unable 
to release more architectural details at this time.

To reiterate  on another question, from what the team said, no book, no joseki, 
just raw search using billions and billions of galloping CPU cycles. Some of 
the plays were described by the pro as 5 dan level -- effectively, Mogo 
generated joseki from whole cloth. I was impressed.

 
Check out the KGS records. If my memory is correct, the userid was MogoTitan. 
I'd love to hear feedback from stronger players, but it seemed to me that, as 
Mogo was given more time, its opening and middlegame play was markedly better.

I have a question: Is the allocation of time front-weighted, to take advantage 
of the fact that much less effort is required to calculate end-game plays, 
since the playouts are so much shorter? The playouts are shorter, the search 
tree has fewer branches; the time needed should decay rapidly.
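
One common way such front-weighting falls out naturally (a generic
sketch, not a description of MoGo's actual time management) is to spend
a fixed fraction of the remaining clock on every move, so the per-move
budget decays geometrically as the game goes on:

    # Generic front-weighted time allocation: spend a fixed fraction of the
    # remaining clock on each move. Early moves get the most time and the
    # budget decays geometrically. Not MoGo's actual scheme.
    def time_for_move(remaining_seconds, fraction=0.04):
        return remaining_seconds * fraction

    remaining = 3600.0                  # one hour, as in the exhibition game
    for move in range(1, 6):
        budget = time_for_move(remaining)
        remaining -= budget
        print(f"move {move}: {budget:6.1f}s, {remaining:7.1f}s left")
    # move 1 gets ~144 s; by move 30 the budget is roughly 44 s, and the
    # late endgame moves get only a few seconds each.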

I still have this theory that when the level of the program is in the high-dan 
reaches, it can take proper advantage of an opening book. Alas, it may be a few 
years before enough processing power is routinely available to test this 
hypothesis. I know that we duffers can always ruin a perfectly good joseki just 
as soon as we leave the memorized sequence. 


- Original Message 
From: Darren Cook [EMAIL PROTECTED]

 I do have to ask -- if 1.7 million playouts per second and an hour of playing 
 time are required to reach this level, ...

Can Olivier give us more details. A few questions that come to mind: how
many playouts per *move* was it using in each of the opening, middle
game and endgame? Was it using a fuseki book, and how many moves did the
game stay in that book? And once it was out of the book was it all UCT
(*) search, or were there any joseki libraries, etc.?

I'd also be interested to hear how inefficient the cluster was (e.g.
1000 CPUs won't be doing 1000 times the number of playouts, there must
be some overhead).

Darren

*: Sorry, I've forgotten the new term we are supposed to use.


  
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-07 Thread Rémi Coulom

Well done, Mogo team !


terry mcintyre wrote:

moves,” like those in the lower right-hand corner, where Moyogo took
  

Typo :-)

Rémi
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] mogo beats pro!

2008-08-07 Thread Darren Cook
 ... no book, no joseki,...Mogo generated joseki from whole cloth.
 ...
 seemed to me that, as Mogo was given more time, its opening and
 middlegame play was markedly better.

If it is basically reinventing opening theory from scratch each time
then that makes sense. (Though I suppose there is indirectly some go
opening knowledge (aka good shape) in the heavy playout algorithms.)

Darren


-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://darrendev.blogspot.com/ (blog on php, flash, i18n, linux, ...)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] mogo-pr-4core on cgos 9x9

2008-05-16 Thread Hideki Kato
Dear CGOS watchers, :)

Mogo-pr-4core is the public release (not big float) version of MoGo, 
running on an Intel Q9550, the latest 45nm chip of the Core2 
microarchitecture with a larger L2 cache (2 x 6MB), at 3.4 GHz (8.5 
x 400MHz; overclocked from the rated 2.83GHz) on an ASUS 
P5K-VM motherboard.

Its performance is almost a tie against mogo-big-4core but seems better 
against the others.  Somewhat interesting.

-Hideki
--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo/professional challenge

2008-04-20 Thread James Guo

On a 19x19 board, any thoughts on getting the computer to win all its games 
with a 9-stone handicap (maybe starting at 13), then improving to handicap 8, 7, ...?

Olivier Teytaud wrote:
 Has the program become that much stronger on 9x9 recently?
 (Compared to the version was trying?)

 *Parallelization: MPI == ~80% vs no mpi in 9x9 (for same number of
  cores).

 *Monte-Carlo improvement == strongly depends on number of simulations
and number of cores (as the multi-core reduces the influence of the
computational overhead), ~55% I guess.

 *Openings: 58%, for games with constant time per move (should be higher
for games with given total time), if we only keep the openings which
are still efficient in the new version of the code. Human-based
openings do not work :-(

 *less interestingly, we have a better hardware than at that time (more
cores, more GHz).

 == no doubt that this mogo is by far stronger than the one at
 Amsterdam 07.

 The improvement is much higher in 19x19, but humans are really too
 strong in 19x19 :-)
 ___
 computer-go mailing list
 computer-go at computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/


   
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] MoGo/Taranu challenge in Paris

2008-03-26 Thread Nick Wedd
I have put a report of the weekend's challenge games between MoGo and 
Catalin Taranu 5p at http://www.computer-go.info/tc/

mainly to make it easier for people to find the game records.

Nick
--
Nick Wedd[EMAIL PROTECTED]
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo/Taranu challenge in Paris

2008-03-26 Thread Olivier Teytaud
I have put a report of the weekend's challenge games between MoGo and Catalin 
Taranu 5p at http://www.computer-go.info/tc/

mainly to make it easier for people to find the game records.


Thanks a lot for that.

Some points are wrong, however; below is some information about the errors.

The hardware was provided by Bull and operated by Bull; we in the
mogo-team have never had access to the hardware ourselves. Only the
fallback solution, used when the initial hardware was down, was operated
by me. The hardware used for calibrating comes from Grid5000, but as
Grid5000 does not allow the use of its hardware for Go competitions, it
was only used for calibrating and experimenting.


The hardware in case of trouble, 
which has been used for two games, is provided by Université Paris-Sud.


The code was originally written by Sylvain Gelly and Yizao Wang, with 
ideas from Remi Munos and minor contributions from me. Rémi Coulom

has also helped the initial developments and we had some discussions
with him at various steps of the development. Then, the
code has been improved by Arpad Rimmel, Jean-Baptiste Hoock, Julien Perez
and me. Thomas Herault and Vincent Danjean also helped for profiling
various aspects of the parallelization.
This is a small list of contributors, as we have opened the cvs
access to many people and many people have suggested or coded some
points: a complete list of contributors is in 
http://www.lri.fr/~teytaud/crmogo.html (in french, an english version

should come soon). This web page is itself a draft, in progress; I'll
include more information later.

For the first game, the game was played by the cluster from Bull during 13 
moves, then the cluster was down (hard drive full). I then launched the 
second machine, a 4-core system.


For the second and third game, the game was played by the cluster with
no bug; one win for Catalin, one win for Mogo. 
For side games, played around the challenge against Catalin, there was no bug;

mogo won both times.

In 19x19, the cluster lost its connection to internet and therefore I 
launched once again the second solution. The cluster came back later in 
the game. I am not able to remember at which times (perhaps in the KGS 
record).


In my humble opinion, there was no problem for the clock in 9x9 games; 
only in 19x19, we were afraid that the KGS time might be different
from the physical clock, but the trouble was minor as mogo resigned before 
any trouble.


Thanks,
Olivier___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MoGo/Taranu challenge in Paris

2008-03-26 Thread Olivier Teytaud
The hardware in case of trouble, which has been used for two games, is 
provided by Université Paris-Sud.


Precisely: LRI, Université Paris-Sud.___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] MoGo/professional challenge

2008-03-22 Thread Olivier Teytaud

It was 2 cores 2.6GHz. (intel core2 duo).


sorry, I believed it was the tipi.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] MoGo/professional challenge

2008-03-21 Thread Nick Wedd
This Easter weekend, there will be a challenge between MoGo running on a 
very powerful system, and Catalin Taranu, 5-dan professional.


The following is from the info of The Enclave room on KGS.  It is 
confirmed by the page http://paris2008.jeudego.org/


 quotation starts 
A unique challenge will be held in parallel to the Paris Go tournament :

Mogo, currently one of the best Go programs in the world, will challenge 
the professional Go player Catalin Taranu 5P. Mogo has all the computing 
power of INRIA with hundreds of super-computers in a network.
The winner will be chosen at the end of 9x9 games after 3 rounds of 
2x30-minute sudden death. A 19x19 exhibition will be held on Sunday. 
Events are being followed live on KGS! They will be shown by 
'iagochall'.


Saturday:
3/23/08 3:00 PM
Game I (9x9)
Game II 9x9
Game III 9x9
Played with 1.5 hours from the start of one round to the next

Sunday:
3/24/08 3:00 PM
Exhibition game (19x19)

Monday:
3/25/08 11:00 PM
Debate with participants
 quotation ends 

The time zone quoted above is GMT;  that in the 
http://paris2008.jeudego.org/ page is French time, GMT+1.


Nick
--
Nick Wedd[EMAIL PROTECTED]
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo/professional challenge

2008-03-21 Thread Olivier Teytaud


For information on the mogo/pro challenge:
- during preliminary tests, mogo has won 4/0 against a very high level
  human; at that time we were just very very very happy :-)
- some other humans, supposed to be weaker, have however
  won some games at that time (before the nakade correction);
- the Nakade weakness is currently assumed to be solved, but I'm not sure
  of that - at least mogo solves the old known nakade situations and is
  stronger than the old mogo; at that time we were happy again :-)
- another improvement is that we currently have access to much more
  hardware than during the tests above;
- but, a human supposed to be weaker (non-professional level, 5 dan
  however) has found some trick to win against mogo; this is not the
  nakade, but it seems stable, and I am just not able to explain how he
  does it; he has shown me situations and says that in this kind of
  situation mogo makes an error, but I just don't understand what these
  situations have in common. If we understand something we will post
  details here (at least the sgf files)...
- in 9x9, the MPI (multi-machine) version of mogo wins with probability
  80% against the non-MPI version. The speed-up is better in 19x19 and
  will be detailed later, after extensive experiments - the focus was
  on 9x9 until now due to the challenge.
- once again, very strong improvements against old versions of mogo
  lead to disappointing improvements against humans. However, I think
  that the best 9x9 go programs (mogo and others) are currently difficult
  opponents for high-level players.

Everything is being written up for publication and will be sent to this
mailing list.

Some technical details:
- due to concurrency in memory access, heavier playouts come almost for
  free (see the toy sketch after this list). If playouts are heavier
  (computationally more expensive) the speed-up becomes better. The
  nakade problem involves heavier playouts, but the computational
  overhead is almost canceled by the improved speed-up, as the speed
  limit on an 8-core machine is due to concurrency in memory access (for
  modifying the tree) more than to computational cost.
- (very) unfortunately, the opening books generated for mogo without
  nakade are seemingly poor for mogo with nakade... this has destroyed
  weeks of work.
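
To illustrate the concurrency point above, here is a toy model of the
worker loop: the playout touches no shared state, while the tree update
sits behind a lock. Because the lock is the bottleneck, doubling the
playout cost barely changes the simulation rate, which is the sense in
which heavier playouts come almost for free. All numbers are made up
for the illustration:

    # Toy model: each simulation is a lock-free playout followed by a short,
    # lock-protected tree update. The lock is the bottleneck, so making the
    # playout heavier hardly reduces throughput. Costs are made-up values.
    import threading, time

    tree_lock = threading.Lock()
    done = 0

    def worker(playout_cost, update_cost, iterations):
        global done
        for _ in range(iterations):
            time.sleep(playout_cost)      # playout: no shared state touched
            with tree_lock:               # tree update: serialised by the lock
                time.sleep(update_cost)
                done += 1

    def run(playout_cost, threads=8, iterations=50, update_cost=0.001):
        global done
        done = 0
        start = time.time()
        pool = [threading.Thread(target=worker,
                                 args=(playout_cost, update_cost, iterations))
                for _ in range(threads)]
        for t in pool: t.start()
        for t in pool: t.join()
        return done / (time.time() - start)

    print("light playouts:", round(run(0.001)), "sims/s")
    print("heavy playouts:", round(run(0.002)), "sims/s")  # ~2x the playout
    # work, yet the simulation rate drops far less than 2x.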

If mogo wins the challenge, I'd like to point out that this is a 
collective success of the computer-go mailing list - without gnugo, 
crazystone, cgos, kgs and so on, mogo would just not exist. Thanks to all 
of you for that. I regret that due to some restrictions,
we have not published every detail before, but it was just a 
matter of weeks and I'm happy that everything will be published soon, and 
if we lose the challenge I hope someone else will win something similar 
soon :-)

Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo/professional challenge

2008-03-21 Thread Robert Jasiek

How well does the nakade improvement perform on 13x13?

--
robert jasiek
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo/professional challenge

2008-03-21 Thread Olivier Teytaud

How well does the nakade improvement perform on 13x13?


no idea on 13x13, but it does not work on 19x19 (seemingly,
perhaps we just need tuning...).

Also, it works only, in terms of success rate against the old
mogo, for sufficiently large number of simulations per move.

Olivier
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo/professional challenge

2008-03-21 Thread Hiroshi Yamashita

This event sounds very interesting!


Saturday:
3/23/08 3:00 PM


Saturday:
3/22/08 3:00 PM

is right?

Regards,
Hiroshi Yamashita


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] MoGo/professional challenge

2008-03-21 Thread Olivier Teytaud

Saturday:
3/23/08 3:00 PM


Saturday:
3/22/08 3:00 PM

is right?


Hi; it's saturday 22.
Olivier

(stress++)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


  1   2   3   >