Re: [computer-go] small study

2008-10-27 Thread Vlad Dumitrescu
On Sun, Oct 26, 2008 at 21:22, Don Dailey [EMAIL PROTECTED] wrote:
 I'm doing a small study of the scalability of the reference bot at
 various numbers of playouts.

 I still need a lot more games, but in general you eventually start to
 see a point of diminishing returns for each doubling.  I didn't take it
 any farther than 8192,  but my guess is that anything beyond this is not
 going to give you much.   I imagine that it approaches some hypothetical
 level in an asymptotic way.   I may test 16384 later.

Hi!

I may remember incorrectly, but didn't your previous scalability test
show that when a new player was added at the top, the previous top
players' ratings got a slight boost too? Meaning that at the top (and
possibly at the bottom too, but that's unimportant) the levels may be
a little squashed by the border effect? It's very possible that the
effect is negligible, but we can't know until we measure...

The differences between adjacent levels look very symmetrical at the
top and bottom, so a few more levels might reveal what is going on.

best regards,
Vlad


Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-27 Thread Mark Boon
OK, after dicking about for a few hours with git and Mercurial I
decided against using either of them. I keep getting errors or
completely fail to understand how they work. They're just not intuitive
enough to get going quickly.


Moreover, if my goal is to get newcomers up and running quickly, I
don't want to put them through the same pain. I'm quite comfortable
with Subversion, and I feel confident I can get someone up to speed very
quickly using Eclipse. I don't know how to do that with either git or
Mercurial. It's probably possible. But not by me at this point in time.


So I think I'm just going to use an online Subversion repository
again. To avoid confusion I think it would be better not to use the
existing project I have at dev.java.net, so I'll have to request a
new project. On my own computer I've called it GoPluginFramework, but
I'm not sure if that's a good name. Basically what it does is allow
someone to implement a relatively simple engine interface and have it
act as a GTP program. For that you don't really need the whole
project: you take only the JAR containing the Java application and drop
your own JAR containing an engine implementation in the same directory.
Then I have another project with a very simple random engine, which I
called GoEngineTemplate. I expect this would be copied and then
renamed by people wanting to start their own engine. And a third
project has my reference bot.


So I'm looking for suggestions: suggestions for project names, and
suggestions on how best to organize it. Should I put all three projects
under the same online project, or create separate projects for them?


Mark



[computer-go] Transpositions in UCT search

2008-10-27 Thread Mark Boon
A while ago I implemented what I thought was a fairly straightforward  
way to deal with transpositions. But to my surprise it made the  
program weaker instead of stronger. Since I couldn't figure out  
immediately what was wrong with it, I decided to leave it alone for  
the time being.


Just now I decided to search this mailing list for transpositions and
UCT, and the topic seems to have been discussed several times in the
past. But from what I found, it's not entirely clear to me what the
conclusion of those discussions was.


Let me first describe what I did (or attempted to do): all nodes are
stored in a hash table keyed by a checksum. Whenever I create a new node
in the tree, I add it to the hash table as well. If two nodes have the
same checksum, they are stored in the same slot of the hash table, in a
small list.


When I add a node to a slot that already contains something, I use
the playout statistics of the node(s) already there and propagate
them up the tree. When I have done a playout, I propagate the result
of that single playout up the tree, but at each step I check the
hash table to see if there are multiple paths to update.


I've seen some posts that expressed concerns about using the existing
statistics of another path. This may be the reason I'm not seeing any
improvement. So I was wondering if there's any consensus about how to
deal with transpositions in a UCT tree, or if someone could point me
to other sources of information on the subject.
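
To make that concrete, here is a minimal Java sketch of the scheme as
described; all names are invented for illustration and none of this is
the actual code:

    // Minimal sketch (hypothetical names): nodes with equal checksums
    // share one slot and share playout statistics.
    static class Node {
        long checksum;        // hash of the position
        double wins, visits;  // playout statistics
        Node nextInSlot;      // small list of transposed nodes in this slot
    }

    static final Node[] slots = new Node[1 << 20];

    static void add(Node n) {
        int idx = (int) (n.checksum & (slots.length - 1));
        // Seed the new node with the statistics already gathered for
        // this position via other paths, as described above.
        for (Node m = slots[idx]; m != null; m = m.nextInSlot) {
            if (m.checksum == n.checksum) {
                n.wins = m.wins;
                n.visits = m.visits;
                break;
            }
        }
        n.nextInSlot = slots[idx];
        slots[idx] = n;
    }

    // After each playout, update every node that reaches this position,
    // so all paths to the transposition see the new result.
    static void update(long checksum, double result) {
        int idx = (int) (checksum & (slots.length - 1));
        for (Node m = slots[idx]; m != null; m = m.nextInSlot) {
            if (m.checksum == checksum) {
                m.wins += result;
                m.visits++;
            }
        }
    }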


Mark



Re: [computer-go] Transpositions in UCT search

2008-10-27 Thread terry mcintyre
- Original Message 

 From: Mark Boon [EMAIL PROTECTED]
snippage
 
 Let me first describe what I did (or attempted to do): all nodes are  
 stored in a hash table keyed by a checksum. Whenever I create a new node  
 in the tree, I add it to the hash table as well. If two nodes have the  
 same checksum, they are stored in the same slot of the hash table, in a  
 small list.
 
 When I add a node to a slot that already contains something, I use  
 the playout statistics of the node(s) already there and propagate  
 them up the tree.

Am I reading this right? Are all nodes in the same slot considered equivalent? 
Could some of these nodes be collisions?



  


Re: [computer-go] Transpositions in UCT search

2008-10-27 Thread Mark Boon


On 27-okt-08, at 11:51, terry mcintyre wrote:


snippage

Am I reading this right? Are all nodes in the same slot considered
equivalent? Could some of these nodes be collisions?


Yes, all the nodes in the same slot are considered equivalent.
They're not collisions. If a node hashes to a slot with a different
checksum, it keeps looking for an empty slot. If it doesn't find an
empty slot, it evicts some of the nodes it collided with.
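
(Sketched in Java, under the assumption that this describes linear
probing; the names, the empty-slot sentinel, and the probe limit are
invented:)

    // Hypothetical sketch of the insert-with-probing described above.
    // Equal checksum: reuse the slot (a transposition). Different
    // checksum: probe onward; if no empty slot turns up within the
    // probe limit, evict the colliding entry and reuse its slot.
    static int insert(long[] checksums, long checksum, int maxProbes) {
        int idx = (int) (checksum & (checksums.length - 1));
        for (int probe = 0; probe < maxProbes; probe++) {
            int j = (idx + probe) & (checksums.length - 1);
            if (checksums[j] == 0 || checksums[j] == checksum) {
                checksums[j] = checksum;   // empty slot, or same position
                return j;
            }
        }
        checksums[idx] = checksum;         // give up: evict the collider
        return idx;
    }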


Mark



Re: [computer-go] Transpositions in UCT search

2008-10-27 Thread Erik van der Werf
When a child has been sampled often through some other path a naive
implementation may initially explore other less frequently visited
children first. The new path leading to the transposition may
therefore suffer from some initial bias. Using state-action values
appears to solve the problem.

Erik


On Mon, Oct 27, 2008 at 2:21 PM, Mark Boon [EMAIL PROTECTED] wrote:
 snippage



Re: [computer-go] Transpositions in UCT search

2008-10-27 Thread Adriaan van Kessel
[EMAIL PROTECTED] wrote on 27-10-2008 14:57:54:

 
 snippage
 
  Yes, all the nodes in the same slot are considered equivalent. 
  They're not collisions. If a node hashes to a slot with a different 
  checksum, it keeps looking for an empty slot. If it doesn't find an 
  empty slot, it evicts some of the nodes it collided with.

Do you mean Graph History Interaction (GHI)?
If you implement (some kind of) superko, the result of the evaluation
(= playouts) *should* depend on the path taken.
For simple ko, the path is not relevant, IMHO.

There may also be some information-theoretic complications: propagating
twice means that some measurements are counted twice, while they were
sampled only once. This _could_ lead to inflation, because you
underestimate the variance in these branches. I am not a statistician.
YMMV.

AvK

Re: [computer-go] Transpositions in UCT search

2008-10-27 Thread Mark Boon


On 27-okt-08, at 12:45, Erik van der Werf wrote:


Using state-action values
appears to solve the problem.


What are 'state-action values'?

Mark


Re: [computer-go] Transpositions in UCT search

2008-10-27 Thread Erik van der Werf
Reinforcement Learning terminology :-)

In Go the state is the board situation (stones, player to move, ko
info, etc.), the action is simply the move. Together they form
state-action pairs.

A standard transposition table typically has only state values; action
values can then be inferred from a one-ply search. In a full graph
representation, the state-action values are the values of the edges.
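
As an illustration, a minimal Java sketch (invented names, not any
particular engine): with statistics on the edges, a child that has
already been sampled often through other paths does not distort UCT
selection at the new parent, because selection reads the edge's own
counts.

    // Sketch: statistics stored per edge (state-action pair) rather
    // than per state, so transposed children don't bias selection.
    static class Edge {
        int move;             // the action
        double wins, visits;  // results of playouts that took this edge
        State child;          // may be shared via the transposition table
    }

    static class State {
        Edge[] edges;
        double totalVisits;

        Edge selectUCT(double c) {
            Edge best = null;
            double bestVal = Double.NEGATIVE_INFINITY;
            for (Edge e : edges) {
                // First-play urgency for unvisited edges, then UCT.
                double mean = (e.visits > 0) ? e.wins / e.visits : 1.0;
                double val = mean
                    + c * Math.sqrt(Math.log(totalVisits + 1) / (e.visits + 1));
                if (val > bestVal) { bestVal = val; best = e; }
            }
            return best;
        }
    }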

Erik


On Mon, Oct 27, 2008 at 4:03 PM, Mark Boon [EMAIL PROTECTED] wrote:

 snippage



[computer-go] Another study based on the reference bot

2008-10-27 Thread Michael Williams
The following modification to AMAF seems to perform better and scale
better. The idea is to weight the moves at the beginning of the playout
more heavily than the moves at the end of the playout. It's probably not
a new idea.


This code from the reference implementation:
wins[mv] += sc;
hits[mv]++;

Becomes this:
double weight = 1.0 - (double)(i - savctm) / (ctm - savctm);
wins[mv] += weight * sc;
hits[mv] += weight;

If you are not familiar with the reference code, here are the meanings of the 
variables in the code above:
i is the loop variable, counting from savctm to ctm
mv iterates over each move in the playout
sc is 1 or -1, depending on the outcome of the playout
ctm is the move count at the end of the playout
savctm is the move count at the beginning of the playout
hits is the number of times a given move was played
wins is the number of times a given move resulted in a playout win

At    15 playouts per move, the modified version wins 54.0% of the time (±3.5%) after 200 games.
At    30 playouts per move, the modified version wins 54.0% of the time (±3.5%) after 200 games.
At    60 playouts per move, the modified version wins 54.5% of the time (±3.5%) after 200 games.
At   125 playouts per move, the modified version wins 53.0% of the time (±3.5%) after 200 games.
At   250 playouts per move, the modified version wins 54.0% of the time (±3.5%) after 200 games.
At   500 playouts per move, the modified version wins 55.5% of the time (±3.5%) after 200 games.
At  1000 playouts per move, the modified version wins 57.5% of the time (±3.5%) after 200 games.
At  2000 playouts per move, the modified version wins 63.0% of the time (±3.4%) after 200 games.
At  4000 playouts per move, the modified version wins 63.5% of the time (±3.4%) after 200 games.
At  8000 playouts per move, the modified version wins 63.5% of the time (±3.4%) after 200 games.
At 16000 playouts per move, the modified version wins 71.0% of the time (±3.2%) after 200 games.

Because of the weighting, it is probably safe to remove the code that
checks whether the move was previously played before awarding credit.
Doing so, and calculating the weight incrementally, yields this simple
and fast update loop after each playout:


        // Track win statistics using weighted AMAF - (All Moves As First)
        // ----------------------------------------------------------------
        double weight = 1.0;
        double weightDelta = 2.0 / (ctm - savctm + 1);
        for (int i = savctm; i < ctm; i += 2)
        {
            int mv = mvs[i] & MASK;   // decode the move from the move list

            wins[mv] += weight * sc;  // sc is +1 (win) or -1 (loss)
            hits[mv] += weight;
            weight -= weightDelta;    // linear decay toward the playout's end
        }



Re: [computer-go] Another study based on the reference bot

2008-10-27 Thread Mark Boon
Interesting. I had tried something much simpler. I added two wins
for the first move in the sequence, figuring that a move played first
in the sequence should have more weight than the rest. But to my
surprise that played much worse, winning only 37%. Maybe I made a
mistake and I should try again.
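
(One plausible reading of that variant, sketched with the reference
bot's variable names; the exact placement and weights are guesswork,
not the code Mark ran:)

    // Hypothetical sketch: double credit for the playout's first move,
    // uniform credit for the rest.
    for (int i = savctm; i < ctm; i += 2) {
        int mv = mvs[i] & MASK;
        int w = (i == savctm) ? 2 : 1;
        wins[mv] += w * sc;
        hits[mv] += w;
    }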


Mark


On 27-okt-08, at 15:01, Michael Williams wrote:

 snippage



Re: [computer-go] Another study based on the reference bot

2008-10-27 Thread Don Dailey
Great information.   I'll include a version of this in my scalability
study.   Is this the C version?

wins[] and hits[] are integer arrays and weight is a fraction less than
1.0, so I'm not sure how this works.  Did you change hits and wins to be
doubles?

There are many enhancements possible, but I'm not going to change the
definition; it was my intention to provide a very basic, bare-bones
reference implementation for others to build on.

- Don



On Mon, 2008-10-27 at 13:01 -0400, Michael Williams wrote:
 snippage



Re: [computer-go] Another study based on the reference bot

2008-10-27 Thread Michael Williams

Yes, they become doubles.
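
(Presumably the accumulator declarations change along these lines;
illustrative only, not the actual diff:)

    // Illustrative only: the AMAF accumulators become doubles so they
    // can hold fractional weights. NNN stands in for whatever array
    // size the reference bot uses.
    double[] wins = new double[NNN];   // was: int[] wins
    double[] hits = new double[NNN];   // was: int[] hits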

Don Dailey wrote:

 snippage




Re: [computer-go] small study

2008-10-27 Thread Weston Markham
On Sun, Oct 26, 2008 at 4:22 PM, Don Dailey [EMAIL PROTECTED] wrote:
 I imagine that it approaches some hypothetical
 level in an asymptotic way.

For most board positions, it is reasonable to expect that there exists
a single move for which the asymptotic Monte Carlo value is higher
than the rest.  Even when this is not the case, the different moves
with the same value are generally symmetrical, and the only possible
strategic advantage of one of these moves over another would be due to
superko.  So I expect that the asymptotic behavior is very nearly
deterministic.

In the past, I have had difficulties in evaluating the overall
strength of such players, because I would not get a good variety of
positions in the games played. If you continue with higher numbers of
playouts, you might want to ensure that you either include at least
one strong player that plays a variety of openings, or else play
games with a variety of starting positions.

Weston


Re: [computer-go] Another study based on the reference bot

2008-10-27 Thread Michael Williams

It's actually more bare-bones (in the sense that there is less code) if you 
consider the update loop that I showed.


Don Dailey wrote:

 snippage




[computer-go] MC programs vs. top commercial programs?

2008-10-27 Thread Ian Osgood
Now that Leela and Many Faces v12 are available for any Windows user
to purchase and run (and Fuego is free to tinker with), has anyone
tried them against the old guard of commercial programs? KCC Igo,
Haruka, Go++, and HandTalk haven't competed in a while, so it is hard
to tell how much better MC is than the previous state of the art.
(For that matter, it isn't a foregone conclusion that they are
better; GNU Go won the 2008 US computer go tournament against a field
of MC programs.)


Alternatively, I wonder whether Hiroshi Yamashita has tested his  
stronger AyaMC against his stable of commercial programs (as he  
previously did in 2007 using GNU Go).




Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-27 Thread Mark Boon
A post from Michael Williams led me to review this mail below once  
more. I hadn't looked at the code of Don's reference bot very closely  
until now and instead relied on the description he gave below:


On 23-okt-08, at 14:29, Don Dailey wrote:


Let me give you a simple example where we set the level to a measly 2
playouts:

So you play 2 uniformly random games that go like the following. This
is a nonsense game that probably has illegal moves, but I just made up
the moves for the sake of example:

Black to move:

c3 d4 g5 f6 b2 g7 c3 d4 g7 pass pass  - black wins
d4 b2 g7 c6 g5 a5 c3 d5 c7 pass pass  - black loses

In the first game black played:
c3, g5, b2, c3, d4
...  but we cannot count d4 because white played it first.
 and we can only count c3 once because black played it twice.

So our statistics so far for black are:

  c3 -  1 win  1 game
  g5 -  1 win  1 game
  b2 -  1 win  1 game

In the second game black played:

 d4, g7, g5, c3, c7  - and black played these moves first but lost.

If you combine the statistics you get:

   c3   1 win   2 games   0.50
   g5   1 win   2 games   0.50
   b2   1 win   1 game    1.00
   d4   0 wins  1 game    0.00
   g7   0 wins  1 game    0.00
   c7   0 wins  1 game    0.00


The highest scoring move is b2 so that is what is played.

It's called AMAF because we only care if black played the move, we  
don't

care WHEN he played it.   So AMAF cheats by viewing a single game as
several separate games to get many data points from 1 game instead of
just 1 data point.

Please note that we don't care what WHITE did, we only take statistics
on the moves black made (although we ignore any move that black didn't
play first.)   So if white plays e5 and later BLACK plays e5 (presumably
after a capture) then we cannot count e5 because white played it first.


I think if you study my example you will understand.



So I understand from the above that when a playout leads to a win you
add 1 to the wins. But in the code you subtract one when it leads to a
loss. So don't the actual result statistics in the example above become:

c3   1 win   2 games    0.00
g5   1 win   2 games    0.00
b2   1 win   1 game     1.00
d4   0 wins  1 game    -1.00
g7   0 wins  1 game    -1.00
c7   0 wins  1 game    -1.00

Maybe it doesn't matter and it leads to the same result. But I was
trying to make sense of what Michael wrote in light of what I coded
myself.


Mark




Re: [computer-go] MC programs vs. top commercial programs?

2008-10-27 Thread Gian-Carlo Pascutto
Ian Osgood wrote:

 (For that matter,
 it isn't a foregone conclusion that they are better; GNU Go won the 2008
 US computer go tournament against a field MC programs.)

Believe me, in a match long enough to exclude pure luck, with the MC
programs running on something faster than a Pentium 4, it IS a foregone
conclusion.

There will be an update of Leela to the version used in the Olympiad
soon. You can try it against GNU Go then.

Or look at the KGS ratings. They're against a wider range of opponents.

-- 
GCP


Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-27 Thread Michael Williams

Maybe Don built it that way so that the playouts could handle integer
komi and the possibility of a draw. In that case, it would neither add
one nor subtract one.


Mark Boon wrote:
 snippage





Re: [computer-go] MC programs vs. top commercial programs?

2008-10-27 Thread terry mcintyre
 From: Ian Osgood [EMAIL PROTECTED]

 Now that Leela and Many Faces v12 are available for any Windows user  
 to purchase 

Thanks for the heads-up, I must have missed the announcement.

Do either of these worthy programs work with Wine on Linux? 

I recently tried a development version of MGF12 on Wine, and the registration 
failed. 



  


Re: [computer-go] MC programs vs. top commercial programs?

2008-10-27 Thread Gian-Carlo Pascutto
terry mcintyre wrote:
 From: Ian Osgood [EMAIL PROTECTED]
 
 Now that Leela and Many Faces v12 are available for any Windows
 user to purchase
 
 Thanks for the heads-up, I must have missed the announcement.
 
 Do either of these worthy programs work with Wine on Linux?

You can try the free version of Leela to check. It will work with wine
1.1.6 or later, as far as I know.

-- 
GCP


Re: [computer-go] From zero to playing on CGOS in 10 minutes

2008-10-27 Thread Don Dailey
On Mon, 2008-10-27 at 17:19 -0200, Mark Boon wrote:
 So I understand from the above that when a playout leads to a win
 you  
 add 1 to the wins.
 But in the code you subtract one when it leads to a loss. 


This is just semantics. In the literal code a win is 1 and a loss is -1,
but when I actually report the statistics I fix this up to report a
score between 0 and 1.

I basically divide the final score by 2 and add 0.5.
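
(In code form, roughly; a trivial sketch, not the literal source:)

    // Map the AMAF mean from [-1, 1] (win = +1, loss = -1) to [0, 1].
    double mean = wins[mv] / hits[mv];
    double reported = mean / 2.0 + 0.5;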

- Don




Re: [computer-go] Another study based on the reference bot

2008-10-27 Thread Michael Williams

By your argument, it would seem to make sense to remove this check even if you 
don't use my decaying weight.

boolean ok = true;                    // ok to use this move?
// see if either side has used this move before
for (int j = savctm; j < i; j++) {
    if ( (mvs[j] & MASK) == mv ) {
        ok = false;
        break;
    }
}


Don Dailey wrote:

On Mon, 2008-10-27 at 14:48 -0400, Michael Williams wrote:

It's actually more bare-bones (in the sense that there is less code) if you 
consider the update loop that I showed.


The spirit of the recipe is to avoid any magic, any special cleverness,
anything that requires extra explanation, etc.   For instance one thing
I hate is the magic constant of 3X for the stopping rule - but some
things are absolutely necessary. 


I could have specified early cutoffs for AMAF because I have found it's
better not to go all the way to the end for this,  but again that is an
enhancement that goes beyond the most basic thing and would require
additional explanation.This idea of course supplants that, but it's
easy to imagine that someone else might come along and show a different
decay function that is more effective than yours and so on.


In fact, the number of simple enhancements could go on indefinitely.
AMAF would even seem to violate this basic principle of simplicity;
however, AMAF has become so ubiquitous that it seems right to make it
part of this.

So no, the idea isn't to make the strongest reference bot possible but
to provide a definition so simple and easy to explain that people will
be willing to implement it.I don't want to add several paragraphs to
explain each little tweak.

However, having said all of that,  I like the idea of tracking these
kinds of enhancements.   We might even have progressively stronger
versions that we could standardize and test against so that new bot
authors could progressively add features (such as this nice enhancement)
as they learn.

- Don





Don Dailey wrote:

 snippage

Re: [computer-go] MC programs vs. top commercial programs?

2008-10-27 Thread David Doshay
GNU Go won the tournament at the US Go Congress against several MC
programs, including Many Faces and Leela, but the Many Faces that
competed was not quite the newest. David Fotland was working on the
program while in Portland and only got the multi-core version (to use
both cores of a duo) working after the tournament.

By some stroke of luck for him, after the tournament GNU Go was not
turned off and sat waiting for more games against Many Faces. David
played his multi-threaded, multi-core version against GNU Go, and all
of those games were won by Many Faces. I do not recall the number of
games he played before he went home.


Cheers,
David



On 27, Oct 2008, at 12:05 PM, Ian Osgood wrote:

 snippage




Re: [computer-go] Another study based on the reference bot

2008-10-27 Thread Don Dailey
On Mon, 2008-10-27 at 16:08 -0400, Michael Williams wrote:
 By your argument, it would seem to make sense to remove this check
 even if you don't use my decaying weight.
 
 boolean ok = true;                    // ok to use this move?
 // see if either side has used this move before
 for (int j = savctm; j < i; j++) {
     if ( (mvs[j] & MASK) == mv ) {
         ok = false;
         break;
     }
 }


I strongly considered not doing this check.   However, in most
descriptions I've seen of how AMAF works, this check was considered
important.   

Also, if you don't do the check it's possible to get a move credited
SEVERAL times per game EVEN if your opponent plays it first!  Somehow
that just seems wrong. 

There are some decisions that HAVE to be made that could be considered
arbitrary no matter which direction you go.  For instance the eye rule
is arbitrary and is not the simplest possible eye rule.   

So I tried to keep it real simple, but still gave some consideration to
tradition, correctness and reasonableness.  Tradition dictated the eye
rule (it is the one almost everyone uses), AMAF,  and this check you
mention.  Correctness seemed to indicate this check also (although it's
also easy to argue that this is not an issue of correctness because AMAF
is not correct anyway.)

I especially tried to avoid any gratuitous enhancements.  It's very easy
to do little tweaks that make it play stronger but are blatantly
recognizable as improvements that go beyond the basic implementation.
This clearly fits in that category.   

I'm not going to keep modifying the spec every time someone finds a
little helpful change. I didn't intend for this to be a forum to
gradually evolve a stronger bot.   However, the idea of doing that is
appealing as a separate project, and I would like to track these types
of things when I build the web page.   We might even have them as
options to the program and give them names so that we know what we are
talking about. We could call this one the mw enhancement :-)

I'm testing your enhancement now, as we speak.   It's hard to draw any
conclusions after only 184 games, but I'm testing 4096 and it seems to
be almost as strong as the reference 8192.   I may later try testing
only the 2 against each other like you did.   I took out the above
check and implemented it exactly as you specified.

I may later test the reference bot with and without the check to see if
it really makes any difference. 


Rank Name              Elo    +    -  games  score  oppo.  draws
   1 refbot-008192    2661   25   24   1223    78%   2189     0%
   2 mwRefbot-004096  2655   70   66    184    80%   2024     0%
   3 refbot-004096    2631   25   24   1223    75%   2174     0%
   4 refbot-002048    2532   24   24   1219    65%   2182     0%
   5 refbot-001024    2410   25   25   1223    56%   2146     0%
   6 refbot-000512    2245   29   30   1214    51%   2015     0%
   7 refbot-000256    2003   36   37   1218    44%   1896     0%
   8 refbot-000128    1646   45   46   1214    45%   1632     0%
   9 refbot-000064    1301   45   45   1208    52%   1288     0%
  10 refbot-000032     974   44   45   1214    53%   1019     0%
  11 refbot-000016     619   37   35   1208    55%    774     0%
  12 refbot-000008     377   29   28   1202    48%    638     0%
  13 refbot-000004     248   26   25   1200    43%    533     0%
  14 refbot-000002     135   24   24   1199    35%    483     0%
  15 refbot-000000       0   25   25   1200    23%    479     0%
  16 refbot-000001      -1   25   26   1199    22%    462     0%

- Don




[computer-go] Re: OT: Harder than go?

2008-10-27 Thread Dave Dyer

I think the question is largely meaningless, because few games have
been studied by humans (or human computer programmers) with the depth
and intensity that has been achieved for games like chess and go.

In general, games with many choices and no obvious strategies
are good for people and bad for computers. Two examples that
fit this paradigm, and have been proposed as examples of hard
for computers, are Arimaa and Havannah.



Re: [computer-go] OT: Harder than go?

2008-10-27 Thread Jason House
On Tue, 2008-10-28 at 08:55 +0900, Darren Cook wrote:
 Where harder means the gap between top programs and top human players
 is bigger, are there any games harder than go? Including games of
 imperfect information, multi-player games, single-player puzzle games.
 
 Naturally I'm most interested in well-known games, but if there is an
 artificially created game where humans run rings around the computers I
 would also like to hear about it.
 
 Darren
 
 
 

I think Arimaa was designed to be tough for computers:

http://arimaa.com/arimaa/



Re: [computer-go] OT: Harder than go?

2008-10-27 Thread Darren Cook
 Where harder means the gap between top programs and top human players
 is bigger, are there any games harder than go? Including games of
 imperfect information, multi-player games, single-player puzzle games.
 
 Poetry contests?

I caught the smiley, but if you can define the rules (such that a
mechanical referee can decide the winner, as in all games from go to
poker to scrabble) then I guess the computer would be strong [1].

Thanks for the Arimaa and Havannah suggestions; I'd not heard of
Havannah before. I see both have bets by humans that they'll not be
beaten any time soon.

David, is MCTS likely to be useful for Arimaa?

Darren

[1]: Though http://en.wikipedia.org/wiki/Scrabble#Computer_players is
ambiguous about whether computer scrabble players are stronger than
human players or not.

-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)


RE: [computer-go] MC programs vs. top commercial programs?

2008-10-27 Thread David Fotland
At Portland, I ran 3 games against gnugo with the two-CPU version, and won
all three.  Version 12 as released is quite a bit stronger than the code I
was using in Portland.

Many Faces version 11 was competitive against the old guard, winning about
30% to 40% against handtalk for example.  Version 12 is about 5 stones
stronger on 19x19 than version 11.

David

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:computer-go-
 [EMAIL PROTECTED] On Behalf Of David Doshay
 Sent: Monday, October 27, 2008 3:56 PM
 To: computer-go
 Subject: Re: [computer-go] MC programs vs. top commercial programs?
 
  snippage



[computer-go] The 2nd Computer Go UEC Cup

2008-10-27 Thread TAKESHI ITO


*
 CALL FOR PARTICIPATION

 The 2nd Computer Go UEC Cup
   The University of Electro-Communications,
   Tokyo, Japan, 13-14 December 2008
   http://jsb.cs.uec.ac.jp/~igo/2008/eng/index.html
*

#Important Information
   Championship: December 13-14, 2008
   Entry Deadline: November 28, 2008

#Schedule
   December 13
 Preliminary Match 
   December 14
 Final Tournament 
 Exhibition Match 
 Explanation by professional Igo player 

#Guest Commentators
   Mr. Cheng Ming Huang (9 dan), Ms. Kaori Aoba (4 dan)

#Event
   Exhibition match: Ms. Kaori Aoba (4 dan) vs. the championship program
   (handicap game)

#Entry
   Cost: free
   Entry: accepting now
     http://jsb.cs.uec.ac.jp/~igo/2008/eng/mailform.html

#Venue
   The University of Electro-Communications
   Building W-9, 3F AV hall (campus map: 40)
   1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan.

#Past
   The 1st Computer Go UEC Cup
   http://jsb.cs.uec.ac.jp/~igo/2007/eng/index.html

#Host Organizer
   Cognitive Science and Entertainment (EC) Research Station, 
   The University of Electro-communications

#Support
   Computer Go Forum 

#Contact (Tournament committee)
   [EMAIL PROTECTED]


  -
   Takeshi Ito
  The University of Electro-Communications
  [EMAIL PROTECTED]


Re: [computer-go] OT: Harder than go?

2008-10-27 Thread Luke Gustafson
Computer Scrabble significantly exceeds humans. A basic Monte Carlo
search and an endgame solver are very effective. There is probably still
much strength to be gained (very little opponent modeling is done), but
it's already so strong that I don't think it's getting much attention.

Looks like there's about 700 Elo between the top Arimaa bot and human. I
suppose for go it is quite a bit more?

For single-player games, there are definitely many types of puzzles that
computers struggle with. Sokoban is a puzzle I saw a while ago that is
still difficult for computers. I don't think I'd say it's harder than
go, though.

http://www.grappa.univ-lille3.fr/icga/phpBB3/viewtopic.php?f=2t=35


- Original Message - 
From: Darren Cook [EMAIL PROTECTED]

To: computer-go@computer-go.org
Sent: Monday, October 27, 2008 8:54 PM
Subject: Re: [computer-go] OT: Harder than go?



snippage


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] OT: Harder than go?

2008-10-27 Thread dhillismail
Core Wars, robot soccer (there is a simulation league), Pictionary, ...

- Dave Hillis



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] The 2nd Computer Go UEC Cup

2008-10-27 Thread David Doshay
Will remote computing be allowed, or do we need to have our hardware on site?


Cheers,
David




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] The 2nd Computer Go UEC Cup

2008-10-27 Thread David Fotland
Do we have to show up in person, or can our programs be operated for us?

David Fotland


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] OT: Harder than go?

2008-10-27 Thread Michael Williams

I love Pictionary!  The computers will be drunk, right?


[EMAIL PROTECTED] wrote:

Core Wars, robot soccer (there is a simulation league), Pictionary, ...

- Dave Hillis

 


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


RE: [computer-go] MC programs vs. top commercial programs?

2008-10-27 Thread David Fotland
You can also try the free version of Many Faces of Go 12 at
www.smart-games.com.  Many Faces version 11 worked under Wine, so this
one should too.  If you try it, please let me know whether it works.
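
For anyone who would rather script such a test than click through a
GUI, here is a minimal sketch of driving a GTP engine under wine from
Python.  The binary name engine.exe is a placeholder, not the actual
Many Faces or Leela invocation; check each program's documentation for
how it exposes GTP:

import subprocess

# Launch a (hypothetical) Windows GTP engine under wine.  Substitute
# the real binary and whatever arguments its documentation gives.
proc = subprocess.Popen(["wine", "engine.exe"],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        text=True)

def gtp(command):
    """Send one GTP command and collect the reply, which the protocol
    terminates with an empty line."""
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline()
        if line.strip() == "":
            break
        lines.append(line.strip())
    return " ".join(lines)

print(gtp("name"))
print(gtp("version"))
gtp("quit")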

David

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:computer-go-
 [EMAIL PROTECTED] On Behalf Of Gian-Carlo Pascutto
 Sent: Monday, October 27, 2008 1:00 PM
 To: computer-go
 Subject: Re: [computer-go] MC programs vs. top commercial programs?
 
 terry mcintyre wrote:
  From: Ian Osgood [EMAIL PROTECTED]
 
  Now that Leela and Many Faces v12 are available for any Windows
  user to purchase
 
  Thanks for the heads-up, I must have missed the announcement.
 
  Do either of these worthy programs work with Wine on Linux?
 
 You can try the free version of Leela to check. It will work with wine
 1.1.6 or later, as far as I know.
 
 --
 GCP
 ___
 computer-go mailing list
 computer-go@computer-go.org
 http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/