On Feb 16, 2009, at 5:45 PM, Andy wrote:
See attached a copy of the .sgf. It was played privately on KGS so you
can't get it there directly. One of the admins cloned it and I saved
it off locally.
I changed the result to be B+4.5 instead of W+2.5.
Here is another copy of the game record,
At the moment I (and another member of my group) are doing research on
applying machine learning to constructing a static evaluator for Go
positions (generally by predicting the final ownership of each point
on the board and then using this to estimate a probability of
winning). We are looking
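A minimal sketch of the ownership-to-winrate idea described above. Everything here is an assumption for illustration (the function name, the logistic squash, and its temperature), not the group's actual method:

```python
import math

def win_probability(ownership, komi=7.5, scale=10.0):
    """Estimate Black's win probability from per-point ownership predictions.

    ownership: iterable of probabilities (one per board point) that the
    point ends up owned by Black. The expected margin for Black is the
    sum of (p_black - p_white) = (2p - 1) over all points, minus komi.
    A logistic squash with an assumed temperature `scale` turns the
    margin into a win probability.
    """
    expected_margin = sum(2.0 * p - 1.0 for p in ownership) - komi
    return 1.0 / (1.0 + math.exp(-expected_margin / scale))
```

The squash makes lopsided predicted margins saturate toward 0 or 1, while positions near komi stay close to even.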
I'd be more than happy to work with you and the other members of your
group. I'm getting close to wrapping up a restructuring of my bot that
allows easily swapping out evaluation methods and search techniques.
As an example, here's the code that does a few basic MC searches:
static if
While your goal is laudable, I'm afraid there is no such thing
as a simple tree search with a plug-in evaluator for Go. The
problem is that the move generator has to be very disciplined,
and the evaluator typically requires elaborate and expensive-to-
maintain data structures. It all tends to be
I am aware such a decoupled program might not exist, but I don't see
why one can't be created. When you say the move generator has to be
very disciplined what do you mean? Do you mean that the evaluator
might be used during move ordering somehow and that generating the
nodes to expand is tightly
Do you mean that the evaluator might be used during move ordering somehow
and that generating the nodes to expand is tightly coupled with the static
evaluator?
That's the general idea.
No search program can afford to use a fan-out factor of 361. The information
about what to cut has to come
A simple alpha-beta searcher will only get a few plies deep on 19x19, so it won't
be very useful (unless your static evaluation function is so good that it
doesn't really need an alpha-beta searcher).
Dave
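The kind of searcher being debated here, a fixed-depth alpha-beta with a plug-in static evaluator, can be sketched as follows. The state interface (`legal_moves`, `play`, `is_terminal`) is hypothetical, not any particular bot's API:

```python
def alphabeta(state, depth, evaluate, alpha=float("-inf"), beta=float("inf")):
    """Fixed-depth negamax alpha-beta with a plug-in static evaluator.

    `state` is assumed to expose legal_moves(), play(move) -> new state,
    and is_terminal(); `evaluate(state)` scores the position from the
    viewpoint of the player to move. All names are illustrative.
    """
    if depth == 0 or state.is_terminal():
        return evaluate(state)
    best = float("-inf")
    for move in state.legal_moves():
        score = -alphabeta(state.play(move), depth - 1, evaluate, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # cutoff: the opponent will never allow this line
    return best
```

With a 361-way branching factor and no pruning beyond the alpha-beta cutoffs, this is exactly the searcher that only reaches a few plies.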
From: computer-go-boun...@computer-go.org on behalf of George
You're right of course. We have a (relatively fast) move pruning
algorithm that can order moves such that about 95% of the time, when
looking at pro games, the pro move will be in the first 50 in the
ordering. About 70% of the time the expert move will be in the top
10. So a few simple tricks
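The top-50/top-10 measurement George describes could be computed with something like the sketch below. The position representation and the `score_move` heuristic are placeholders, not the group's actual pruner:

```python
def top_k_accuracy(positions, expert_moves, score_move, k=10):
    """Fraction of positions where the expert's move appears in the
    first k moves of the ordering induced by `score_move`.

    `positions` and `expert_moves` are parallel lists; `score_move(pos,
    move)` is the (hypothetical) ordering heuristic, higher = earlier.
    Each position is assumed to carry its legal moves under pos["legal"].
    """
    hits = 0
    for pos, expert in zip(positions, expert_moves):
        ordered = sorted(pos["legal"], key=lambda m: score_move(pos, m),
                         reverse=True)
        if expert in ordered[:k]:
            hits += 1
    return hits / len(positions)
```

Running this over a pro-game collection at k=10 and k=50 gives exactly the two numbers quoted above.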
This is old and incomplete, but still is a starting point you might
find useful http://www.andromeda.com/people/ddyer/go/global-eval.html
General observations (from a weak player's point of view):
Go is played on a knife edge between life and death. The only evaluator
that matters is
On Tue, 2009-02-17 at 20:04 +0100, dave.de...@planet.nl wrote:
A simple alpha-beta searcher will only get a few plies deep on 19x19, so
it won't be very useful (unless your static evaluation function is so
good that it doesn't really need an alpha-beta searcher)
I have to say that I believe this
I really don't like the idea of ranking moves and scoring based on the
distance to the top of a list for a pro move. This is worthless if we
ever want to surpass humans (although this isn't a concern now, it is
in principle) and we have no reason to believe a move isn't strong
just because a pro
George Dahl wrote:
I guess another question is, what would you need to see a static
evaluator do to be so convinced it was useful that you then built a
bot around it? Would it need to win games all by itself with one ply
lookahead?
Here is one way to look at it: Since a search tends to
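The one-ply-lookahead test George proposes amounts to greedily picking whichever child position the evaluator likes best. A sketch under an assumed state interface (`legal_moves`, `play` are illustrative names):

```python
def best_move_one_ply(state, evaluate):
    """Play by pure one-ply lookahead over a static evaluator: the
    'win games with one-ply lookahead' test. `evaluate(child)` is
    assumed to return the value of the resulting position from the
    viewpoint of the player who just moved, so we maximize it directly.
    """
    return max(state.legal_moves(), key=lambda m: evaluate(state.play(m)))
```

A bot built from nothing but this loop isolates the evaluator's strength from any search effects.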
On Tue, Feb 17, 2009 at 8:23 PM, George Dahl george.d...@gmail.com wrote:
It is very hard for me to figure out how good a given evaluator is (if
anyone has suggestions for this please let me know) without seeing it
incorporated into a bot and looking at the bot's performance. There
is a
Michael Williams wrote:
As for the source of applicable positions, that's a bit harder, IMO. My
first thought was to use random positions since you don't want any bias,
but that will probably result in the evaluation of the position being
very near 0.5 much of the time. But I would still try
I agree with you, but I wouldn't qualify MC evaluation with MCTS as a static
evaluation function on top of a pure alpha-beta search to a fixed depth (I have
the impression that this is what George Dahl is talking about).
Dave
Dave Dyer wrote:
If you look at GnuGo or some other available program, I'm pretty sure
you'll find a line of code where the evaluator is called, and you could
replace it, but you'll find it's connected to a pile of spaghetti.
That would have to be some other available program. GNU Go doesn't
I've been looking into CGT lately and I stumbled on some articles about
approximating strategies for determining the sum of subgames (Thermostrat,
MixedStrat, HotStrat etc.)
It is not clear to me why approximating strategies are needed. What is the
problem? Is Ko the problem? Is an exact
On Feb 17, 2009, at 12:55 PM, Dave Dyer dd...@real-me.net wrote:
While your goal is laudable, I'm afraid there is no such thing
as a simple tree search with a plug-in evaluator for Go. The
problem is that the move generator has to be very disciplined,
and the evaluator typically requires
From: Jason House jason.james.ho...@gmail.com
On Feb 17, 2009, at 4:39 PM, dave.de...@planet.nl wrote:
I've been looking into CGT lately and I stumbled on some articles about
approximating strategies for determining the sum of subgames (Thermostrat,
I think it would be much more informative to compare evaluator A and
evaluator B in the following way.
Make a bot that searched to a fixed depth d before then calling a
static evaluator (maybe this depth is 1 or 2 or something small). Try
and determine the strength of a bot using A and a bot
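The fixed-depth-bot comparison described above boils down to match play plus a win-rate estimate. A sketch, where `play_game` is a hypothetical callback that runs one A-vs-B game (colors alternated to cancel first-move advantage) and reports whether A won:

```python
import math

def compare_evaluators(play_game, n_games=100):
    """Estimate the strength gap between two evaluators by match play.

    `play_game(swap_colors)` plays one game between bot A (using
    evaluator A) and bot B, swapping colors on alternate games, and
    returns True if A wins. Returns A's win rate and the half-width of
    a normal-approximation 95% confidence interval.
    """
    wins = sum(bool(play_game(swap_colors=(i % 2 == 1)))
               for i in range(n_games))
    p = wins / n_games
    half_width = 1.96 * math.sqrt(p * (1.0 - p) / n_games)
    return p, half_width
```

With 100 games the interval is roughly +/-10 percentage points at p = 0.5, so small evaluator differences need many more games to resolve.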
Really? You think that doing 20-50 uniform random playouts and
estimating the win probability, when used as a leaf node evaluator in
tree search, will outperform anything else that uses the same amount of
time? I must not understand you. What do you mean by static
evaluator? When I use the term, I
Really? You think that doing 20-50 uniform random playouts and
estimating the win probability, when used as a leaf node evaluator in
tree search, will outperform anything else that uses the same amount of
time?
Same amount of clock time for the whole game. E.g. if playing 20 random
playouts to
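The leaf evaluator under discussion, a handful of uniform random playouts scored by win fraction, looks roughly like this sketch. The state interface (`to_move`, `legal_moves`, `play`, `is_terminal`, `winner`) is assumed; a real Go bot would also handle passes and superko:

```python
import random

def mc_evaluate(state, n_playouts=20, rng=random):
    """Monte-Carlo leaf evaluation: play `n_playouts` uniformly random
    games to the end and return the fraction won by the player to move
    at `state`. Interface names are illustrative.
    """
    me = state.to_move
    wins = 0
    for _ in range(n_playouts):
        s = state
        while not s.is_terminal():
            s = s.play(rng.choice(s.legal_moves()))
        wins += (s.winner() == me)
    return wins / n_playouts
```

At 20-50 playouts the estimate is noisy, but it is cheap enough to call at every leaf of a tree search.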
On Tue, Feb 17, 2009 at 8:35 PM, George Dahl george.d...@gmail.com wrote:
Really? You think that doing 20-50 uniform random playouts and
estimating the win probability, when used as a leaf node evaluator in
tree search, will outperform anything else that uses the same amount of
time?
You'll
From: dhillism...@netscape.net
Perhaps the biggest problem came from an unexpected quarter. MC playouts are
very fast and neural nets are a bit slow. (I am talking about the forward
pass, not the off-line training.) In the short time
GPUs can speed up many types of neural networks by over a factor of 30.
- George
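That kind of speedup depends on batching: one forward pass over many leaf positions per call, rather than one net call per leaf. A toy pure-Python forward pass showing the batched shape (layer sizes and weights are purely illustrative; a real bot would hand this shape to GPU code):

```python
import math

def forward_batch(batch, w1, b1, w2, b2):
    """Evaluate a batch of feature vectors with a tiny one-hidden-layer
    net (tanh hidden units, sigmoid output). Processing the whole batch
    in one call is the access pattern that lets a GPU amortize its
    per-call overhead across many positions.
    """
    out = []
    for x in batch:
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        z = sum(wi * hi for wi, hi in zip(w2, h)) + b2
        out.append(1.0 / (1.0 + math.exp(-z)))
    return out
```

The inner loops here would be a pair of matrix multiplies on a GPU; the point of the sketch is only the batch-in, batch-out interface.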
On Tue, Feb 17, 2009 at 8:35 PM, terry mcintyre terrymcint...@yahoo.com wrote:
From: dhillism...@netscape.net
Perhaps the biggest problem came from an
On Mon, Feb 16, 2009 at 7:45 PM, Andy andy.olsen...@gmail.com wrote:
See attached a copy of the .sgf. It was played privately on KGS so you
can't get it there directly. One of the admins cloned it and I saved
it off locally.
I changed the result to be B+4.5 instead of W+2.5.
I forgot to make
I think you mean Many Faces of Go, not Crazystone.
David
-Original Message-
From: computer-go-boun...@computer-go.org [mailto:computer-go-
boun...@computer-go.org] On Behalf Of Andy
Sent: Tuesday, February 17, 2009 10:08 PM
To: computer-go
Subject: Re: [computer-go] Congratulations
It is very clear that nonuniform random playouts are a far better evaluator
than any reasonable static evaluation, given the same amount of time. Many
people (including myself) spent decades creating static evaluations, using
many techniques, and the best ones ended up with similar strength
Many Faces of Go has a static position evaluator, but it's not spaghetti :)
It makes many passes over the board building up higher level features from
lower level ones, and it does local lookahead as part of feature evaluation,
so it has a lot of code, and is fairly slow.
David
It's not true that MCTS only goes a few ply. In 19x19 games on 32 CPU
cores, searching about 3 million playouts per move, Many Faces of Go
typically goes over 15 ply in the PV in the UCT tree.
I agree that it is much easier to reliably prune bad moves in go than it is
in chess.
Many Faces (pre
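The reason a UCT tree reaches such depths despite the 361-move fan-out is the selection rule: the search repeatedly descends by a score that balances win rate against an exploration bonus, so promising lines get revisited and extended. A sketch of UCB1 selection over an assumed node representation (`children` with `wins`/`visits` fields):

```python
import math

def ucb1_child(node, c=1.4):
    """Select the child maximizing the UCB1 score used in UCT:
    exploitation (win rate) plus an exploration term that shrinks as a
    child accumulates visits. The node/child fields are illustrative,
    as is the exploration constant `c`.
    """
    log_n = math.log(sum(ch.visits for ch in node.children))
    def score(ch):
        if ch.visits == 0:
            return float("inf")  # try every child at least once
        return ch.wins / ch.visits + c * math.sqrt(log_n / ch.visits)
    return max(node.children, key=score)
```

Because most playouts flow down the same few strong children, the principal variation deepens far faster than a uniform search of the same budget could.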
One way to figure out how good your static evaluator is, is to have it do a
one ply search, evaluate, and display the top 20 or so evaluations on a go
board. Ask a strong player to go through a pro game, showing your
evaluations at each move. He can tell you pretty quickly how bad your
evaluator
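The eyeball test described here is a one-ply sweep: evaluate the position after every legal move and show the best few to a strong player. A sketch with an assumed state interface (`legal_moves`, `play` are illustrative names):

```python
def top_evaluations(state, evaluate, k=20):
    """One-ply sweep for a human sanity check of a static evaluator:
    score the position after each legal move and return the k best as
    (move, value) pairs, ready to overlay on a board diagram.
    """
    scored = [(move, evaluate(state.play(move))) for move in state.legal_moves()]
    scored.sort(key=lambda mv: mv[1], reverse=True)
    return scored[:k]
```

Stepping through a pro game and displaying these 20 marks per position gives a strong player enough to judge the evaluator quickly.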
Many Faces uses information from the static evaluator to order and prune
moves during move generation. For example if the evaluation finds a big
unsettled group, the move generator will favor eye making or escaping moves
for the big group.
David