On anecdotal evidence:
Many Faces of Go on medium time settings on KGS is 2k (accounts manyfaces and
manyfaces2).
Many Faces playing at around 10 sec/move is able to maintain a 1d rank.
So by reducing the opponent's thinking time, the bot gets a relative
advantage of about 3 stones.
Also in chess it is usually considered that
I can't recall any official challenges. I do remember some such statement in
some other challenge, but I failed to google it up.
Human-computer chess challenges are not likely to happen anymore. What would
be the point for the human? Hydra could probably beat anyone. And as
processors get faster, any of
2009/10/30 Olivier Teytaud olivier.teyt...@lri.fr:
I think in correspondence chess humans still hold against computers
Petri
Are games sometimes organized like that? This is really impressive to me.
Arno Nickel played three games with Hydra over a few months in 2005.
He won 2.5-0.5
Thanks a lot for this information.
This is really very interesting and not widely known.
Maybe chess is less closed than I would have believed :-)
Olivier
Arno Nickel played three games with Hydra over a few months in 2005.
He won 2.5-0.5
http://en.wikipedia.org/wiki/Arno_Nickel
I doubt
2009/10/30 terry mcintyre terrymcint...@yahoo.com:
This may be useful in computer Go. One of the reasons human pros do well is
that they compute certain sub-problems once, and don't repeat the effort
until something important changes. They know in an instant that certain
positions are live or
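The caching idea described above can be sketched in code. This is a hypothetical illustration, not taken from any actual engine: the `LifeDeathCache` class, `region_key` helper, and `solver` callback are all invented names; the point is just that an expensive local analysis is computed once and reused until the stones in that region change.

```python
# Hypothetical sketch: cache the result of a life-and-death analysis for a
# local region, keyed by the stones inside it, so the expensive work is done
# once and only redone when something in the region actually changes.
def region_key(board, region):
    """Canonical, hashable key for the contents of a region.
    `board` maps point -> color for occupied points."""
    return tuple(sorted((p, board[p]) for p in region if p in board))

class LifeDeathCache:
    def __init__(self, solver):
        self._solver = solver   # expensive analysis: (board, region) -> status
        self._cache = {}

    def status(self, board, region):
        key = (frozenset(region), region_key(board, region))
        if key not in self._cache:           # compute once ...
            self._cache[key] = self._solver(board, region)
        return self._cache[key]              # ... reuse until the region changes
```

A real engine would of course need a sound invalidation scheme; the key-by-contents trick above gets that for free at the cost of recomputing the key.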
I personally just use root parallelization in Pachi
I think this answers my question; each core in Pachi independently explores
a tree, and the master thread merges the data. This is even though you have
shared memory on your machine.
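For readers unfamiliar with the scheme, root parallelization as described here can be sketched as a toy in a few lines. This is not Pachi's actual code; the playout is stubbed out as a coin flip, and the function names are made up, but the structure (independent trees per worker, statistics merged only at the root) is the one being discussed.

```python
import random
from collections import defaultdict

def search_independent_tree(root_moves, n_playouts, seed):
    """One worker: run an independent search from the root (the playout is
    stubbed as a random win/loss here) and return per-move [visits, wins]."""
    rng = random.Random(seed)
    stats = {m: [0, 0] for m in root_moves}
    for _ in range(n_playouts):
        m = rng.choice(root_moves)          # stand-in for real tree descent
        stats[m][0] += 1
        stats[m][1] += rng.random() < 0.5   # stand-in for a real playout
    return stats

def root_parallel_search(root_moves, n_workers=4, n_playouts=1000):
    """Master: merge the root statistics of all workers, then pick the
    most-visited move."""
    merged = defaultdict(lambda: [0, 0])
    for w in range(n_workers):              # each of these would run on its own core
        for m, (v, wins) in search_independent_tree(root_moves, n_playouts, w).items():
            merged[m][0] += v
            merged[m][1] += wins
    return max(merged, key=lambda m: merged[m][0])
```

The appeal of the scheme is exactly what the post says: the workers never touch each other's trees, so no locking is needed even on a shared-memory machine.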
Have you read the Parallel Monte-Carlo Tree Search paper?
While re-reading the parallelization papers, I tried to formulate why I
thought that they couldn't be right. The issue with Mango reporting a
super-linear speed-up was an obvious red flag, but that doesn't mean that
their conclusions were wrong. It just means that Mango's exploration policy
needs
On Fri, Oct 30, 2009 at 07:53:15AM -0600, Brian Sheppard wrote:
I personally just use root parallelization in Pachi
I think this answers my question; each core in Pachi independently explores
a tree, and the master thread merges the data. This is even though you have
shared memory on your
I share all uct-nodes with more than N visits, where N is currently 100, but
performance doesn't seem very sensitive to N.
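The "share nodes with more than N visits" policy above can be sketched as follows. This is a toy illustration, not MoGo's actual MPI code: the tree representation and function names are assumptions, but the filtering-by-threshold and additive-merge logic is what the post describes.

```python
# Toy sketch of threshold-based node sharing between processes.
SHARE_THRESHOLD = 100   # the "N" from the post; reportedly not very sensitive

def nodes_to_share(tree, threshold=SHARE_THRESHOLD):
    """Return (node_id, visits, wins) for nodes worth broadcasting."""
    return [(nid, n['visits'], n['wins'])
            for nid, n in tree.items() if n['visits'] >= threshold]

def merge_remote(tree, remote_stats):
    """Add statistics received from another process into the local tree."""
    for nid, visits, wins in remote_stats:
        node = tree.setdefault(nid, {'visits': 0, 'wins': 0})
        node['visits'] += visits
        node['wins'] += wins
```

The threshold keeps message sizes bounded: the vast majority of nodes have only a handful of visits and contribute almost nothing to the merged statistics.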
Does Mogo share RAVE values as well over MPI?
I agree that low scaling is a problem, and I don't understand why.
It might be the MFGO bias. With low numbers of playouts
Welcome Aldric,
Not a frequent poster myself, here are two resources that you may find
useful.
1) An extensive library of articles related to computer-go is collected at
http://www.citeulike.org/group/5884/library
This list provides a wealth of articles tracing back many years and used to
be
Back of envelope calculation: MFG processes 5K nodes/sec/core * 4 cores per
process = 20K nodes/sec/process. Four processes makes 80K nodes/sec. If you
think for 30 seconds (pondering + move time) then you are at 2.4 million
nodes. Figure about 25,000 nodes having 100 visits or more. UCT data is
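The arithmetic in that back-of-envelope calculation checks out:

```python
# Verifying the back-of-envelope numbers quoted above.
nodes_per_sec_per_core = 5_000
cores_per_process = 4
processes = 4
think_seconds = 30            # pondering + move time

per_process = nodes_per_sec_per_core * cores_per_process   # 20,000 nodes/sec
total_rate = per_process * processes                       # 80,000 nodes/sec
total_nodes = total_rate * think_seconds                   # 2,400,000 nodes
```

(The final "about 25,000 nodes with 100+ visits" figure is an empirical estimate from the post, roughly 1% of the total, not something the arithmetic alone gives you.)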
On Fri, Oct 30, 2009 at 10:50:05AM -0600, Brian Sheppard wrote:
Parallelization *cannot* provide super-linear speed-up.
I don't see that at all.
This is standard computer science stuff, true of all parallel programs and
not just Go players. No parallel program can be better than N times a
In the MPI runs we use an 8-core node, so the playouts per node are higher.
I don't ponder, since the program isn't scaling anyway.
The number of nodes with high visits is smaller, and I only send nodes that
changed since the last send.
I do progressive unpruning, so most children have zero
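The "only send nodes that changed since the last send" optimization mentioned above can be sketched like this. Again a hypothetical illustration with invented names, not the program's real code: the idea is to transmit only the *increment* in each node's statistics since the previous exchange.

```python
# Hypothetical sketch: between exchanges, send only the increments for
# nodes whose statistics changed since the last send.
def delta_since_last_send(tree, last_sent):
    """Return (node_id, d_visits, d_wins) for changed nodes, and update
    `last_sent` so the next call sends only what is new after this one."""
    deltas = []
    for nid, n in tree.items():
        prev = last_sent.get(nid, (0, 0))
        if (n['visits'], n['wins']) != prev:
            deltas.append((nid, n['visits'] - prev[0], n['wins'] - prev[1]))
            last_sent[nid] = (n['visits'], n['wins'])
    return deltas
```

Sending deltas rather than absolute counts also makes the additive merge on the receiving side idempotent-free but simple: each increment is applied exactly once.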