David Fotland wrote:
Many Faces plays L10, which looks like it also breaks both ladders.
-David
Thanks for testing.
What if Black replies with K9? It looks like K9 restores both ladders
(to my naive eye).
What about the first position I posted, where more tempting moves are
available?
alain Baeckeroot wrote:
Le mercredi 22 novembre 2006 20:44, Rémi Coulom a écrit :
Hi,
Hi Rémi
I am in search of Go positions that are easy to understand for humans,
and difficult for computers.
One incredibly simple example for human, where GNU Go horribly fails.
The only
Don Dailey wrote:
Hi Steve,
What you fail to take into consideration is that a monte/carlo
player may ruin its chances before the weaker player has a
chance to play a bad move. The monte carlo player sees all
moves as losing and will play almost randomly.
I don't agree. Here is the
Don Dailey wrote:
I'll take a final poll - speak now or forever hold your peace!
Should we:
1. Give white N-1 stones at end of game. (where N = handicap)
2. Give white N stones at end of game. (N = handicap)
3. Give white N stones except handicap 1 case.
4. Not worry about giving
Nick Wedd wrote:
Some results of Computer Go (and other computer games) events have
long been available on the old ICGA web site at
http://www.cs.unimaas.nl/icga/ . There is now a new ICGA web site at
http://www.grappa.univ-lille3.fr/icga/ with fuller information on
these events, including
Chrilly wrote:
I think on 9x9 the superiority of search-based programs is now
clearly demonstrated. It's only a question of whether UCT or Alpha-Beta is
superior.
Hi Chrilly,
Thanks for your report.
The question of UCT versus Alpha-Beta is not open any more in my
opinion. The current state of the
Chrilly wrote:
The main point of my mail was: Search works (at least in 9x9) well. I
think we can agree on this point.
Yes.
For the UCT v. Alpha-Beta question there is a simple proof of the
pudding: send us the latest/strongest version and we will try to beat it.
I do not plan to
Don Dailey wrote:
On Thu, 2007-04-12 at 11:18 -0400, Jason House wrote:
Not having byo yomi because it's tough to code isn't really a good
argument. If we want (non-computer-go) people to take the results
seriously, the game timing should be the same as what people naturally
do. I
[EMAIL PROTECTED] wrote:
I also find this kind of information very interesting and useful. Now
I have a better feel for what kind of scaling is realistic to try for
and how to measure it.
Putting some recent data points together, it looks like giving Mogo 2
orders of magnitude more computer
[EMAIL PROTECTED] wrote:
-Original Message-
From: [EMAIL PROTECTED]
To: computer-go@computer-go.org
Sent: Mon, 16 Apr 2007 5:26 AM
Subject: Re: [computer-go] The dominance of search (Suzie v. GnuGo)
[EMAIL PROTECTED] wrote:
Hi,
I first thought I would keep my ideas secret until the Amsterdam
tournament, but now that I have submitted my paper, I cannot wait to
share it. So, here it is:
http://remi.coulom.free.fr/Amsterdam2007/
Comments and questions are very welcome.
Rémi
Álvaro Begué wrote:
There are many things in the paper that we had never thought of, like
considering the distance to the penultimate move.
That feature improved the effectiveness of progressive widening a lot.
When I had only the distance to the previous move, and the opponent made
a
Chris Fant wrote:
I first thought I would keep my ideas secret until the Amsterdam
tournament, but now that I have submitted my paper, I cannot wait to
share it. So, here it is:
http://remi.coulom.free.fr/Amsterdam2007/
Comments and questions are very welcome.
I'd like to propose a potential
David Silver wrote:
Very interesting paper!
I have one question. The assumption in your paper is that increasing
the performance of the simulation player will increase the performance
of Monte-Carlo methods that use that simulation player. However, we
found in MoGo that this is not
Yamato wrote:
Rémi,
May I ask you some more questions?
(1) You define Dj as Dj=Mij*ci+Bij. Is it not Aij but Bij?
What does this mean?
Yes, it is! Thanks for pointing that mistake out.
(2) You have relatively few shape patterns. How large is each
pattern? 5x5, 7x7, or more?
I
Hi,
I have just updated my web page with the final version of my paper:
http://remi.coulom.free.fr/Amsterdam2007/
I have tried to improve it based on all your comments and questions, and
those of the workshop reviewer. I thank you all very much for your
interesting remarks.
I have not
Question for native English speakers: do you think this technique is
best described by “progressive unpruning” or “progressive widening”?
I used this term in reference to Tristan Cazenave's iterative widening
and generalized widening (I should have cited him). See:
Łukasz Lew wrote:
I'm not sure whether you have noticed, but my student made an
empirical comparison
between BAST, UCT and other formulas.
It can be found here:
http://students.mimuw.edu.pl/~fg219435/Go/
Best Regards,
Lukasz Lew
Hi Łukasz,
You write that EGO_BAST seems to be a bit more
Darren Cook wrote:
Does anyone know of UCT being used in games other than go, or outside
games altogether, such as travelling salesman problem, or some
business-related scheduling/optimizing/searching problem domain?
Thanks,
Darren
Guillaume has one paper titled Monte-Carlo Tree Search in
Álvaro Begué wrote:
Actually, John had a better idea to do this. In two words: binary
tree. The root represents the whole board, and it contains the sum of
the probabilities of all the points (you don't need to force this to
be 1, if you use non-normalized probabilities). This node points to
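The binary-tree idea Álvaro describes can be sketched as a sum tree: each internal node stores the total weight of its subtree, so sampling a point in proportion to its unnormalized probability is a single root-to-leaf descent in O(log n). A minimal sketch (the class name and array layout are mine, not from the post):

```python
import random

class SumTree:
    """Binary tree over per-point weights; the root stores the total.
    Sampling descends left/right comparing a random value against the
    left child's sum, giving each leaf probability weight/total."""

    def __init__(self, weights):
        self.n = len(weights)
        self.tree = [0.0] * (2 * self.n)
        for i, w in enumerate(weights):
            self.tree[self.n + i] = w
        # Build internal nodes bottom-up: parent = sum of children.
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, i, w):
        """Change one weight and fix the sums on the path to the root."""
        i += self.n
        self.tree[i] = w
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def sample(self):
        """Return a leaf index with probability proportional to its weight."""
        r = random.uniform(0.0, self.tree[1])
        i = 1
        while i < self.n:
            if r <= self.tree[2 * i]:
                i = 2 * i
            else:
                r -= self.tree[2 * i]
                i = 2 * i + 1
        return i - self.n
```

Updating one point's probability is also O(log n), which matters when a move changes the weights of nearby points.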
Jason House wrote:
On 6/6/07, Rémi Coulom [EMAIL PROTECTED] wrote:
I wonder if other people had thought about this before...
Álvaro.
Yes, I did it in the beginning. But I found that it is faster to divide
by more than two. Currently, I
Darren Cook wrote:
I've been messing around with where to apply heuristics. Candidates
include:
1) In the UCT tree, since this is earliest in each playout.
2) In the moves beyond the tree (heavy playouts), since this is where
most of the moves happen. Because this part is so speed-critical, ...
Peter Drake wrote:
On Jun 6, 2007, at 2:41 PM, Rémi Coulom wrote:
Also, if you have a clever probability distribution, the range of
values for each move will be very large. For instance, here are two
3x3 shapes used by Crazy Stone (# to move):
O O #
# . .
# O #
Gamma = 143473
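If the gamma values are read as Bradley-Terry strengths, a candidate move's selection probability is its gamma divided by the sum over all candidates. A small illustration (the 143473 figure is the one quoted above; the other two gammas are invented purely to show the huge dynamic range between shapes):

```python
def move_probabilities(gammas):
    """Bradley-Terry-style selection: move i is chosen with probability
    gamma_i / sum_j gamma_j over the candidate moves."""
    total = sum(gammas.values())
    return {move: g / total for move, g in gammas.items()}

# Illustrative values only: "strong" uses the gamma quoted above,
# "medium" and "weak" are made up for the example.
probs = move_probabilities({"strong": 143473.0, "medium": 30.0, "weak": 1.2})
```

With ratios this large, the strongest pattern dominates the distribution almost completely.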
Jacques Basaldúa wrote:
Rémi, are your values the result of learning in masters games?
Yes. I took data from a learning experiment based on very few games. So
there may be a little overfitting. Still, the ratio between the
strongest and the weakest patterns is always very big.
I'll run
Hi,
Their paper is available online:
http://www.machinelearning.org/proceedings/icml2007/papers/387.pdf
I thank Lukasz for letting me know.
Rémi
___
computer-go mailing list
computer-go@computer-go.org
Jason House wrote:
Darren Cook wrote:
Hi Jason,
In UCT the monte carlo searches (I find it clearer to call them the
playouts) are always run to the end of the game. So they always
accurately (well, as accurately as a random playout can be!) take sente
into account. Therefore my understanding is
Joshua Shriver wrote:
Anyone have a list and URL's for all of the open source and/or free
engines?
http://en.wikipedia.org/wiki/List_of_free_Go_programs
Rémi
Hi,
Here are some interviews from the Computer Olympiad in Amsterdam:
http://www.youtube.com/results?search_query=computer+olympiade+amsterdam&search=
Maybe they have been posted here before, but I did not notice. Sorry if
that is the case.
Rémi
Sil wrote:
How about http://home.wwgo.jp/jp/minigo/
It seems that only 24 games are available. Is the whole collection
available somewhere?
Rémi
Don Dailey wrote:
On Mon, 2007-07-09 at 10:10 -0700, terry mcintyre wrote:
I concur with Christian Nilsson; handicap stones permit the win-loss
ratio to approximate 50%, where it is more sensitive to improvements.
As one tweaks the program, the progress would be measurable within a
few
Ian Osgood wrote:
From what I can tell, there has not been a clash of the Go titans
since the 2003 Gifu Challenge, which had all of KCC Igo, Haruka, Go++,
Goemate/Handtalk, Many Faces, GNU Go, and Go Intellect participating.
(This was the last public competition for many of these programs.) It
Nick Wedd wrote:
According to the game records from the recent ICGA events in
Amsterdam, the 19x19 events used Japanese rules with 6.5 komi, and the
9x9 games used Chinese rules, but with 6.5 komi. So I suspect not.
All games were played with Chinese rules, with a komi of 6.5. Those who
Andrés Domínguez wrote:
Hello everyone!
After two years programming a complex go engine without success
I have started a new one (kakegoto). I use an innovative approach:
the program plays many random games, and then plays the move
with the highest winning probability.
Interesting idea.
I want
chrilly wrote:
I have a Phd in statistics. But Bayesian methods were at that time a
non-topic. I know the general principles, but I want to learn a little
bit more about the latest developments in the field. Bayes is now chic,
there are many books about it. I assume also a lot of bad ones.
Can
Martin Møller Skarbiniks Pedersen wrote:
http://www.gggo.jp/ggmc-v1.3.tar.gz (~200kB)
DNS problems ?
Resolving www.gggo.jp... failed: Temporary failure in name resolution.
Hideki Kato wrote:
Rémi Coulom: [EMAIL PROTECTED]:
Hideki Kato wrote:
http://www.gggo.jp/ggmc-v1.3.tar.gz (~200kB)
Hideki (gg)
--
[EMAIL PROTECTED] (Kato)
Erik van der Werf wrote:
On 8/10/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
I organized the side event Predict Professional Moves at
European Go Congress 2007. I made a brief and shallow
summary (without any analysis) of the results and decided
to post it here---in case someone is actually
Erik van der Werf wrote:
It might be interesting to see how Rémi Coulom's move predictor does
on these positions.
Erik
You can now download it as a GTP engine from the page of the paper:
http://remi.coulom.free.fr/Amsterdam2007/
Rémi
Jason House wrote:
I believe the original paper used 32 simulations from each point as
part of its pattern... But the spirit of it (as I understand it)
really is to bias the 1-ply move selection at the start of MC
searches. (In Crazy Stone, just 3x3 patterns are used at deeper ply?)
elife
Jason House wrote:
Yeah. An eye point is defined as an empty point where all four
neighbors are the same chain. This prevents weak combos of false
eyes, but does allow it to miss one kind of life.
Do you mean that your program would fill black eyes there:
#.#O.
.##OO
##OO.
O
? This
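Jason's definition above ("an empty point where all four neighbors are the same chain") can be sketched as follows. The `board` and `chain_id` representations are my assumptions for the example, not from the post, and the strict all-four-neighbors reading means edge points are never eyes here:

```python
def is_eye(board, chain_id, x, y):
    """True iff (x, y) is empty and its four orthogonal neighbors all
    exist and belong to one and the same chain (strict reading of the
    definition quoted above).
    board[y][x] is '.', '#', or 'O'; chain_id[y][x] labels the chain of
    each occupied point (both representations are assumptions)."""
    if board[y][x] != '.':
        return False
    h, w = len(board), len(board[0])
    ids = set()
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if not (0 <= nx < w and 0 <= ny < h):
            return False  # strict reading: fewer than four neighbors
        if board[ny][nx] == '.':
            return False
        ids.add(chain_id[ny][nx])
    return len(ids) == 1
```

Requiring a single chain id is what rules out weak combinations of false eyes: four touching stones from four different chains do not make an eye.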
Chris Fant wrote:
I was not able to tell from the CrazyStone paper how the patterns are
used in the playouts. Can anyone enlighten me? Does it simply select
the move with the highest score?
Chris Fant a écrit :
Does this mean that you need to calculate the Bradley-Terry
probability for every legal move before selecting one on that
probability? Isn't that expensive? Have you tried selecting only N
legal candidates at random and then selecting one of those based on
their
terry mcintyre wrote:
IIRC, a few Microsoft researchers did some interesting work with SVMs
and the prediction of pro-level moves. I've always wondered whether
that could be integrated with UCT to narrow the search tree.
Hi,
This is what I do in Crazy Stone:
Andrés Domínguez wrote:
2007/10/10, Don Dailey [EMAIL PROTECTED]:
Andrés,
You are right about null move of course. The assumption that other
moves are >= the value of a pass is much stronger in GO than in
Chess, yet ironically it's not as effective in Go.
That was what I was
Rémi Coulom wrote:
Andrés Domínguez wrote:
2007/10/10, Don Dailey [EMAIL PROTECTED]:
Andrés,
You are right about null move of course. The assumption that other
moves are >= the value of a pass is much stronger in GO than in
Chess, yet ironically it's not as effective in Go
Don Dailey wrote:
I believe Many Faces is probably stronger than Mogo but I don't know
that this has been proven.
Hi Don,
I'd bet on Mogo. In case nobody noticed, Crazy Stone won a match against
KCC Igo this summer, with 15 wins and 4 losses. The match was organized
by Hiroshi Yamashita.
Ian Osgood wrote:
On Oct 11, 2007, at 11:01 AM, Rémi Coulom wrote:
In case nobody noticed, Crazy Stone won a match against KCC Igo this
summer, with 15 wins and 4 losses. The match was organized by Hiroshi
Yamashita. The games can be found in the KGS archives.
http://www.gokgs.com
David Doshay wrote:
Why would you use 6 of the 8 cores and not all 8?
Cheers,
David
It is the desktop machine of a colleague. He was running one long
computation on one core, and using it for mail, web browsing, etc. So I
left 2 cores to him.
Rémi
Erik S. Steinmetz wrote:
I would like to thank everyone who responded on this thread. The
pointers have been very helpful.
I would also like to see that linked document, as the text describing
the pattern value system looks interesting, and a longer description
of it would be nice! If anyone
Christoph Birk wrote:
On Tue, 23 Oct 2007, Olivier Teytaud wrote:
http://www.lri.fr/~teytaud/cgosStandings.html
If someone wants to test it, the port is 6919 on machine pc5-120.lri.fr.
10 minutes per side. But only try it if you want to take risks, it is
almost surely
not stable yet, and the
Hi,
I have just connected Crazy Stone (CS-8-26-10k-1CPU). It uses 10,000
playouts per move, and runs on 1 CPU. It should finish all its games in
less than 5 minutes. In my tests, it scores 41% against GNU Go 3.6 Level
10, and 73.5% against MoGo_release3 at 10k playouts per move (the
playouts
Edward de Grijs wrote:
The CrazyStone row has disappeared because not enough
games were played, so there will be a larger standard
deviation around those values (I expect a 1 sigma value of
about 50 elo. It would be interesting to include those
numbers on every row (Don?))
Uncertainty about
Christoph Birk wrote:
It appears as if both CGOS servers crashed ...
cgos.lri.fr is still working, but the web page is not updated
Gian-Carlo Pascutto wrote:
I think some possibility to send messages would be great. I could swear
I saw MogoBot do this, but I couldn't find anything in the KGSGtp
documentation.
Hi,
I believe MoGo sent its messages in the version string. Name and
version of your program are the only
Jason House wrote:
On Wed, 2007-11-14 at 19:27 +0100, Petr Baudis wrote:
Hi,
is anyone successfully using the kgs-chat GTP command in games?
I cannot get kgsGtp to send me the command when I make a comment inside
a game (as the bot's opponent). I receive the command when
I private-message
Don Dailey a écrit :
Many people have asked about the 9x9 CGOS game archive. I'll try to
keep it up to date and it's partially automated at this point.
It will probably always be a month behind since I archive by month.
But I do have most of the November games, although it obviously isn't
Nick Wedd a écrit :
FUTURE TOURNAMENTS
I learned today about the UEC Cup ( http://jsb.cs.uec.ac.jp/~igo/eng/
), a major Computer Go event that is now less than a week away. I
wish I had known about it sooner; I would have listed it at
http://www.computer-go.info/events/future.html, and maybe
Don Dailey wrote:
Stuart,
Here is the deal on euler numbers and implementations:
It's difficult to find any articles on the web that you don't have to
pay for.
Hi,
I remember I read a description by Mark Winands long ago:
Hi,
I thought it may be a good idea to decide on a day when everybody would
connect to CGOS. Many programmers do not wish to let their program play
forever on the server, so it may be interesting to decide on a day to
connect, so that a high variety of programs can play against each other.
Gian-Carlo Pascutto wrote:
If someone has factual data[*] about 9 x 9 performance of
current bots I'll gladly revise the estimate on the webpage
on my own.
Mogo is around 2500 on CGOS:
http://cgos.boardspace.net/9x9/cross/MoGo_psg7.html
In Amsterdam, ajahuang (kgs 6d) played a few games
Robert Jasiek wrote:
Where can one play the latest versions of MoGo or other, similarly
strong programs? It is said that some programs are on KGS, but I
cannot find them. How to find them? Is it possible to play against
them as a human on CGOS? I, German 5d, would want to play even games
on
Rémi Coulom wrote:
Hi,
13x13 StoneCrazy is currently connected to CGOS (computer go room). It
will stay there for about 24h.
Rémi
So far, it lost 1 game against 3d, and 2 games against 2d. In this game,
it started a nice ko fight at move 69 (but lost):
http://files.gokgs.com/games/2007/12
Lars wrote:
I have some questions concerning this paper of Rémi's:
http://remi.coulom.free.fr/Amsterdam2007/MMGoPatterns.pdf
1. What sense does the prior make (Section 3.3 in the paper), and where
is it applied?
I understand it as adding 2 more competitions to each pattern
in the
Christoph Birk wrote:
On Tue, 4 Dec 2007, Christoph Birk wrote:
On Tue, 4 Dec 2007, Don Dailey wrote:
It would be awkward at best. I could build a client to do this, but
the human would have to be willing to sit and play games at the moment
they were scheduled.
You are right ... it's very
David Fotland wrote:
I'm working on the Many Faces of Go 12 engine, which is an alpha-beta searcher,
and it's strong enough now that I'd like some stronger competition on 19x19
CGOS to test against.
Does anyone want to put up some strong programs? I know everyone prefers to
work on 9x9 since it's
Don Dailey wrote:
I'm not sure I used the program correctly - it's rather complicated and
I'm not that great with statistics. If anyone is interested in the
settings I used I can provide that.
Hi,
The only subtlety here is that bayeselo is tuned for chess, and assumes
that draws are
Don Dailey wrote:
I put up a web page that displays EVERY player who has played at least
200 games on CGOS.
It uses the bayeselo program that Rémi authored.
http://cgos.boardspace.net/9x9/hof.html
I'm not sure I used the program correctly - it's rather complicated and
I'm not
Don Dailey wrote:
Another example I found is the impressive Valkyria program. Version
2.7 won 92% of its games, more than even the top rated greenpeep0.5.1.
However, the average rating of Valkyria's opponents was only 1722.
This is quite a difference. So Valkyria is rated only
Jason House wrote:
On Dec 6, 2007 11:38 AM, Rémi Coulom [EMAIL PROTECTED] wrote:
Jason House wrote:
This may serve as a good test of whether there is enough data to assign
values to the patterns.
I did not mention this in my paper, but you can
Jason House wrote:
On Dec 12, 2007 3:09 PM, Álvaro Begué [EMAIL PROTECTED] wrote:
On Dec 12, 2007 3:05 PM, Jason House [EMAIL PROTECTED] wrote:
On Dec 12, 2007 2:59 PM, Rémi Coulom
[EMAIL PROTECTED
Don Dailey wrote:
We may be able to borrow KGS data of well established players playing
9x9 games against each other to estimate this. Would anyone like to
volunteer to do this?
Bill Shubert kindly provided this data to me. I am working on a study
about rating systems for the game of Go.
Don Dailey wrote:
It would be great if you would provide recommendations for a simple
conversion formula when you are ready based on this study. Also,
if you have any suggestions in general for CGOS ratings, the
cgos-developers would be willing to listen to your suggestions.
- Don
My
Don Dailey wrote:
I don't really know what you mean by one-dimensional. My
understanding of playing strength is that it's not one-dimensional,
meaning that it is foiled by intransitivities between players with
different styles. You may be able to beat me, but I might be able to
beat
Jason House wrote:
In Remi's paper on ELO ratings of moves, how is mean log evidence
computed? Is that looking at the probability of the training set?
e.g. if the selected moves have estimated probabilities of 1/e, 1/e^2,
1/e, and 1/e, then the log evidences would be -1,-2,-1, and -1 for a
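If "log evidence" means the log of the probability the model assigned to each move actually played (which is how Jason's worked example reads; that interpretation is my assumption, not confirmed in the thread), then the mean log evidence is simply the average of those logs:

```python
import math

def mean_log_evidence(chosen_probs):
    """Average log-probability the model gave to the moves actually
    played, i.e. the mean of log(p) over the training choices."""
    return sum(math.log(p) for p in chosen_probs) / len(chosen_probs)

# Jason's example: probabilities 1/e, 1/e^2, 1/e, 1/e give log
# evidences -1, -2, -1, -1, whose mean is -1.25.
example = [math.exp(-1), math.exp(-2), math.exp(-1), math.exp(-1)]
```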
Hi,
I have not had time to study it in detail, but I found this:
http://fragrieu.free.fr/zobrist.pdf
A Group-Theoretic Zobrist Hash Function
Antti Huima
September 19, 2000
Abstract
Zobrist hash functions are hash functions that hash go positions to
fixed-length bit strings. They work so that
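Plain (non-group-theoretic) Zobrist hashing, which the abstract builds on, can be sketched as follows: one random bitstring per (point, color), XORed over the stones on the board, so the hash updates incrementally as stones are added or removed. The seed and table layout below are illustrative choices, not from the paper:

```python
import random

def make_zobrist_table(size=19, seed=42):
    """One random 64-bit value per (x, y, color) triple."""
    rng = random.Random(seed)
    return {(x, y, c): rng.getrandbits(64)
            for x in range(size)
            for y in range(size)
            for c in ('B', 'W')}

def zobrist_hash(stones, table):
    """XOR of the table entries for the stones on the board; adding or
    removing one stone changes the hash by a single XOR."""
    h = 0
    for stone in stones:
        h ^= table[stone]
    return h
```

Because XOR is its own inverse and order-independent, the same position always hashes to the same value regardless of move order.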
Jason House wrote:
Given that doing one parameter at a time may be less ideal, I don't
know if my method would really inherit those properties or not.
Probably not, because the Hessian has significant non-diagonal values.
But I expect it would still converge in fewer iterations than MM.
David Fotland wrote:
The styles of CS (CS-9-17-10k-1CPU), MFGO (mfgo12exp-15), and GNUGO
(gnugo3.7.10_10) are different, and it's generating some odd results.
Many Faces beats GnuGo 70%. There are not many games, but this is
consistent with over 100 test games I've run.
CS beats GnuGo 55%.
steve uurtamo wrote:
did you optimize parameters in MFGO by playing against
gnugo?
that'd do it.
s.
Well, I don't know about David, but I do _all_ my testing and optimizing
against GNU.
Rémi
Vlad Dumitrescu wrote:
On Jan 6, 2008 11:00 PM, Don Dailey [EMAIL PROTECTED] wrote:
The idea of a non-one-dimensional rating model is interesting. If you
decide to pursue this I can give you the CGOS data in a compact format,
1 line per result.
Hi all,
I'm not sure I get the whole
Hi,
Some readers of this list may be interested in this one-hour programme
that will be broadcast live on France Culture tomorrow afternoon:
http://www.radiofrance.fr/chaines/france-culture2/emissions/science_publique/fiche.php?diffusion_id=58397&pg=avenir
It will be available for download
Sylvain Gelly wrote:
2008/1/10, [EMAIL PROTECTED]:
Hi Sylvain,
Have you finished your thesis? We are eager to read it:-)
Hi,
Yes, I did! :) It is not on my website yet, but it will be (soon?).
However, you should not be so
Gian-Carlo Pascutto wrote:
Multi-stone suicide is allowed, single stone not.
Strange. The reverse would make more sense to me.
Rémi
Hi,
My vote would be to keep everything like it is. Maybe use round robin
when the number of participants is close to the number of planned
rounds. Also, don't hesitate to make the time control shorter if it
would be necessary to fit enough rounds within a reasonable time, so we
can play
Nick Knol wrote:
Hi all,
This is my first post to the list, and I'm pretty new to this, so
sorry if I break from etiquette.
I'm currently working on my senior undergrad thesis project. My idea
is to use Bouzy's dilation algorithm (
http://www.gnu.org/software/gnugo/gnugo_14.html ) to find
Eric Boesch wrote:
By the way, does anybody know of any nifty tools or heuristics for
efficient probabilistic multi-parameter optimization? In other words,
like multi-dimensional optimization, except instead of your function
returning a deterministic value, it returns the result of a Bernoulli
Don Dailey wrote:
They seem under-rated to me also. Bayeselo pushes the ratings together
because that is apparently a valid initial assumption. With enough
games I believe that effect goes away.
I could test that theory with some work. Unless there is a way to
turn that off in bayeselo (I
Hi,
I found that the Master's thesis of Nobuo Araki is available online:
http://ark.qp.land.to/main.pdf
Abstract:
Recently in Go programs, there was a breakthrough by the Monte-Carlo
method, using a game tree search method called UCT (UCB applied to
trees, where UCB stands for Upper Confidence Bounds)
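The UCB rule that UCT applies at each node of the search tree can be sketched as UCB1: pick the child maximizing its mean value plus an exploration bonus that shrinks as the child accumulates visits. The constant c = 1.0 below is an illustrative choice, not a value from the thesis:

```python
import math

def ucb1(wins, visits, parent_visits, c=1.0):
    """UCB1 score of one child: exploitation (win rate) plus an
    exploration bonus sqrt(ln(N) / n)."""
    if visits == 0:
        return float('inf')  # unvisited children are tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    """children: list of (wins, visits) pairs; returns the index of the
    child UCT would descend into."""
    return max(range(len(children)),
               key=lambda i: ucb1(*children[i], parent_visits))
```

Descending with this rule from the root, running a playout from the selected leaf, and backing up the result is one UCT iteration.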
荒木伸夫 wrote:
Hello, Coulom. I'm Nobuo Araki.
Thank you for reading my thesis. However, this thesis is a first version, not the
final version, so there are too few experiments. Also, Mr. Hideki Kato sent me many
warnings about this thesis, for example that the English is too bad. You may be
confused
Hi,
I would like to confirm your experiments: I have noticed already that
adding shapes of radius 4 improves prediction a lot, but does not
improve playing strength (from progressive widening).
Also, even worse than that, for a given set of features, the pattern
urgencies computed by MM
I believe the main problem is that the Elo-rating model is wrong for
bots. The phenomenon with Mogo is probably the same as Crazy Stone: if
there are enough strong MC bots playing to shield the top MC programs
from playing against GNU, then they'll get a high rating because they
are efficient
荒木伸夫 wrote:
I have considered this, and I think that this may be caused by a
wrong training model.
In my master thesis, I mentioned that the relationship between
top 1 accuracy of move prediction and the strength of Monte-Carlo
is not simple (I increased the number of matches to 600, and
David Silver wrote:
I think it is time to share this idea with the world :-)
Great. Thanks for sharing.
Rémi
Jason House wrote:
Other games that come to mind:
Chess (covered elsewhere, I assume)
http://www.talkchess.com/forum/viewforum.php?f=7
Checkers
Abalone http://en.wikipedia.org/wiki/Abalone_(board_game)
I expect checkers and
Jason House wrote:
I've never much cared for forums. Does this one have features that
allow me to use it like a mailing list (e.g. notifications of new
messages, ability to respond quickly and easily in response to e-mail
notifications, etc.)?
Yes. You can click on subscribe forum at the
Hi Jonas,
welcome to the list.
The idea of using f(score) instead of sign(score) is interesting. Long
ago, I tried tanh(K*score) on 9x9 (that was before the 2006 Olympiad, so
it may be worth trying again), and I found that the higher K, the
stronger the program. Still, I believe that other f
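The transformation Rémi mentions can be sketched directly; as K grows, tanh(K * score) approaches sign(score), so small K rewards winning by more points while large K rewards only winning:

```python
import math

def playout_value(score, k):
    """f(score) = tanh(k * score): a smooth interpolation between
    rewarding the margin of victory (small k) and rewarding only the
    sign of the result (large k)."""
    return math.tanh(k * score)
```

Backing up this value instead of a 0/1 result is the only change needed in the playout bookkeeping.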
Hi,
I have got a lockless hash table to work, and I thought I'd share the
results.
A lockless hash table is very important, because the usual approach that
consists in using a global lock during tree search and update does not
scale well, especially on 9x9. But it is possible to create a
Don Dailey wrote:
These are used in parallel chess programs, and it's very common. A
pretty good article on this was written by Hyatt (the Crafty programmer and
author of former world computer chess champion Cray Blitz); it's
called A lock-less transposition table implementation for parallel
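The trick from Hyatt's article can be illustrated like this: store key XOR data alongside data, and on probe recompute the key from the two stored words; a half-written entry from another thread then fails the check instead of being trusted. This is a single-threaded Python sketch of the verification idea only (real implementations use fixed-width atomic words in C):

```python
class LocklessTable:
    """Each slot holds (key ^ data, data). A reader accepts the slot
    only if stored_check ^ data reproduces the probed key, so a torn
    concurrent write is detected and rejected rather than used."""

    def __init__(self, size=1024):
        self.size = size
        self.checks = [0] * size
        self.datas = [0] * size

    def store(self, key, data):
        i = key % self.size
        self.checks[i] = key ^ data
        self.datas[i] = data

    def probe(self, key):
        i = key % self.size
        data = self.datas[i]
        if self.checks[i] ^ data == key:
            return data
        return None  # empty slot, different key, or torn write
```

The design choice is to trade a tiny probability of an undetected mismatch for the removal of all locking on the table's hot path.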
Olivier Teytaud wrote:
Hi,
I have got a lockless hash table to work, and I thought I'd share the
results.
[...]
Great! For networks of 4-cores, it is not very useful,
but for highly smp machines it could be great - with your
grid5000 account, you might run crazystone on a
16-core machine
Hi,
This is my CG2008 paper, for statisticians:
Whole-History Rating: A Bayesian Rating System for Players of
Time-Varying Strength
Abstract: Whole-History Rating (WHR) is a new method to estimate the
time-varying strengths of players involved in paired comparisons. Like
many variations of the
Andy wrote:
Remi, you mentioned how the other algorithms predicted well and
guessed that it's because the great majority of games are between
experienced players whose strength is not changing much. I also feel
that the existing KGS ratings work well for those players already. So
how about