I've gotten to a point where it probably makes sense to expose it to
actual users.
The project is hosted at http://plug-and-go.dev.java.net
It's new, so I wouldn't be surprised if there are a few little bumps
in the road still. But for anyone who'd like to start a bot, and who
OK, after dicking about for a few hours with git and Mercurial I
decided against using either of them. I kept getting errors or
completely failed to understand how they work. They're just not
intuitive enough to get going with quickly.
Moreover, if my goal is to get newcomers up and running quickly, I
A post from Michael Williams led me to review this mail below once
more. I hadn't looked at the code of Don's reference bot very closely
until now and instead relied on the description he gave below:
On 23-Oct-08, at 14:29, Don Dailey wrote:
Let me give you a simple example where we set
Maybe Don built it that way so that the playouts could handle integer komi and
the possibility of a draw. In that case, it would neither add one nor subtract
one.
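If that is the intent, scoring a playout with integer komi could be sketched like this. The class and method names are my own invention for illustration, not Don's code:

```java
// Hypothetical sketch: scoring one playout so that integer komi can
// produce a draw. With komi 7.5 the margin is never zero; with komi 7.0
// a drawn playout scores 0 - neither add one nor subtract one.
public class PlayoutScore {
    /** Returns +1 for a Black win, -1 for a White win, 0 for a draw. */
    static int scoreForBlack(int blackPoints, int whitePoints, double komi) {
        double margin = blackPoints - (whitePoints + komi);
        if (margin > 0) return 1;
        if (margin < 0) return -1;
        return 0; // only reachable with integer komi
    }
}
```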
On Mon, 2008-10-27 at 17:19 -0200, Mark Boon wrote:
So I understand from the above that when a playout leads to a win
you add 1 to the wins. But in the code you subtract one when it
leads to a loss.
This is just semantics. In the literal code a win is 1 and a loss is -1
but when I
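The two conventions are indeed interchangeable. With n playouts and a score sum s built from +1/-1 values (and no draws), the win count falls out as (s + n) / 2. A tiny sketch, with names of my own choosing:

```java
// Sketch: counting wins directly vs. summing +1/-1 scores is just a
// change of scale. For n playouts with score sum s (no draws),
// wins = (s + n) / 2 and losses = (n - s) / 2.
public class ScoreConvention {
    static int winsFromScoreSum(int scoreSum, int playouts) {
        return (scoreSum + playouts) / 2;
    }
}
```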
If you ran 10,000 games your score is amazingly close - you won't be
that close very often in 10,000 game samples. Of course I assume you
are testing this against a fully conforming version.
So what exactly are you doing here to save time? My understanding is
that it has something to do
One more observation, something I found curious, is that according to
the statistics twogtp put together, the average game-length played
was 119 moves. I also noticed this was the number after the other two
runs I had of 1,000 games each.
Since we made such a big deal about the average
... the average game-length played was 119 moves. ...
...
111 is for random games. What the bots actually do is far from random.
Or perhaps, if they can make a 9x9 game last 119 moves, it is not *that*
far from random ;-).
Darren
On Sun, 2008-10-26 at 21:10 -0200, Mark Boon wrote:
When I look at CGOS right now my refbot TesujiRefBot has an ELO of
1286, JRef has 1290 and Cref has 1269. So evidence is mounting that
my implementation, although completely different from yours, is
conforming to the definition you put
On 24-Oct-08, at 21:19, Don Dailey wrote:
I'm now running a twogtp test against your ref-bot. After 1,000 games
my bot has a winning percentage of 48.8% (+/- 1.6) according to
twogtp.
That is well within 2 standard deviations so I don't think there is a
problem. In fact it is within 1 standard deviation.
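For reference, the +/- figure twogtp prints looks like one standard error of the measured win rate, sqrt(p(1-p)/n). A sketch with a helper of my own, not twogtp's code:

```java
// Sketch: one standard error of a measured win rate, in percent.
// For p = 0.488 over n = 1000 games this gives about 1.58, matching
// the +/- 1.6 twogtp reported above.
public class WinRateError {
    static double standardErrorPercent(double winRate, int games) {
        return 100.0 * Math.sqrt(winRate * (1.0 - winRate) / games);
    }
}
```

With these numbers, 48.8% is only 1.2 points from 50%, i.e. within one standard error.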
I would be interested to see if your biased version can pass my eventual
conformance tests. If it can, more power to you, I might use the idea
myself.
- Don
Hi Don,
I fixed another bug and now I get an average game-length of 111.05,
which seems to be closer again to what you have. A million
simulations now takes about 35 seconds.
I'm now running a twogtp test against your ref-bot. After 1,000 games
my bot has a winning percentage of
Don,
I have figured out the discrepancy in the average game length. As
playout length I count from the start of the game, which gives me
114-115. I believe you count from the starting position where the
playout starts, because when I modify my code to do that I also get
111 moves per
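The difference between the two conventions can be made concrete. A sketch with hypothetical names, assuming a playout that begins a few moves into the game:

```java
// Sketch of the two length conventions discussed: the same playout is
// a few moves "longer" when counted from the empty board rather than
// from the position where the playout begins.
public class LengthConvention {
    static int countedFromGameStart(int movesBeforePlayout, int playoutMoves) {
        return movesBeforePlayout + playoutMoves;
    }
    static int countedFromPlayoutStart(int movesBeforePlayout, int playoutMoves) {
        return playoutMoves;
    }
}
```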
Don,
You're probably right and I'm misunderstanding how it's supposed to
work.
Let me quote the original description:
6. Scoring for game play uses AMAF - all moves as first. In the
play-outs, statistics are taken on moves played during the
play-outs. Statistics are
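My reading of item 6, as a sketch. The field names and layout are my own, not the reference bot's, and for simplicity each point is credited once per playout:

```java
// Sketch of AMAF ("all moves as first") bookkeeping, as I read item 6:
// every point the side to move played during a playout is credited as
// if it had been the first move.
public class AmafStats {
    final int[] wins;     // playouts won, credited per point
    final int[] playouts; // playouts in which the point was played

    AmafStats(int boardSize) {
        wins = new int[boardSize * boardSize];
        playouts = new int[boardSize * boardSize];
    }

    /** points: each point played by the side to move, listed once per playout. */
    void record(int[] points, boolean sideToMoveWon) {
        for (int p : points) {
            playouts[p]++;
            if (sideToMoveWon) wins[p]++;
        }
    }

    /** Point with the best AMAF win rate among points seen at least once. */
    int bestPoint() {
        int best = -1;
        double bestRate = -1.0;
        for (int p = 0; p < wins.length; p++) {
            if (playouts[p] == 0) continue;
            double rate = (double) wins[p] / playouts[p];
            if (rate > bestRate) { bestRate = rate; best = p; }
        }
        return best;
    }
}
```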
Thanks again for more explanations. I think the AMAF is clear to me now.
When you say you count all the playouts starting from an empty board,
then I have no idea how our outcome can be different by 3-4 moves,
which is coincidentally the average depth of a uniform tree of
1,000,000 moves on
On Thu, Oct 23, 2008 at 1:00 PM, Mark Boon [EMAIL PROTECTED] wrote:
This is still something I don't understand. Are there others who implemented
the same thing and got 111 moves per game on average? I tried to look
through some posts on this list but didn't see any other numbers published.
111
Thanks again for more explanations. I think the AMAF is clear to me now.
For what it is worth: I read the AMAF section as indicating that the bots
are to play using AMAF heuristics - random playouts, followed by playing
the AMAF-scored winning move, rinse and repeat. Which is why I thought
I
Just to be clear, the average length of the playout is what we are
looking for, not the average length of games that might be played from
genmove commands.
- Don
OK, if the following is not the reason, then I don't know anything
anymore :)
My playouts allow multiple suicide. I believe Orego does the same. I
found that not checking for that actually made things faster overall.
But I bet that accounts for the longer average game-length.
If suicide
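For what it's worth, the check being skipped is roughly this. The board bookkeeping here is invented for illustration, not taken from any of the bots discussed:

```java
// Hypothetical sketch of the suicide test a playout can skip: skipping
// it accepts more moves (faster per move), but suicided stones get
// captured back later, which would explain longer average games.
public class SuicideRule {
    /** Suicide: the played stone's own chain ends with zero liberties
     *  while capturing nothing. */
    static boolean isSuicide(int ownLibertiesAfterMove, int stonesCaptured) {
        return ownLibertiesAfterMove == 0 && stonesCaptured == 0;
    }

    static boolean isLegal(int ownLibertiesAfterMove, int stonesCaptured,
                           boolean allowSuicide) {
        return allowSuicide || !isSuicide(ownLibertiesAfterMove, stonesCaptured);
    }
}
```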
I'm getting close to something I'd like to show people and get feedback.
One thing to decide is how to make it public. Previously I used
dev.java.net to host my project. But I stopped using it because I had
a very slow internet connection and I was getting annoyed with the
time it took to
On Wed, 2008-10-22 at 16:16 -0200, Mark Boon wrote:
When I run my playouts 1,000,000 times I get the
following stats:
Komi 7.5, 114.749758 moves per position and Black wins 43.8657%.
That's a bit different from the 111 moves and 42% Don got in his
reference bot. I haven't looked at
I use subversion and git. Git mostly for just my own personal
repository but it rocks and is my preference.
- Don
[computer-go] From zero to playing on CGOS in 10 minutes
Don Dailey drdailey at cox.net
For one thing, komi is different. I used 0.5 for running this test.
I would have used 0.0 but
On Wed, 2008-10-22 at 20:29 -0200, Mark Boon wrote:
On Wed, Oct 22, 2008 at 6:07 PM, Don Dailey [EMAIL PROTECTED] wrote:
For one thing, komi is different. I used 0.5 for running this test.
I would have used 0.0 but some implementations don't like even komis.
But the komi should
Prompted by a few requests I had very recently with regards to the
computer-Go framework I once started, plus some free time between a
project I just finished and waiting for a visa to start my next, I
have started on a project probably best described by the title of
this message.
Hi Mark,
Very good ideas. I have actually been intending for a long time to
give the client a test mode - it would test the bot and find if
there were any problems with your bot as far as GTP or legal moves are
concerned. Or perhaps it would even play a random game or two locally
as if it
You could have a copy of CGOS running on a different port that pairs up
anything that connects to it against itself and starts a new game as
soon as the first game ends.
On 21-Oct-08, at 23:11, Michael Williams wrote:
You could have a copy of CGOS running on a different port that pairs up
anything that connects to it against itself and starts a new game as
soon as the first game ends.
I don't know if it's a good idea to have it run against itself. I'm
Of course the server code is available on sourceforge, so you can set up
your own test site.
But I think all of that can be simulated with a smarter client. The
only thing missing is the actual connection to the server. But this is
for debugging the bots mostly.
- Don