Don Dailey wrote:
FatMan uses a fixed memory allocation and I think Mogo does too so I
don't even have a simple way to know how much memory FatMan is
effectively utilizing.
Just as info, MoGo allocates memory dynamically. I saw it via top
command on Linux.
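For anyone who wants to check this kind of thing from inside a process rather than watching top, here is a minimal sketch using Python's standard resource module. This is purely illustrative and has nothing to do with MoGo's actual code; the units caveat is real (Linux reports KiB, macOS bytes).

```python
import resource

def peak_rss_kib():
    """Peak resident set size of the current process.

    On Linux, ru_maxrss is reported in KiB (on macOS it is in bytes),
    so on Linux this matches the RES column one watches in top.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# Allocate something noticeable, then confirm the peak did not shrink.
before = peak_rss_kib()
buf = bytearray(50 * 1024 * 1024)  # ~50 MiB of throwaway data
after = peak_rss_kib()
print(after >= before)
```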
-Hideki
This is very cool. As of 261 games played, I find it very difficult to
guess whether the mogo curve is beginning to dramatically flatten, or
will continue to rise steeply.
I have a few questions.
I can't see the cross table, I guess you haven't put it up yet?
How do you decide the pairings?
I will just let the tests run and we can interpret the results
later. I believe Mogo has some code to reuse nodes in the tree -
although it may not be quite as efficient.
Olivier seemed to think it would work acceptably well.
- Don
Tom Cooper wrote:
This is very cool. As of 261 games played, I find it very difficult to
guess whether the mogo curve is beginning to dramatically flatten, or
will continue to rise steeply.
I have a few questions.
I can't see the cross table, I guess you haven't put it up yet?
I intend
[... discussion about bounded size tree...]
Olivier seemed to think it would work acceptably well.
Yes, I think so. We trust the pruning method. If we
are wrong, it's a good piece of news for us - we can improve
the algorithm just by increasing the constants :-)
Olivier
That makes sense. It would also be interesting to see a special player in the
mix: a Mogo_13B, with 200k nodes in the tree.
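The size-bounded tree being discussed can be sketched roughly like this. The cap value, class names, and expansion logic below are made up for illustration (the real pruning constants are internal to MoGo); the point is only that once the node budget is spent, the search stops allocating and keeps reusing the existing tree.

```python
MAX_NODES = 100_000  # illustrative cap, echoing the figure mentioned for MoGo

class Node:
    __slots__ = ("children", "visits", "wins")
    def __init__(self):
        self.children = {}  # move -> Node
        self.visits = 0
        self.wins = 0

class BoundedTree:
    def __init__(self, cap=MAX_NODES):
        self.cap = cap
        self.root = Node()
        self.count = 1  # the root counts against the budget

    def expand(self, node, move):
        """Attach a child for `move`, unless the node budget is exhausted.

        Returning None signals the caller to keep searching with the
        existing tree instead of growing it further.
        """
        if self.count >= self.cap:
            return None
        child = Node()
        node.children[move] = child
        self.count += 1
        return child
```

With a tiny cap of 3, two expansions succeed and the third returns None, which is the behavior "increasing the constants" would relax.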
Don Dailey wrote:
I will just let the tests run and we can interpret the results
later. I believe Mogo has some code to reuse nodes in the tree -
although it may not be quite as efficient.
Subject: Re: [computer-go] New scalability study progress report
Tom Cooper wrote:
This is very cool. As of 261 games played, I find it very difficult to
guess whether the mogo curve is beginning to dramatically flatten, or
will continue to rise steeply.
I have a few questions.
Would be cool if you could somehow also show the average memory footprint and
time used at each doubling for each program.
Don Dailey wrote:
I wish I had named the weakest players _00 instead of _01 and expressed
everything as you are suggesting, it would indeed be clearer.
I could actually fix this by reprogramming the scripts without changing
the running programs.
Also, with the parameters you are using for MoGo, I think the tree will stop
growing at 100k nodes, which doesn't take very long to get to.
Michael Williams wrote:
Would be cool if you could somehow also show the average memory
footprint and time used at each doubling for each program.
It would be cool, but each program is running on different hardware
and machines which can be loaded up or not, so at best it would be an
average of
The new scalability study is in progress. It will be very slow going,
only a few games a day can be played but we are trying to get more
computers utilized.
I will update the data a few times a day for all to see. This includes
a crosstable and ratings graphs. The games will be made
Each data point on the x axis represents a doubling in power. There are
13 doublings
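As an illustration of the doubling scheme: the actual playout count of the weakest player isn't stated in this thread, so the base of 1024 below is a placeholder, and the `mogo_NN` naming just follows the _01.._13 labels discussed here.

```python
# Hypothetical base playout count for the weakest player (_01);
# the study's real base is not given in this thread.
BASE_PLAYOUTS = 1024

# Players _01 .. _13, each doubling the previous player's playouts.
playouts = {f"mogo_{i:02d}": BASE_PLAYOUTS * 2 ** (i - 1) for i in range(1, 14)}

print(playouts["mogo_13"] // playouts["mogo_01"])  # 2**12 = 4096
```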
I wish I had named the weakest players _00 instead of _01 and expressed
everything as you are suggesting, it would indeed be clearer.
I could actually fix this by reprogramming the scripts without changing
the running programs. If I get a burst of energy perhaps ...
The tarball is slightly
Jeff Nowakowski wrote:
On Fri, 2008-01-18 at 20:31 -0500, Don Dailey wrote:
Although it's not on the graph itself, Gnugo-3.7.11 level 10 is set to
be 1800.0 ELO.
On the web page it says you are using --min-level 8 --max-level 8.
I realized after I started the study that I was
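Since the ratings in the study are anchored to GnuGo at 1800.0 ELO, the standard Elo expected-score formula (a general formula, not anything specific to this study's rating script) shows how a rating gap translates into a winning percentage against the anchor:

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

ANCHOR = 1800.0  # Gnugo-3.7.11 level 10, fixed by the study

# A player rated 200 points above the anchor is expected to score ~76%:
print(round(elo_expected(ANCHOR + 200.0, ANCHOR), 2))
```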