Hello,
can someone from the guys in Pamplona please
let us know on which days and at which hours
the games of the Olympiad are played?
Which of those games can be followed on KGS?
Thanks in advance, Ingo.
See
http://www.grappa.univ-lille3.fr/icga/event_info.php?id=35
Hideki
Ingo Althöfer: 20090512121021.73...@gmx.net:
Hello,
can someone from the guys in Pamplona please
let us know on which days and at which hours
the games of the Olympiad are played?
Which of those games can be followed on KGS?
The Projects link (http://fuego.sourceforge.net/projects.html) on the Fuego
site (http://fuego.sourceforge.net/) is broken.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
Summary: The trend in computer systems has been for CPU power to grow much
faster than memory size. The implication of this trend for MCTS computer go
implementations is that heavy playouts will have a significant cost
advantage
in the future.
I bought a Pentium D 3GHz system a few years back.
increasing memory is more expensive than increasing cpu speed
at this point. there was an addressing issue with 32bit machines,
but that shouldn't be too much of an issue anymore. most people
want to pay less than or equal to the price of their last machine
whenever they buy one, though, so
This is a great post, and some good observations. I agree with your
conclusions that CPU power is increasing faster than memory and memory
bandwidth. Let me give you my take on this.
In a nutshell, I believe memory will increasingly become the limiting
factor no matter what direction we go.
I have a trick ;)
I am currently creating MCTS trees of over a billion nodes on my 4GB machine.
Compression tricks will only take you so far. Assuming you can get 2 to 1,
for instance, that doesn't scale. It will put the problem off for one
generation. It's not something you can keep doing - it's a one-time
thing, but the memory vs CPU power gap may be constant.
So while
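The one-generation claim above can be made concrete with some back-of-envelope arithmetic (my numbers, not from the thread; the node size is an assumption):

```python
# Why a constant-factor compression only delays the problem by one
# generation: it multiplies capacity once, while CPU-driven node
# production keeps compounding every hardware generation.

NODE_BYTES = 40                  # assumed size of one MCTS node
RAM_BYTES = 4 * 1024**3          # a 4 GB machine, as in the thread

plain = RAM_BYTES // NODE_BYTES      # nodes that fit uncompressed
compressed = 2 * plain               # 2:1 compression: one doubling

# If the nodes a CPU can generate double each generation, a one-time
# 2x in capacity buys exactly one generation of headroom and no more.
print(plain, compressed)
```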
On Tue, May 12, 2009 at 12:16 PM, Michael Williams
michaelwilliam...@gmail.com wrote:
I have a trick ;)
I am currently creating MCTS trees of over a billion nodes on my 4GB
machine.
Ok, I'll bite. What is your solution?
- Don
All,
let me chip in with some additional thoughts about massively parallel
hardware.
I recently implemented Monte Carlo playouts on CUDA, to run them on the
GPU. It was more or less a naive implementation (read: a more or less
straight port with optimised memory access patterns). I am
Don Dailey wrote:
On Tue, May 12, 2009 at 12:16 PM, Michael Williams
michaelwilliam...@gmail.com wrote:
I have a trick ;)
I am currently creating MCTS trees of over a billion nodes on my 4GB
machine.
Ok, I'll bite. What is your solution?
is the ssd fast enough to be practical?
s.
On Tue, May 12, 2009 at 12:41 PM, Michael Williams
michaelwilliam...@gmail.com wrote:
Don Dailey wrote:
On Tue, May 12, 2009 at 12:16 PM, Michael Williams
michaelwilliam...@gmail.com wrote:
I have a trick ;)
It depends on how you use it and how much you pay for it. If you get a high-end Intel SSD, you can treat it however you like. But I can't afford that. I got
a cheap SSD, and so I had to shape my algorithm around which kinds of disk operations it likes and which ones it doesn't.
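A minimal sketch of what "shaping the algorithm around the disk" could look like (my illustration, not the poster's actual code): node updates are buffered in RAM and flushed in one large sequential append, since small random writes were the weak point of low-end drives of that era.

```python
import io

class BufferedAppender:
    """Batch small records in RAM and write them out in one pass."""

    def __init__(self, f, flush_bytes=1 << 20):
        self.f = f
        self.buf = []
        self.size = 0
        self.flush_bytes = flush_bytes

    def write(self, record: bytes):
        self.buf.append(record)
        self.size += len(record)
        if self.size >= self.flush_bytes:
            self.flush()

    def flush(self):
        # One big sequential write instead of many small scattered ones.
        self.f.write(b"".join(self.buf))
        self.buf.clear()
        self.size = 0

# Usage with an in-memory file standing in for the SSD:
sink = io.BytesIO()
w = BufferedAppender(sink, flush_bytes=8)
w.write(b"abcd")     # buffered; nothing on "disk" yet
w.write(b"efgh")     # hits the threshold, flushed sequentially
```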
steve uurtamo
cool, that's what i was wondering -- that you'd have to treat it
as something in between ram and a HD.
thanks,
s.
On Tue, May 12, 2009 at 12:48 PM, Michael Williams
michaelwilliam...@gmail.com wrote:
It depends on how you use it and how much you pay for it. If you get a
high-end Intel SSD,
Memory-aware algorithms take advantage of the varying access characteristics.
Long, long ago, computer memory was actually a rotating drum; each instruction
chained to the next location; it was worth a lot of effort to place the
instructions in such a manner that they'd be where you need them
This is probably a good solution. I don't believe the memory has to be
very fast at all because even with light playouts you are doing a LOT of
computation between memory accesses.
All of this must be tested of course. In fact I was considering whether disk
memory could be utilized as a kind
Are we approaching a point where it would be practical to precompute the
opening tree to some depth, cache the results on SSD, and incrementally improve
that knowledge based upon subsequent games?
Terry McIntyre terrymcint...@yahoo.com
On general principles, when we are looking for a
So you are saying to use disk memory for this?
This could be pretty deceiving if most of your reads and writes are
cached. What happens when your tree gets much bigger than available
memory?
- Don
On Tue, May 12, 2009 at 1:18 PM, Michael Williams
michaelwilliam...@gmail.com wrote:
In
Those numbers are the average after the tree has grown to 1B nodes. I'm sure the cache hates me. Each tree traversal will likely make several reads from
random locations in a 50 GB file.
Don Dailey wrote:
So you are saying to use disk memory for this?
This could be pretty deceiving if
Just a reminder that epsilon trick (invented by Jakub Pawlewicz) can
be used to avoid excessive memory usage (reuse memory) without
significant performance loss. It has been tested for proof number
search, but there is no reason for it to behave differently in MCTS.
Lukasz Lew
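Not the epsilon trick itself -- Pawlewicz's scheme is more involved -- but a simple stand-in for the same general idea of reusing memory can be sketched like this: when a node budget is exceeded, collapse the least-visited internal nodes back to leaves. Their visit/win counts survive, so the search can cheaply re-expand them later instead of keeping every subtree resident.

```python
class Node:
    def __init__(self, visits=0, wins=0):
        self.visits = visits
        self.wins = wins
        self.children = {}          # move -> Node

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children.values())

def internal_nodes(node):
    out = [node] if node.children else []
    for c in node.children.values():
        out.extend(internal_nodes(c))
    return out

def prune_to_budget(root, budget):
    """Collapse lowest-visit internal nodes (never the root itself)
    until the tree fits the budget."""
    while count_nodes(root) > budget:
        candidates = [n for n in internal_nodes(root) if n is not root]
        if not candidates:
            break
        worst = min(candidates, key=lambda n: n.visits)
        worst.children = {}         # subtree freed; stats stay on the leaf
```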
On Tue, May 12,
That's basically what I'm doing. Except that there is no depth limit and only the parts of the tree that you need get loaded back into memory. It's not a
playing engine yet so it can't build the tree as it plays games. Currently it just ponders the empty board.
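A hedged sketch of the scheme described above: nodes live in one big file, an in-RAM index maps node ids to byte offsets, and only the nodes a traversal actually touches get read back. All names here are illustrative; this is not Michael's implementation.

```python
import os
import pickle
import struct
import tempfile

class DiskNodeStore:
    """Append-only node file with an in-RAM offset index and cache."""

    def __init__(self, path):
        self.index = {}                # node_id -> byte offset
        self.cache = {}                # node_id -> node dict in RAM
        self.file = open(path, "w+b")

    def put(self, node_id, node):
        """Append a node record; appending keeps the writes sequential."""
        blob = pickle.dumps(node)
        self.file.seek(0, os.SEEK_END)
        self.index[node_id] = self.file.tell()
        self.file.write(struct.pack("<I", len(blob)) + blob)
        self.cache[node_id] = node

    def get(self, node_id):
        """Return a node, touching the disk only on a cache miss."""
        if node_id not in self.cache:
            self.file.seek(self.index[node_id])
            (length,) = struct.unpack("<I", self.file.read(4))
            self.cache[node_id] = pickle.loads(self.file.read(length))
        return self.cache[node_id]

# Usage: store a node, evict it from RAM, and reload it on demand.
store = DiskNodeStore(os.path.join(tempfile.mkdtemp(), "tree.dat"))
store.put(1, {"visits": 10, "wins": 6, "children": [2, 3]})
store.cache.clear()                    # simulate memory pressure
```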
terry mcintyre wrote:
Are we
How long has it been pondering?
Terry McIntyre terrymcint...@yahoo.com
2009/5/12 terry mcintyre terrymcint...@yahoo.com
Are we approaching a point where it would be practical to precompute the
opening tree to some depth, cache the results on SSD, and incrementally
improve that knowledge based upon subsequent games?
I have had a theory for a long time that the
Storing an opening book for the first 10 moves requires
331477745148242200 nodes. Even with some reduction for symmetry,
I don't see that much memory becoming available anytime soon, and you still
have to evaluate them somehow.
Actually storing a tree, except for extremely limited
It often gets interrupted by me so that I can change some code, etc. And I often break backwards compatibility, so I have to delete the file and start from
scratch. In the past it has run for up to around 24 hours, but that was an older, slower version. I just kicked off a 7x7 run. I expect
Where does your 99% figure come from?
Dave Dyer wrote:
Storing an opening book for the first 10 moves requires
331477745148242200 nodes. Even with some reduction for symmetry,
I don't see that much memory becoming available anytime soon, and you still
have to evaluate them somehow.
At 02:13 PM 5/12/2009, Michael Williams wrote:
Where does your 99% figure come from?
1/361 &lt; 1%
by endgame there are still easily 100 empty spaces
on the board.
You have made the assumption that the move your opponent selected was, on average, explored as much as all of the other moves. That seems a bit
pessimistic. One would expect that the opponent selected a strong move, and one would also expect that your tree explored that strong move
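That point can be sketched in code (my illustration, not from the thread): when the opponent moves, the subtree under the move actually played becomes the new root, so the extra exploration it received is carried over rather than thrown away.

```python
class Node:
    def __init__(self, visits=0):
        self.visits = visits
        self.children = {}      # move -> Node

def advance_root(root, opponent_move):
    """Return the played move's subtree, or a fresh node if the
    search never expanded that move."""
    child = root.children.pop(opponent_move, None)
    return child if child is not None else Node()

# Usage: a well-explored reply survives the root change.
root = Node(visits=50000)
root.children["D4"] = Node(visits=30000)   # opponent's likely strong move
new_root = advance_root(root, "D4")
```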
In the opening, among reasonably clueful players, the branching factor is much
closer to 10 than to 361.
Terry McIntyre terrymcint...@yahoo.com
I don't think you have any understanding of what I'm suggesting.
You don't actually store the whole tree, you store whatever part of it is
generated by the program, and that is an infinitesimal subset. I have
noticed that many times you tend to think in purely theoretical terms when
it was
And for MCTS it is much lower than 10.
2009/5/12 terry mcintyre terrymcint...@yahoo.com
In the opening, among reasonably clueful players, the branching factor is
much closer to 10 than to 361.
Terry McIntyre terrymcint...@yahoo.com
On Tue, May 12, 2009 at 6:05 PM, Don Dailey dailey@gmail.com wrote:
And for MCTS it is much lower than 10.
2009/5/12 terry mcintyre terrymcint...@yahoo.com
In the opening, among reasonably clueful players, the branching factor is
much closer to 10 than to 361.
I assume Dave Dyer does
It's possible for the tree to become too narrow. On a 9x9 board, you might be
able to say that there are only one or two playable moves, but on 19x19, I
doubt that any pro would claim that the options are that narrow, even
accounting for symmetry. It's common to hear that some pros play A, some
An essential feature of Monte Carlo is that its search space is
random and extremely sparse, so consequently the opportunity to re-use
nodes is also extremely sparse.
On the other hand, if the search close to the root is not sparse, my
previous arguments about the number of nodes and the number of
I assume Dave Dyer does not understand alpha beta pruning either, or he would
not assume the branching factor is 361.
The branch at the root is about (361-move number) - you have to consider
all top level moves. A/B only kicks in by lowering the average branching
factor at lower levels.
If
On Tue, May 12, 2009 at 6:33 PM, Dave Dyer dd...@real-me.net wrote:
An essential feature of Monte Carlo is that its search space is
random and extremely sparse, so consequently the opportunity to re-use
nodes is also extremely sparse.
That depends. Monte Carlo only expands nodes it considered
On Tue, May 12, 2009 at 6:47 PM, Dave Dyer dd...@real-me.net wrote:
I assume Dave Dyer does not understand alpha beta pruning either, or he
would not assume the branching factor is 361.
The branch at the root is about (361-move number) - you have to consider
all top level moves. A/B only
If I use persistent storage and do that search again in another game, I can
start exactly where I left off and generate 50,000 more nodes. It will be
the same as if I did 100,000 nodes instead of 50,000 nodes. Or put another
way, it will be the same as if I spent 20 seconds on this
But then MCTS is invalid. The point is that you do spend time learning that
these nodes are not relevant, so you might as well try to remember that.
It is invalid. It's just a heuristic that is working within the current domain.
If you are playing a game of chess and fall for a trap, do
more nodes. It will be the same as if I did 100,000 nodes instead
of 50,000 nodes. Or put another way, it will be the same as if
I spent 20 seconds on this move instead of 10 seconds.
...
Consider move 20 (for example). If you saved every move 20 node
you ever encountered, how often
On Tue, May 12, 2009 at 8:17 PM, Dave Dyer dd...@real-me.net wrote:
If I use persistent storage and do that search again in another game, I
can start exactly where I left off and generate 50,000 more nodes. It will
be the same as if I did 100,000 nodes instead of 50,000 nodes. Or put