On Sun, Nov 01, 2009 at 11:04:25AM -0700, Brian Sheppard wrote:
> >The PRNG (pseudo random number generator) can cause the super-linear
> >speed-up, for example.
>
> Really? The serial simulation argument seems to apply.
But the serial simulation argument is just a re-formulation of the
statement,
Darren Cook: <4aecdf3e.7010...@dcook.org>:
> Parallelization *cannot* provide super-linear speed-up.
>>>...
>>> The result follows from a simulation argument. Suppose that you had a
>>> parallel process that performed better than N times a serial program.
>>> Construct a new serial program that
Parallelization *cannot* provide super-linear speed-up.
>>...
>> The result follows from a simulation argument. Suppose that you had a
>> parallel process that performed better than N times a serial program.
>> Construct a new serial program that simulates the parallel process. There is
>> a c
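As a concrete illustration of the simulation argument quoted above: a single-threaded driver can round-robin among N "virtual workers" and reproduce exactly what an N-way parallel search would compute with the same total playout budget. This is a minimal sketch (the `worker` generator stands in for a real playout loop; nothing here is from an actual Go program):

```python
import random

def worker(seed):
    """One 'virtual' searcher: yields its running total, one playout per step."""
    rng = random.Random(seed)
    total = 0.0
    while True:
        total += rng.random()   # stand-in for one Monte-Carlo playout result
        yield total

def serial_simulation(n_workers, steps_each):
    """Round-robin N workers on one thread. Each worker executes exactly
    steps_each playouts, in the same order it would in a parallel run,
    so the serial program reproduces the parallel program's results."""
    workers = [worker(seed) for seed in range(n_workers)]
    results = [0.0] * n_workers
    for _ in range(steps_each):              # time-slice: one playout per worker per pass
        for i, w in enumerate(workers):
            results[i] = next(w)
    return results
```

Since the serial program reproduces every per-worker result with only constant scheduling overhead, the parallel version can run at most about N times faster; any apparent super-linear speed-up must come from something else (e.g. the search exploring differently, or PRNG effects as suggested above).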
On Fri, Oct 30, 2009 at 12:50 PM, Brian Sheppard wrote:
>>> Parallelization *cannot* provide super-linear speed-up.
>>
>>I don't see that at all.
>
> This is standard computer science stuff, true of all parallel programs and
> not just Go players. No parallel program can be better than N times a s
In the MPI runs we use an 8-core node, so the playouts per node are higher.
I don't ponder, since the program isn't scaling anyway.
The number of nodes with high visits is smaller, and I only send nodes that
changed since the last send.
I do progressive unpruning, so most children have zero visits.
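For reference, progressive unpruning of the kind mentioned above can be sketched like this. The widening schedule and constants are illustrative assumptions, not Many Faces' actual ones; the point is that only the first few children (ranked by prior) are open to search at first, so most children sit at zero visits:

```python
import math

def num_unpruned(parent_visits, k=2.0, base=1):
    """Progressive unpruning: admit more children as the parent
    accumulates visits (log schedule and constants are made up)."""
    if parent_visits < 1:
        return base
    return base + int(k * math.log(parent_visits + 1))

def select_candidates(children, parent_visits):
    """children: list of (move, prior) sorted best-first by prior.
    Returns the slice currently open to search; the rest keep zero visits."""
    width = num_unpruned(parent_visits)
    return children[:width]
```

Because the admitted slice grows only logarithmically with parent visits, a node needs many visits before its low-prior children are ever searched.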
On Fri, Oct 30, 2009 at 10:50:05AM -0600, Brian Sheppard wrote:
> >> Parallelization *cannot* provide super-linear speed-up.
> >
> >I don't see that at all.
>
> This is standard computer science stuff, true of all parallel programs and
> not just Go players. No parallel program can be better than
I share all UCT nodes with more than N visits, where N is currently 100, but
performance doesn't seem very sensitive to N.
Does Mogo share RAVE values as well over MPI?
I agree that low scaling is a problem, and I don't understand why.
It might be the MFGO bias. With low numbers of playouts M
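A sketch of the sharing scheme described above (plain Python rather than real MPI code; all names are made up for illustration): broadcast only nodes above the visit threshold, and only those whose counts changed since the last send:

```python
VISIT_THRESHOLD = 100  # "N" in the message; performance reportedly not sensitive to it

def delta_to_send(tree, last_sent):
    """tree: {node_id: visit_count}. Return only high-visit nodes that
    changed since the previous broadcast, and remember what was sent.
    A real MPI version would serialize this and exchange it between ranks."""
    delta = {}
    for node_id, visits in tree.items():
        if visits > VISIT_THRESHOLD and last_sent.get(node_id) != visits:
            delta[node_id] = visits
    last_sent.update(delta)
    return delta

def merge_delta(tree, delta):
    """Receiver side: fold a peer's visit counts into the local tree."""
    for node_id, visits in delta.items():
        tree[node_id] = tree.get(node_id, 0) + visits
```

Sending only changed, high-visit nodes keeps the per-exchange message small, since the number of nodes with high visit counts is a small fraction of the tree.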
On Fri, Oct 30, 2009 at 07:53:15AM -0600, Brian Sheppard wrote:
> >I personally just use root parallelization in Pachi
>
> I think this answers my question; each core in Pachi independently explores
> a tree, and the master thread merges the data. This is even though you have
> shared memory on yo
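Root parallelization as discussed here can be sketched as follows: each core builds an independent tree, and the master merges only the root statistics before choosing the most-visited move. Names and structure are illustrative, not Pachi's actual code:

```python
from collections import Counter

def merge_roots(root_stats):
    """root_stats: one Counter per core, mapping root move -> visit count.
    Root parallelization merges only the root statistics (never the deep
    tree), then picks the move with the highest combined visit count."""
    combined = Counter()
    for stats in root_stats:
        combined.update(stats)
    best_move = combined.most_common(1)[0][0]
    return best_move, combined
```

The appeal of this scheme is that the cores need no synchronization at all during search; the only communication is the final merge of a handful of root counts.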
Sent from my iPhone
On Oct 30, 2009, at 9:53 AM, "Brian Sheppard" wrote:
confirming the paper's finding that the play improvement is
larger than that from multiplying the number of sequential playouts appropriately.
Well, this is another reason why I doubt the results from the Mango
paper.
Paral
I use thread-safe SMP within a node. The tree expansion and playouts have
no locks, but Many Faces' engine is not thread safe, so there is a lock
around it; only one thread can be biasing moves at a time.
For multinode I use MPI. Even though the model is different, the MPI code
is not very la
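The locking structure described above can be sketched like so (a hypothetical `FakeEngine` stands in for the non-thread-safe engine; this is not Many Faces' actual code). Playouts and tree work need no lock; a single lock serializes the move-biasing calls:

```python
import threading

class FakeEngine:
    """Stand-in for a non-thread-safe engine (illustrative only)."""
    def __init__(self):
        self.calls = 0

    def bias(self, position):
        # Non-atomic read-modify-write: unsafe if two threads enter at once.
        count = self.calls
        count += 1
        self.calls = count
        return position

bias_lock = threading.Lock()  # only one thread may bias moves at a time

def bias_moves(engine, position):
    """Serialize access to the engine; everything else runs lock-free."""
    with bias_lock:
        return engine.bias(position)

def run(engine, n_threads=8, calls_per_thread=1000):
    """Hammer the engine from several threads; the lock keeps it consistent."""
    def work():
        for i in range(calls_per_thread):
            bias_moves(engine, i)
    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return engine.calls
```

The cost of this design is that move biasing becomes a serial bottleneck as thread count grows, which is one plausible contributor to the low scaling discussed elsewhere in the thread.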
It depends upon the scaling you want. Some of what you write seems to
imply that you are thinking about MCTS programs, while your questions
are also more general.
When we wrote SlugGo (one of the top programs a few years ago but
in hibernation now) we went with MPI. MPI lets you simulate as many
On Thu, Oct 29, 2009 at 12:40:26PM -0600, Brian Sheppard wrote:
> And now my question: what do you actually do: MPI, thread-safe, both, or
> something else?
Have you read the Parallel Monte-Carlo Tree Search paper? It sums up the
possibilities nicely. I personally just use root parallelization
(si