I will have to correct something here. From what I can see, the MPI code
may not be creating ompi_proc_t structures, but rather creating arrays of
ompi_proc_t* pointers that are then filled with the addresses of the
entries on the ompi_proc_list held inside ompi/proc/proc.c.
Just curious: has anyone done comparisons of latency measurements as one
changes the size of a job? That is, changing the size of the job (and the
number of nodes used) and just taking the half round-trip latency of two
of the processes in the job. I am roughly seeing an addition of 5% to
the
On Mon, 7 Jul 2008, Terry Dontje wrote:
Brian W. Barrett wrote:
On Jul 6, 2008, at 1:28 PM, Patrick Geoffray wrote:
WHAT: make mpi_leave_pinned=1 by default when a BTL is used that
would benefit from it (when possible; 0 when not, obviously)
The probable reason the registration cache (aka leave_pinned) is
disabled by default is that it may be unsafe.
Responding to both of Ralph's e-mails in one, just to confuse people :).
First, the issue of the recursive locks... Back in the day, ompi_proc_t
instances could be created as a side effect of other operations.
Therefore, to maintain sanity, the procs were implicitly added to the
master proc