"r...@open-mpi.org" writes:
> Yes, I’ve been hearing a growing number of complaints about cgroups for that
> reason. Our mapping/ranking/binding options will work with the cgroup
> envelope, but it generally winds up with a result that isn’t what the user
> wanted or expected.
Thanks, Ralph,
A video would be great to accompany the slides!
I hope you have a good and productive SC16.
-- bennet
On Fri, Oct 28, 2016 at 8:40 PM, r...@open-mpi.org wrote:
Yes, I’ve been hearing a growing number of complaints about cgroups for that
reason. Our mapping/ranking/binding options will work with the cgroup envelope,
but it generally winds up with a result that isn’t what the user wanted or
expected.
We always post the OMPI BoF slides on our web site, [...]
Ralph,
Alas, I will not be at SC16. I would like to hear and/or see what you
present, so if it gets made available in an alternate format, I'd
appreciate knowing where and how to get it.
I am more and more coming to think that our cluster configuration is
essentially designed to frustrate MPI [...]
FWIW: I’ll be presenting “Mapping, Ranking, and Binding - Oh My!” at the OMPI
BoF meeting at SC’16, for those who can attend. Will try to explain the
rationale as well as the mechanics of the options.
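For readers who can't attend the BoF, the mechanics Ralph refers to come down to three mpirun option families. A minimal sketch (Open MPI 2.x option names; `./a.out` is a placeholder MPI program, and the exact placements shown in the comments depend on your hardware):

```shell
# --map-by: where ranks are placed (here: round-robin across sockets)
mpirun -np 8 --map-by socket ./a.out
# --rank-by: how rank numbers are assigned within that placement
mpirun -np 8 --map-by socket --rank-by core ./a.out
# --bind-to: what each rank is pinned to; --report-bindings prints the result
mpirun -np 8 --bind-to core --report-bindings ./a.out
```

`--report-bindings` is the quickest way to see whether the result matches what you expected.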
> On Oct 11, 2016, at 8:09 AM, Dave Love wrote:
Gilles Gouaillardet writes:
Bennet,
my guess is mapping/binding to sockets was deemed the best compromise
from an "out of the box" performance point of view.
iirc, we did fix some bugs that occurred when running under asymmetric
cpusets/cgroups.
if you still have some issues with the latest Open MPI version (2.0.1) [...]
Pardon my naivete, but why is bind-to-none not the default? If a user
wants to specify something, they can then get into trouble knowingly.
We have had all manner of problems with binding when using
cpusets/cgroups.
-- bennet
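One way to start diagnosing the cpuset trouble bennet describes: before blaming the MPI binding options, check what CPU envelope the cgroup actually left the process. A minimal Linux-only sketch (no MPI required; run it inside your batch allocation):

```shell
# Show the CPUs the kernel allows this process to use (Linux only).
# Inside a cpuset/cgroup-limited job, this list is the "envelope"
# that any mpirun binding must fit within.
grep Cpus_allowed_list /proc/self/status
```

If this list is smaller or differently shaped than you expect, the surprising bindings are coming from the resource manager's cgroup, not from Open MPI.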
On Thu, Sep 29, 2016 at 9:52 PM, Gilles Gouaillardet wrote:
David,
i guess you would have expected the default mapping/binding scheme to be
core instead of socket.
iirc, we decided *not* to bind to cores by default because it is "safer":
if you simply run
OMP_NUM_THREADS=8 mpirun -np 2 a.out
then a default mapping/binding scheme by core means the OpenMP [...]
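Gilles' example can be made concrete. A hedged sketch (assuming `a.out` is a hybrid MPI+OpenMP program) contrasting the binding choices discussed in this thread:

```shell
# 2 ranks x 8 OpenMP threads each.
# Bound to one core, all 8 threads of a rank time-share that core:
OMP_NUM_THREADS=8 mpirun -np 2 --bind-to core ./a.out
# Bound to a socket (the default for >2 ranks), the threads can
# spread across that socket's cores:
OMP_NUM_THREADS=8 mpirun -np 2 --bind-to socket ./a.out
# Or opt out of binding entirely, as bennet suggests:
OMP_NUM_THREADS=8 mpirun -np 2 --bind-to none ./a.out
```

This is why core binding is risky as a default: it silently serializes the threads of any hybrid job that doesn't override it.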
Hello All,
Would anyone know why the default mapping scheme is socket for jobs with
more than 2 ranks? Could someone please take some time to explain the
reasoning? Please note I am not railing against the decision, but rather
trying to gather as much information about it as I can.