Sure - but then we aren't talking about containers any more, just vendor vs
OMPI. I'm not getting in the middle of that one!
On Jan 27, 2022, at 6:28 PM, Gilles Gouaillardet via users <users@lists.open-mpi.org> wrote:
Thanks Ralph,
Now I get what you had in mind.
Strictly speaking, you are making the assumption that Open MPI performance matches the system MPI's performance.
This is generally true for common interconnects and/or those that feature providers for libfabric or UCX, but not so for "exotic" ...
See inline
Ralph
On Jan 27, 2022, at 10:05 AM, Brian Dobbins <bdobb...@gmail.com> wrote:
Hi Ralph,
Thanks again for this wealth of information - we've successfully run the same container instance across multiple systems without issues, even surpassing 'native' performance in edge cases, presumably because the native host MPI is either older or simply tuned differently (e.g., 'eager ...
Just to complete this - there is always a lingering question regarding shared memory support. There are two ways to resolve that one:
* run one container per physical node, launching multiple procs in each container. The procs can then utilize shared memory _inside_ the container. This is the ...
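Ralph's first option might look roughly like this under Slurm (a hedged sketch, not a definitive recipe: the image name `my_app.sif`, the node count, and the assumption that the image's runscript spawns the node-local ranks are all placeholders for your actual setup):

```shell
# Sketch: start exactly one container instance per physical node
# (--ntasks-per-node=1). A runscript or wrapper inside the image then
# launches the node-local MPI ranks, which can use shared memory
# *inside* the container's namespace. Counts and names are placeholders.
srun --nodes=4 --ntasks-per-node=1 \
     singularity run my_app.sif
```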
> Fair enough Ralph! I was implicitly assuming a "build once / run everywhere"
> use case, my bad for not making my assumption clear.
> If the container is built to run on a specific host, there are indeed other
> options to achieve near-native performance.
>
Err...that isn't actually what I ...
27, 2022 2:59 AM
To: users@lists.open-mpi.org
Cc: Diego Zuccato
Subject: Re: [OMPI users] RES: OpenMPI - Intel MPI
Sorry for the noob question, but: what should I configure for OpenMPI "to perform on the host cluster"? Any link to a guide would be welcome!
Slightly extended rationale for the question: I'm currently using "unconfigured" Debian packages and getting some strange behaviour... Maybe it's just ...
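As a rough sketch of what "configured to perform on the host cluster" can mean in practice, build Open MPI against the host's high-speed interconnect stack (e.g. UCX) and its resource manager, rather than using a generic distribution package. The paths below are assumptions about a typical site, not defaults - check your cluster's documentation:

```shell
# Build Open MPI with the host's UCX (high-speed interconnect support)
# and with Slurm/PMIx launcher integration. /opt/ucx and the prefix are
# site-specific placeholders.
./configure --prefix="$HOME/opt/openmpi" \
            --with-ucx=/opt/ucx \
            --with-pmix \
            --with-slurm
make -j 8 && make install
```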
Fair enough Ralph!
I was implicitly assuming a "build once / run everywhere" use case, my bad
for not making my assumption clear.
If the container is built to run on a specific host, there are indeed other options to achieve near-native performance.
Cheers,
Gilles
On Thu, Jan 27, 2022 at ...
I'll disagree a bit there. You do want to use an MPI library in your container that is configured to perform on the host cluster. However, that doesn't mean you are constrained as Gilles describes. It takes a little more setup knowledge, true, but there are lots of instructions and knowledgeable ...
Brian,
FWIW
Keep in mind that when running a container on a supercomputer, it is generally recommended to use the supercomputer's MPI implementation (fine-tuned and with support for the high-speed interconnect) instead of the container's (generally a vanilla MPI with basic support for TCP ...
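One common way to act on that recommendation is to bind-mount the host's tuned MPI into the container and put it ahead of the container's copy on the library path. This is a hedged sketch: `/opt/hostmpi` and the image name are placeholders for your site's layout, and it only works when the host and container MPIs are ABI-compatible:

```shell
# Substitute the host's tuned MPI for the container's vanilla one at
# runtime. /opt/hostmpi is a placeholder for wherever the host MPI lives.
export SINGULARITY_BIND="/opt/hostmpi"
export SINGULARITYENV_LD_LIBRARY_PATH="/opt/hostmpi/lib:$LD_LIBRARY_PATH"
mpirun -np 64 singularity exec my_app.sif ./mpi_app
```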
Hi Ralph,
Thanks for the explanation - in hindsight, that makes perfect sense, since each process is operating inside the container and will of course load up identical libraries, so data types/sizes can't be inconsistent. I don't know why I didn't realize that before. I imagine the past ...
There is indeed an ABI difference. However, the _launcher_ doesn't have anything to do with the MPI library. All that is needed is a launcher that can provide the key exchange required to wireup the MPI processes. At this point, both MPICH and OMPI have PMIx support, so you can use the same ...
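For instance (a sketch assuming the host's Slurm was built with PMIx support, and a placeholder image name), the same host-side launcher can wire up either an MPICH-linked or an OMPI-linked binary, because both pick up their rendezvous information through PMIx:

```shell
# The host launcher only handles the PMIx key exchange; the MPI library
# inside the container (OMPI or MPICH) does the actual communication.
srun --mpi=pmix -N 2 --ntasks-per-node=16 \
     singularity exec my_app.sif ./mpi_app
```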
Luis,
Can you install OpenMPI into your home directory (or other shared filesystem) and use that? You may also want to contact your cluster admins to see if they can help do that or offer another solution.
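That suggestion might look like the following sketch; the version number and install prefix are placeholders, so pick whatever matches your needs:

```shell
# Build and install Open MPI under $HOME - no root access required.
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.2.tar.bz2
tar xf openmpi-4.1.2.tar.bz2
cd openmpi-4.1.2
./configure --prefix="$HOME/opt/openmpi"
make -j 4 && make install
# Put the new install first in the search paths:
export PATH="$HOME/opt/openmpi/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/opt/openmpi/lib:$LD_LIBRARY_PATH"
```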
On Wed, Jan 26, 2022 at 3:21 PM Luis Alfredo Pires Barbosa via users
wrote:
>
> Hi Ralph,
>
Afraid I don't understand. If your image has the OMPI libraries installed in it, what difference does it make what is on your host? You'll never see the IMPI installation.
We have been supporting people running that way since Singularity was originally released, without any problems. The only ...