Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Gilles Gouaillardet via users
Fair enough Ralph!

I was implicitly assuming a "build once / run everywhere" use case, my bad
for not making my assumption clear.

If the container is built to run on a specific host, there are indeed other
options to achieve near-native performance.

Cheers,

Gilles


Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Ralph Castain via users
I'll disagree a bit there. You do want to use an MPI library in your container
that is configured to perform on the host cluster. However, that doesn't mean
you are constrained as Gilles describes. It takes a little more setup
knowledge, true, but there are lots of instructions and knowledgeable people
out there to help. Experiments have shown that using non-system MPIs provides
at least equivalent performance to the native MPIs when properly configured.
Matching the internal/external MPI implementations may simplify the mechanics
of setting it up, but it is definitely not required.



Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Gilles Gouaillardet via users
Brian,

FWIW

Keep in mind that when running a container on a supercomputer, it is
generally recommended to use the supercomputer's MPI implementation
(fine-tuned and with support for the high-speed interconnect) instead of
the container's own (generally a vanilla MPI with basic support for TCP
and shared memory).
That scenario implies several additional constraints, and one of them is
that the MPI libraries of the host and the container are (to oversimplify)
ABI compatible.

In your case, you would have to rebuild your container with MPICH (instead
of Open MPI) so it can be "substituted" at run time with Intel MPI (which is
MPICH-based and ABI-compatible).
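
A minimal sketch of that substitution, assuming a Singularity image whose
application is linked against MPICH and a host Intel MPI provided through an
environment module (the module name, install path, image name, and binary are
all hypothetical, site-specific details):

    module load intel-mpi                  # host Intel MPI (MPICH ABI)
    export SINGULARITYENV_LD_LIBRARY_PATH=/opt/intel/mpi/latest/lib/release
    mpirun -n 64 singularity exec --bind /opt/intel \
        my_app.sif ./my_mpi_app            # binary in the image, built against MPICH

Note that this is the "mount the host MPI into the container" pattern that is
argued against elsewhere in the thread; it trades container portability for
the host's tuned MPI.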

Cheers,

Gilles


Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Brian Dobbins via users
Hi Ralph,

  Thanks for the explanation - in hindsight, that makes perfect sense,
since each process is operating inside the container and will of course
load up identical libraries, so data types/sizes can't be inconsistent.  I
don't know why I didn't realize that before.  I imagine the past issues I'd
experienced were just due to the PMI differences in the different MPI
implementations at the time.  I owe you a beer or something at the next
in-person SC conference!

  Cheers,
  - Brian




Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Ralph Castain via users
There is indeed an ABI difference. However, the _launcher_ doesn't have
anything to do with the MPI library. All that is needed is a launcher that can
provide the key exchange required to wire up the MPI processes. At this point,
both MPICH and OMPI have PMIx support, so you can use the same launcher for
both. IMPI does not, and so the IMPI launcher will only support PMI-1 or PMI-2
(I forget which one).

You can, however, work around that problem. For example, if the host system is 
using Slurm, then you could "srun" the containers and let Slurm perform the 
wireup. Again, you'd have to ensure that OMPI was built to support whatever 
wireup protocol the Slurm installation supported (which might well be PMIx 
today). Also works on Cray/ALPS. Completely bypasses the IMPI issue.
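
A minimal sketch of that, assuming a Slurm installation with PMIx support (node
counts, image name, and binary are invented):

    srun --mpi=pmix -N 4 --ntasks-per-node=32 \
        singularity exec my_ompi_app.sif ./my_mpi_app

If the site's Slurm only offers PMI-2, the Open MPI inside the image would need
to be built with the matching support instead.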

Another option I've seen used is to have the host system start the containers 
(using ssh or whatever), providing the containers with access to a "hostfile" 
identifying the TCP address of each container. It is then easy for OMPI's 
mpirun to launch the job across the containers. I use this every day on my 
machine (using Docker Desktop with Docker containers, but the container tech is 
irrelevant here) to test OMPI. Pretty easy to set that up, and I should think 
the sys admins could do so for their users.
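
A rough sketch of that arrangement (addresses, paths, and names invented, and
assuming the containers can reach one another, e.g. each runs an ssh daemon):

    # /shared/containers.hostfile -- one container per node
    10.0.0.11 slots=32
    10.0.0.12 slots=32

    # then, from inside one of the containers:
    mpirun --hostfile /shared/containers.hostfile -np 64 ./my_mpi_app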

Finally, you could always install the PMIx Reference RTE (PRRTE) on the cluster 
as that executes at user level, and then use PRRTE to launch your OMPI 
containers. OMPI runs very well under PRRTE - in fact, PRRTE is the RTE 
embedded in OMPI starting with the v5.0 release.
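
A sketch of the PRRTE route, assuming a user-level PRRTE install on the PATH
(hostfile and names invented; option spellings can differ between PRRTE
releases):

    prterun -np 64 --hostfile hosts.txt \
        singularity exec my_ompi_app.sif ./my_mpi_app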

Regardless of your choice of method, the presence of IMPI doesn't preclude
using OMPI containers so long as the OMPI library is fully contained in that
container. The choice of launch method just depends on how your system is set up.

Ralph






Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Brian Dobbins via users
Hi Ralph,

> Afraid I don't understand. If your image has the OMPI libraries installed
> in it, what difference does it make what is on your host? You'll never see
> the IMPI installation.
>
> We have been supporting people running that way since Singularity was
> originally released, without any problems. The only time you can hit an
> issue is if you try to mount the MPI libraries from the host (i.e., violate
> the container boundary) - so don't do that and you should be fine.

  Can you clarify what you mean here?  I thought there was an ABI
difference between the various MPICH-based MPIs and OpenMPI, meaning you
can't use a host's Intel MPI to launch a container's OpenMPI-compiled
program.  You *can* use the internal-to-the-container OpenMPI to launch
everything, which is easy for single-node runs but more challenging for
multi-node ones.  Maybe my understanding is wrong or out of date though?

  Thanks,
  - Brian
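
A minimal sketch of the single-node case mentioned above, where the launcher,
the MPI library, and the application all come from inside the image (image and
binary names invented):

    singularity exec my_ompi_app.sif mpirun -np 8 ./my_mpi_app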





Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Bennet Fauber via users
Luis,

Can you install OpenMPI into your home directory (or another shared
filesystem) and use that?  You may also want to contact your cluster
admins to see if they can help do that or offer another solution.
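
A rough sketch of such a user-level install (the version, prefix, and configure
options are only examples; the options should match what the cluster actually
provides, e.g. its Slurm/PMIx and interconnect libraries):

    wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.2.tar.bz2
    tar xjf openmpi-4.1.2.tar.bz2 && cd openmpi-4.1.2
    ./configure --prefix=$HOME/sw/openmpi-4.1.2 --with-slurm --with-pmix
    make -j8 all install
    export PATH=$HOME/sw/openmpi-4.1.2/bin:$PATH
    export LD_LIBRARY_PATH=$HOME/sw/openmpi-4.1.2/lib:$LD_LIBRARY_PATH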



Re: [OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Ralph Castain via users
Afraid I don't understand. If your image has the OMPI libraries installed in 
it, what difference does it make what is on your host? You'll never see the 
IMPI installation.

We have been supporting people running that way since Singularity was 
originally released, without any problems. The only time you can hit an issue 
is if you try to mount the MPI libraries from the host (i.e., violate the 
container boundary) - so don't do that and you should be fine.





[OMPI users] RES: OpenMPI - Intel MPI

2022-01-26 Thread Luis Alfredo Pires Barbosa via users
Hi Ralph,

My Singularity image has OpenMPI, but my host doesn't (it has Intel MPI), and I
am not sure whether the system would work with Intel MPI + OpenMPI.

Luis

Sent from Mail for Windows





Re: [OMPI users] [External] Re: OpenMPI - Intel MPI

2022-01-26 Thread Sheppard, Raymond W via users
Hi All,
  FYI, we had trouble with Intel MPI in the past.  A full install of the Intel
compiler builds Intel MPI along with it.  Users were ending up with the
compiler's paths and libraries, along with the system directories, placed in
front, so the OpenMPI module sometimes was not seen even when it had just been
loaded.  The fix was to push the Intel compiler "to the back of the line": we
split the Intel compiler into two modules, so to get everything beyond the
compiler itself you now have to load a second module.  I don't know if this
applies in this case, but I thought I would toss it out in case someone else
runs into it.
 Ray





Re: [OMPI users] OpenMPI - Intel MPI

2022-01-26 Thread John Hearns via users
Luis, it is perfectly possible to use different MPI implementations on the same
cluster. May we ask what your OS and cluster management stack are?
Normally you would use the Modules system to configure your job to use a chosen
MPI.
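
For instance (module names here are invented and will differ from site to
site):

    module avail                  # see which MPI builds the site provides
    module purge
    module load openmpi/4.1.2     # pick an Open MPI build instead of Intel MPI
    mpirun -np 4 ./my_mpi_app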





Re: [OMPI users] OpenMPI - Intel MPI

2022-01-26 Thread Ralph Castain via users
Err...the whole point of a container is to put all the library dependencies 
_inside_ it. So why don't you just install OMPI in your singularity image?
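
A minimal Singularity definition-file sketch for doing exactly that (base
image, Open MPI version, and paths are arbitrary examples):

    Bootstrap: docker
    From: ubuntu:20.04

    %post
        apt-get update && apt-get install -y build-essential gfortran \
            wget bzip2 ca-certificates file
        wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.2.tar.bz2
        tar xjf openmpi-4.1.2.tar.bz2 && cd openmpi-4.1.2
        ./configure --prefix=/opt/ompi && make -j4 all install
        # build or copy the application against /opt/ompi here

    %environment
        export PATH=/opt/ompi/bin:$PATH
        export LD_LIBRARY_PATH=/opt/ompi/lib:$LD_LIBRARY_PATH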





[OMPI users] OpenMPI - Intel MPI

2022-01-26 Thread Luis Alfredo Pires Barbosa via users
Hello all,


I have Intel MPI on my cluster, but I am running a Singularity image of a piece
of software which uses OpenMPI.

Since they may not be compatible, I don't think it is possible to get these two
different MPIs running on the system.

I wonder if there is some workaround for this issue.


Any insight would be welcome.

Luis