[OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-11 Thread Adrian Reber via users
I did a quick test to see if I can use Podman in combination with Open MPI:

[test@test1 ~]$ mpirun --hostfile ~/hosts podman run quay.io/adrianreber/mpi-test /home/mpi/hello
Hello, world (1 procs total)
--> Process # 0 of 1 is alive. ->789b8fb622ef
Hello, world (1 procs total)
--> …
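For readers trying to reproduce this: mpirun starts one "podman run" per rank on the hosts listed in the hostfile, so the MPI binary executes inside a container. A minimal sketch, assuming two hosts named test1 and test2 (the image and binary path are taken from the command above; the hostnames are assumptions):

    # Illustrative hostfile (~/hosts); hostnames are assumptions
    test1 slots=1
    test2 slots=1

    # mpirun launches one container per rank on those hosts
    mpirun --hostfile ~/hosts \
        podman run quay.io/adrianreber/mpi-test /home/mpi/hello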

Re: [OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-11 Thread Adrian Reber via users
…be an easy ride.
>
> Cheers,
>
> Gilles
>
> On 7/11/2019 4:35 PM, Adrian Reber via users wrote:
> > I did a quick test to see if I can use Podman in combination with Open
> > MPI:
> >
> > [test@test1 ~]$ mpirun --hostfile ~/hosts po…

Re: [OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-12 Thread Adrian Reber via users
…completed ring
Rank 2 has completed MPI_Barrier
Rank 1 has completed MPI_Barrier

This is using the Open MPI ring.c example with SIZE increased from 20 to 2. Any recommendations on what vader needs to communicate correctly?

Adrian

On Thu, Jul 11, 2019 at 12:07:35PM +0200, Adrian R…

Re: [OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-12 Thread Adrian Reber via users
> …requires some permission (ptrace?) that might
> be dropped by podman.
>
> Note Open MPI might not detect both MPI tasks run on the same node because of
> podman.
> If you use UCX, then btl/vader is not used at all (pml/ucx is used instead)
>
> Cheers,
>
> Gilles
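A quick way to check whether the host would even allow CMA between unrelated processes; a minimal sketch, assuming a distribution that ships the Yama LSM (most common ones do):

    # CMA (process_vm_readv/writev) is gated by ptrace policy; a value of
    # 1 or higher restricts it, and separate PID namespaces break it anyway
    cat /proc/sys/kernel/yama/ptrace_scope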

Re: [OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-21 Thread Adrian Reber via users
…namespace would also be necessary. Is this a use case important enough to accept a patch for it?

Adrian

On Fri, Jul 12, 2019 at 03:42:15PM +0200, Adrian Reber via users wrote:
> Gilles,
>
> thanks again. Adding '--mca btl_vader_single_copy_mechanism none' help…
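Put together, the working invocation would look roughly like this; a sketch based on the commands quoted in this thread (hostfile, image, and binary path as in the first message):

    # Disable vader's single-copy (CMA) path, which fails across the
    # PID/user namespaces podman creates, then launch as before
    mpirun --hostfile ~/hosts \
        --mca btl_vader_single_copy_mechanism none \
        podman run quay.io/adrianreber/mpi-test /home/mpi/hello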

Re: [OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-22 Thread Adrian Reber via users
> …the best
> performance.
>
> -Nathan
>
> > On Jul 21, 2019, at 2:53 PM, Adrian Reber via users wrote:
> >
> > For completeness I am mentioning my results also here.
> >
> > To be able to mount file systems in the container it can only work if…

Re: [OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-22 Thread Adrian Reber via users
> > Adrian,
> >
> > An option is to involve the modex.
> >
> > each task would OPAL_MODEX_SEND() its own namespace ID, and then
> > OPAL_MODEX_RECV() the one from its peers and decide whether CMA
> > support can be enabled.

Re: [OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-25 Thread Adrian Reber via users
> Gilles
>
> On 7/22/2019 4:53 PM, Adrian Reber via users wrote:
> > I had a look at it and not sure if it really makes sense.
> > …

[OMPI users] Do not use UCX for shared memory

2019-09-24 Thread Adrian Reber via users
Now that my PR to autodetect user namespaces has been merged in Open MPI (thanks everyone for the help!) I tried running containers on a UCX-enabled installation. The whole UCX setup confuses me a lot. Is it possible with a UCX-enabled installation to tell Open MPI to use vader for shared memory and not UCX…

Re: [OMPI users] Do not use UCX for shared memory

2019-09-25 Thread Adrian Reber via users
> …uct btl if you want to continue to use UCX but want Vader for
> shared memory. Typical usage is --mca pml ob1 --mca osc ^ucx --mca btl
> self,vader,uct --mca btl_uct_memory_domains ib/mlx5_0
>
> -Nathan
>
> > On Sep 24, 2019, at 11:13 AM, Adrian Reber via users wrot…
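Spelled out as a full command line, Nathan's recipe would look roughly as follows; a sketch, where the memory domain ib/mlx5_0 depends on the local HCA and ./app stands in for the actual MPI program:

    # Force the ob1 PML so pml/ucx is not selected, exclude UCX one-sided,
    # and use vader for shared memory plus uct for the InfiniBand traffic
    mpirun --mca pml ob1 \
        --mca osc ^ucx \
        --mca btl self,vader,uct \
        --mca btl_uct_memory_domains ib/mlx5_0 \
        ./app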

Re: [OMPI users] vader_single_copy_mechanism

2020-03-02 Thread Adrian Reber via users
I do not know much about vader, but one of my pull requests was recently merged concerning exactly this:

https://github.com/open-mpi/ompi/pull/6844
https://github.com/open-mpi/ompi/pull/6997

The changes in these pull requests are to detect if different Open MPI processes are running in different u…
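The check those pull requests add boils down to comparing namespace identities via /proc; the idea can be illustrated from the shell (a minimal sketch, with OTHER_PID standing in for a second rank's process ID):

    # Two processes share a user namespace iff these inode IDs match;
    # inside a rootless podman container the user namespace differs
    readlink /proc/self/ns/user          # e.g. user:[4026531837]
    readlink /proc/$OTHER_PID/ns/user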