Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-10 Thread Gilles Gouaillardet
Jeff, At first glance, a comment in the code suggests the rationale is to minimize the number of allocations and hence the time spent registering the memory. Cheers, Gilles Jeff Hammond wrote: > Why is this allocated statically? I don't understand the difficulty of a > dynamically allocates

Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-10 Thread Peter Kjellström
On Thu, 10 Jan 2019 11:20:12 + ROTHE Eduardo - externe wrote: > Hi Gilles, thank you so much for your support! > For now I'm just testing the software, so it's running on a single node. > Your suggestion was very precise. In fact, choosing the ob1 component leads to a successful

Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-10 Thread Gilles Gouaillardet
Eduardo, You have two options to use OmniPath - “directly” via the psm2 mtl mpirun --mca pml cm --mca mtl psm2 ... - “indirectly” via libfabric mpirun --mca pml cm --mca mtl ofi ... I do invite you to try both. By explicitly requesting the mtl you will avoid potential conflicts. libfabric is used
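The two invocations above, written out as complete command lines (a sketch; `./a.out` stands in for the actual application binary, which the original message elides):

```shell
# Run over Omni-Path using the PSM2 MTL directly
mpirun --mca pml cm --mca mtl psm2 ./a.out

# Run over Omni-Path indirectly, through libfabric's OFI MTL
mpirun --mca pml cm --mca mtl ofi ./a.out
```

Forcing the `cm` pml in both cases rules out the `ob1`/btl path, so any difference in behavior comes from the mtl choice alone.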

Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-10 Thread ROTHE Eduardo - externe
Hi Gilles, thank you so much for your support! For now I'm just testing the software, so it's running on a single node. Your suggestion was very precise. In fact, choosing the ob1 component leads to a successful execution! The tcp component had no effect. mpirun --mca pml ob1 --mca btl

Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-10 Thread Peter Kjellström
On Thu, 10 Jan 2019 21:51:03 +0900 Gilles Gouaillardet wrote: > Eduardo, > > You have two options to use OmniPath > > - “directly” via the psm2 mtl > mpirun --mca pml cm --mca mtl psm2 ... > > - “indirectly” via libfabric > mpirun --mca pml cm --mca mtl ofi ... > > I do invite you to try both.

Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-10 Thread ROTHE Eduardo - externe
Hi Gilles, thank you so much once again! I succeeded using the psm2 mtl directly. Indeed, I do not need to specify the cm pml (I guess this might be because the cm pml gets automatically selected when I enforce the psm2 mtl?). So both of the following commands execute successfully with Open
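One way to check the selection guess above is to raise the MCA verbosity so Open MPI reports which pml and mtl components it actually picks (a diagnostic sketch; `./a.out` is a placeholder for the application binary):

```shell
# Print component selection decisions for the pml and mtl frameworks
mpirun --mca pml_base_verbose 10 --mca mtl_base_verbose 10 \
       --mca mtl psm2 -n 2 ./a.out
```

The verbose output should show the cm pml winning the selection once psm2 is forced, confirming whether the pml needs to be named explicitly.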

Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-10 Thread Udayanga Wickramasinghe
I actually have a use case where my library will attach many non-overlapping vm segments on demand to a single dynamic OMPI_Win_t object. With the current static limit, I would either have to increase it optimistically before startup or maintain a pool of dynamic win objects. However, other MPI
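The on-demand attach pattern described above can be sketched with the standard MPI-3 dynamic window API (a minimal illustration, not Udayanga's actual library code; error handling is omitted and the segment count and sizes are arbitrary):

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* A dynamic window starts with no memory associated with it. */
    MPI_Win win;
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Attach non-overlapping segments on demand; in Open MPI each
       attach may count against the static attach-region limit
       discussed in this thread. */
    enum { NSEG = 4, SEG_BYTES = 4096 };
    void *seg[NSEG];
    for (int i = 0; i < NSEG; i++) {
        seg[i] = malloc(SEG_BYTES);
        MPI_Win_attach(win, seg[i], SEG_BYTES);
    }

    /* ... exchange seg[i] base addresses with peers and do RMA ... */

    for (int i = 0; i < NSEG; i++) {
        MPI_Win_detach(win, seg[i]);
        free(seg[i]);
    }
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

When the implementation's attach limit is a concern, the alternatives the message mentions map directly onto this sketch: either raise the limit before startup, or spread the `MPI_Win_attach` calls across a pool of dynamic windows.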