Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-16 Thread ROTHE Eduardo - externe
port the API that OMPI uses (this was fixed in libfabric 1.6.0). A quick way to test this would be adding ‘-mca mtl_ofi_tag_mode ofi_tag_1’ to your command line. This would force OMPI not to use FI_REMOTE_CQ_DATA. Thanks, From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of ROTHE
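
For readers following this suggestion, a hedged sketch of what the full command line might look like; only the MCA parameter comes from the message above, while the executable name ./a.out, the process count, and the explicit selection of the OFI MTL are assumptions added for clarity:

  mpirun -np 2 -mca mtl ofi -mca mtl_ofi_tag_mode ofi_tag_1 ./a.out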

Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-15 Thread ROTHE Eduardo - externe
tl ofi -mca pml cm ./a Hello World from proccess 0 out of 2 This is process 0 reporting:: Hello World from proccess 1 out of 2 Process 1 received number 10 from process 0 From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of ROTHE Eduardo - externe Sent: Thursday, January 10, 2019
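
The command in this preview is truncated; a plausible reconstruction, assuming the usual mpirun prefix and two processes (neither is shown in the snippet), would be:

  mpirun -np 2 -mca mtl ofi -mca pml cm ./a.out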

Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-10 Thread ROTHE Eduardo - externe
/4.0.0’ and this is very fishy. You should also use ‘--with-hwloc=external’. How many nodes are you running on and which interconnect are you using? What if you run mpirun --mca pml ob1 --mca btl tcp,self -np 2 ./a.out Cheers, Gilles On Wednesday, January 9, 2019, ROTHE Eduardo - externe <eduardo-exte
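
A hedged sketch of the suggested build and test steps; only the --with-hwloc=external option and the mpirun flags come from the message, while the install prefix and the make invocation are assumptions:

  ./configure --prefix=/opt/openmpi-4.0.0 --with-hwloc=external
  make -j all install
  mpirun --mca pml ob1 --mca btl tcp,self -np 2 ./a.out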

Re: [OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-10 Thread ROTHE Eduardo - externe
ys happen and they could be related to Open MPI, libfabric and/or OmniPath (and fwiw, Intel is a major contributor to libfabric too) Cheers, Gilles On Thursday, January 10, 2019, ROTHE Eduardo - externe <eduardo-externe.ro...@edf.fr> wrote: Hi Gilles, thank you so much for your suppor

[OMPI users] Open MPI 4.0.0 - error with MPI_Send

2019-01-09 Thread ROTHE Eduardo - externe
Hi. I'm testing Open MPI 4.0.0 and I'm struggling with a weird behaviour in a very simple example (very frustrating). I'm getting the following error returned by MPI_Send: [gafront4:25692] *** An error occurred in MPI_Send [gafront4:25692] *** reported by process
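
The test program itself is not shown in the archive preview. Below is a minimal sketch of the kind of two-rank send/receive example described, assuming the classic tutorial pattern; the value 10 and the rough output format are taken from the run quoted in the 2019-01-15 message above, and everything else, including variable names, is an assumption rather than the poster's actual source.

/* Hedged sketch of a minimal reproducer; not the poster's actual code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, number;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello World from process %d out of %d\n", rank, size);

    if (rank == 0) {
        number = 10;
        printf("This is process 0 reporting:\n");
        /* The error discussed in this thread is reported from this call. */
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Process 1 received number %d from process 0\n", number);
    }

    MPI_Finalize();
    return 0;
}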