Re: [OMPI users] Open questions on MPI_Allreduce background implementation

2019-06-08 Thread George Bosilca via users
There is an ongoing discussion about this on issue #4067 (https://github.com/open-mpi/ompi/issues/4067). The mailing list also contains a few examples of how to tweak the collective algorithms to your needs. George.
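
As a concrete illustration of the kind of tweaking the thread refers to (a sketch, not from the original message; the application name is hypothetical, and the available algorithm IDs should be verified with "ompi_info --param coll tuned --level 9"):

    mpirun --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_allreduce_algorithm 4 \
           -np 8 ./my_app

On typical builds ID 4 selects the ring algorithm for MPI_Allreduce, while ID 0 restores the built-in decision logic.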

Re: [OMPI users] Can displs in Scatterv/Gatherv/etc be a GPU array for CUDA-aware MPI?

2019-06-11 Thread George Bosilca via users
Leo, In a UMA system, having the displacement and/or recvcounts arrays in managed GPU memory should work, but it will incur overhead for at least two reasons: 1. the MPI API arguments are checked for correctness (here, recvcounts); 2. the collective algorithm part that executes on the CPU uses the
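
A minimal sketch of the trade-off described above (the helper is hypothetical; a CUDA-aware Open MPI build is assumed for the data buffers):

    #include <mpi.h>
    #include <stdlib.h>

    /* Gather equal-sized blocks to rank 0. With a CUDA-aware MPI the data
     * buffers may be GPU pointers, but recvcounts/displs are dereferenced
     * on the CPU (argument checking, algorithm bookkeeping), so keeping
     * them in plain host memory avoids managed-memory page faults. */
    int gather_blocks(const float *sendbuf, int count, float *recvbuf,
                      MPI_Comm comm)
    {
        int size, rc, i;
        MPI_Comm_size(comm, &size);

        int *recvcounts = malloc(size * sizeof *recvcounts);
        int *displs     = malloc(size * sizeof *displs);
        for (i = 0; i < size; i++) {
            recvcounts[i] = count;
            displs[i]     = i * count;
        }

        rc = MPI_Gatherv(sendbuf, count, MPI_FLOAT,
                         recvbuf, recvcounts, displs, MPI_FLOAT,
                         0, comm);
        free(recvcounts);
        free(displs);
        return rc;
    }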

Re: [OMPI users] growing memory use from MPI application

2019-06-19 Thread George Bosilca via users
To completely disable UCX you need to disable the UCX MTL and not only the BTL. I would use "--mca pml ob1 --mca btl ^ucx --mca btl_openib_allow_ib 1". As you have a gdb session on the processes, you can try to break on some of the memory allocation functions (malloc, realloc, calloc). George.
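
For the gdb part, a sketch of breaking on an allocator and logging a backtrace at each hit (standard gdb commands; adjust to taste):

    (gdb) break malloc
    (gdb) commands
    > bt
    > continue
    > end

The same pattern applies to realloc and calloc; a steadily repeating backtrace usually points at the component responsible for the growth.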

Re: [OMPI users] OMPI 4.0.1 valgrind error on simple MPI_Send()

2019-04-30 Thread George Bosilca via users
Depending on the alignment of the different types, there might be small holes in the low-level headers we exchange between processes. It should not be a concern for users. Valgrind should not stop on the first detected issue unless --exit-on-first-error has been provided (the default value
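
To make the padding point concrete, a hedged illustration (not Open MPI's actual header layout):

    #include <stdio.h>
    #include <stddef.h>

    /* On common 64-bit ABIs `len` needs 4-byte alignment, leaving a
     * 3-byte hole after `kind`. Sending the whole struct over a socket
     * transmits bytes that were never initialized -- harmless, but
     * exactly what valgrind flags as "uninitialised byte(s)". */
    struct header {
        char kind;   /* offset 0             */
                     /* offsets 1-3: padding */
        int  len;    /* offset 4             */
    };

    int main(void)
    {
        printf("sizeof=%zu offsetof(len)=%zu\n",
               sizeof(struct header), offsetof(struct header, len));
        return 0;
    }

Open MPI also installs a valgrind suppression file (share/openmpi/openmpi-valgrind.supp under the install prefix) that can be passed with --suppressions to hide these known reports.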

Re: [OMPI users] 3.0.4, 4.0.1 build failure on OSX Mojave with LLVM

2019-04-24 Thread George Bosilca via users
Jon, The configure AC_HEADER_STDC macro is considered obsolete [1], as most OSes are STDC-compliant nowadays. Having it fail on a recent version of OSX is therefore unexpected. Moreover, many of the OMPI developers work on OSX Mojave with the default compiler but with the
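
For reference, AC_HEADER_STDC only probed for the ANSI C headers; since any STDC-compliant system provides them, the modern fix is simply to include them unconditionally rather than guarding on STDC_HEADERS (a minimal sketch):

    /* The four headers AC_HEADER_STDC historically checked for; on any
     * modern OS, OSX Mojave included, they are always available. */
    #include <stdlib.h>
    #include <stdarg.h>
    #include <string.h>
    #include <float.h>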