Re: [OMPI users] Bug: Disabled mpi_leave_pinned for GPUDirect and InfiniBand during run-time caused by GCC optimizations

2015-06-04 Thread Gilles Gouaillardet
Jeff, imho, this is a grey area ... 99.999% of the time, posix_memalign behaves as a "pure" function ("pure" meaning it has no side effects). Unfortunately, this part of the code is the 0.001% case in which we explicitly rely on a side effect (i.e. posix_memalign calls an Open MPI wrapper that updates a g

Re: [OMPI users] Bug: Disabled mpi_leave_pinned for GPUDirect and InfiniBand during run-time caused by GCC optimizations

2015-06-04 Thread Jeff Squyres (jsquyres)
On Jun 4, 2015, at 5:48 AM, René Oertel wrote:
>
> Problem description:
> ===
>
> The critical code in question is in
> opal/mca/memory/linux/memory_linux_ptmalloc2.c:
> #
> 92 #if HAVE_POSIX_MEMALIGN
> 93 /* Double check for posix_memalign, too */
> 94 if (mca_memory

Re: [OMPI users] Fwd[2]: OMPI yalla vs impi

2015-06-04 Thread Timur Ismagilov
Hello, Alina.

1. Here is my ompi_yalla command line:

$HPCX_MPI_DIR/bin/mpirun -mca coll_hcoll_enable 1 -x HCOLL_MAIN_IB=mlx4_0:1 -x MXM_IB_PORTS=mlx4_0:1 -x MXM_SHM_KCOPY_MODE=off --mca pml yalla --hostfile hostlist $@

echo $HPCX_MPI_DIR
/gpfs/NETHOME/oivt1/nicevt/itf/sources/hpcx-v1.3.330-icc

[OMPI users] Bug: Disabled mpi_leave_pinned for GPUDirect and InfiniBand during run-time caused by GCC optimizations

2015-06-04 Thread René Oertel
Dear Open MPI developers and users, if I am not totally wrong, then I have found a bug in the Open MPI ptmalloc2 memory module in combination with recent GCC code optimizations.

Affected Open MPI releases:
===
All (non-debug) releases using the opal/mca/memory/linux/memory_linux_