Re: [OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does nothing silently

2014-11-17 Thread Ralph Castain
Just checked the head of the 1.8 branch (soon to be released as 1.8.4), and confirmed the same results. I know the thread-multiple option is still broken there, but will test that once we get the final fix committed.

Re: [OMPI users] collective algorithms

2014-11-17 Thread Gilles Gouaillardet
Daniel, you can run $ ompi_info --parseable --all | grep _algorithm: | grep enumerator That will give you the list of supported algorithms for the collectives. Here is a sample output: mca:coll:tuned:param:coll_tuned_allreduce_algorithm:enumerator:value:0:ignore
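For readers who want to experiment, here is a minimal allreduce test program (an illustrative sketch, not from the thread). The MCA parameter names in the run line below are taken from the ompi_info output above; forcing a specific algorithm this way assumes the tuned component's dynamic rules are enabled.

    /* allreduce_test.c -- tiny smoke test for trying the tuned
     * allreduce algorithms enumerated by ompi_info above. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, in, out;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        in = rank;
        /* Sum the ranks across all processes. */
        MPI_Allreduce(&in, &out, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of ranks = %d\n", out);
        MPI_Finalize();
        return 0;
    }

Run it while selecting an algorithm by its enumerator value, e.g. mpirun -n 4 --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_allreduce_algorithm 2 ./allreduce_test (value 0 is the "ignore" entry shown above, i.e. let Open MPI decide).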

[OMPI users] collective algorithms

2014-11-17 Thread Faraj, Daniel A
I am trying to survey the collective algorithms in Open MPI. I looked at the source code but could not make out the guts of the communication algorithms. There are some Open MPI papers that talk about what algorithms are used in certain collectives, but they are not detailed. Has anybody done this

Re: [OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does nothing silently

2014-11-17 Thread Ralph Castain
FWIW: I don't have access to a Linux box right now, but I built the OMPI devel master on my Mac using Intel 2015 compilers and was able to build/run all of the Fortran examples in our "examples" directory. I suspect the problem here is your use of the --enable-mpi-thread-multiple option. The 1.8

Re: [OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does nothing silently

2014-11-17 Thread Gilles Gouaillardet
Hi John, do you call MPI_Init() or MPI_Init_thread(MPI_THREAD_MULTIPLE)? Does your program call MPI anywhere from an OpenMP region? Does it call MPI only within an !$OMP MASTER section? Does it avoid invoking MPI from OpenMP regions entirely? Can you reproduce this
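As a point of reference, here is a minimal sketch (not John's code) of the distinction Gilles is asking about: MPI_Init_thread reports the thread level the library actually granted, which is worth checking when an --enable-mpi-thread-multiple build is suspect.

    /* thread_level.c -- request MPI_THREAD_MULTIPLE and report what
     * the library actually provides. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            fprintf(stderr, "granted thread level %d < MPI_THREAD_MULTIPLE (%d)\n",
                    provided, MPI_THREAD_MULTIPLE);
        MPI_Finalize();
        return 0;
    }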

Re: [OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does nothing silently

2014-11-17 Thread John Bray
More investigation suggests it's the use of -fopenmp (and also its new name -qopenmp) just to compile in OpenMP code, even if it is never executed. mpiexec -n 12 ./one_f_debug.exe fails silently; mpiexec -n 2 ./one_f_debug.exe segfaults. Both the segfault and the reason why changing the

Re: [OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does nothing silently

2014-11-17 Thread Tim Prince
Check with ldd in case you didn't update the .so path.

Re: [OMPI users] mpi_wtime implementation

2014-11-17 Thread Daniels, Marcus G
On Mon, 2014-11-17, Dave Love wrote: > I discovered from looking at the mpiP profiler that OMPI always uses > gettimeofday rather than clock_gettime to implement mpi_wtime on > GNU/Linux, and that looks sub-optimal. It can be very expensive in practice, especially for codes that

Re: [OMPI users] oversubscription of slots with GridEngine

2014-11-17 Thread Dave Love
Ralph Castain writes: > On Nov 13, 2014, at 3:36 PM, Dave Love wrote: >> Ralph Castain writes: >> cn6050 16 par6.q@cn6050 >> cn6045 16 par6.q@cn6045 > The above looks like the PE_HOSTFILE. So it should be

[OMPI users] mpi_wtime implementation

2014-11-17 Thread Dave Love
I discovered from looking at the mpiP profiler that OMPI always uses gettimeofday rather than clock_gettime to implement mpi_wtime on GNU/Linux, and that looks sub-optimal. I don't remember what the resolution of gettimeofday is in practice, but I did need to write a drop-in replacement for
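For illustration, here is the kind of drop-in replacement being described, as a hedged sketch (the function name wtime_mono is made up): a timer built on clock_gettime(CLOCK_MONOTONIC), which on Linux typically offers nanosecond resolution and is immune to wall-clock adjustments, unlike gettimeofday's microseconds.

    /* wtime_mono.c -- sketch of a clock_gettime-based timer of the
     * sort one might substitute for a gettimeofday-backed MPI_Wtime.
     * Older glibc may require linking with -lrt. */
    #include <time.h>

    double wtime_mono(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (double)ts.tv_sec + 1.0e-9 * (double)ts.tv_nsec;
    }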

[OMPI users] Fortran and OpenMPI 1.8.3 compiled with Intel-15 does nothing silently

2014-11-17 Thread John Bray
I have successfully been using OpenMPI 1.8.3 compiled with Intel-14, using ./configure --prefix=/usr/local/mpi/$(basename $PWD) --with-threads=posix --enable-mpi-thread-multiple --disable-vt --with-scif=no. I have now switched to Intel 15.0.1, and configuring with the same options, I get minor

Re: [OMPI users] shmalloc error with >=512 mb

2014-11-17 Thread Mike Dubman
Hi, the default memheap size is 256MB; you can override it with oshrun -x SHMEM_SYMMETRIC_HEAP_SIZE=512M ...

[OMPI users] shmalloc error with >=512 mb

2014-11-17 Thread Timur Ismagilov
Hello! Why does shmalloc return NULL when I try to allocate 512MB? When I try to allocate 256MB, all is fine. I use Open MPI/SHMEM v1.8.4 rc1 (v1.8.3-202-gb568b6e). Program: #include #include int main(int argc, char **argv) { int *src; start_pes(0); int length = 1024*1024*512; src = (int*)
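For completeness, here is a reconstruction of the truncated reproducer as a hedged sketch: the archive stripped the header names, so shmem.h and stdio.h are assumptions, as is everything after the cut.

    /* shmalloc_test.c -- reconstructed sketch of the truncated
     * reproducer; headers and everything after "src = (int*)"
     * are assumed, not from the original mail. */
    #include <stdio.h>
    #include <shmem.h>

    int main(int argc, char **argv)
    {
        int *src;
        start_pes(0);
        int length = 1024 * 1024 * 512;   /* 512 MB */
        src = (int *)shmalloc(length);
        printf("PE %d: shmalloc(%d) %s\n", _my_pe(), length,
               src == NULL ? "returned NULL" : "succeeded");
        return 0;
    }

Per Mike's reply above, the symmetric heap must be large enough to hold the allocation, so a natural test is oshrun -x SHMEM_SYMMETRIC_HEAP_SIZE=1G -np 2 ./shmalloc_test; the 1G value is a guess that leaves headroom over the 512MB request.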