Re: [OMPI users] Strange behaviour OpenMPI in Fortran

2016-01-22 Thread Gilles Gouaillardet
ptr is uninitialized when sent by task 0, isn't it? On Friday, January 22, 2016, Paweł Jarzębski wrote: > Hi, > > I wrote this code: > > program hello >implicit none > >include 'mpif.h' >integer :: rank, dest, source, tag, ierr, stat >integer :: n >

Re: [OMPI users] Issues Building Open MPI static with Intel Fortran 16

2016-01-22 Thread Matt Thompson
Howard, Welp. That worked! I'm assuming oshmem = OpenSHMEM, right? If so, yeah, for now, not important on my wee workstation. (If it isn't, is it something I should work on getting to work?) Matt On Fri, Jan 22, 2016 at 2:47 PM, Howard Pritchard wrote: > Hi Matt, > > If you don't need oshmem,

Re: [OMPI users] Issues Building Open MPI static with Intel Fortran 16

2016-01-22 Thread Howard Pritchard
Hi Matt, If you don't need oshmem, you could try again with --disable-oshmem added to the configure line. Howard 2016-01-22 12:15 GMT-07:00 Matt Thompson : > All, > > I'm trying to duplicate an issue I had with ESMF long ago (not sure if I > reported it here or at ESMF, but...). It had been a whil

[OMPI users] Issues Building Open MPI static with Intel Fortran 16

2016-01-22 Thread Matt Thompson
All, I'm trying to duplicate an issue I had with ESMF long ago (not sure if I reported it here or at ESMF, but...). It had been a while, so I started from scratch. I first built Open MPI 1.10.2 with Intel Fortran 16.0.0.109 and my system GCC (4.8.5 from RHEL7) with mostly defaults: # ./configure

Re: [OMPI users] configuring open mpi 10.1.2 with cuda on NVIDIA TK1

2016-01-22 Thread Kuhl, Spencer J
Thanks for looking into this. I am looking for other CUDA-aware MPI code that I can test on my 32-bit ARM system, the Jetson TK1. From: users on behalf of Sylvain Jeaugey Sent: Friday, January 22, 2016 12:07 PM To: us...@open-mpi.org Subject: Re: [OMPI users]

Re: [OMPI users] configuring open mpi 10.1.2 with cuda on NVIDIA TK1

2016-01-22 Thread Sylvain Jeaugey
It looks like the errors are produced by the hwloc configure; this one somehow can't find CUDA (I have to check whether that's a problem, btw). Anyway, later in the configure, the VT configure finds CUDA correctly, so it seems specific to the hwloc configure. On 01/22/2016 10:01 AM, Kuhl, Spencer J

Re: [OMPI users] configuring open mpi 10.1.2 with cuda on NVIDIA TK1

2016-01-22 Thread Kuhl, Spencer J
Hi Sylvain, The configure does not stop; 'make all install' completes. After remaking and recompiling, ignoring the configure errors, and confirming both a functional CUDA install and a functional Open MPI install, I went to the /usr/local/cuda/samples directory and ran 'make' and successful

Re: [OMPI users] configuring open mpi 10.1.2 with cuda on NVIDIA TK1

2016-01-22 Thread Sylvain Jeaugey
Hi Spencer, Could you be more specific about what fails? Did the configure stop at some point? Or is it a compile error during the build? I'm not sure the errors you are seeing in config.log are actually the real problem (I'm seeing the same error traces on a perfectly working machine). N

Re: [OMPI users] Strange behaviour OpenMPI in Fortran

2016-01-22 Thread Jeff Squyres (jsquyres)
+1 If you're starting new code, try using the F08 MPI bindings. Type safety === good. > On Jan 22, 2016, at 10:44 AM, Jeff Hammond wrote: > > You will find the MPI Fortran 2008 bindings to be significantly better w.r.t. > MPI types. See e.g. MPI 3.1 section 17.2.5 where it describes > TYP

Re: [OMPI users] Strange behaviour OpenMPI in Fortran

2016-01-22 Thread Jeff Hammond
You will find the MPI Fortran 2008 bindings to be significantly better w.r.t. MPI types. See e.g. MPI 3.1 section 17.2.5 where it describes TYPE(MPI_Status), which means that the status object is a first-class type in the Fortran 2008 interface, rather than being an error-prone INTEGER array. I h
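
For reference, a minimal sketch of the mpi_f08 style being recommended here (not code from the thread; the send/recv pattern and names are illustrative, and it assumes an MPI library that ships the mpi_f08 module):

    program f08_demo
       use mpi_f08                ! F08 bindings: typed handles, optional ierror
       implicit none
       integer :: rank, n
       type(MPI_Status) :: stat   ! first-class status type; passing a plain
                                  ! INTEGER here is a compile-time error

       call MPI_Init()
       call MPI_Comm_rank(MPI_COMM_WORLD, rank)
       if (rank == 0) then
          n = 42
          call MPI_Send(n, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD)
       else if (rank == 1) then
          call MPI_Recv(n, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, stat)
          print *, 'received', n, 'from rank', stat%MPI_SOURCE
       end if
       call MPI_Finalize()
    end program f08_demo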

Re: [OMPI users] Strange behaviour OpenMPI in Fortran

2016-01-22 Thread Paweł Jarzębski
Thx a lot. I will be more careful with the declaration of the MPI variables. Pawel J. On 2016-01-22 at 16:06, Nick Papior wrote: The status field should be integer :: stat(MPI_STATUS_SIZE) Perhaps n is located on the stack just after the stat variable, so the status write then overwrites it. 2016-01-22 15:3

Re: [OMPI users] Strange behaviour OpenMPI in Fortran

2016-01-22 Thread Nick Papior
The status field should be integer :: stat(MPI_STATUS_SIZE) Perhaps n is located on the stack just after the stat variable, so the status write then overwrites it. 2016-01-22 15:37 GMT+01:00 Paweł Jarzębski : > Hi, > > I wrote this code: > > program hello >implicit none > >include 'mpif.h'
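
A minimal corrected version of the pattern (a sketch: the original program is only partially quoted above, so the send/recv details and values are assumed):

    program hello_fixed
       implicit none
       include 'mpif.h'
       integer :: rank, ierr, n
       ! the fix: MPI_RECV writes MPI_STATUS_SIZE integers through 'stat',
       ! so a scalar declaration lets that write spill into whatever sits
       ! next to it in memory (here, possibly 'n')
       integer :: stat(MPI_STATUS_SIZE)

       call MPI_INIT(ierr)
       call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
       if (rank .eq. 0) then
          n = 42
          call MPI_SEND(n, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr)
       else if (rank .eq. 1) then
          call MPI_RECV(n, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, stat, ierr)
          print *, 'rank 1 received n =', n
       end if
       call MPI_FINALIZE(ierr)
    end program hello_fixed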

Re: [OMPI users] configuring open mpi 10.1.2 with cuda on NVIDIA TK1

2016-01-22 Thread Kuhl, Spencer J
Thanks for the suggestion, Ryan. I will remove the symlinks and try again. I checked config.log, and it appears that the configure finds CUDA support (result: yes), but once configure checks for cuda.h usability, conftest.c reports that a fatal error occurred: 'cuda.h no such file or dire

[OMPI users] Strange behaviour OpenMPI in Fortran

2016-01-22 Thread Paweł Jarzębski
Hi, I wrote this code: program hello implicit none include 'mpif.h' integer :: rank, dest, source, tag, ierr, stat integer :: n integer :: taskinfo, ptr call MPI_INIT(ierr) call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr) if(rank.eq.

Re: [OMPI users] MPI hangs on poll_device() with rdma

2016-01-22 Thread Eva
>>You can try a more recent version of openmpi >>1.10.2 was released recently, or try with a nightly snapshot of master. >>If all of these still fail, can you post a trimmed version of your program so we can investigate? Hi Gilles, I tried 1.10.2. My program has been running successfully without

Re: [OMPI users] configuring open mpi 10.1.2 with cuda on NVIDIA TK1

2016-01-22 Thread Novosielski, Ryan
I would check config.log carefully to see what specifically failed or wasn't found where. I would never mess around with the contents of /usr/include. That is sloppy stuff and likely to get you into trouble someday.