Re: [OMPI users] 'AINT' undeclared

2016-05-09 Thread Gilles Gouaillardet
Hi, I was able to build openmpi 1.10.2 with the same configure command line (after I quoted the LDFLAGS parameters). Can you please run grep SIZEOF_PTRDIFF_T config.status? It should be 4 or 8, but it seems different in your environment (!). Are you running a 32 or 64 bit kernel? On which p
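
(Not from the thread, just a minimal sketch of a quick cross-check: a tiny C program that prints the pointer and ptrdiff_t widths the compiler actually uses, independent of what config.status recorded.)

    #include <stddef.h>
    #include <stdio.h>

    /* Prints the widths the toolchain is really using:
       both should be 8 on a 64-bit build, 4 on a 32-bit build. */
    int main(void)
    {
        printf("sizeof(void *)    = %zu\n", sizeof(void *));
        printf("sizeof(ptrdiff_t) = %zu\n", sizeof(ptrdiff_t));
        return 0;
    }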

Re: [OMPI users] Incorrect function call in simple C program

2016-05-09 Thread Gilles Gouaillardet
Devon, send() is a libc function that is used internally by Open MPI, and it uses your user function instead of the libc one. Simply rename your function to mysend() or something else that is not used by libc, and your issue will likely be fixed. Cheers, Gilles On Tuesday, May 10, 2016, Devon Hollow
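
(For illustration only, not code from the thread: a minimal sketch of the rename Gilles suggests, using a hypothetical my_send() in place of a user-defined send() so it no longer shadows the libc symbol.)

    #include <mpi.h>
    #include <stdio.h>

    /* Renamed from send() so it does not shadow the libc send(2)
       that Open MPI's TCP code calls internally. */
    static void my_send(int dest)
    {
        int value = 42;
        MPI_Send(&value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    }

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            my_send(1);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }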

[OMPI users] Incorrect function call in simple C program

2016-05-09 Thread Devon Hollowood
Hello, I am having trouble understanding why I am getting an error when running the program produced by the attached C file. In this file, there are three short functions: send(), bounce() and main(). send() and bounce() both use MPI_Send() and MPI_Recv(), but critically, neither one is called fro

Re: [OMPI users] No core dump in some cases

2016-05-09 Thread dpchoudh .
Hi Gus, Thanks for your suggestion. But I am not using any resource manager (i.e. I am launching mpirun from the bash shell). In fact, both of the clusters I talked about run CentOS 7 and I launch the job the same way on both of these, yet one of them creates standard core files and the other
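
(Not part of the thread: when jobs are launched straight from a shell rather than through a resource manager, one way to rule out an inherited limit is to raise RLIMIT_CORE from inside the program itself before MPI_Init. A hedged sketch, assuming a Linux/glibc environment:)

    #include <sys/resource.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        /* Ask for unlimited core dumps; the hard limit may still cap this. */
        struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
        if (setrlimit(RLIMIT_CORE, &rl) != 0)
            perror("setrlimit(RLIMIT_CORE)");

        MPI_Init(&argc, &argv);
        /* ... application code that might crash ... */
        MPI_Finalize();
        return 0;
    }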

Re: [OMPI users] mpirun command won't run unless the firewalld daemon is disabled

2016-05-09 Thread dpchoudh .
Hello Llolsten, Is there a specific reason you run as root? This practice is discouraged, isn't it? Also, isn't it true that OMPI uses ephemeral (i.e. 'user level, randomly chosen') ports for TCP transport? In that case, how did this ever work with a firewall enabled? I have, in the past, have

[OMPI users] mpirun command won't run unless the firewalld daemon is disabled

2016-05-09 Thread Llolsten Kaonga
Hello all, We've been running openmpi for a long time, up to version 1.8.2 on CentOS 6.7, with commands such as the one below: /usr/local/bin/mpirun --allow-run-as-root --mca btl openib,self,sm --mca pml ob1 -np 2 -np 8 -hostfile /root/mpi-hosts /usr/local/bin/IMB-MPI1 To be able to run

[OMPI users] 'AINT' undeclared

2016-05-09 Thread Ilias Miroslav
Greetings, I am trying to install OpenMPI 1.10.1/1.10.2 with gcc (GCC) 5.2.1 20150902 (Red Hat 5.2.1-2) statically: $ ./configure --prefix=/home/ilias/bin/openmpi-1.10.1_gnu_static CXX=g++ CC=gcc F77=gfortran FC=gfortran LDFLAGS=--static -ldl -lrt --disable-shared --enable-static --disable-v

Re: [OMPI users] No core dump in some cases

2016-05-09 Thread Gus Correa
Hi Durga, Just in case ... If you're using a resource manager to start the jobs (Torque, etc), you need to have it set the limits (for coredump size, stacksize, locked memory size, etc). This way the jobs will inherit the limits from the resource manager daemon. On Torque (which I use) I do th

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Zhen Wang
Jeff, Thanks for the explanation. It's very clear. Best regards, Zhen On Mon, May 9, 2016 at 10:19 AM, Jeff Squyres (jsquyres) wrote: > On May 9, 2016, at 8:23 AM, Zhen Wang wrote: > > > > I have another question. I thought MPI_Test is a local call, meaning it > doesn't send/receive message.

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Jeff Squyres (jsquyres)
On May 9, 2016, at 8:23 AM, Zhen Wang wrote: > > I have another question. I thought MPI_Test is a local call, meaning it > doesn't send/receive message. Am I misunderstanding something? Thanks again. From the user's perspective, MPI_TEST is a local call, in that it checks to see if an MPI_Re
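
(A minimal sketch, not code from the thread: the sender posts MPI_Isend and polls with MPI_Test. Each MPI_Test is local in the sense Jeff describes, but calling it also gives the library a chance to advance the outstanding transfer.)

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, payload = 7, flag = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Isend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* Poll until the send completes; each MPI_Test also drives progress. */
            while (!flag)
                MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 got %d\n", payload);
        }

        MPI_Finalize();
        return 0;
    }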

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Zhen Wang
Jeff, I have another question. I thought MPI_Test is a local call, meaning it doesn't send/receive message. Am I misunderstanding something? Thanks again. Best regards, Zhen On Thu, May 5, 2016 at 9:45 PM, Jeff Squyres (jsquyres) wrote: > It's taking so long because you are sleeping for .1 sec

Re: [OMPI users] Segmentation Fault (Core Dumped) on mpif90 -v

2016-05-09 Thread Giacomo Rossi
I've sent you all the outputs from the configure, make and make install commands... Today I've compiled openmpi with the latest gcc version (6.1.1) shipped with my archlinux distro and everything seems ok, so I think that the problem is with the Intel compiler. Giacomo Rossi Ph.D., Space Engineer Resear