Re: [OMPI users] 'AINT' undeclared

2016-05-09 Thread Gilles Gouaillardet
Hi, I was able to build Open MPI 1.10.2 with the same configure command line (after I quoted the LDFLAGS parameters). Can you please run grep SIZEOF_PTRDIFF_T config.status? It should be 4 or 8, but it seems different in your environment (!). Are you running a 32- or 64-bit kernel? On which
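The check Gilles asks for can be sketched as follows; the config.status contents below are simulated for illustration (a real 64-bit build writes 8, a 32-bit one 4):

```shell
# Simulated illustration: fabricate the config.status fragment a 64-bit
# build would produce, then run the grep Gilles suggests against it.
cat > config.status.example <<'EOF'
S["SIZEOF_PTRDIFF_T"]="8"
EOF
grep SIZEOF_PTRDIFF_T config.status.example
```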

Re: [OMPI users] Incorrect function call in simple C program

2016-05-09 Thread Gilles Gouaillardet
Devon, send() is a libc function that is used internally by Open MPI, and it uses your user function instead of the libc one. Simply rename your function mysend() or something else that is not used by libc, and your issue will likely be fixed. Cheers, Gilles On Tuesday, May 10, 2016, Devon

Re: [hwloc-users] Topology Error

2016-05-09 Thread Mehmet Belgin
Thank you Brice for your quick reply! We will give a BIOS upgrade a try and share our findings with the list. -Mehmet On 5/9/16 6:10 PM, Brice Goglin wrote: On 09/05/2016 23:58, Mehmet Belgin wrote: Greetings! We've been receiving this error for a while on our 64-core Interlagos AMD

[OMPI users] Incorrect function call in simple C program

2016-05-09 Thread Devon Hollowood
Hello, I am having trouble understanding why I am getting an error when running the program produced by the attached C file. In this file, there are three short functions: send(), bounce() and main(). send() and bounce() both use MPI_Send() and MPI_Recv(), but critically, neither one is called

Re: [hwloc-users] Topology Error

2016-05-09 Thread Brice Goglin
On 09/05/2016 23:58, Mehmet Belgin wrote: > Greetings! > > We've been receiving this error for a while on our 64-core Interlagos > AMD machines: > > > > * hwloc has encountered what looks like an error from the

Re: [hwloc-users] Tolopology Error

2016-05-09 Thread Mehmet Belgin
Sorry for the typo in the subject, I meant "Topology" ;) On 5/9/16 5:58 PM, Mehmet Belgin wrote: Greetings! We've been receiving this error for a while on our 64-core Interlagos AMD machines: * hwloc has

[hwloc-users] Tolopology Error

2016-05-09 Thread Mehmet Belgin
Greetings! We've been receiving this error for a while on our 64-core Interlagos AMD machines: * hwloc has encountered what looks like an error from the operating system. * * Socket (P#2 cpuset 0x,0x0)

Re: [OMPI users] No core dump in some cases

2016-05-09 Thread dpchoudh .
Hi Gus, Thanks for your suggestion. But I am not using any resource manager (i.e., I am launching mpirun from the bash shell). In fact, both of the clusters I talked about run CentOS 7 and I launch the job the same way on both, yet one of them creates standard core files and the other

Re: [OMPI users] mpirun command won't run unless the firewalld daemon is disabled

2016-05-09 Thread dpchoudh .
Hello Llolsten, Is there a specific reason you run as root? This practice is discouraged, isn't it? Also, isn't it true that OMPI uses ephemeral (i.e., 'user-level, randomly chosen') ports for TCP transport? In that case, how did this ever work with a firewall enabled? I have, in the past, have

[OMPI users] mpirun command won't run unless the firewalld daemon is disabled

2016-05-09 Thread Llolsten Kaonga
Hello all, We've been running Open MPI for a long time, up to version 1.8.2 and CentOS 6.7, with commands such as the one below: /usr/local/bin/mpirun --allow-run-as-root --mca btl openib,self,sm --mca pml ob1 -np 2 -np 8 -hostfile /root/mpi-hosts /usr/local/bin/IMB-MPI1 To be able to run

Re: [MTT users] Python choice

2016-05-09 Thread Ralph Castain
Good question - I'm not sure we can. What happens right now is you get syntax errors during the compile. I'll have to play and see if we can generate an error message before we hit that point. On Mon, May 9, 2016 at 9:38 AM, Jeff Squyres (jsquyres) wrote: > Is it possible to
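A friendly run-time guard along the lines Jeff asks about might look like this (a hypothetical sketch, not MTT's actual client code); the key point is that the check must execute before any version-specific syntax is parsed, e.g. from a tiny launcher that is itself valid under both Python 2 and 3:

```python
import sys

def check_python_version(required_major):
    """Exit with a readable message instead of a raw syntax error."""
    if sys.version_info[0] != required_major:
        raise SystemExit(
            "This client requires Python %d.x, but you are running %d.%d"
            % (required_major, sys.version_info[0], sys.version_info[1])
        )

if __name__ == "__main__":
    # Guarding for the running major version passes silently...
    check_python_version(sys.version_info[0])
    # ...while a mismatched requirement produces the friendly message:
    try:
        check_python_version(2 if sys.version_info[0] != 2 else 3)
    except SystemExit as err:
        print("guard fired:", err)
```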

[OMPI users] 'AINT' undeclared

2016-05-09 Thread Ilias Miroslav
Greetings, I am trying to install Open MPI 1.10.1/1.10.2 with gcc (GCC) 5.2.1 20150902 (Red Hat 5.2.1-2) statically: $ ./configure --prefix=/home/ilias/bin/openmpi-1.10.1_gnu_static CXX=g++ CC=gcc F77=gfortran FC=gfortran LDFLAGS=--static -ldl -lrt --disable-shared --enable-static
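The failure mode is visible in the command line itself: unquoted, the shell splits LDFLAGS=--static -ldl -lrt into three words, so configure never sees -ldl -lrt as part of LDFLAGS. A small sketch of the word-splitting (illustrative only; `set --` just loads the positional parameters so we can count them):

```shell
# Unquoted: the shell hands configure three separate arguments.
set -- LDFLAGS=--static -ldl -lrt
echo "unquoted argc: $#"
# Quoted: one argument, with all linker flags inside LDFLAGS.
set -- LDFLAGS="--static -ldl -lrt"
echo "quoted argc: $#"
```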

Re: [MTT users] Python choice

2016-05-09 Thread Jeff Squyres (jsquyres)
Is it possible to give a friendly error message at run time if you accidentally run with Python 3.x? > On May 9, 2016, at 12:37 PM, Ralph Castain wrote: > > Hi folks > > As we look at the Python client, there is an issue with the supported Python > version. There was a

[MTT users] Python choice

2016-05-09 Thread Ralph Castain
Hi folks As we look at the Python client, there is an issue with the supported Python version. There was a significant break in the user-level API between Python 2.x and Python 3. Some of the issues are described here: https://docs.python.org/2/glossary.html#term-2to3 Noah and I have chatted

Re: [OMPI users] No core dump in some cases

2016-05-09 Thread Gus Correa
Hi Durga, Just in case... If you're using a resource manager to start the jobs (Torque, etc.), you need to have it set the limits (for coredump size, stack size, locked memory size, etc.). This way the jobs will inherit the limits from the resource manager daemon. On Torque (which I use) I do
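For the no-resource-manager case in this thread, the same limits are set directly in the shell (or its startup file) that launches mpirun; a minimal sketch:

```shell
# Enable core dumps of unlimited size in the launching shell;
# processes started by mpirun from this shell inherit the soft limit.
ulimit -c unlimited
ulimit -c   # prints the current soft core-file limit: "unlimited"
```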

Re: [hwloc-users] hwloc_alloc_membind with HWLOC_MEMBIND_BYNODESET

2016-05-09 Thread Hugo Brunie
So... I downloaded the latest library and it works fine. I am sorry for the useless topic. (I had linked with an old version of the lib in my LD_LIBRARY_PATH; that's why it didn't work well.) Hugo BRUNIE - Original Message - > From: "Hugo Brunie" > To: "Hardware locality

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Zhen Wang
Jeff, Thanks for the explanation. It's very clear. Best regards, Zhen On Mon, May 9, 2016 at 10:19 AM, Jeff Squyres (jsquyres) wrote: > On May 9, 2016, at 8:23 AM, Zhen Wang wrote: > > > > I have another question. I thought MPI_Test is a local call,

Re: [hwloc-users] hwloc_alloc_membind with HWLOC_MEMBIND_BYNODESET

2016-05-09 Thread Hugo Brunie
I will make a small example to reproduce my bug. I am not sure I have the right to describe the machine; I will ask and come back to you. Hugo - Original Message - > From: "Brice Goglin" > To: hwloc-us...@open-mpi.org > Sent: Monday, May 9, 2016 16:24:49 > Subject: Re:

Re: [hwloc-users] hwloc_alloc_membind with HWLOC_MEMBIND_BYNODESET

2016-05-09 Thread Brice Goglin
Hello Hugo, Can you send your code and a description of the machine so that I can try to reproduce? By the way, BYNODESET is also available in 1.11.3. Brice On 09/05/2016 16:18, Hugo Brunie wrote: > Hello, > > When I try to use hwloc_alloc_membind with HWLOC_MEMBIND_BYNODESET > I obtain NULL

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Jeff Squyres (jsquyres)
On May 9, 2016, at 8:23 AM, Zhen Wang wrote: > > I have another question. I thought MPI_Test is a local call, meaning it > doesn't send/receive a message. Am I misunderstanding something? Thanks again. From the user's perspective, MPI_TEST is a local call, in that it checks to

[hwloc-users] hwloc_alloc_membind with HWLOC_MEMBIND_BYNODESET

2016-05-09 Thread Hugo Brunie
Hello, When I try to use hwloc_alloc_membind with HWLOC_MEMBIND_BYNODESET, I obtain NULL as the pointer and the error message is: Invalid argument. When I try without it, it works. It also works with HWLOC_MEMBIND_STRICT and/or HWLOC_MEMBIND_THREAD. My hwloc version is: ~/usr/bin/hwloc-bind

Re: [OMPI users] Isend, Recv and Test

2016-05-09 Thread Zhen Wang
Jeff, I have another question. I thought MPI_Test is a local call, meaning it doesn't send/receive a message. Am I misunderstanding something? Thanks again. Best regards, Zhen On Thu, May 5, 2016 at 9:45 PM, Jeff Squyres (jsquyres) wrote: > It's taking so long because you

Re: [OMPI users] Segmentation Fault (Core Dumped) on mpif90 -v

2016-05-09 Thread Giacomo Rossi
I've sent you all the outputs from the configure, make and make install commands... Today I've compiled Open MPI with the latest gcc version (6.1.1) shipped with my Arch Linux distro and everything seems OK, so I think the problem is with the Intel compiler. Giacomo Rossi Ph.D., Space Engineer