Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-10 Thread Ralph Castain
Another thing you can do is (a) ensure you built with --enable-debug, and then (b) run it with -mca oob_base_verbose 100 (without the tcp_if_include option) so we can watch the connection handshake and see what it is doing. The --hetero-nodes option will have no effect here and can be ignored. Ralph

[OMPI users] what order do I get messages coming to MPI Recv from MPI_ANY_SOURCE?

2014-11-10 Thread David A. Schneider
I am implementing a hub/servers MPI application. Each of the servers can get tied up waiting for some data, then they do an MPI_Send to the hub. It is relatively simple for me to have the hub waiting around doing an MPI_Recv from MPI_ANY_SOURCE. The hub can get busy working with the data. What I'm

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-10 Thread Gilles Gouaillardet
Hi, IIRC there were some bug fixes between 1.8.1 and 1.8.2 in order to really use all the published interfaces. By any chance, are you running a firewall on your head node? One possible explanation is that the compute node tries to access the public interface of the head node, and packets get

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-10 Thread Reuti
Hi, On 10.11.2014 at 16:39, Ralph Castain wrote: > That is indeed bizarre - we haven’t heard of anything similar from other > users. What is your network configuration? If you use oob_tcp_if_include or > exclude, can you resolve the problem? Thx - this option helped to get it working. These

Re: [OMPI users] File-backed mmaped I/O and openib btl.

2014-11-10 Thread Emmanuel Thomé
Thanks for your answer. On Mon, Nov 10, 2014 at 4:31 PM, Joshua Ladd wrote: > Just really quick off the top of my head, mmaping relies on the virtual > memory subsystem, whereas IB RDMA operations rely on physical memory being > pinned (unswappable.) Yes. Does that mean

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-10 Thread Ralph Castain
That is indeed bizarre - we haven’t heard of anything similar from other users. What is your network configuration? If you use oob_tcp_if_include or exclude, can you resolve the problem? > On Nov 10, 2014, at 4:50 AM, Reuti wrote: > > On 10.11.2014 at 12:50

Re: [OMPI users] File-backed mmaped I/O and openib btl.

2014-11-10 Thread Joshua Ladd
Just really quick off the top of my head, mmaping relies on the virtual memory subsystem, whereas IB RDMA operations rely on physical memory being pinned (unswappable.) For a large message transfer, the OpenIB BTL will register the user buffer, which will pin the pages and make them unswappable.

Re: [OMPI users] What could cause a segfault in OpenMPI?

2014-11-10 Thread Saliya Ekanayake
Thank you Jeff, I'll try this and let you know. Saliya On Nov 10, 2014 6:42 AM, "Jeff Squyres (jsquyres)" wrote: > I am sorry for the delay; I've been caught up in SC deadlines. :-( > > I don't see anything blatantly wrong in this output. > > Two things: > > 1. Can you try

[OMPI users] File-backed mmaped I/O and openib btl.

2014-11-10 Thread Emmanuel Thomé
Hi, I'm stumbling on a problem related to the openib btl in openmpi-1.[78].*, and the (I think legitimate) use of file-backed mmaped areas for receiving data through MPI collective calls. A test case is attached. I've tried to make it reasonably small, although I recognize that it's not extra

Re: [OMPI users] MPI_Wtime not working with -mno-sse flag

2014-11-10 Thread Alex A. Granovsky
Hello, use RDTSC (or RDTSCP) to read TSC directly Kind regards, Alex Granovsky -Original Message- From: maxinator333 Sent: Monday, November 10, 2014 4:35 PM To: us...@open-mpi.org Subject: [OMPI users] MPI_Wtime not working with -mno-sse flag Hello again, I have a piece of code,

Re: [OMPI users] MPI_Wtime not working with -mno-sse flag

2014-11-10 Thread Jeff Squyres (jsquyres)
On some platforms, the MPI_Wtime function essentially uses gettimeofday() under the covers. See this stackoverflow question about -mno-sse: http://stackoverflow.com/questions/3687845/error-with-mno-sse-flag-and-gettimeofday-in-c On Nov 10, 2014, at 8:35 AM, maxinator333

[OMPI users] MPI_Wtime not working with -mno-sse flag

2014-11-10 Thread maxinator333
Hello again, I have a piece of code, which worked fine on my PC, but on my notebook MPI_Wtime and MPI_Wtick won't work with the -mno-sse flag specified. MPI_Wtick will return 0 instead of 1e-6 and MPI_Wtime will also return always 0. clock() works in all cases. The Code is: #include

Re: [OMPI users] OPENMPI-1.8.3: missing fortran bindings for MPI_SIZEOF

2014-11-10 Thread Dave Love
"Jeff Squyres (jsquyres)" writes: > There were several commits; this was the first one: > > https://github.com/open-mpi/ompi/commit/d7eaca83fac0d9783d40cac17e71c2b090437a8c I don't have time to follow this properly, but am I reading right that that says mpi_sizeof will now

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-10 Thread Reuti
On 10.11.2014 at 12:50, Jeff Squyres (jsquyres) wrote: > Wow, that's pretty terrible! :( > > Is the behavior BTL-specific, perchance? E.g., if you only use certain BTLs, > does the delay disappear? You mean something like: reuti@annemarie:~> date; mpiexec -mca btl self,tcp -n 4 --hostfile

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-10 Thread Jeff Squyres (jsquyres)
Wow, that's pretty terrible! :( Is the behavior BTL-specific, perchance? E.g., if you only use certain BTLs, does the delay disappear? FWIW: the use-all-IP-interfaces approach has been in OMPI forever. Sent from my phone. No type good. > On Nov 10, 2014, at 6:42 AM, Reuti

Re: [OMPI users] What could cause a segfault in OpenMPI?

2014-11-10 Thread Jeff Squyres (jsquyres)
I am sorry for the delay; I've been caught up in SC deadlines. :-( I don't see anything blatantly wrong in this output. Two things: 1. Can you try a nightly v1.8.4 snapshot tarball? This will check to see if whatever the bug is has been fixed for the upcoming release:

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-10 Thread Reuti
On 10.11.2014 at 12:24, Reuti wrote: > Hi, > > On 09.11.2014 at 05:38, Ralph Castain wrote: > >> FWIW: during MPI_Init, each process “publishes” all of its interfaces. Each >> process receives a complete map of that info for every process in the job. >> So when the TCP btl sets itself up,

Re: [OMPI users] How OMPI picks ethernet interfaces

2014-11-10 Thread Reuti
Hi, On 09.11.2014 at 05:38, Ralph Castain wrote: > FWIW: during MPI_Init, each process “publishes” all of its interfaces. Each > process receives a complete map of that info for every process in the job. So > when the TCP btl sets itself up, it attempts to connect across -all- the >

Re: [OMPI users] oversubscription of slots with GridEngine

2014-11-10 Thread Ralph Castain
You might also add the --display-allocation flag to mpirun so we can see what it thinks the allocation looks like. If there are only 16 slots on the node, it seems odd that OMPI would assign 32 procs to it unless it thinks there is only 1 node in the job, and oversubscription is allowed (which