Re: [OMPI users] MX replacement?

2016-02-02 Thread Jeff Hammond
On Tuesday, February 2, 2016, Brice Goglin wrote: > On 02/02/2016 15:21, Jeff Squyres (jsquyres) wrote: >> On Feb 2, 2016, at 9:00 AM, Dave Love wrote: >>> Now that MX support has been dropped, is there an alternative for fast Ethernet? >> There are several options for low latency ethernet…

Re: [OMPI users] New libmpi.so dependency on libibverbs.so?

2016-02-02 Thread Jeff Squyres (jsquyres)
On Feb 2, 2016, at 12:15 PM, Number Cruncher wrote: > Thanks for the info. I'll probably go with insisting on libibverbs. > It does seem a bit contrary to the very high modularity in OpenMPI that essentially a Cisco-specific module introduces a libmpi dependency where openib never did...

Re: [OMPI users] New libmpi.so dependency on libibverbs.so?

2016-02-02 Thread Dave Love
"Jeff Squyres (jsquyres)" writes: > This functionality is there to overcome a bug in libibverbs (that > prints a dire warning about Cisco usNIC devices not being supported). It wasn't clear, but I meant there are also other libraries sucked in, like hwloc and opensm, and that's not new in 1.10.

Re: [OMPI users] New libmpi.so dependency on libibverbs.so?

2016-02-02 Thread Number Cruncher
Thanks for the info. I'll probably go with insisting on libibverbs. It does seem a bit contrary to the very high modularity in OpenMPI that essentially a Cisco-specific module introduces a libmpi dependency where openib never did. Simon. On 02/02/16 01:26, Gilles Gouaillardet wrote: > Simon, …

Re: [OMPI users] MX replacement?

2016-02-02 Thread Brice Goglin
On 02/02/2016 15:21, Jeff Squyres (jsquyres) wrote: > On Feb 2, 2016, at 9:00 AM, Dave Love wrote: >> Now that MX support has been dropped, is there an alternative for fast Ethernet? > There are several options for low latency ethernet, but they're all vendor-based solutions (e.g., my company's usNIC solution)…

Re: [OMPI users] New libmpi.so dependency on libibverbs.so?

2016-02-02 Thread Jeff Squyres (jsquyres)
This functionality is there to overcome a bug in libibverbs (that prints a dire warning about Cisco usNIC devices not being supported). However, I can see how this additional linkage is undesirable. We can probably flip the default on this component to not build by default -- but leave it there…

Re: [OMPI users] shared memory under fortran, bug?

2016-02-02 Thread Gilles Gouaillardet
Thanks Peter, this is just a workaround for a bug we just identified; the fix will come soon. Cheers, Gilles. On Tuesday, February 2, 2016, Peter Wind wrote: > That worked! > I.e., with the change you proposed the code gives the right result. > That was efficient work, thank you Gilles :) …

Re: [OMPI users] MX replacement?

2016-02-02 Thread Jeff Squyres (jsquyres)
On Feb 2, 2016, at 9:00 AM, Dave Love wrote: > Now that MX support has been dropped, is there an alternative for fast Ethernet? There are several options for low latency ethernet, but they're all vendor-based solutions (e.g., my company's usNIC solution). Note that MX support was dropped…

Re: [OMPI users] New libmpi.so dependency on libibverbs.so?

2016-02-02 Thread Dave Love
Number Cruncher writes: > Having compiled various recent Open MPI sources with the same "configure" options, I've noticed that the "libmpi.so" shared library from 1.10.1 now itself depends directly on libibverbs.so.1. Previously (1.10.0, for example), only plugins such as mca_btl_openib.so…

Re: [OMPI users] shared memory under fortran, bug?

2016-02-02 Thread Peter Wind
That worked! I.e., with the change you proposed the code gives the right result. That was efficient work, thank you Gilles :) Best wishes, Peter. ----- Original Message ----- > Thanks Peter, > that is quite unexpected ... let's try another workaround, can you replace > integer…

[OMPI users] MX replacement?

2016-02-02 Thread Dave Love
Now that MX support has been dropped, is there an alternative for fast Ethernet?

[OMPI users] shared memory under fortran, bug?

2016-02-02 Thread Gilles Gouaillardet
Thanks Peter, that is quite unexpected ... let's try another workaround: can you replace integer :: comm_group with integer :: comm_group, comm_tmp and call MPI_COMM_SPLIT(comm, irank*2/num_procs, irank, comm_group, ierr) with call MPI_COMM_SPLIT(comm, irank*2/num…
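
The preview cuts off mid-call, so the tail of the suggested replacement is not visible in the archive. Purely as an illustration, one plausible reading is sketched below: comm_tmp as the new MPI_COMM_SPLIT output is inferred from the freshly declared variable, and the MPI_COMM_DUP step is a guess, not quoted text.

    ! Sketch of the truncated suggestion; everything past the
    ! MPI_COMM_SPLIT call is inferred, not quoted from the message.
    subroutine split_workaround(comm, irank, num_procs, comm_group)
      use mpi
      implicit none
      integer, intent(in)  :: comm, irank, num_procs
      integer, intent(out) :: comm_group
      integer :: comm_tmp, ierr

      ! split into the temporary communicator instead of comm_group ...
      call MPI_COMM_SPLIT(comm, irank*2/num_procs, irank, comm_tmp, ierr)
      ! ... then (presumably) derive comm_group from it, e.g. by
      ! duplication; the preview ends before this step is shown
      call MPI_COMM_DUP(comm_tmp, comm_group, ierr)
    end subroutine split_workaround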

Re: [OMPI users] shared memory under fortran, bug?

2016-02-02 Thread Peter Wind
Thanks Gilles, I get the following output (I guess it is not what you wanted?). Peter. $ mpirun --mca osc pt2pt -np 4 a.out -------------------------------------------------------------------------- A requested component was not found, or was unable to be opened. This means that this component…

Re: [OMPI users] shared memory under fortran, bug?

2016-02-02 Thread Gilles Gouaillardet
Peter, at first glance, your test program looks correct. Can you please try to run mpirun --mca osc pt2pt -np 4 ... I might have identified a bug with the sm osc component. Cheers, Gilles. On Tuesday, February 2, 2016, Peter Wind wrote: > Enclosed is a short (< 100 lines) Fortran code example…

[OMPI users] shared memory under fortran, bug?

2016-02-02 Thread Peter Wind
Enclosed is a short (< 100 lines) Fortran code example that uses shared memory. It seems to me it behaves wrongly when Open MPI is used. Compiled with SGI/MPT, it gives the right result. To reproduce the failure, the code must be run on a single node. It creates two groups of 2 processes each. Within each group memory…
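
The attachment itself is not reproduced in the archive. As a rough illustration only, a minimal Fortran program in the shape Peter describes (two groups of two ranks, one shared window per group) might look like the sketch below; all names and the exact correctness check are assumptions, not Peter's code.

    ! Hypothetical reconstruction, NOT Peter's attachment: split
    ! MPI_COMM_WORLD into two halves, allocate one shared window per
    ! half on the group's rank 0, let every rank store into its own
    ! slot, then sum on rank 0 of each group.
    program shm_test
      use mpi
      use, intrinsic :: iso_c_binding, only : c_ptr, c_f_pointer
      implicit none
      integer :: ierr, irank, num_procs, grank, gsize
      integer :: comm_group, win, disp_unit
      integer(MPI_ADDRESS_KIND) :: winsize
      type(c_ptr) :: baseptr
      integer, pointer :: shared(:)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, irank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

      ! color 0 for the first half of the ranks, 1 for the second half
      call MPI_COMM_SPLIT(MPI_COMM_WORLD, irank*2/num_procs, irank, &
                          comm_group, ierr)
      call MPI_COMM_RANK(comm_group, grank, ierr)
      call MPI_COMM_SIZE(comm_group, gsize, ierr)

      ! only the group's rank 0 contributes memory; everyone else
      ! attaches to it through MPI_WIN_SHARED_QUERY below
      winsize = 0
      if (grank == 0) winsize = 4_MPI_ADDRESS_KIND * gsize
      call MPI_WIN_ALLOCATE_SHARED(winsize, 4, MPI_INFO_NULL, &
                                   comm_group, baseptr, win, ierr)
      call MPI_WIN_SHARED_QUERY(win, 0, winsize, disp_unit, baseptr, ierr)
      call c_f_pointer(baseptr, shared, [gsize])

      call MPI_WIN_FENCE(0, win, ierr)
      shared(grank+1) = grank + 1        ! each rank fills its own slot
      call MPI_WIN_FENCE(0, win, ierr)

      ! expect gsize*(gsize+1)/2, i.e. 3 for groups of two ranks
      if (grank == 0) print *, 'group sum =', sum(shared)

      call MPI_WIN_FREE(win, ierr)
      call MPI_COMM_FREE(comm_group, ierr)
      call MPI_FINALIZE(ierr)
    end program shm_test

Run on a single node with, e.g., mpirun -np 4 ./a.out, so that both groups place their windows in memory shared on the same host, matching the condition under which the bug reportedly appears.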