Re: [OMPI users] debugging performance regressions between versions

2013-10-23 Thread Dave Love
"Iliev, Hristo" writes: > Hi Dave, > > Is it MPI_ALLTOALL or MPI_ALLTOALLV that runs slower? Well, the output says MPI_ALLTOALL, but this prompted me to check, and it turns out that it's lumping both together. > If it is the latter, > the reason could be that the

[OMPI users] Get your Open MPI schwag!

2013-10-23 Thread Jeff Squyres (jsquyres)
We've been asked several times over the past few years if we would ever make more Open MPI logo shirts. Making those shirts was always a giant logistical hassle in the past: get quotes from printers, gather orders, get the shirts made, collect payments, distribute the shirts, etc. But now
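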

Re: [OMPI users] Get your Open MPI schwag!

2013-10-23 Thread John Hearns
OpenMPI aprons. Nice! Good to wear when cooking up those Chef recipes. (Did I really just say that...)

Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Barrett, Brian W
On 10/22/13 10:23 AM, "Jai Dayal" wrote: I, for the life of me, can't understand the difference between these two init_thread modes. MPI_THREAD_SINGLE states that "only one thread will execute", but MPI_THREAD_FUNNELED states "The process may
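For reference, a sketch of the usual request-and-check pattern the two levels fit into; this is an editor's illustration, not code from the thread:

    /* Request FUNNELED and check what the library granted: SINGLE means
     * the process has exactly one thread; FUNNELED means it may be
     * multithreaded, but only the thread that called MPI_Init_thread
     * makes MPI calls. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED)
            fprintf(stderr, "provided level %d is below FUNNELED\n",
                    provided);
        MPI_Finalize();
        return 0;
    }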

Re: [OMPI users] Get your Open MPI schwag!

2013-10-23 Thread Mike Dubman
Maybe add a nice/funny slogan on the front under the logo, and a cool picture on the back. Some of our community members are still in their early twenties (and counting). :) Shall we open a contest for a good slogan, and a mid-size picture for the back? - living the parallel world -

Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Jeff Hammond
On Wed, Oct 23, 2013 at 12:02 PM, Barrett, Brian W wrote: > On 10/22/13 10:23 AM, "Jai Dayal" wrote: > I'm asking because I'm using an Open MPI build on top of InfiniBand, and the > maximum thread mode is MPI_THREAD_SINGLE. > > > That doesn't seem right;

Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Jai Dayal
Hi, the version of Open MPI I'm using is mpiexec (OpenRTE) 1.6.5. It's what's offered on this smaller batch-scheduling cluster at Oak Ridge (Sith, to be exact). Running the ompi_info command, I get:

    Thread support: posix (MPI_THREAD_MULTIPLE: no, progress: no)
    FT Checkpoint support:
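One way to cross-check that ompi_info output at run time (a sketch, assuming nothing beyond standard MPI-2 calls):

    /* MPI_Query_thread returns the level actually in effect; on a build
     * without thread-multiple support it will stay below
     * MPI_THREAD_MULTIPLE no matter what was requested. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, level;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Query_thread(&level);
        printf("provided=%d query=%d (MPI_THREAD_MULTIPLE=%d)\n",
               provided, level, MPI_THREAD_MULTIPLE);
        MPI_Finalize();
        return 0;
    }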

Re: [OMPI users] MPI_Init_thread hangs in OpenMPI 1.7.1 when using --enable-mpi-thread-multiple

2013-10-23 Thread Paul Kapinos
Just a kind reminder: this bug still seems to exist in 1.7.3 :-/ On 08/20/13 22:15, Ralph Castain wrote: On Aug 20, 2013, at 12:40 PM, RoboBeans wrote: I can confirm that the MPI program still hangs if one uses these options while configuring
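For anyone trying to confirm the report, a minimal reproducer sketch, on the assumption (per the subject line) that the hang is inside MPI_Init_thread itself:

    /* Run against a build configured with --enable-mpi-thread-multiple:
     * if this never prints, MPI_Init_thread hung. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        printf("MPI_Init_thread returned, provided=%d\n", provided);
        MPI_Finalize();
        return 0;
    }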

Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Tim Prince
On 10/23/2013 01:02 PM, Barrett, Brian W wrote: On 10/22/13 10:23 AM, "Jai Dayal" wrote: I, for the life of me, can't understand the difference between these two init_thread modes. MPI_THREAD_SINGLE states that "only one thread

Re: [OMPI users] [EXTERNAL] MPI_THREAD_SINGLE vs. MPI_THREAD_FUNNELED

2013-10-23 Thread Jeff Hammond
And in practice the difference between FUNNELED and SERIALIZED will be very small. The differences might emerge from thread-local state and thread-specific network registration, but I don't see this being required. Hence, for most purposes SINGLE=FUNNELED=SERIALIZED is equivalent to NOMUTEX and
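A sketch of the distinction in hybrid MPI+OpenMP terms (an editor's illustration, not from the thread; compile with something like mpicc -fopenmp and run with at least two ranks):

    /* FUNNELED would restrict MPI calls to the master thread ('omp
     * master'); SERIALIZED lets any thread call MPI as long as the
     * calls never overlap, enforced here with 'omp critical'. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (provided >= MPI_THREAD_SERIALIZED && size >= 2) {
            if (rank == 0) {
                #pragma omp parallel num_threads(4)
                {
                    #pragma omp critical   /* one MPI call at a time */
                    MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                }
            } else if (rank == 1) {
                for (int i = 0; i < 4; i++)
                    MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
            }
        }
        MPI_Finalize();
        return 0;
    }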

Re: [OMPI users] Get your Open MPI schwag!

2013-10-23 Thread Shamis, Pavel
+1 for Chuck Norris Pavel (Pasha) Shamis --- Computer Science Research Group Computer Science and Math Division Oak Ridge National Laboratory On Oct 23, 2013, at 1:12 PM, Mike Dubman wrote: Maybe add a nice/funny slogan on

Re: [OMPI users] MPI_Init_thread hangs in OpenMPI 1.7.1 when using --enable-mpi-thread-multiple

2013-10-23 Thread Ralph Castain
Yes, I expect it will continue to hang until 1.7.4 is released. It should be fixed there, though we'll check. On Wed, Oct 23, 2013 at 11:11 AM, Paul Kapinos wrote: > Just a kind reminder: this bug still seems to exist in 1.7.3 :-/ > > On 08/20/13 22:15, Ralph

Re: [OMPI users] Get your Open MPI schwag!

2013-10-23 Thread Damien Hocking
Heheheheh. Chuck Norris has zero latency and infinite bandwidth. Chuck Norris is a hardware implementation only. Software is for sissies. Chuck Norris's version of MPI_IRecv just gives you the answer. Chuck Norris has a 128-bit memory space. Chuck Norris's Law says Chuck Norris gets twice as