Re: [OMPI users] Need help running jobs across different IB vendors

2013-10-15 Thread Jeff Squyres (jsquyres)
Short version: -- What you really want is: mpirun --mca pml ob1 ... The "--mca mtl ^psm" way will get the same result, but forcing pml=ob1 is really a slightly better solution (from a semantic perspective) More detail: Similarly, there are actually 3 different
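The two equivalent invocations described above would look like this (a sketch; `./my_mpi_app` and the process count are placeholders, not from the original message):

```shell
# Recommended form: force the ob1 PML so the verbs-based BTL path is
# selected on every node, regardless of which MTLs are available.
mpirun --mca pml ob1 -np 8 ./my_mpi_app

# Same net effect: exclude the PSM MTL, which prevents the cm PML from
# being selected on the QLogic nodes.
mpirun --mca mtl ^psm -np 8 ./my_mpi_app
```

Forcing `pml ob1` is the more direct statement of intent: it names the desired point-to-point layer instead of blacklisting one transport underneath the alternative.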

Re: [OMPI users] Need help running jobs across different IB vendors

2013-10-15 Thread Kevin M. Hildebrand
Ahhh, that's the piece I was missing. I've been trying to debug everything I could think of related to 'btl', and was completely unaware that 'mtl' was also a transport. If I run a job using --mca mtl ^psm, it does indeed run properly across all of my nodes. (Whether or not that's the
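A quick way to confirm that `mtl` is a component framework of its own, separate from `btl`, is to query the build with `ompi_info` (a sketch assuming a standard Open MPI installation on the PATH):

```shell
# List the MTL components compiled into this Open MPI build;
# psm shows up here on builds with QLogic PSM support.
ompi_info --param mtl all | grep -i psm

# BTLs are a separate framework, queried the same way.
ompi_info --param btl all | grep -i openib
```

If `psm` appears in the MTL list on some nodes but not others, the job can end up with mismatched transports, which is consistent with the behavior described above.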

Re: [OMPI users] knem/openmpi performance?

2013-10-15 Thread Dave Love
[Meanwhile, much later...] Mark Dixon writes: > Hi, > > I'm taking a look at knem, to see if it improves the performance of > any applications on our QDR InfiniBand cluster, so I'm eager to hear > about other people's experiences. This doesn't appear to have been >

Re: [OMPI users] Need help running jobs across different IB vendors

2013-10-15 Thread Dave Love
"Kevin M. Hildebrand" writes: > Hi, I'm trying to run an OpenMPI 1.6.5 job across a set of nodes, some > with Mellanox cards and some with Qlogic cards. Maybe you shouldn't... (I'm blessed in one cluster with three somewhat incompatible types of QLogic card and a set of Mellanox

Re: [OMPI users] (no subject)

2013-10-15 Thread San B
Hi, As per your instruction, I did the profiling of the application with mpiP. Following is the difference between the two runs: Run 1: 16 mpi processes on single node @--- MPI Time (seconds) ---
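For context on the profiling step mentioned above: mpiP is a link-time MPI profiler, so the application is typically relinked against `libmpiP` and then run normally (a sketch; the exact library list varies by mpiP build, and `app.c`/the process count here are illustrative):

```shell
# Relink the application against mpiP (typical support libraries shown;
# consult the local mpiP build for the exact set).
mpicc -o app app.c -lmpiP -lm -lbfd -liberty -lunwind

# Run 1 from the post: 16 MPI processes on a single node.
# mpiP writes a per-run report containing the "MPI Time (seconds)" table.
mpirun -np 16 ./app
```

Comparing the "MPI Time" section of the two reports shows how much wall time each run spent inside MPI, which is the difference being discussed here.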