Re: [OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-11-09 Thread Sébastien Boisvert
Hello, we did more tests concerning the latency, using 512 MPI ranks on our supercomputer (64 machines * 8 cores per machine). By default in Ray, any rank can communicate directly with any other. Thus we have a complete graph with 512 vertices and 130816 edges (512*511/2), where vertices are …
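The edge count is just the complete-graph formula n(n-1)/2. A minimal standalone check (a sketch, not code from the thread):

/* Sketch (not from the thread): number of point-to-point channels when
 * each of n ranks can message every other rank, i.e. a complete graph. */
#include <stdio.h>

int main(void)
{
    int n = 512;                         /* 64 machines * 8 cores */
    long edges = (long)n * (n - 1) / 2;  /* unordered rank pairs  */
    printf("%d ranks -> %ld edges\n", n, edges);  /* prints 130816 */
    return 0;
}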

Re: [OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-26 Thread Yevgeny Kliteynik
On 26-Sep-11 11:27 AM, Yevgeny Kliteynik wrote: > On 22-Sep-11 12:09 AM, Jeff Squyres wrote: >> On Sep 21, 2011, at 4:24 PM, Sébastien Boisvert wrote: >>>> What happens if you run 2 ibv_rc_pingpong's on each node? Or N ibv_rc_pingpongs? >>> With 11 ibv_rc_pingpong's …

Re: [OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-26 Thread Yevgeny Kliteynik
On 22-Sep-11 12:09 AM, Jeff Squyres wrote: > On Sep 21, 2011, at 4:24 PM, Sébastien Boisvert wrote: > >>> What happens if you run 2 ibv_rc_pingpong's on each node? Or N >>> ibv_rc_pingpongs? >> >> With 11 ibv_rc_pingpong's >> >> http://pastebin.com/85sPcA47 >> >> Code to do that => https://gist.github.com/1233173 …

Re: [OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-21 Thread Jeff Squyres
On Sep 21, 2011, at 4:24 PM, Sébastien Boisvert wrote: >> What happens if you run 2 ibv_rc_pingpong's on each node? Or N >> ibv_rc_pingpongs? > > With 11 ibv_rc_pingpong's > > http://pastebin.com/85sPcA47 > > Code to do that => https://gist.github.com/1233173 > > Latencies are around 20 …
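The gist linked above is not reproduced in this preview. A rough stand-in for launching N concurrent pingpong clients (a sketch under assumptions: ibv_rc_pingpong from the libibverbs examples is on the PATH, a matching server already listens on TCP port 18515+i on the peer, and the host name "node001" is hypothetical):

/* Sketch only: spawn N concurrent ibv_rc_pingpong clients against one peer.
 * Assumes servers were started beforehand on ports 18515+i on that peer,
 * e.g. ibv_rc_pingpong -p 18515, ibv_rc_pingpong -p 18516, ...            */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *host = argc > 1 ? argv[1] : "node001"; /* hypothetical peer */
    int n = argc > 2 ? atoi(argv[2]) : 11;             /* concurrent tests  */

    for (int i = 0; i < n; i++) {
        if (fork() == 0) {               /* child: exec one pingpong client */
            char port[16];
            snprintf(port, sizeof port, "%d", 18515 + i);
            execlp("ibv_rc_pingpong", "ibv_rc_pingpong",
                   "-p", port, host, (char *)NULL);
            perror("execlp");            /* only reached if exec failed */
            _exit(1);
        }
    }
    while (wait(NULL) > 0)               /* reap all children */
        ;
    return 0;
}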

[OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-21 Thread Sébastien Boisvert
…Users > Subject: Re: [OMPI users] RE : Latency of 250 microseconds with Open-MPI > 1.4.3, Mellanox Infiniband and 256 MPI ranks > > On Sep 21, 2011, at 3:17 PM, Sébastien Boisvert wrote: > >> Meanwhile, I contacted some people at SciNet, which is also part of Compute …