Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-06 Thread Carsten Kutzner
On Fri, 6 Jan 2006, Graham E Fagg wrote: > > Looks like the problem is somewhere in the tuned collectives? > > Unfortunately I need a logfile with exactly those :( > > > > Carsten > > I hope not. Carsten can you send me your configure line (not the whole > log) and any other things you set in

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-06 Thread Jeff Squyres
On Jan 6, 2006, at 8:13 AM, Carsten Kutzner wrote: > Looks like the problem is somewhere in the tuned collectives? > Unfortunately I need a logfile with exactly those :( FWIW, we just activated these tuned collectives on the trunk (which will eventually become the 1.1.x series; the tuned
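The tuned collective component Jeff mentions is selected at run time through MCA parameters. A sketch of how such a component is typically enabled on the command line — the parameter names follow the Open MPI MCA convention, but treat the exact names and the benchmark executable as assumptions to verify against your installed version:

```shell
# Prefer the "tuned" collective component (falling back to basic/self)
# and switch on its runtime decision rules.
mpirun --mca coll tuned,basic,self \
       --mca coll_tuned_use_dynamic_rules 1 \
       -np 32 ./alltoall_benchmark   # hypothetical benchmark binary

# List which coll_tuned parameters your build actually exposes:
ompi_info --param coll tuned
```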

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Jeff Squyres
On Jan 4, 2006, at 2:08 PM, Anthony Chan wrote: > > Either my program quits without writing the logfile (and without complaining) or it crashes in MPI_Finalize. I get the message "33 additional processes aborted (not shown)". > This is not an MPE error message. If the logging crashes in

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Anthony Chan
On Wed, 4 Jan 2006, Carsten Kutzner wrote: > On Tue, 3 Jan 2006, Anthony Chan wrote: > > > MPE/MPE2 logging (or clog/clog2) does not impose any limitation on the > > number of processes. Could you explain what difficulty or error > > message you encountered when using >32 processes ? > >
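For reference, the MPE/MPE2 logging discussed here is normally enabled at link time rather than in the source. A sketch assuming a standard MPE2 installation — the library names and the default log-file name follow MPE convention, so verify them against your build:

```shell
# Link against the MPE logging wrappers; liblmpe intercepts MPI calls
# and writes a CLOG2 trace when MPI_Finalize is reached.
mpicc -o alltoall_test alltoall_test.c -llmpe -lmpe

# After the run, inspect the timeline (log name typically derives
# from the executable name):
jumpshot alltoall_test.clog2
```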

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Carsten Kutzner
Hi Graham, here are the all-to-all test results with the modification to the decision routine you suggested yesterday. Now the routine behaves nicely for messages of 128 and 256 floats on 128 CPUs! For the other sizes one probably wants to keep the original algorithm, since it is faster there.
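The decision routine Carsten is testing picks an all-to-all algorithm per message size. For experiments like his, a specific algorithm can usually be pinned via MCA parameters instead of patching the routine — the parameter name and the numeric algorithm IDs are assumptions to check with `ompi_info` on your version:

```shell
# Override the tuned decision function and force one alltoall
# algorithm for every message size (e.g. 2 = pairwise exchange
# in many Open MPI versions).
mpirun --mca coll_tuned_use_dynamic_rules 1 \
       --mca coll_tuned_alltoall_algorithm 2 \
       -np 128 ./alltoall_benchmark   # hypothetical benchmark binary
```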

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-03 Thread Anthony Chan
On Tue, 3 Jan 2006, Carsten Kutzner wrote: > On Tue, 3 Jan 2006, Graham E Fagg wrote: > > > Do you have any tools such as Vampir (or its Intel equivalent) available > > to get a time line graph ? (even jumpshot of one of the bad cases such as > > the 128/32 for 256 floats below would help). > >

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2005-12-23 Thread Graham E Fagg
Hi Carsten, > I have also tried the tuned alltoalls and they are really great!! Only for very few message sizes in the case of 4 CPUs on a node one of my alltoalls performed better. > Are these tuned collectives ready to be used for production runs? We are actively testing them on larger systems

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2005-12-23 Thread Carsten Kutzner
On Tue, 20 Dec 2005, George Bosilca wrote: > On Dec 20, 2005, at 3:19 AM, Carsten Kutzner wrote: > > >> I don't see how you deduct that adding barriers increase the > >> congestion ? It increase the latency for the all-to-all but for me > > > > When I do an all-to-all a lot of times, I see that

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2005-12-20 Thread George Bosilca
On Dec 20, 2005, at 3:19 AM, Carsten Kutzner wrote: > > I don't see how you deduct that adding barriers increase the congestion ? It increase the latency for the all-to-all but for me > When I do an all-to-all a lot of times, I see that the time for a single all-to-all varies a lot. My time
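Carsten's underlying point is that a single all-to-all timing is not reproducible, so the meaningful result is the spread over many repetitions, not one number. A minimal, MPI-free sketch of that measurement bookkeeping — the timed workload below is only a stand-in for an MPI_Alltoall call, and all names are illustrative:

```python
import statistics
import time


def time_repeated(op, reps=100):
    """Time `op` reps times; return per-iteration wall-clock samples."""
    samples = []
    for _ in range(reps):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    return samples


def summarize(samples):
    """Report the spread, not just a mean: min/median/max in seconds."""
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
    }


if __name__ == "__main__":
    # Stand-in workload; in the real benchmark this would be one
    # MPI_Alltoall, optionally preceded by a barrier.
    data = list(range(100_000))
    print(summarize(time_repeated(lambda: sorted(data))))
```

If the max is many times the min, averaging hides exactly the congestion effect being debated in this thread.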