Re: [O-MPI users] mpirun --prefix

2006-01-04 Thread Jeff Squyres
On Jan 4, 2006, at 7:24 PM, Anthony Chan wrote: How about this -- an ISV asked me for a similar feature a little while ago: if mpirun is invoked with an absolute pathname, then use that base directory (minus the difference from $bindir) as an option to an implicit --prefix. (your suggestion ...
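In concrete terms, the idea under discussion is that an absolute-path invocation of mpirun would carry the same information as an explicit --prefix. A hedged sketch of the two forms (the /opt/openmpi install path, the hostfile name, and the executable are made up for illustration; --prefix itself is a real Open MPI option):

    # Explicit form: tell the remote nodes where the Open MPI installation
    # lives so PATH/LD_LIBRARY_PATH can be set there.
    /opt/openmpi/bin/mpirun --prefix /opt/openmpi -np 4 -hostfile myhosts ./a.out

    # Proposed implicit form: mpirun was started by absolute path, so strip
    # the bindir component ("bin") and treat what is left, /opt/openmpi,
    # as if it had been passed via --prefix.
    /opt/openmpi/bin/mpirun -np 4 -hostfile myhosts ./a.out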

Re: [O-MPI users] mpirun --prefix

2006-01-04 Thread Anthony Chan
Hi Jeff,

On Wed, 4 Jan 2006, Jeff Squyres wrote:
> Anthony --
>
> I'm really sorry; we just noticed this message today -- it got lost
> in the post-SC recovery/holiday craziness. :-(

I understand. :)

> Your request is fairly reasonable, but I wouldn't want to make it the
> default behavior.

Re: [O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Jeff Squyres
On Jan 4, 2006, at 5:05 PM, Tom Rosmond wrote: Thanks for the quick reply. I ran my tests with a hostfile with 'cedar.reachone.com slots=4'. I clearly misunderstood the role of the 'slots' parameter, because when I removed it, OPENMPI slightly outperformed LAM, which I assume it should. Thanks ...

Re: [O-MPI users] mpirun --prefix

2006-01-04 Thread Jeff Squyres
Anthony -- I'm really sorry; we just noticed this message today -- it got lost in the post-SC recovery/holiday craziness. :-( Your request is fairly reasonable, but I wouldn't want to make it the default behavior. Specifically, I can envision some scenarios where it might be ...

Re: [O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Tom Rosmond
Thanks for the quick reply. I ran my tests with a hostfile with 'cedar.reachone.com slots=4'. I clearly misunderstood the role of the 'slots' parameter, because when I removed it, OPENMPI slightly outperformed LAM, which I assume it should. Thanks for the help.

Tom

Brian Barrett wrote: On Jan ...
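For readers tripping over the same thing: in an Open MPI hostfile, 'slots' declares how many processes mpirun may schedule on that host, and Open MPI compares the launched process count against it to decide whether the node is oversubscribed, which in turn selects between busy-polling and yielding progression. A minimal sketch, with a hypothetical hostfile name and benchmark executable:

    # myhosts (hypothetical file name)
    # slots=4 lets mpirun place up to 4 processes on this host without
    # treating the node as oversubscribed.
    cedar.reachone.com slots=4

    # Launch 4 processes using that hostfile.
    mpirun -np 4 -hostfile myhosts ./mpi_benchmark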

Re: [O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Patrick Geoffray
Hi Tom, users-requ...@open-mpi.org wrote: I am pretty sure that LAM exploits the fact that the virtual processors are all sharing the same memory, so communication is via memory and/or the PCI bus of the system, while my OPENMPI configuration doesn't exploit this. Is this a reasonable ...
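Whether a single-node Open MPI run actually uses shared memory comes down to which BTL components get selected; the shared-memory (sm) transport is normally picked automatically, but it can be requested explicitly to rule out a TCP-only configuration. A hedged sketch (the component list is the usual Open MPI 1.x trio; the executable name is made up):

    # Restrict Open MPI to shared memory (sm), loopback (self), and TCP.
    mpirun -np 4 --mca btl sm,self,tcp ./mpi_benchmark

    # Check which BTL components this build actually provides.
    ompi_info | grep btl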

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Jeff Squyres
On Jan 4, 2006, at 2:08 PM, Anthony Chan wrote: Either my program quits without writing the logfile (and without complaining) or it crashes in MPI_Finalize. I get the message "33 additional processes aborted (not shown)". This is not an MPE error message. If the logging crashes in ...
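For context on where such a failure bites: with manual MPE instrumentation the clog2 logfile is only written when the log is finalized, so a run that dies at or before MPI_Finalize usually leaves no logfile at all. The sketch below is illustrative only, not the code from the thread; it assumes MPE is installed, mpe.h is on the include path, and the link line includes -lmpe.

    /* Minimal MPE manual-logging sketch.  The clog2 file is produced inside
       MPE_Finish_log(), so nothing is written if the run dies before then. */
    #include <mpi.h>
    #include "mpe.h"

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        MPE_Init_log();

        /* One logged state, bracketed by a start and an end event. */
        int ev_start = MPE_Log_get_event_number();
        int ev_end   = MPE_Log_get_event_number();
        MPE_Describe_state(ev_start, ev_end, "sync", "red");

        MPE_Log_event(ev_start, 0, "begin");
        MPI_Barrier(MPI_COMM_WORLD);           /* stand-in for the real work */
        MPE_Log_event(ev_end, 0, "end");

        MPE_Finish_log("testlog");             /* writes testlog.clog2 */
        MPI_Finalize();
        return 0;
    }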

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Anthony Chan
On Wed, 4 Jan 2006, Carsten Kutzner wrote:
> On Tue, 3 Jan 2006, Anthony Chan wrote:
> > MPE/MPE2 logging (or clog/clog2) does not impose any limitation on the
> > number of processes. Could you explain what difficulty or error
> > message you encountered when using >32 processes?
> ...

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Carsten Kutzner
Hi Graham, here are the all-to-all test results with the modification to the decision routine you suggested yesterday. Now the routine behaves nicely for 128 and 256 float messages on 128 CPUs! For the other sizes one probably wants to keep the original algorithm, since it is faster there.
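For readers who want to reproduce this kind of measurement, the general shape is a timed loop around MPI_Alltoall with a small per-peer float count. The sketch below is only illustrative (message size, repetition count, and output format are arbitrary choices, and it simply calls MPI_Alltoall so the library's own decision routine picks the algorithm):

    /* Illustrative timing loop for small-message MPI_Alltoall; not the
       benchmark used in the thread. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int nfloats = 128;              /* floats sent to each peer */
        const int reps    = 100;
        float *sendbuf = malloc(sizeof(float) * nfloats * size);
        float *recvbuf = malloc(sizeof(float) * nfloats * size);
        for (int i = 0; i < nfloats * size; i++)
            sendbuf[i] = (float)rank;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++)
            MPI_Alltoall(sendbuf, nfloats, MPI_FLOAT,
                         recvbuf, nfloats, MPI_FLOAT, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("%d procs, %d floats/peer: %g s per all-to-all\n",
                   size, nfloats, (t1 - t0) / reps);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }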