On Mar 13, 2006, at 4:36 PM, Pierre Valiron wrote:

> I have successfully built openmpi-1.1a1r9260 (from the Subversion trunk)
> in 64-bit mode on Solaris Opteron. This r9260 tarball incorporates the
> latest Solaris patches from Brian Barrett.

Just a quick note - these changes were recently moved into the v1.0 release branch and will be part of Open MPI 1.0.2.

> - The build is fine.
> - Standard output through mpirun is now fixed and behaves as expected.
> - Processor binding is functional (mpirun --mca mpi_paffinity_alone 1),
>   and performance improves with this option (tested on SMP
>   quad-processor v40z machines).
> - Latency is very low. Rotating buffers (each task passes a buffer to
>   its neighbour on a ring) give the following performance on a
>   quad-processor v40z:

<snip>

> Open MPI offers much better latency than LAM/MPI (3 us instead of
> 7 us) and also delivers higher throughput. This is very promising!
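
For anyone who wants to reproduce this kind of measurement, the core of
the rotating-buffer ring test Pierre describes above is only a few lines
of MPI. The sketch below is illustrative, not Pierre's actual benchmark;
the buffer size, iteration count, and timing scheme are arbitrary choices:

/* ring.c - minimal sketch of a rotating-buffer ring latency test. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 8        /* small message, to expose latency */
#define ITERS    10000

int main(int argc, char **argv)
{
    int rank, size, i;
    char sendbuf[BUF_SIZE], recvbuf[BUF_SIZE];
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;          /* neighbour we send to */
    int left  = (rank + size - 1) % size;   /* neighbour we receive from */
    memset(sendbuf, 0, BUF_SIZE);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        /* Rotate the buffer one hop around the ring. */
        MPI_Sendrecv(sendbuf, BUF_SIZE, MPI_CHAR, right, 0,
                     recvbuf, BUF_SIZE, MPI_CHAR, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg time per hop: %.2f us\n", (t1 - t0) / ITERS * 1e6);

    MPI_Finalize();
    return 0;
}

Compile with mpicc ring.c -o ring and run with something like
mpirun -np 4 ./ring, adding --mca mpi_paffinity_alone 1 to see the
effect of processor binding.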

> - Finally, I was able to run my production ab initio quantum chemistry
>   code, DIRCCR12, successfully with Open MPI.

> Congratulations to the Open MPI folks!

Glad to hear we could get everything working for you, and thank you :).

> PS. Open MPI's performance over gigabit Ethernet doesn't seem as good
> as LAM/MPI's. I'll do more testing after browsing the Ethernet-related
> messages on the list. I'll also check whether parallelizing over two
> Ethernet NICs helps.

Yes, this is a known issue with our current TCP transport. We haven't spent enough time optimizing performance (our latency could use some work as well). We hope to be able to spend some developer cycles on this problem in the not too distant future, but we can't promise any timetables. It's hard competing against yourself, especially when the previous project has a 10-year head start ;).
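
In the meantime, for your two-NIC experiment: the TCP component should
be able to stripe traffic across multiple interfaces if you tell it
which ones to use. Something like the following is a starting point
(the interface names and application are just placeholders; run
ompi_info --param btl tcp to check the exact parameter names your
build supports):

  mpirun --mca btl tcp,self \
         --mca btl_tcp_if_include bge0,bge1 \
         -np 4 ./your_app

Substitute whatever interface names ifconfig reports on your nodes for
bge0,bge1.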

Brian


--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/

