I am working on an MD simulation algorithm on a shared-memory system
with 4 dual-core AMD Opteron 875 processors. I started with MPICH
(1.2.6) and then shifted to Open MPI, and I found a very good
improvement with Open MPI. I would also be interested in knowing about
any other benchmarks on similar systems.
Not a dumb question at all. :-)
I think the problem is your -L flag. Our mpif90 wrapper compiler should
already know where to find the MPI library, which is located wherever you
installed Open MPI. Your flag is trying to override our settings, and I
believe it is causing confusion.
Just remove the -L and -l arguments -- OMPI's "mpif90" (and other
wrapper compilers) will do all that magic for you.
Many -L/-l arguments in MPI application Makefiles are throwbacks to
older versions of MPICH wrapper compilers that didn't always work
properly. Those days are long gone.
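A minimal sketch of what that Makefile cleanup might look like (the target
and file names here are hypothetical, and the old MPICH paths are just an
example of the kind of lines to delete):

```make
# Before: explicit MPI paths, a throwback to old MPICH wrappers
# FC     = f90
# FFLAGS = -O2 -I/opt/mpich/include
# LDLIBS = -L/opt/mpich/lib -lmpich

# After: let Open MPI's wrapper supply all MPI paths and libraries
FC     = mpif90
FFLAGS = -O2

md_sim: md_sim.f90
	$(FC) $(FFLAGS) -o md_sim md_sim.f90
```

If you are curious what the wrapper actually adds behind the scenes,
`mpif90 --showme` prints the full underlying compile/link command line.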
Hey Victor!
I just ran the classic cpi.c to verify that Open MPI was
working. Now I need to grab some actual benchmarking code. I may try
the NAS Parallel Benchmarks from here...
http://www.nas.nasa.gov/Resources/Software/npb.html
They were pretty easy to build and run under mpich.
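For reference, the MPI version of NPB is configured through a config/make.def
file in the distribution; a sketch of the relevant lines for an Open MPI
install might look like this (the exact variable set can vary between NPB
releases, so treat these values as assumptions):

```make
# config/make.def fragment: point NPB at the Open MPI wrappers
MPIF77 = mpif90
FLINK  = $(MPIF77)
MPICC  = mpicc
CLINK  = $(MPICC)
# Leave these empty; the wrappers add the right paths and libraries
FMPI_LIB =
FMPI_INC =
```

Then something like `make cg CLASS=A NPROCS=8` builds the CG kernel for 8
processes, and `mpirun -np 8 bin/cg.A.8` runs it.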
Victor,
Just on a hunch, look in your BIOS to see if Hyperthreading is turned
on. If so, turn it off. We have seen some unusual behavior on some of
our machines unless this is disabled.
I am interested in your progress as I have just begun working with
OpenMPI as well. I have used mpich for
The problem is that my executable file runs on the
Pentium D in 80 seconds on two cores and in 25 seconds
on one core.
And on another Sun SMP machine with 20 processors it
runs perfectly (the problem is perfectly scalable).
Victor Marian
Laboratory of Machine Elements and