Steve --
In theory, there shouldn't be much you need to do. Open MPI and
the other MPI implementations all conform to the same user-level
API, so recompiling your app with Open MPI *should* be sufficient.
That being said, there are a few caveats...
1. Command line syntax for certain tools will likely be different,
so be sure to check the differences in mpirun, etc. (example
invocations are at the end of this note).
2. Even though the MPI implementations are source compatible, you
can sometimes run into differences in performance portability or
performance characteristics. For example, different MPI
implementations tend to block *sometimes* when you use MPI_SEND.
The exact conditions under which each implementation blocks during
MPI_SEND are, well, implementation-specific. :-) A truly conformant
MPI application will never assume that MPI_SEND *doesn't* block, so
this shouldn't be an issue -- but I've seen many real-world apps
that *do* assume that MPI_SEND doesn't block (hope you can parse
that sentence ok :-) ). There's a short sketch of the pattern right
after this list.
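To make that concrete, here's a minimal sketch of the pattern I mean
(two processes and an arbitrary message size, purely for
illustration):

    #include <mpi.h>

    #define COUNT 65536   /* arbitrary message size */

    int main(int argc, char **argv) {
        int rank, peer;
        MPI_Status status;
        static double sendbuf[COUNT], recvbuf[COUNT];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        peer = (rank == 0) ? 1 : 0;   /* assumes exactly 2 processes */

    #if 0
        /* Non-portable: both processes send first.  This only works
           if the implementation buffers the message internally; if
           MPI_Send blocks until a matching receive is posted, both
           processes wait forever. */
        MPI_Send(sendbuf, COUNT, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
        MPI_Recv(recvbuf, COUNT, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
                 &status);
    #endif

        /* Portable: MPI_Sendrecv pairs up the exchange inside the
           library, so it cannot deadlock regardless of buffering. */
        MPI_Sendrecv(sendbuf, COUNT, MPI_DOUBLE, peer, 0,
                     recvbuf, COUNT, MPI_DOUBLE, peer, 0,
                     MPI_COMM_WORLD, &status);

        MPI_Finalize();
        return 0;
    }

(Nonblocking MPI_ISEND/MPI_IRECV plus MPI_WAIT is the other standard
way to make such an exchange safe.)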
Also, OMPI uses pointers for MPI handles (e.g., the C type
"MPI_Comm" is, under the covers, actually a pointer), whereas other
MPI implementations use integers. Some applications assume that MPI
handles are integers, and this can create problems. The MPI_SEND and
handle issues are just two of several that you may run into -- every
application is different, so it's hard to say exactly what will happen.
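Here's a hypothetical fragment showing that bad assumption; it
compiles cleanly where MPI_Comm is an integer, but draws warnings
(and can truncate the handle on 64-bit platforms) where MPI_Comm is
a pointer:

    #include <mpi.h>

    void stash_handle(void) {
        int saved;
        MPI_Comm comm, safe;

        /* Non-portable: assumes an MPI handle fits in an int. */
        saved = (int) MPI_COMM_WORLD;
        comm = (MPI_Comm) saved;   /* may not survive the round trip */
        /* ... use comm ... */

        /* Portable: keep handles in their own opaque type. */
        safe = MPI_COMM_WORLD;
        /* ... use safe ... */
    }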
Your best bet is simply to recompile your app with Open MPI, fix any
warnings that come up, and then try to run and see what happens. If
it doesn't work right out of the box, you *should* be darn close;
hopefully it'll just be a few minor issues that need to get
straightened out.
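For reference, the mechanics usually look something like this (the
app and hostfile names are made up, and your old launch flags may
have differed):

    # Recompile with Open MPI's wrapper compiler; -Wall helps
    # surface the handle-type warnings mentioned above.
    shell$ mpicc -Wall -O2 my_app.c -o my_app

    # Old MPICH1-style launch:
    #   mpirun -np 4 -machinefile hosts my_app
    # Open MPI equivalent:
    shell$ mpirun -np 4 --hostfile hosts ./my_app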
For those on this list who write portable MPI software, it would be
great to hear what your experiences have been...
On Feb 2, 2007, at 1:59 PM, Steven A. DuChene wrote:
Is there any available documentation or write-ups of hints or
general information on the task of porting an existing MPI
application from a different MPI implementation to OpenMPI? We have
an app using mpich1 and it needs some updating or porting to run on
a new platform, so I figured it would be a good time to convert it
over to a better MPI implementation.
--
Steve
--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems