On Mar 15, 2006, at 5:32 AM, Charlie Curry wrote:

Has anyone had luck building OpenMPI as a universal binary on Mac OS X?
The only trouble shows up when trying to build opal's asm sub-module.
If there is an official solution, could you mail it to me, as I've been
forced to use a really ugly "compile twice and lipo" step.

At present, it's the compile twice and lipo. We're actively talking with Apple about how to make it possible to build Open MPI directly as a Universal Binary, but there are some stumbling blocks in the way, among them the assembly issue you ran into and the fact that Apple decided that C++ bool should have a different size and alignment on the PPC and i386 versions of Mac OS X. It is unlikely that there is an easy workaround in the build system right now; we're going to have to add code specifically to support this setup.
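
For anyone who hasn't set this up before, the workaround looks roughly like the sketch below. The prefixes, flags, and paths are illustrative only, and configure may additionally need --host/--build arguments for whichever half is being cross-compiled, so treat it as a sketch rather than a recipe:

    # Pass 1: build and install a ppc-only tree
    ./configure CFLAGS="-arch ppc" --prefix=/tmp/ompi-ppc
    make all install
    make distclean

    # Pass 2: build and install an i386-only tree
    ./configure CFLAGS="-arch i386" --prefix=/tmp/ompi-i386
    make all install

    # Glue the two thin libraries into one fat binary
    lipo -create /tmp/ompi-ppc/lib/libmpi.dylib \
                 /tmp/ompi-i386/lib/libmpi.dylib \
         -output libmpi.dylib

(The lipo step has to be repeated for every library and executable you want to be fat, which is part of why it gets ugly.)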

If it's any help, there is a script in <srcdir>/contrib/dist/macosx/ that will take an Open MPI tarball and produce a .pkg installer containing a Universal Binary build of Open MPI. It still uses lipo, but it's less typing ;). The arguments passed to configure and the set of architectures built can be tweaked by editing the first couple of lines of the script.
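
Invocation would look something like this (the script name and argument shown here are placeholders; see the directory and the comments at the top of the script for the real usage):

    cd openmpi-1.0.1/contrib/dist/macosx
    ./buildpackage.sh /path/to/openmpi-1.0.1.tar.gz   # writes an Open MPI .pkg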

Now for the bad news... Open MPI 1.0.1 cannot run at all in a heterogeneous environment. The run-time has been fixed for the upcoming Open MPI 1.0.2 release, so it can run in heterogeneous environments with the same word size (i.e., all 32-bit or all 64-bit), but the MPI layer is not endian-clean. You would have to compile your application just for ppc or just for i386 and never mix the two in the same job.
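
To make the endian problem concrete: the same 32-bit integer has a different byte layout on the two architectures, so a message copied verbatim from an i386 node to a ppc node decodes to a different value unless something byte-swaps it along the way. A quick, purely illustrative way to see your machine's layout:

    # prints 01000000 on i386 (little-endian), 00000001 on ppc (big-endian)
    perl -e 'print pack("L", 1)' | xxd -p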

We hope that Open MPI 1.1, which should be released sometime in the first half of 2006, will properly support heterogeneous environments, including mixed 32-bit/64-bit run-time environments and machines of different endianness. Presently, this is all implemented (but only lightly tested) except for the MPI datatype layer, which is not yet endian-clean.


Brian

--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/
