Hi,

it's been some time since then, and I'd like to give an update on the issue.
Sorry for the long mail!

On Thu, Jul 22, 2010 at 08:34:54AM -0400, Jeff Squyres wrote:
> If anything moves on this front, let me know and I can create a mercurial
> branch out on bitbucket.org and add the relevant configury magic for the
> GCC intrinsics.

I investigated several solutions lately and I'd like to share my "results".
Please bear in mind that I am in no way an expert in this; I can try to
work on the issue, but the more help I can get, the better. I guess I can
get Open MPI to compile on all platforms, but that does not necessarily
mean that it works.

So, as a summary (I can give details if needed):

 * Debian currently cares about 12 architectures officially, as well as
   9 ports. Of those, Open MPI compiles on 8 and 3, respectively.[1] The
   ones failing are armel, mips, mipsel, and s390. (For the ports: arm,
   armhf, avr32, hppa, m68k, and sh4.)
 * The ports can be ignored, but of those at least hppa and m68k have a
   user base that seems to be interested in having Open MPI.

 * I tried to build OpenPA on all architectures and ports. It compiled
   and passed the test suite on 10 of them. I was not able to test mipsel
   (no porterbox available), but mips is fine, so I expect mipsel to be OK
   too. I was not able to test on sparc (same reason), but it builds on
   sparc64; unfortunately, the test suite failed there. So all architectures
   we care about can be built with a fallback to OpenPA, though it may not
   produce anything that works. On hppa the test suite failed, and I was
   not able to test m68k at all.
 * OpenPA is not in Debian as a separate package but as part of MPICH2. I
   think we will split it out as a separate package so that Open MPI could
   be linked against it. MPICH2 builds on all of the official architectures,
   so this is an indicator that OpenPA might be a suitable alternative. (A
   small usage sketch follows below.)
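
   For illustration, this is roughly how code using OpenPA's primitives
   looks; the header and function names here are from my memory of the
   MPICH2 sources, so please double-check them against the actual headers:

      /* Sketch only: opa_primitives.h, OPA_int_t, OPA_store_int,
       * OPA_load_int and OPA_fetch_and_add_int are quoted from memory
       * and not verified against a split-out Debian package. */
      #include <assert.h>
      #include "opa_primitives.h"

      int main(void)
      {
          OPA_int_t counter;

          OPA_store_int(&counter, 0);

          /* fetch-and-add returns the previous value */
          int old = OPA_fetch_and_add_int(&counter, 5);
          assert(old == 0);
          assert(OPA_load_int(&counter) == 5);

          return 0;
      }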

 * I also talked to the porters about the GCC intrinsics. The result was
   that if the atomic operations are defined, they are implemented and
   working. I did not check on which platforms this is the case, but my
   current understanding is that all of the official architectures have
   them defined. I can test this on request, though I do not know how to
   verify that they work correctly.[2] (The kind of quick smoke test I
   could run is sketched below.)
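
   The smoke test would only show that the __sync builtins compile, link
   and return sane values on a platform, not that they are actually atomic
   under contention; on platforms where a builtin is not implemented, the
   link should already fail with an undefined reference:

      #include <assert.h>

      int main(void)
      {
          int counter = 0;

          /* atomic fetch-and-add returns the old value */
          int old = __sync_fetch_and_add(&counter, 5);
          assert(old == 0 && counter == 5);

          /* compare-and-swap succeeds only if the expected value matches */
          assert(__sync_bool_compare_and_swap(&counter, 5, 7));
          assert(counter == 7);
          assert(!__sync_bool_compare_and_swap(&counter, 5, 9));

          return 0;
      }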

 * I had a look at libatomic-ops. It builds on all official architectures.
   It fails on 3 ports, one being m68k. (See the sketch below for how its
   API looks.)
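
   A minimal libatomic-ops check would look something like the following;
   note the library's own AO_t type, which is part of why integration
   might be harder (the build flags are my guess for the Debian package,
   e.g. gcc test.c -latomic_ops):

      #include <assert.h>
      #include <atomic_ops.h>

      int main(void)
      {
          volatile AO_t counter = 0;

          /* fetch-and-add-1 returns the previous value */
          assert(AO_fetch_and_add1(&counter) == 0);
          assert(AO_load(&counter) == 1);

          /* compare-and-swap returns non-zero on success */
          assert(AO_compare_and_swap(&counter, 1, 42));
          assert(AO_load(&counter) == 42);

          return 0;
      }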

 * Atomic operations are also provided by glib. libglib2.0 builds on all
   official architectures. It fails on 2 ports, one being m68k. It may
   very well be that it does not provide all operations necessary, though.
   (A short example follows below.)
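
   The glib equivalent, again with glib's own types and conventions
   (gint, gboolean), which is what makes me unsure how well it would fit
   into Open MPI (build with something like
   gcc test.c $(pkg-config --cflags --libs glib-2.0)):

      #include <assert.h>
      #include <glib.h>

      int main(void)
      {
          gint counter = 0;

          g_atomic_int_inc(&counter);
          assert(g_atomic_int_get(&counter) == 1);

          /* compare-and-exchange returns TRUE on success */
          assert(g_atomic_int_compare_and_exchange(&counter, 1, 7));
          assert(g_atomic_int_get(&counter) == 7);

          return 0;
      }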

My personal conclusion from this is:

 * All choices (OpenPA, GCC intrinsics, libatomic-ops, glib) would enable
   Open MPI to build on all official arches, as well as hppa. m68k does
   not work with any of them, but since it's only a port, I do not (and
   don't have to) care about that.
 * libatomic-ops and glib provide their own data types etc. This might
   make integration harder.
 * OpenPA seems a reasonable choice since it's used by MPICH2, and as far
   as I know you already have good contacts with the MPICH2 developers, so
   everyone could benefit.
 * GCC intrinsics are fine as well. My understanding is that the OpenPA
   implementation might be faster.

That's about the status quo. Jeff, if you could create a branch to play
with, I'd be happy to do so. Not sure if I will succeed, but I'll give it
a shot. I have not touched Mercurial for a while, but I'll find my way.
(I'm using Git, but DVCSes are not that different after all.)

Please let me know which solution is preferred by the developers. If
I can do more tests or provide additional information, feel free to ask!

Best regards,
Manuel


[1] Architectures that I was not able to test were counted as failing, but
    this is only an issue for the ports. I have results for all official
    architectures.
[2] If someone knows a simple test case that I could run, please point me
    to it. Also, I'm not totally clear about which operations are currently
    used in Open MPI.
