On Jan 23, 2009, at 1:20 PM, N.M. Maclaren wrote:

FWIW, ABI is not necessarily a bad thing; it has its benefits and drawbacks (and enablers and limitations). Some people want it and some people don't (most don't care, I think). We'll see where that effort goes in the Forum and elsewhere.

Right. But, as someone with experience of trying to design portable ABIs, I can say that it requires more knowledge and skill than the typical person tackling the job even knows exists ....

Indeed. This is at least one of the reasons for the current deadlock in the ABI discussions on the Forum (of which I am a part).

MPI did the Right Thing back in the mid-'90s by just designing source-level compatibility. Whether it's the right time to move to an ABI or not is a very politically- and religiously-charged discussion. :-)

FWIW, the F03 bindings for MPI may allow address-sized integers to be handles in Fortran. In that case, MPI handles will likely take on exactly the same values that they have in C. In OMPI's case, that's a C pointer, so the F03 value for MPI_COMM_WORLD will be some very large non-zero integer value. (Standard disclaimers about future features/functionality apply -- time will tell if this stuff plays out as expected.)
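
As a rough sketch of where that large value would come from (this assumes Open MPI, where MPI_Comm is a pointer type; the cast below wouldn't apply to integer-handle implementations like MPICH):

    #include <stdio.h>
    #include <mpi.h>

    /* Under Open MPI, MPI_Comm is a pointer type, so the handle's
     * value is an address.  An address-sized Fortran handle carrying
     * the same value would therefore be a large non-zero integer. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        printf("MPI_COMM_WORLD's value: %p\n", (void *) MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }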

That would solve this particular problem, if it is what I think it is.

Good.

A private email made me realise that he was probably passing the Fortran MPI_COMM_WORLD to NetCDF4 for use as a communicator - and, unless NetCDF is much better quality than when I last looked at it, I will bet that its Fortran interface is just a thin wrapper. You can guess the rest :-)

FWIW, it probably works with MPICH and friends because they use integer handles in both Fortran and C, and therefore the values are exactly the same. Specifically, if NetCDF is just passing the value of Fortran MPI_COMM_WORLD back to a C MPI API function, it'll likely work in MPICH. But it won't in Open MPI because our handles are different between Fortran and C.

The Right solution is to use the various MPI_*_f2c and MPI_*_c2f conversion routines (these are in the MPI spec -- we didn't make them up for OMPI). See

    http://www.mpi-forum.org/docs/mpi21-report-bw/node355.htm#Node355
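
To make that concrete, here's a minimal sketch of the wrong and the right way for a C routine to accept a communicator handle from Fortran (hypothetical function names -- this is not actual NetCDF4 source):

    #include <mpi.h>

    /* WRONG: reinterprets the Fortran integer handle (MPI_Fint) as a
     * C MPI_Comm.  This happens to work with MPICH's integer handles,
     * but breaks with Open MPI's pointer handles. */
    void bad_barrier(MPI_Fint *f_comm, int *ierr)
    {
        MPI_Comm comm = (MPI_Comm) *f_comm;
        *ierr = MPI_Barrier(comm);
    }

    /* RIGHT: convert with MPI_Comm_f2c, which is portable across
     * conforming MPI implementations.  MPI_Comm_c2f goes the other
     * way when handing a C handle back to Fortran. */
    void good_barrier(MPI_Fint *f_comm, int *ierr)
    {
        MPI_Comm comm = MPI_Comm_f2c(*f_comm);
        *ierr = MPI_Barrier(comm);
    }

The same f2c/c2f routines exist for the other handle types (datatypes, groups, requests, and so on).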

I don't know anything about NetCDF4, so I don't know if it's neglecting to do that or not.

...but it sounds probable.  :-)

--
Jeff Squyres
Cisco Systems
