BTW, I totally forgot to mention a notable C++ MPI bindings project that is the next-generation successor to OOMPI: the Boost C++ MPI bindings (Boost.MPI).

    http://www.generic-programming.org/~dgregor/boost.mpi/doc/

I believe there are also Python bindings included...?



On Aug 1, 2007, at 8:30 PM, Jeff Squyres wrote:

On Jul 31, 2007, at 6:43 PM, Lisandro Dalcin wrote:

I am working on the development of MPI for Python, a port of MPI to
Python, a high-level language with automatic memory management. That
said, in such an environment, having to call XXX.Free() for every
object I get from a call like XXX.Get_something() is a really
unnecessary pain.
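
For readers who have not used mpi4py, here is a minimal sketch of the
pattern being described, assuming only the standard mpi4py calls
Comm.Get_group(), Group.Get_size(), and Group.Free():

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    group = comm.Get_group()   # every Get_something() hands back a new handle...
    try:
        size = group.Get_size()
    finally:
        group.Free()           # ...which the user must remember to free by hand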

Gotcha.

But I don't see why this means that you need to know if an MPI handle points to an intrinsic object or not...?

Many things in MPI are LOCAL (datatypes, groups, predefined
operations), and in general destroying them from user space is
guaranteed by MPI not to conflict with system (MPI) space or with
communication (i.e., if you create a derived datatype and use it in
the construction of another derived datatype, you can safely free the
first).
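
A hedged mpi4py sketch of the datatype case just mentioned; only the
standard calls Create_vector, Create_contiguous, Commit, and Free are
assumed:

    from mpi4py import MPI

    # A derived datatype: 3 ints taken with a stride of 2.
    vec = MPI.INT.Create_vector(3, 1, 2)

    # A second derived datatype built on top of the first.
    pair = vec.Create_contiguous(2)
    pair.Commit()

    # MPI guarantees this is safe: datatypes are local objects, and
    # freeing 'vec' does not invalidate 'pair'.
    vec.Free()

    # ... 'pair' can still be used for communication here ...
    pair.Free()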

Well, for all those LOCAL objects I could implement automatic
deallocation of handles for Python (for Comm, Win, and File that is
not so easy, as freeing them is a collective operation AFAIK, and
automatically freeing them can lead to deadlocks).
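
A sketch of that policy, assuming a hypothetical AutoFree wrapper (this
is not mpi4py's actual mechanism, and a real version would also have to
guard against freeing handles after MPI is finalized):

    from mpi4py import MPI

    # Handle kinds whose Free() is a purely local operation in MPI.
    _LOCAL_KINDS = (MPI.Datatype, MPI.Group, MPI.Op)

    class AutoFree:
        """Free a *local* MPI handle when Python garbage-collects the
        wrapper.  Comm, Win, and File are deliberately excluded: their
        free/close calls are collective, and the garbage collector runs
        at different times on different processes, so freeing them
        automatically could deadlock."""

        def __init__(self, handle):
            assert isinstance(handle, _LOCAL_KINDS)
            self.handle = handle
            self._freed = False

        def free(self):
            if not self._freed:
                self.handle.Free()
                self._freed = True

        def __del__(self):
            self.free()

    # Usage: no explicit Free(); the group is released when the wrapper
    # is garbage-collected.
    group = AutoFree(MPI.COMM_WORLD.Get_group())
    size = group.handle.Get_size()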

This is a difficult issue: deadlocks when freeing objects whose destruction is a collective action. It's one of the reasons the Forum decided not to have the C++ bindings automatically free handles when they go out of scope.

My Python wrappers (mpi4py) are intended to be used on any platform
with any MPI implementation. But things are not so easy, as there are
many corner cases in the MPI standard.

Yes, indeed.  :-)

Python is a wonderful, powerful language that is very friendly for
writing things. Proof of that is the many bug reports I have provided
here. By using Python, I can run all my unittest scripts in a single
MPI run, so they have the potential to find interaction problems
between all parts of MPI. If any of you OMPI developers have some
knowledge of Python, I invite you to try mpi4py; you would be able to
write many, many tests very quickly, not only for things that should
work, but also for things that should fail.
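
A hedged sketch of what such a test script might look like. It assumes
only standard mpi4py calls plus the fact that mpi4py installs
MPI.ERRORS_RETURN on COMM_WORLD, so MPI errors surface as MPI.Exception;
the file and test names are made up. Run it once under something like
"mpiexec -n 4 python test_corner_cases.py" and every rank executes every
test:

    import unittest
    from mpi4py import MPI

    class TestCornerCases(unittest.TestCase):

        def test_group_size_matches_comm(self):
            # something that should work on every rank
            group = MPI.COMM_WORLD.Get_group()
            self.assertEqual(group.Get_size(), MPI.COMM_WORLD.Get_size())
            group.Free()

        def test_send_to_invalid_rank_fails(self):
            # something that should fail: the destination rank does not
            # exist, so MPI reports an error and mpi4py raises MPI.Exception
            size = MPI.COMM_WORLD.Get_size()
            self.assertRaises(MPI.Exception,
                              MPI.COMM_WORLD.send, None, dest=size)

    if __name__ == '__main__':
        unittest.main()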

Sorry for the long mail. In short, many things in MPI are not clearly
designed for languages other than C and Fortran. Even in the C++
specification there are things that are unacceptable, like the open
door to the problem of dangling references, which could be avoided at
negligible cost.

Yes and no. As the author of the C++ bindings chapter in MPI-2, I have a pretty good idea why we didn't do this. :-)

1. The reason I cited above: triggering an automatic destructor to invoke the corresponding MPI_*_FREE function when local handles go out of scope is fraught with deadlock.

2. The C++ bindings are just that; they are meant to be building blocks for creating more interesting C++ class libraries (such as OOMPI). They are not intended to be the definitive C++ interface, partly because what makes a good C++ interface is a) an active field of research, b) subjective, and c) potentially dependent on the requirements of the application it is being designed for.

3. It seemed simplest to use some simple, fundamental C++ concepts (namespaces, basic objects) and make the bindings extremely analogous to their C and Fortran counterparts. Otherwise, we would essentially have been designing a whole new interface with different semantics for message passing, which was not deemed appropriate for a standard. The standard is meant to be as simple, straightforward, and cross-language as possible (and look how large it already is! Imagine if we had tried to make a real class library; it would have led to even more corner cases and imprecision in the official standard).

In short, the Forum *strongly* decided against creating a C++ class library for MPI and instead provided building blocks where third parties could do whatever they wanted.

Anyway, all those issues are minor for
me, and the MPI specification is just great. I hope I can find the
time to contribute to the MPI-2.1 effort to better define MPI behavior
in the corner cases (fortunately, there are really only a small number
of them).

Regards,

--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594

--
Jeff Squyres
Cisco Systems

