I have no issues other than wondering why we don't do it in perl, given that we 
already do all non-shell actions in perl - is it necessary to introduce another 
language?


On May 22, 2013, at 5:58 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:

> WHAT: Replace all mpif.h, use mpi, and use mpi_f08 code with Python-generated 
> code
> 
> WHY: there are ***7*** copies of the Fortran bindings; keeping them all in 
> sync when adding a new MPI-3 function (or updating/fixing an old one) is a 
> nightmare
> 
> WHERE: run a python generator script in ompi/mpi/fortran during "make"
> 
> WHEN: sometime in the next few months
> 
> TIMEOUT: discuss next Tuesday at the teleconf, 28 May 2013
> 
> -----
> 
> MORE DETAIL:
> 
> The last iteration of Fortran updates represented a huge leap forward in 
> OMPI's Fortran support.  However, one must remember that Fortran compilers 
> all have different degrees of compliance with the current Fortran standard.  
> Hence, we have a lot of configury, preprocessor macros, and conditional code 
> in the OMPI Fortran bindings code to handle all these differences.  Also, 
> there are entire copies of the Fortran bindings code to handle some 
> differences that are too big for preprocessor macros.
> 
> As such, I count ***7*** copies of different Fortran bindings (not including 
> the PMPI copies/sym links/weak symbols) in the OMPI tree.  This is a freaking 
> nightmare to maintain as one adds new MPI-3 functions, or updates old 
> functions.  For example, we periodically find that one of the 7 copies has a 
> bug in a function prototype, but the other 6 are ok.  Or we added 6 
> interfaces when adding a new MPI-3 function, but forgot to add the 7th.  Ugh!
> 
> Craig has been working on a better system, somewhat modeled on Bill Gropp's 
> Fortran generator system in MPICH.  That is, there's basically a 
> parsable file that breaks down every Fortran interface into its individual 
> parts.  Craig has some python scriptery that reads this file and then 
> generates all the OMPI interfaces and wrapper code for mpif.h, use mpi, and 
> use mpi_f08.
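> 
> To make this concrete, here is a rough sketch of the approach.  Everything 
> below -- the record format, field names, and generated output -- is made up 
> for illustration; Craig's actual file format and scripts will differ.  The 
> idea is that each MPI routine is described exactly once as data, and the 
> generator expands that single description into every binding flavor:
> 
>   # sketch.py -- purely illustrative; not the real OMPI generator
>   SEND = {
>       "name": "MPI_Send",
>       "params": [("buf",      "type(*), dimension(..)"),
>                  ("count",    "integer"),
>                  ("datatype", "type(MPI_Datatype)"),
>                  ("dest",     "integer"),
>                  ("tag",      "integer"),
>                  ("comm",     "type(MPI_Comm)")],
>   }
> 
>   def f08_interface(rec):
>       """Expand one routine record into a (simplified) use-mpi_f08 interface."""
>       args = ", ".join(name for name, _ in rec["params"]) + ", ierror"
>       lines = ["interface " + rec["name"],
>                "  subroutine {}_f08({})".format(rec["name"], args)]
>       lines += ["    {} :: {}".format(ftype, name)
>                 for name, ftype in rec["params"]]
>       lines += ["    integer, optional :: ierror",
>                 "  end subroutine",
>                 "end interface"]
>       return "\n".join(lines)
> 
>   print(f08_interface(SEND))
> 
> The real generator obviously has to deal with intents, choice buffers, 
> profiling variants, and so on; the point is just that each routine is 
> specified in one place.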
> 
> Specifically: the python scripts not only generate fixed interfaces (think 
> of them as "header files"), *they also generate the wrapper code* -- i.e., all 
> the C code that is currently in ompi/mpi/fortran/mpif-h.
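> 
> Again purely as an illustration (the real templates, naming, and argument 
> conversion are whatever Craig's scripts implement), the same hypothetical 
> SEND record from the sketch above can be expanded into an mpif.h-style C 
> wrapper:
> 
>   def mpifh_wrapper(rec):
>       """Emit a (simplified) mpif.h-style C wrapper for one routine.
>       Hypothetical: real wrappers also handle Fortran name mangling,
>       MPI_Fint/handle conversion, choice buffers, error handling, etc."""
>       lname = rec["name"].lower()                    # e.g. "mpi_send"
>       fargs = ", ".join("MPI_Fint *" + name for name, _ in rec["params"])
>       return ("void {0}_({1}, MPI_Fint *ierr)\n"
>               "{{\n"
>               "    /* convert Fortran arguments and call the C binding */\n"
>               "    *ierr = (MPI_Fint) {2}(/* ...converted args... */);\n"
>               "}}\n").format(lname, fargs, rec["name"])
> 
>   print(mpifh_wrapper(SEND))
> 
> Fixing a prototype bug or adding a new MPI-3 routine then means touching one 
> record, not 7 copies.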
> 
> *** Note that the current "use mpi" code is also script-generated during 
> "make" (and has been for years), but it is created by Bourne shell scripts.
> 
> This is a Big Change for (at least) two reasons:
> 
> 1. We'll actually be replacing the mpif.h and use mpi code that has been in 
> our tree forever.  Hence, there will likely be some bugs as we shake all this 
> out.
> 
> 2. We'll be running python code during "make".  I don't think that this is a 
> Big Issue these days, but a few years ago, I remember we had Big Discussions 
> about whether we could run non-sh-based scripts during "make" (i.e., whether 
> we could assume that relevant interpreters were available to run such 
> scripts).  But to be clear: I'm no longer worried about people not having 
> Python available.
> 
> There's no fixed timeline for this yet; Craig is still working on his python 
> scripts.  The intent is that his scripts will actually be the basis for other 
> projects besides Open MPI (e.g., tools that also need 
> Fortran/PMPI-interception capabilities).  But this is such a big change that 
> I wanted to give the community a heads-up and a chance to discuss this 
> before we're ready to bring it back to the trunk.
> 
> Comments / thoughts?
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> 

