WHAT: Revamp the entire MPI Fortran bindings; new "mpifort" wrapper compiler

WHY: Much better mpi module implementation; addition of MPI-3 mpi_f08 module

WHERE: Remove ompi/mpi/f77 and ompi/mpi/f90, replace with ompi/mpi/fortran

TIMEOUT: Teleconf, Tue Apr 17, 2012

====================================================

Highlights:
-----------

1. New mpifort wrapper compiler: you can build mpif.h, "use mpi", and "use 
mpi_f08" applications all through this one wrapper compiler
2. mpif77 and mpif90 still exist, but are now symlinks to mpifort and may be 
removed in a future release
3. The mpi module has been re-implemented and is significantly "mo' bettah"
4. The mpi_f08 module offers many, many improvements over mpif.h and the mpi 
module
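
For example, here's a minimal mpi_f08 program (a sketch, not from the actual 
test suite) showing two of the module's improvements: typed handles such as 
TYPE(MPI_Comm) instead of plain INTEGERs, and ierror arguments that are now 
optional:

    program hello_f08
      use mpi_f08
      implicit none
      type(MPI_Comm) :: comm
      integer :: rank
      call MPI_Init()                 ! ierror is optional in mpi_f08
      comm = MPI_COMM_WORLD
      call MPI_Comm_rank(comm, rank)
      print '(a,i0)', 'Hello from rank ', rank
      call MPI_Finalize()
    end program hello_f08

Compile it with the new wrapper, e.g.: mpifort hello_f08.f90 -o hello_f08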

I will request an SVN "quiet time" to commit this stuff.  It's coming from a 
VERY long-lived Mercurial branch (3 years! see below); it'll almost certainly 
take a few SVN commits and a bunch of testing before I get it correctly 
committed to the SVN trunk.

More details:
-------------

Craig Rasmussen and I have been working with the MPI-3 Fortran WG and Fortran 
J3 committees for a long, long time to make a prototype MPI-3 Fortran bindings 
implementation.  We think we're at a stable enough state to bring this stuff 
back to the trunk, with the goal of including it in OMPI v1.7.  

Special thanks go out to everyone who has been incredibly patient and helpful 
to us in this journey:

- Rolf Rabenseifner/HLRS (mastermind/genius behind the entire MPI-3 Fortran 
effort)
- The Fortran J3 committee
- Tobias Burnus/gfortran
- Tony Goetz/Absoft
- Terry Dontje/Oracle
- ...and probably others whom I'm forgetting :-(

There are still opportunities for optimization in the mpi_f08 implementation, 
but by and large, it is as far along as it can be until Fortran compilers start 
implementing the new F08 dimension(..) syntax.
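
For reference, here's a tiny sketch of what the assumed-rank dimension(..) 
syntax looks like (the procedure name is hypothetical; only the declaration 
style matters):

    ! buf may be a scalar or an array of any rank and any type --
    ! exactly what the MPI choice-buffer interfaces need
    subroutine show_rank(buf)
      type(*), dimension(..) :: buf
      print *, 'rank of actual argument:', rank(buf)
    end subroutine show_rank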

Note that gfortran is currently unsupported for the mpi_f08 module and the new 
mpi module.  gfortran users will (a) fall back to the same mpi module 
implementation that is in OMPI v1.5.x, and (b) not get the new mpi_f08 module.  
The gfortran maintainers are actively working hard to add the necessary 
features to support both the new mpi_f08 module and the new mpi module 
implementations.  This will take some time.

As mentioned above, ompi/mpi/f77 and ompi/mpi/f90 no longer exist.  All the 
Fortran bindings implementations have been collated under ompi/mpi/fortran; 
each implementation has its own subdirectory:

ompi/mpi/fortran/
  base/               - glue code
  mpif-h/             - what used to be ompi/mpi/f77
  use-mpi-tkr/        - what used to be ompi/mpi/f90
  use-mpi-ignore-tkr/ - new mpi module implementation
  use-mpi-f08/        - new mpi_f08 module implementation

There's also a prototype 6-function-MPI implementation under use-mpi-f08-desc 
that emulates the new F08 dimension(..) syntax, which isn't fully available in 
Fortran compilers yet.  We did that to prove to ourselves that it can be done 
once the compilers fully support the syntax.  This directory/implementation 
will likely eventually replace the use-mpi-f08 version.

Other things that were done:

- ompi_info grew a few new output fields to describe what level of Fortran 
support is included
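For example (the exact field names may differ; grepping is just a convenient 
filter for the Fortran-related output):

    shell$ ompi_info | grep -i fortran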

- Existing Fortran examples in examples/ were renamed; new mpi_f08 examples 
were added

- The old Fortran MPI libraries were renamed:
  - libmpi_f77 -> libmpi_mpifh
  - libmpi_f90 -> libmpi_usempi
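  If an application linked the old libraries explicitly, its link line changes 
  accordingly (a sketch; using the mpifort wrapper sidesteps the rename 
  entirely, since the wrapper adds the right libraries itself):

    # before:  gfortran app.o -lmpi_f77 ...
    # after:   gfortran app.o -lmpi_mpifh ...
    mpifort app.o -o app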

- The configury for Fortran was consolidated and significantly slimmed down.  
Note that the F77 environment variable is now IGNORED by configure; you should 
only use FC.  Example:

    ./configure CC=icc CXX=icpc FC=ifort ...

- The https://bitbucket.org/jsquyres/mpi3-fortran branch has got to be one of 
OMPI's longest-running branches.  Its first commit was Tue Apr 07 23:01:46 2009 
-0400 -- in 2 days, it'll be 3 years old.  :-)  We think we've pulled in all 
relevant changes from the OMPI trunk (e.g., Fortran implementations of the new 
MPI-3 MPROBE stuff for mpif.h, use mpi, and use mpi_f08, and the recent Fujitsu 
Fortran patches).

I anticipate some instability when we bring this stuff into the trunk, simply 
because it touches a LOT of code in the MPI layer in the OMPI code base.  We'll 
try our best to make it as pain-free as possible, but please bear with us when 
it is committed.

Thanks!

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
