After spending a few hours pondering this problem, we came to the conclusion that the best approach is to keep what we had before (i.e. the original approach). This means I'll undo my patch in the trunk and not change the behavior in the next releases (1.3 and 1.2.9). This approach, while different from other MPI implementations, is legal from the MPI standard's point of view. Any suggestions on this topic, or about the inconsistent behavior between MPI implementations, should be directed to the MPI Forum Tools group for further evaluation.

The main reason for this is to be nice to tool developers. In the current incarnation, they can catch either the Fortran calls or the C calls. If they provide both, they will have to figure out how to cope with the double calls (as your example highlights).

Here is the behavior Open MPI will stick to:
Fortran MPI  -> C MPI
Fortran PMPI -> C MPI
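
For a tool that does wrap both layers, one way to cope with the resulting double calls is a tool-side re-entry guard. A rough sketch, assuming a simple static flag (not thread-safe, and not something Open MPI provides; the names are illustrative):

#include <stdio.h>
#include "mpi.h"

/* Fortran PMPI entry point provided by the MPI library (Fortran handles are MPI_Fint) */
void pmpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info);

/* set while we are inside our own Fortran wrapper; a plain static keeps the sketch simple */
static int in_fortran_wrapper = 0;

void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info)
{
    printf("Fortran mpi_comm_rank intercepted\n");
    in_fortran_wrapper = 1;
    pmpi_comm_rank_(comm, rank, info);   /* with the layering above, this re-enters MPI_Comm_rank below */
    in_fortran_wrapper = 0;
}

int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    if (!in_fortran_wrapper)             /* only record calls that came straight from C/C++ */
        printf("C MPI_Comm_rank intercepted\n");
    return PMPI_Comm_rank(comm, rank);
}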

  george.

PS: There was another possible approach, which would avoid the double calls while remaining friendly to tool writers. It would do:
    Fortran MPI  -> C MPI
    Fortran PMPI -> C PMPI
                      ^
Unfortunately, we would have to heavily modify every file in the Fortran interface layer to support this approach, and we're too close to a major release to start such time-consuming work.
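
For illustration, the change would amount to making each Fortran PMPI entry point forward to the C PMPI entry point instead of the C MPI one. A minimal sketch of what one such wrapper would look like (the signature and handle conversion are illustrative, not the actual Open MPI internals):

#include "mpi.h"

/* alternative layering: the Fortran PMPI symbol calls the C PMPI routine directly,
   so a C-level tool wrapper is never re-entered */
void pmpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    int crank;
    *ierr = PMPI_Comm_rank(MPI_Comm_f2c(*comm), &crank);
    *rank = (MPI_Fint) crank;
}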


On Dec 5, 2008, at 13:27, Nick Wright wrote:

Brian

Sorry I picked the wrong word there. I guess this is more complicated than I thought it was.

For the first case you describe, as Open MPI is now, the call sequence from Fortran is

mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank

For the second case, as MPICH is now, it's

mpi_comm_rank -> PMPI_Comm_rank

So for the first case, if I have a pure Fortran/C++ code, I have to profile at the C interface.
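
In that first case a single C-level wrapper is enough to catch the Fortran calls as well; a minimal sketch, assuming the Open MPI layering described above:

#include <stdio.h>
#include "mpi.h"

/* C-only interposer: under the Open MPI layering, Fortran mpi_comm_rank is routed
   through this C hook, so one wrapper covers both languages */
int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    int ret = PMPI_Comm_rank(comm, rank);   /* forward to the real implementation */
    printf("MPI_Comm_rank intercepted, rank %d\n", *rank);
    return ret;
}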

So is the patch now retracted?

Nick.

I think you have an incorrect definition of "correctly" :). According to the MPI standard, an MPI implementation is free either to layer the language bindings (and only allow profiling at the lowest layer) or not to layer the language bindings (and require profiling libraries to intercept each language). The only requirement is that the implementation document what it has done. Since everyone is pretty clear on what Open MPI has done, I don't think you can claim Open MPI is doing it "incorrectly". Different from MPICH is not necessarily incorrect. (BTW, LAM/MPI handles profiling the same way as Open MPI.)
Brian
On Fri, 5 Dec 2008, Nick Wright wrote:
Hi Anthony

That will work, yes, but unfortunately it's not portable to other MPIs that do implement the profiling layer correctly.

I guess we will just need to detect that we are using Open MPI when our tool is configured and add some macros to deal with it accordingly. Is there an easy way to do this built into Open MPI?
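
One easy check is to compile a tiny test program at configure time and branch on its output; a minimal sketch, assuming the OPEN_MPI and OMPI_*_VERSION macros that Open MPI's mpi.h defines:

#include <stdio.h>
#include "mpi.h"

int main(void)
{
#if defined(OPEN_MPI)
    /* Open MPI identifies itself in mpi.h */
    printf("Open MPI %d.%d.%d\n",
           OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
#else
    printf("not Open MPI\n");
#endif
    return 0;
}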

Thanks

Nick.

Anthony Chan wrote:
Hope I didn't misunderstand your question. If you implement your profiling library in C, where you do your real instrumentation, you don't need to implement the Fortran layer; you can simply link with the Fortran-to-C MPI wrapper library, -lmpi_f77, i.e.

<OMPI>/bin/mpif77 -o foo foo.f -L<OMPI>/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. If you don't want to intercept the MPI call twice for a Fortran program, you need to implement the Fortran layer. In that case, I would think you can just call the C version of PMPI_xxx directly from your Fortran layer, e.g.

void mpi_comm_rank_(MPI_Fint *comm, int *rank, int *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   /* convert the Fortran communicator handle before calling the C PMPI routine */
   *info = PMPI_Comm_rank(MPI_Comm_f2c(*comm), rank);
}

A.Chan

----- "Nick Wright" <nwri...@sdsc.edu> wrote:

Hi

I am trying to use the PMPI interface with Open MPI to profile a Fortran program.

I have tried 1.2.8 and 1.3rc1 with --enable-mpi-profile switched on.

The problem seems to be that if one intercepts, e.g., the call to mpi_comm_rank_ (the Fortran hook) and then calls pmpi_comm_rank_, this then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran codes at the same time, one ends up intercepting the MPI call twice, which is not desirable and not what should happen (and indeed doesn't happen in other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to avoid this issue, that would be great!

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"

/* prototype for the Fortran PMPI entry point provided by the MPI library */
void pmpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info);

void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  pmpi_comm_rank_(comm, rank, info);
}
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
  printf("MPI_comm_rank call successfully intercepted\n");
  return PMPI_Comm_rank(comm, rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

      program hello
      implicit none
      include 'mpif.h'
      integer ierr
      integer myid, nprocs
      character*24 fdate, host
      call MPI_Init( ierr )
      myid = 0
      call mpi_comm_rank( MPI_COMM_WORLD, myid, ierr )
      call mpi_comm_size( MPI_COMM_WORLD, nprocs, ierr )
      call getenv( 'HOST', host )
      write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
      call mpi_finalize( ierr )
      end


