I think this issue is now resolved; thanks to everybody for your help. I certainly learnt a lot!

For the first case you describe, as Open MPI is now, the call sequence from Fortran is

mpi_comm_rank -> MPI_Comm_rank -> PMPI_Comm_rank

For the second case, as MPICH is now, it's

mpi_comm_rank -> PMPI_Comm_rank
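
To make the two sequences concrete, here is a minimal sketch (illustrative only: the function names and the handle conversion are mine, not code from either implementation) of what the Fortran wrapper for MPI_Comm_rank effectively does in each case:

#include <mpi.h>

/* Open MPI-style layering: the Fortran wrapper calls the C MPI_ entry,
   so a tool that intercepts MPI_Comm_rank also sees Fortran calls. */
void mpi_comm_rank_layered(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    int crank;
    *ierr = MPI_Comm_rank(MPI_Comm_f2c(*comm), &crank);
    *rank = (MPI_Fint) crank;
}

/* MPICH-style: the Fortran wrapper goes straight to the PMPI_ entry,
   so a C MPI_Comm_rank interposer is never entered a second time. */
void mpi_comm_rank_direct(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    int crank;
    *ierr = PMPI_Comm_rank(MPI_Comm_f2c(*comm), &crank);
    *rank = (MPI_Fint) crank;
}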


AFAIK, the Fortran binding layer of every known/popular MPI implementation, including MPICH2 and Open MPI, is implemented on top of the C MPI functions. If MPICH2's Fortran layer were implemented the way you said, typical profiling tools, including MPE, would fail to work with Fortran applications.

e.g. check mpich2-xxx/src/binding/f77/sendf.c.

To answer this specific point, see for example the comment in

src/binding/f77/comm_sizef.c

/* This defines the routine that we call, which must be the PMPI version
   since we're renameing the Fortran entry as the pmpi version */

and the workings of the definition in MPICH

#ifndef MPICH_MPI_FROM_PMPI

This is what makes MPICH's behaviour different from Open MPI's in this matter.
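
Roughly, the effect that comment and guard describe can be sketched like this (illustrative only, not the actual MPICH source; the wrapper name and handle conversion are simplified):

#include <mpi.h>

#ifndef MPICH_MPI_FROM_PMPI
/* When this guard is not defined, the routine the Fortran wrapper calls
   is redirected to the PMPI_ name, so a tool's C interposer is not
   entered a second time for Fortran calls. */
#define MPI_Comm_size PMPI_Comm_size
#endif

void mpi_comm_size_(MPI_Fint *comm, MPI_Fint *size, MPI_Fint *ierr)
{
    int csize;
    /* With the redefinition above, this resolves to PMPI_Comm_size. */
    *ierr = MPI_Comm_size(MPI_Comm_f2c(*comm), &csize);
    *size = (MPI_Fint) csize;
}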

Regards, Nick.

A.Chan

So, for the first case, if I have a pure Fortran/C++ code, I have to profile at the C interface.

So is the patch now retracted?

Nick.

I think you have an incorrect definition of "correctly" :). According to the MPI standard, an MPI implementation is free either to layer the language bindings (and only allow profiling at the lowest layer) or not to layer the language bindings (and require profiling libraries to intercept each language). The only requirement is that the implementation document what it has done.

Since everyone is pretty clear on what Open MPI has done, I don't think you can claim Open MPI is doing it "incorrectly".  Different from MPICH is not necessarily incorrect.  (BTW, LAM/MPI handles profiling the same way as Open MPI).

Brian

On Fri, 5 Dec 2008, Nick Wright wrote:

Hi Anthony

That will work, yes, but unfortunately it's not portable to other MPIs that do implement the profiling layer correctly.

I guess we will just need to detect that we are using Open MPI when our tool is configured and add some macros to deal with that accordingly. Is there an easy way to do this built into Open MPI?

Thanks

Nick.
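
(A minimal sketch of that kind of compile-time detection, assuming the OPEN_MPI macro that Open MPI's mpi.h provides and the usual trailing-underscore Fortran name mangling, both of which would need to be confirmed when the tool is configured:)

#include <stdio.h>
#include <mpi.h>

/* The C interposer is always needed. */
int MPI_Comm_rank(MPI_Comm comm, int *rank)
{
    int ret = PMPI_Comm_rank(comm, rank);
    printf("MPI_Comm_rank intercepted (rank %d)\n", *rank);
    return ret;
}

/* Only build a separate Fortran wrapper when the MPI does NOT layer its
   Fortran bindings on top of the C ones; OPEN_MPI is defined by Open
   MPI's mpi.h, so keying off it avoids double interception there. */
#ifndef OPEN_MPI
void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *ierr)
{
    int crank;
    *ierr = PMPI_Comm_rank(MPI_Comm_f2c(*comm), &crank);
    *rank = (MPI_Fint) crank;
    printf("mpi_comm_rank (Fortran) intercepted (rank %d)\n", crank);
}
#endif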

Anthony Chan wrote:
Hope I didn't misunderstand your question.  If you implement your profiling library in C, where you do your real instrumentation, you don't need to implement the Fortran layer; you can simply link with the Fortran-to-C MPI wrapper library -lmpi_f77, i.e.

<OMPI>/bin/mpif77 -o foo foo.f -L<OMPI>/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. If you don't want to intercept the MPI call twice for a Fortran program, you need to implement the Fortran layer.  In that case, I would think you can just call the C version of PMPI_xxx directly from your Fortran layer, e.g.

void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info) {
    int crank;
    printf("mpi_comm_rank call successfully intercepted\n");
    /* Convert the Fortran handle before calling the C PMPI routine. */
    *info = PMPI_Comm_rank(MPI_Comm_f2c(*comm), &crank);
    *rank = (MPI_Fint) crank;
}

A.Chan

----- "Nick Wright" <nwri...@sdsc.edu> wrote:

Hi

I am trying to use the PMPI interface with Open MPI to profile a Fortran program.

I have tried with 1.2.8 and 1.3rc1 with --enable-mpi-profile switched on.

The problem seems to be that if one, e.g., intercepts the call to mpi_comm_rank_ (the Fortran hook) and then calls pmpi_comm_rank_, this then calls MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should. So if one wants to create a library that can profile C and Fortran codes at the same time, one ends up intercepting the MPI call twice, which is not desirable and not what should happen (and indeed doesn't happen in other MPI implementations).

A simple example to illustrate is below. If somebody knows of a fix to avoid this issue, that would be great!

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"

/* Fortran PMPI entry point (the name mangling may differ between compilers). */
void pmpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info);

void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info) {
   printf("mpi_comm_rank call successfully intercepted\n");
   pmpi_comm_rank_(comm, rank, info);
}

int MPI_Comm_rank(MPI_Comm comm, int *rank) {
   printf("MPI_comm_rank call successfully intercepted\n");
   return PMPI_Comm_rank(comm, rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

       program hello
        implicit none
        include 'mpif.h'
        integer ierr
        integer myid,nprocs
        character*24 fdate,host
        call MPI_Init( ierr )
       myid=0
       call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr )
       call mpi_comm_size(MPI_COMM_WORLD , nprocs, ierr )
       call getenv('HOST',host)
       write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
       call mpi_finalize(ierr)
       end


