FWIW, if that one-liner fix works (George and I just chatted about this on the phone), we can probably also push it into v1.2.9.

On Dec 5, 2008, at 10:49 AM, George Bosilca wrote:

Nick,

Thanks for noticing this. It's unbelievable that nobody noticed that over the last 5 years. Anyway, I think we have a one-line fix for this problem. I'll test it ASAP, and then push it into 1.3.

 Thanks,
   george.

On Dec 5, 2008, at 10:14, Nick Wright wrote:

Hi Antony

That will work, yes, but unfortunately it's not portable to other MPIs that do implement the profiling layer correctly.

I guess we will just need to detect that we are using Open MPI when our tool is configured and add some macros to deal with that accordingly. Is there an easy way to do this built into Open MPI?

Thanks

Nick.
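
(On the configure-time detection question above: just a sketch, not an official Open MPI recipe. As far as I know, Open MPI's mpi.h defines OPEN_MPI along with OMPI_MAJOR_VERSION / OMPI_MINOR_VERSION / OMPI_RELEASE_VERSION, so a tiny test program compiled with the target mpicc can decide whether to enable the workaround macros. The file name conftest_ompi.c is only an example.)

/* conftest_ompi.c: compile with the mpicc the tool will be built with, e.g.
 *   mpicc -o conftest_ompi conftest_ompi.c && ./conftest_ompi
 * Exits 0 and prints the version if the underlying MPI looks like Open MPI. */
#include <stdio.h>
#include "mpi.h"

int main(void)
{
#if defined(OPEN_MPI)
    printf("Open MPI %d.%d.%d detected\n",
           OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
    return 0;
#else
    printf("not Open MPI\n");
    return 1;
#endif
}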

Anthony Chan wrote:
Hope I didn't misunderstand your question. If you implement
your profiling library in C, where you do your real instrumentation,
you don't need to implement the Fortran layer; you can simply link
with the Fortran-to-C MPI wrapper library, -lmpi_f77, i.e.

<OMPI>/bin/mpif77 -o foo foo.f -L<OMPI>/lib -lmpi_f77 -lYourProfClib

where libYourProfClib.a is your profiling tool written in C. If you don't
want to intercept the MPI call twice for Fortran programs, you need to
implement the Fortran layer. In that case, I would think you can just call
the C version of PMPI_xxx directly from your Fortran layer, e.g.
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
  printf("mpi_comm_rank call successfully intercepted\n");
  /* PMPI_Comm_rank takes an MPI_Comm by value, so dereference the pointer */
  *info = PMPI_Comm_rank(*comm, rank);
}
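
(A hedged aside on the snippet above: strictly speaking the Fortran wrapper receives Fortran handles, i.e. MPI_Fint, by reference, so a more portable variant converts the communicator with MPI_Comm_f2c before calling the C PMPI routine. The trailing-underscore symbol name is assumed to match the compiler's Fortran name mangling.)

#include <stdio.h>
#include "mpi.h"

/* Portable-ish variant: take MPI_Fint arguments and convert the Fortran
   communicator handle to a C handle before calling PMPI_Comm_rank. */
void mpi_comm_rank_(MPI_Fint *comm, MPI_Fint *rank, MPI_Fint *info)
{
    int c_rank;
    printf("mpi_comm_rank call successfully intercepted\n");
    *info = (MPI_Fint) PMPI_Comm_rank(MPI_Comm_f2c(*comm), &c_rank);
    *rank = (MPI_Fint) c_rank;
}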
A.Chan
----- "Nick Wright" <nwri...@sdsc.edu> wrote:
Hi

I am trying to use the PMPI interface with Open MPI to profile a
Fortran program.

I have tried 1.2.8 and 1.3rc1 with --enable-mpi-profile switched on.

The problem seems to be that if one intercepts the call to mpi_comm_rank_
(the Fortran hook) and then calls pmpi_comm_rank_, this in turn calls
MPI_Comm_rank (the C hook), not PMPI_Comm_rank as it should.

So if one wants to create a library that can profile C and Fortran codes
at the same time, one ends up intercepting the MPI call twice, which is
not desirable and not what should happen (and indeed doesn't happen in
other MPI implementations).

A simple example to illustrate this is below. If somebody knows of a fix
to avoid this issue, that would be great!

Thanks

Nick.

pmpi_test.c: mpicc pmpi_test.c -c

#include <stdio.h>
#include "mpi.h"

/* Forward declaration of the Fortran PMPI entry point we chain to. */
void pmpi_comm_rank_(MPI_Comm *comm, int *rank, int *info);

/* Fortran hook */
void mpi_comm_rank_(MPI_Comm *comm, int *rank, int *info) {
 printf("mpi_comm_rank call successfully intercepted\n");
 pmpi_comm_rank_(comm, rank, info);
}

/* C hook */
int MPI_Comm_rank(MPI_Comm comm, int *rank) {
 printf("MPI_comm_rank call successfully intercepted\n");
 return PMPI_Comm_rank(comm, rank);
}

hello_mpi.f: mpif77 hello_mpi.f pmpi_test.o

      program hello
      implicit none
      include 'mpif.h'
      integer ierr
      integer myid, nprocs
      character*24 fdate, host
      call MPI_Init(ierr)
      myid = 0
      call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr)
      call mpi_comm_size(MPI_COMM_WORLD, nprocs, ierr)
      call getenv('HOST', host)
      write (*,*) 'Hello World from proc',myid,' out of',nprocs,host
      call mpi_finalize(ierr)
      end




_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users


--
Jeff Squyres
Cisco Systems
