Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE / MPI_ALLREDUCE

2009-08-04 Thread Ricardo Fonseca

Hi Jeff

This is a Mac OS X (10.5.7) specific issue that occurs for all  
versions > 1.2.9 that I've tested (1.3.0 through the 1.4 nightly),  
regardless of which Fortran compiler is used (ifort / g95 / gfortran).  
I've been able to replicate the issue on other OS X machines, and I  
am sure that I am using the correct headers / libraries. Version 1.2.9  
works correctly. Here are some system details:


$ uname -a
Darwin zamblap.epp.ist.utl.pt 9.7.0 Darwin Kernel Version 9.7.0: Tue  
Mar 31 22:52:17 PDT 2009; root:xnu-1228.12.14~1/RELEASE_I386 i386


$ gcc --version
i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5493)

$ ld -v
@(#)PROGRAM:ld  PROJECT:ld64-85.2.1

This might again be a Mac OS X specific libtool issue. If you look  
at the name list (nm) of the generated .dylib libraries for 1.3.3 you get:


$ nm /opt/openmpi/1.3.3-g95-32/lib/*.dylib | grep -i in_place
000a4d30 S _MPI_FORTRAN_IN_PLACE
000a4d34 S _mpi_fortran_in_place
000a4d38 S _mpi_fortran_in_place_
000a4d3c S _mpi_fortran_in_place__
000a4d30 S _MPI_FORTRAN_IN_PLACE
000a4d34 S _mpi_fortran_in_place
000a4d38 S _mpi_fortran_in_place_
000a4d3c S _mpi_fortran_in_place__
7328 S __ZN3MPI8IN_PLACEE
7328 S __ZN3MPI8IN_PLACEE
 U _mpi_fortran_in_place__
 U _mpi_fortran_in_place__
00036eea D _orte_snapc_base_store_in_place
00036eea D _orte_snapc_base_store_in_place

But for 1.2.9 you get:

$ nm /opt/openmpi/1.2.9-g95-32/lib/*.dylib | grep -i in_place
00093950 S _MPI_FORTRAN_IN_PLACE
00093954 S _mpi_fortran_in_place
00093958 S _mpi_fortran_in_place_
0009395c S _mpi_fortran_in_place__
00093950 S _MPI_FORTRAN_IN_PLACE
00093954 S _mpi_fortran_in_place
00093958 S _mpi_fortran_in_place_
0009395c S _mpi_fortran_in_place__
e00c D __ZN3MPI8IN_PLACEE
e00c D __ZN3MPI8IN_PLACEE
 U _mpi_fortran_in_place__
 U _mpi_fortran_in_place__

So the __ZN3MPI8IN_PLACEE symbol (which demangles to MPI::IN_PLACE,  
and which I guess backs the Fortran MPI_IN_PLACE constant) is defined  
incorrectly in 1.3.3 as S (a symbol in a section other than text,  
data, or bss), while it should be defined as D (a data-section  
symbol) belonging to an "external" common block, as happens in 1.2.9.  
So when linking against 1.3.3, the MPI_IN_PLACE constant never gets  
the same address as any of the mpi_fortran_in_place variables; it  
gets its own address instead.
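
For reference, the F77 glue layer recognizes MPI_IN_PLACE by comparing
the address the Fortran caller passes against the addresses of exactly
these common-block variables. A minimal sketch of that scheme,
paraphrased from the reduce_f.c / OMPI_F2C_IN_PLACE machinery quoted
further down in this thread (not the literal Open MPI source):

  /* Each compiler name-mangling variant of the Fortran common block
     gets its own C symbol; the wrapper accepts any of their addresses. */
  extern int MPI_FORTRAN_IN_PLACE;     /* upper case, no underscore  */
  extern int mpi_fortran_in_place;     /* lower case, no underscore  */
  extern int mpi_fortran_in_place_;    /* one trailing underscore    */
  extern int mpi_fortran_in_place__;   /* two trailing underscores   */

  #define IS_FORTRAN_IN_PLACE(addr)                  \
      ((addr) == (char *) &MPI_FORTRAN_IN_PLACE ||   \
       (addr) == (char *) &mpi_fortran_in_place ||   \
       (addr) == (char *) &mpi_fortran_in_place_ ||  \
       (addr) == (char *) &mpi_fortran_in_place__)

  /* in mpi_reduce_f() and friends:
     sendbuf = IS_FORTRAN_IN_PLACE(sendbuf) ? (char *) MPI_IN_PLACE
                                            : sendbuf;               */

If the linker gives the application its own copy of that common block
(the S vs. D problem above), the address the Fortran caller passes can
never match any of the four library-side addresses, which is exactly
what the printouts quoted below show. For what it's worth, this is the
behavior controlled by the Mac OS X linker's -commons option
(ignore_dylibs, the default, vs. use_dylibs; see ld(1)).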


Thanks again for your help,
Ricardo

---
Prof. Ricardo Fonseca

GoLP - Grupo de Lasers e Plasmas
Instituto de Plasmas e Fusão Nuclear
Instituto Superior Técnico
Av. Rovisco Pais
1049-001 Lisboa
Portugal

tel: +351 21 8419202
fax: +351 21 8464455
web: http://cfp.ist.utl.pt/golp/

On Aug 1, 2009, at 17:00, users-requ...@open-mpi.org wrote:


Message: 2
Date: Sat, 1 Aug 2009 07:44:47 -0400
From: Jeff Squyres 
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran
    with MPI_REDUCE / MPI_ALLREDUCE
To: Open MPI Users 
Message-ID: 
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

Hmm.  FWIW, I'm unable to replicate your error.  I tried with the OMPI
SVN trunk and a build of the OMPI 1.3.3 tarball using the GNU compiler
suite on RHEL4U5.

I've even compiled your sample code with "mpif90" using the "use mpi"
statement -- I did not get an unclassifiable statement.  What version
of Open MPI are you using?  Please send the info listed here:

http://www.open-mpi.org/community/help/

Can you confirm that you're not accidentally mixing and matching
multiple versions of Open MPI?




Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE / MPI_ALLREDUCE

2009-08-01 Thread Jeff Squyres
Hmm.  FWIW, I'm unable to replicate your error.  I tried with the OMPI  
SVN trunk and a build of the OMPI 1.3.3 tarball using the GNU compiler  
suite on RHEL4U5.


I've even compiled your sample code with "mpif90" using the "use mpi"  
statement -- I did not get an unclassifiable statement.  What version  
of Open MPI are you using?  Please send the info listed here:


http://www.open-mpi.org/community/help/

Can you confirm that you're not accidentally mixing and matching  
multiple versions of Open MPI?




On Jul 30, 2009, at 10:41 AM, Ricardo Fonseca wrote:


(I just realized I had the wrong subject line, here it goes again)

Hi Jeff

Yes, I am using the right one. I've installed the freshly compiled  
openmpi into /opt/openmpi/1.3.3-g95-32. If I edit the mpif.h file by  
hand and put "error!" in the first line I get:


zamblap:sandbox zamb$ edit /opt/openmpi/1.3.3-g95-32/include/mpif.h

zamblap:sandbox zamb$ mpif77 inplace_test.f90

In file mpif.h:1

   Included at inplace_test.f90:7

error!

1

Error: Unclassifiable statement at (1)

(btw, if I use the F90 bindings instead I get a similar problem,  
except the address for the MPI_IN_PLACE fortran constant is slightly  
different from the F77 binding, i.e. instead of 0x50920 I get 0x508e0)


Thanks for your help,

Ricardo


On Jul 29, 2009, at 17:00, users-requ...@open-mpi.org wrote:


Message: 2
Date: Wed, 29 Jul 2009 07:54:38 -0500
From: Jeff Squyres 
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran
with MPI_REDUCE / MPI_ALLREDUCE
To: "Open MPI Users" 
Message-ID: <986510b6-7103-4d7b-b7d6-9d8afdc19...@cisco.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed;  
delsp=yes


Can you confirm that you're using the right mpif.h?

Keep in mind that each MPI implementation's mpif.h is different --
it's a common mistake to assume that the mpif.h from one MPI
implementation will work with another (e.g., someone copies mpif.h
from one MPI into your software's source tree, so the compiler
always finds that one instead of the MPI-implementation-provided
mpif.h).
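
A quick sanity check for this kind of mix-up -- a sketch assuming
nothing beyond MPI-1 calls, not something from this thread -- is to
compare the version the header claims against what the linked library
reports at run time:

  /* mismatch_check.c -- build with the same wrapper your app uses:
   *   mpicc mismatch_check.c -o mismatch_check
   *   mpirun -np 1 ./mismatch_check
   * If the header and library disagree on the MPI standard version,
   * you are almost certainly mixing installations. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int major, minor;

      MPI_Init(&argc, &argv);
      MPI_Get_version(&major, &minor);      /* reported by the library */
      printf("mpi.h says MPI %d.%d, library says MPI %d.%d\n",
             MPI_VERSION, MPI_SUBVERSION, major, minor);
      MPI_Finalize();
      return 0;
  }

This only catches gross mismatches (it compares MPI standard versions,
not Open MPI release numbers); for Open MPI specifically, ompi_info and
the wrapper compilers' --showme option print the exact paths and flags
being used.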


On Jul 28, 2009, at 1:17 PM, Ricardo Fonseca wrote:


Hi George

I did some extra digging and found that (for some reason) the
MPI_IN_PLACE parameter is not being recognized as such by
mpi_reduce_f (reduce_f.c:61). I added a couple of printfs:

   printf(" sendbuf = %p \n", sendbuf );

   printf(" MPI_FORTRAN_IN_PLACE = %p \n", &MPI_FORTRAN_IN_PLACE );
   printf(" mpi_fortran_in_place = %p \n", &mpi_fortran_in_place );
   printf(" mpi_fortran_in_place_ = %p \n",  
&mpi_fortran_in_place_ );

   printf(" mpi_fortran_in_place__ = %p \n",
&mpi_fortran_in_place__ );

And this is what I get on node 0:

sendbuf = 0x50920
MPI_FORTRAN_IN_PLACE = 0x17cd30
mpi_fortran_in_place = 0x17cd34
mpi_fortran_in_place_ = 0x17cd38
mpi_fortran_in_place__ = 0x17cd3c

This makes OMPI_F2C_IN_PLACE(sendbuf) fail. If I replace the line:

sendbuf = OMPI_F2C_IN_PLACE(sendbuf);

with:

   /* hard-coded check: 0x50920 is the sendbuf address printed above */
   if ( sendbuf == (char *) 0x50920 ) {
     printf("sendbuf is MPI_IN_PLACE!\n");
     sendbuf = (char *) MPI_IN_PLACE;
   }

Then the code works and gives the correct result:

sendbuf is MPI_IN_PLACE!
Result:
3. 3. 3. 3.

So my guess is that somehow the Fortran MPI_IN_PLACE constant is
getting the wrong address. Could this be related to the Fortran
compilers I'm using (ifort / g95)?

Ricardo







--
Jeff Squyres
jsquy...@cisco.com


