Greetings David.

I think we should have a more explicit note about MPI_REAL16 support in the README.

This issue has come up before; see https://svn.open-mpi.org/trac/ompi/ticket/1603 . If you read through that ticket, you'll see that I was unable to find a C type equivalent to REAL*16 with the Intel compilers, which is what blocked us from making that work. :-\ That said, I haven't tried the test codes on that ticket with the Intel 11.0 compilers to see what would happen (the last tests were with 10.something). It *seems* to be a compiler issue, but I confess it never became a high enough priority for us to follow through and figure it out completely.

If you have an Intel support contract, you might want to take some of the final observations on #1603 (e.g., the test codes I put near the end) and see what Intel has to say about it. Perhaps we're doing something wrong...?
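For anyone who can't pull up the ticket, the kind of check involved is roughly the following (a minimal sketch, *not* the actual test codes from #1603; it just asks the Fortran compiler whether its REAL*16 kind matches the companion C compiler's long double via the F2003 ISO_C_BINDING constants):

      program kind_check
      use iso_c_binding, only : c_long_double
      implicit none
      integer, parameter :: r16 = selected_real_kind(33, 4931)
c     If these two kind values differ (or C_LONG_DOUBLE is negative),
c     the Fortran compiler does not see a C type that matches REAL*16.
      print *, 'REAL*16 kind       = ', r16
      print *, 'C long double kind = ', c_long_double
      end

Without a matching C type, Open MPI's C-based reduction code has nothing it can safely operate on for MPI_REAL16, which is roughly the wall we hit.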

I hate to pass the buck here, but I unfortunately have a whole pile of higher-priority items that I need to work on...



On Jun 19, 2009, at 1:32 PM, David Robertson wrote:

Hi all,

I have compiled Open MPI 1.3.2 with Intel Fortran and C/C++ 11.0
compilers. Fortran Real*16 seems to be working except for MPI_Allreduce.
I have attached a simple program to show what I mean. I am not an MPI
programmer myself, but I work for one, and he wrote the attached
program. The program sets a variable to 1 on every process and then sums it
across processes.
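For readers without the attachment, the MPI_Allreduce part of it boils down to something like the sketch below (a paraphrase, not the exact contents of quad_test.F; the names and the kind selection are placeholders):

      program quad_sketch
      implicit none
      include 'mpif.h'
      integer, parameter :: r16 = selected_real_kind(33, 4931)
      real(kind=r16) :: myval, allsum
      integer :: ierr, nprocs, rank

      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      myval = 1.0_r16

c     Sum the per-rank values; with REAL*16 this is the call that
c     comes back as 1.0 instead of the expected number of processes.
      call MPI_ALLREDUCE(myval, allsum, 1, MPI_REAL16, MPI_SUM,
     &                   MPI_COMM_WORLD, ierr)

      if (rank .eq. 0) then
         print *, 'Number of Nodes = ', nprocs
         print *, 'ALLREDUCE sum   = ', allsum
      end if

      call MPI_FINALIZE(ierr)
      end

The ALLGATHER and ISEND/IRECV sums in the output below are presumably built the same way, gathering the per-rank values and adding them up by hand.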

Running with real*8 (comment out the #define REAL16 line in quad_test.F)
produces the expected results:

  Number of Nodes =            4

  ALLREDUCE sum   =    4.00000000000000
  ALLGATHER sum   =    4.00000000000000
  ISEND/IRECV sum =    4.00000000000000

  Node =            0   Value =    1.00000000000000
  Node =            2   Value =    1.00000000000000
  Node =            3   Value =    1.00000000000000
  Node =            1   Value =    1.00000000000000

Running with real*16 produces the following:

  Number of Nodes =            4

  ALLREDUCE sum   =    1.00000000000000000000000000000000
  ALLGATHER sum   =    4.00000000000000000000000000000000
  ISEND/IRECV sum =    4.00000000000000000000000000000000
  Node =            0   Value =    1.00000000000000000000000000000000
  Node =            1   Value =    1.00000000000000000000000000000000
  Node =            2   Value =    1.00000000000000000000000000000000
  Node =            3   Value =    1.00000000000000000000000000000000

As I mentioned, I'm not a parallel programmer, but I would expect
similar results from identical operations on real*8 and real*16 variables.

NOTE: I get the same behavior with MPICH and MPICH2.



--
Jeff Squyres
Cisco Systems
