Out of curiosity: if both systems are Intel, then why are you enabling hetero? 
You don’t need it in that scenario: --enable-heterogeneous only matters when 
ranks run on hosts with different data representations (e.g. different endianness).

Admittedly, we do need to fix the bug - just trying to understand why you are 
configuring that way.


> On Feb 10, 2016, at 8:46 PM, Michael Rezny <michael.re...@monash.edu> wrote:
> 
> Hi Gilles,
> I can confirm that with a fresh download and build from source of OpenMPI 
> 1.10.2 configured with --enable-heterogeneous, 
> the unpacked ints come back in the wrong byte order.
> 
> However, without --enable-heterogeneous, the unpacked ints are correct.
> 
> So, this problem still exists in heterogeneous builds with OpenMPI version 
> 1.10.2.
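> 
> For reference, the two builds were configured along these lines (the install 
> prefixes here are illustrative, not the actual paths used):
> 
>   # build that reproduces the bug
>   ./configure --prefix=$HOME/ompi-1.10.2-hetero --enable-heterogeneous
>   make all install
> 
>   # build that unpacks correctly
>   ./configure --prefix=$HOME/ompi-1.10.2
>   make all install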
> 
> kindest regards
> Mike
> 
> On 11 February 2016 at 14:48, Gilles Gouaillardet 
> <gilles.gouaillar...@gmail.com> wrote:
> Michael,
> 
> do your two systems have the same endianness?
> 
> do you know how openmpi was configured on both systems?
> (is --enable-heterogeneous enabled or disabled on both systems?)
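> 
> btw, on an existing install, something like
>   ompi_info | grep -i hetero
> should tell you whether heterogeneous support was built in.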
> 
> fwiw, openmpi 1.6.5 is old now and no longer maintained.
> I strongly encourage you to use openmpi 1.10.2
> 
> Cheers,
> 
> Gilles
> 
> On Thursday, February 11, 2016, Michael Rezny <michael.re...@monash.edu> wrote:
> Hi,
> I am running Ubuntu 14.04 LTS with OpenMPI 1.6.5 and gcc 4.8.4
> 
> In a single-rank program which just packs and unpacks two ints using 
> MPI_Pack_external and MPI_Unpack_external, 
> the unpacked ints come back in the wrong byte order.
> 
> However, on an HPC system (not Ubuntu), using OpenMPI 1.6.5 and gcc 4.8.4, 
> the unpacked ints are correct.
> 
> Is it possible to get some assistance to track down what is going on?
> 
> Here is the output from the program:
> 
>  ~/tests/mpi/Pack test1
> send data 000004d2 0000162e 
> MPI_Pack_external: 0
> buffer size: 8
> MPI_unpack_external: 0
> recv data d2040000 2e160000 
> 
> And here is the source code:
> 
> #include <stdio.h>
> #include <mpi.h>
> 
> int main(int argc, char *argv[]) {
>   int numRanks, myRank, error;
> 
>   int send_data[2] = {1234, 5678};
>   int recv_data[2];
> 
>   /* scratch buffer for the packed (external32) representation */
>   MPI_Aint buffer_size = 1000;
>   char buffer[buffer_size];
> 
>   MPI_Init(&argc, &argv);
>   MPI_Comm_size(MPI_COMM_WORLD, &numRanks);
>   MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
> 
>   printf("send data %08x %08x \n", send_data[0], send_data[1]);
> 
>   /* pack the two ints into the big-endian external32 format */
>   MPI_Aint position = 0;
>   error = MPI_Pack_external("external32", (void*) send_data, 2, MPI_INT,
>           buffer, buffer_size, &position);
>   printf("MPI_Pack_external: %d\n", error);
> 
>   printf("buffer size: %d\n", (int) position);
> 
>   /* unpack back into the native representation; on a little-endian
>      host this should byte-swap again */
>   position = 0;
>   error = MPI_Unpack_external("external32", buffer, buffer_size, &position,
>           recv_data, 2, MPI_INT);
>   printf("MPI_unpack_external: %d\n", error);
> 
>   printf("recv data %08x %08x \n", recv_data[0], recv_data[1]);
> 
>   MPI_Finalize();
> 
>   return 0;
> }
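> 
> Since external32 is defined to be big-endian, on this little-endian Intel 
> machine the packed buffer should contain 00 00 04 d2 00 00 16 2e, and 
> MPI_Unpack_external should swap that back to native order. To see on which 
> side the byte swap is being dropped, one could dump the packed bytes right 
> after the pack call (a diagnostic sketch, not part of the original test):
> 
>   /* diagnostic only: external32 data should appear big-endian here
>      regardless of the host's native byte order */
>   for (MPI_Aint i = 0; i < position; i++)
>     printf("%02x ", (unsigned char) buffer[i]);
>   printf("\n");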
> 
