>     MPI_Exscan(&b, &offset, 1, MPI_INT, MPI_SUM,MPI_COMM_WORLD);

The file displacement you pass to MPI_File_set_view (disp) is of type
MPI_Offset, an 8-byte integer, while the variables b and offset are of
type int, a 4-byte integer.

Please try changing the data types of b and offset to MPI_Offset and use
    MPI_Exscan(&b, &offset, 1, MPI_OFFSET, MPI_SUM, MPI_COMM_WORLD);
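
For example, something like this (an untested sketch, assuming offset is
declared as MPI_Offset as well; disp, me, and BUFSIZE are as in your code
quoted below, and printing an MPI_Offset with printf then needs a cast):

    MPI_Offset b, offset;       /* 8-byte integers, matching MPI_OFFSET */

    offset = 0;
    b = BUFSIZE;
    /* exclusive prefix sum of the per-rank element counts */
    MPI_Exscan(&b, &offset, 1, MPI_OFFSET, MPI_SUM, MPI_COMM_WORLD);
    disp = offset * (MPI_Offset) sizeof(int);   /* byte displacement for the file view */
    printf("PE%2.2i: write at offset = %lld\n", me, (long long) offset);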


Wei-keng

On Jul 8, 2013, at 10:58 AM, Yves Revaz wrote:

> Dear List,
> 
> I'm facing a problem with MPI-IO on PVFS.
> I'm using orangefs-2.8.7 on 6 server nodes.
> 
> I recently tried to play with MPI-IO, since PVFS is designed to
> support parallel access to a single file.
> 
> I used the simple code below, in which each process opens the same
> file and writes at a different position in it.
> 
> I compiled it with mpich2, since PVFS seems to support the parallel
> access facilities only with this MPI implementation. Is that right?
> 
> When running my test code on a classical file system, the code works 
> perfectly and I get the following output:
> 
> mpirun -n 4 ./a.out  
> PE03: write at offset = 30
> PE03: write count = 10
> PE00: write at offset = 0
> PE00: write count = 10
> PE01: write at offset = 10
> PE01: write count = 10
> PE02: write at offset = 20
> PE02: write count = 10
> 
> As expected, a file "testfile" is created.
> However, the same code accessing my PVFS file system gives:
> 
> mpirun -n 4 ./a.out 
> PE00: write at offset = 0
> PE00: write count = -68457862
> PE02: write at offset = 20
> PE02: write count = 32951150
> PE01: write at offset = 10
> PE01: write count = -110085322
> PE03: write at offset = 30
> PE03: write count = -268114218
> 
> and no file "testfile" is created.
> 
> Am I doing something wrong? Do I need to compile orangefs or mpich2
> with particular options?
> 
> Thanks a lot for your help,
> 
> yves
> 
> 
> 
> My simple code:
> ---------------------
> 
> #include "mpi.h"
> #include <stdio.h>
> #define BUFSIZE 10
> 
> int main(int argc, char *argv[])
> {
>     int i,  me, buf[BUFSIZE];
>     int b, offset, count;
>     MPI_File myfile;
>     MPI_Offset disp;
>     MPI_Status stat;
> 
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &me);
>     for (i=0; i<BUFSIZE; i++)
>         buf[i] = me*BUFSIZE + i;
> 
>     MPI_File_open(MPI_COMM_WORLD, "testfile",
>                   MPI_MODE_WRONLY | MPI_MODE_CREATE,
>                   MPI_INFO_NULL, &myfile);
> 
>     offset = 0;
>     b = BUFSIZE;
>     MPI_Exscan(&b, &offset, 1, MPI_INT, MPI_SUM,MPI_COMM_WORLD);
>     disp = offset*sizeof(int);
>     printf("PE%2.2i: write at offset = %d\n", me, offset);
> 
>     MPI_File_set_view(myfile, disp,
>                       MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
>     MPI_File_write(myfile, buf, BUFSIZE, MPI_INT,
>                    &stat);
>     MPI_Get_count(&stat, MPI_INT, &count);
>     printf("PE%2.2i: write count = %d\n", me, count);
>     MPI_File_close(&myfile);
> 
>     MPI_Finalize();
> 
>     return 0;
> }
> 
> -- 
> ---------------------------------------------------------------------
>   Dr. Yves Revaz
>   Laboratory of Astrophysics
>   Ecole Polytechnique Fédérale de Lausanne (EPFL)
>   Observatoire de Sauverny     Tel : +41 22 379 24 28
>   51. Ch. des Maillettes       Fax : +41 22 379 22 05
>   1290 Sauverny             e-mail : [email protected]
>   SWITZERLAND                  Web : http://people.epfl.ch/yves.revaz
> ---------------------------------------------------------------------
> 


_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
