Hi,

While doing some I/O visualization we noticed that the attached program
(MPI-IO.c) produces an I/O error under certain circumstances. The error
messages printed by the program (MPI-IO.out) and by pvfs2-server
(pvfs2-server.out) are also attached.

The program writes and reads data using combinations of (non-)collective
and (non-)contiguous I/O. The error seems to occur only with
non-collective, contiguous I/O (level 0) and multiple processes; fewer
processes and the other levels work just fine. (The number of iterations
also seems to play a role: 2 iterations work, 3 produce the error.)
A rough sketch of the access pattern is below.
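
For clarity, this is roughly what the level-0 (independent, contiguous)
pattern looks like; the full program is attached as MPI-IO.c, and the
block size, offsets, and file name here are only placeholders, not the
exact values our program uses:

/* Sketch of independent, contiguous (level 0) MPI-IO, assuming a layout
 * similar to the attached MPI-IO.c. */
#include <mpi.h>
#include <stdlib.h>

#define BLOCK (1024 * 1024)  /* assumed block size per process per iteration */

int main(int argc, char **argv)
{
    int rank, size, iter, iterations = 10;
    MPI_File fh;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    buf = malloc(BLOCK);

    /* Hypothetical file name; the real one is passed via -f. */
    MPI_File_open(MPI_COMM_WORLD, "pvfs2:/path/to/pvfs2",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    for (iter = 0; iter < iterations; iter++) {
        /* Each process writes one contiguous block at its own offset,
         * using independent (non-collective) calls. */
        MPI_Offset off = (MPI_Offset)(iter * size + rank) * BLOCK;
        MPI_File_write_at(fh, off, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);
    }

    MPI_Barrier(MPI_COMM_WORLD);

    for (iter = 0; iter < iterations; iter++) {
        /* Read the data back, again independently and contiguously. */
        MPI_Offset off = (MPI_Offset)(iter * size + rank) * BLOCK;
        MPI_File_read_at(fh, off, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);
    }

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}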

To reproduce, create a default configuration for PVFS2 (data and
metadata on localhost), start pvfs2-server and run the following:

$ mpiexec -np 4 ./MPI-IO -i 10 -f pvfs2:///path/to/pvfs2 level0
# -i controls the number of iterations, -f the file that is written/read

The error occurred with PVFS 2.6.2 and MPICH2 1.0.5p3, on several
different machines.

We also noticed that a similar Flow error occurs if the server is low on
free memory. Maybe in this case the error message should be modified to
indicate the lack of memory?


Regards, Michael

Attachment: MPI-IO.c.gz
Description: GNU Zip compressed data

Attachment: MPI-IO.out.gz
Description: GNU Zip compressed data

Attachment: pvfs2-server.out.gz
Description: GNU Zip compressed data

