Hi all,

For unknown reasons, pvfs2-1.4.0 does not seem to be happy with FC5,
so I built directly from the CVS tree instead, and PVFS2 is now running.

However, in my case PVFS2 has become error prone. For example, even the
mpi-tile-io test won't go through.

Here are the errors I got:

mpiexec -np 100 $MPITILEIO --collective --write_file --filename=/pvfs2/foo
# mpi-tile-io run on cse-wang04.unl.edu
# 100 process(es) available, 100 used
# filename: /pvfs2/foo
# collective I/O on
# 0 byte header
# 2500 x 400 element dataset, 32 bytes per element
# 25 x 4 tiles, each tile is 100 x 100 elements
# tiles overlap by 0 elements in X, 0 elements in Y
# total file size is ~30.00 Mbytes, 1 file(s) total.
[E 14:55:33.303022] invalid (unknown) I/O type specified
[E 14:55:33.303872] PVFS_isys_io call: Invalid argument
failed during MPI_File_(read or write)
[E 14:55:27.425927] invalid (unknown) I/O type specified
[E 14:55:27.426202] PVFS_isys_io call: Invalid argument
failed during MPI_File_(read or write)
[E 14:55:54.533111] invalid (unknown) I/O type specified
[E 14:55:54.533262] PVFS_isys_io call: Invalid argument
failed during MPI_File_(read or write)
[E 12:37:37.743449] invalid (unknown) I/O type specified
[E 12:37:37.745183] PVFS_isys_io call: Invalid argument
failed during MPI_File_(read or write)

After that, the program just hung.
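
If it helps narrow things down, a stripped-down collective write along
these lines should exercise the same ROMIO/PVFS_isys_io path that
mpi-tile-io hits. This is only a sketch; the file name and block size
are placeholders I picked, not anything taken from mpi-tile-io:

/* Minimal collective-write test: each rank writes one contiguous
 * 4 KB block to a shared file with MPI_File_write_at_all. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_File fh;
    MPI_Status status;
    char buf[4096];
    /* placeholder path on the PVFS2 mount */
    char *filename = "/pvfs2/minimal_test";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    memset(buf, 'a' + (rank % 26), sizeof(buf));

    MPI_File_open(MPI_COMM_WORLD, filename,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* disjoint offsets, one collective write per rank */
    MPI_File_write_at_all(fh, (MPI_Offset)rank * sizeof(buf),
                          buf, sizeof(buf), MPI_BYTE, &status);

    MPI_File_close(&fh);

    if (rank == 0)
        printf("collective write of %d x %d bytes done\n",
               nprocs, (int)sizeof(buf));

    MPI_Finalize();
    return 0;
}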

Any ideas?

FYI: mpi-tile-io is happy with a native ext3 file system. For example:

mpiexec -np 100 $MPITILEIO --collective --write_file --filename=/nfs1/foo
# mpi-tile-io run on cse-wang04.unl.edu
# 100 process(es) available, 100 used
# filename: /nfs1/foo
# collective I/O on
# 0 byte header
# 2500 x 400 element dataset, 32 bytes per element
# 25 x 4 tiles, each tile is 100 x 100 elements
# tiles overlap by 0 elements in X, 0 elements in Y
# total file size is ~30.00 Mbytes, 1 file(s) total.
# Times are total for all operations of the given type
# Open: min_t = 0.754820, max_t = 0.792003, mean_t = 0.774454, var_t = 0.000108
# Write: min_t = 2.452388, max_t = 4.087749, mean_t = 2.599117, var_t = 0.058856
# Close: min_t = 0.007669, max_t = 0.024663, mean_t = 0.015516, var_t = 0.000008
# Note: bandwidth values based on max_t (worst case)
Write Bandwidth = 7.339 Mbytes/sec

Thanks,
Peng

