On Tue, May 16, 2006 at 03:12:42PM -0500, Peng Gu wrote:
> Hi all,
> 
> For unknown reasons, it seems that pvfs2-1.4.0 is not happy with FC5,
> so I built directly from the CVS tree, and now pvfs2 is running.
> 
> However, in my case pvfs2 has become error-prone. For example, even
> the mpi-tile-io test won't complete.
> 
> Here are the errors I got:
> 
> mpiexec -np 100 $MPITILEIO --collective --write_file --filename=/pvfs2/foo
> # mpi-tile-io run on cse-wang04.unl.edu
> # 100 process(es) available, 100 used
> # filename: /pvfs2/foo
> # collective I/O on
> # 0 byte header
> # 2500 x 400 element dataset, 32 bytes per element
> # 25 x 4 tiles, each tile is 100 x 100 elements
> # tiles overlap by 0 elements in X, 0 elements in Y
> # total file size is ~30.00 Mbytes, 1 file(s) total.
> [E 14:55:33.303022] invalid (unknown) I/O type specified
> [E 14:55:33.303872] PVFS_isys_io call: Invalid argument
> failed during MPI_File_(read or write)
> [E 14:55:27.425927] invalid (unknown) I/O type specified
> [E 14:55:27.426202] PVFS_isys_io call: Invalid argument
> failed during MPI_File_(read or write)
> [E 14:55:54.533111] invalid (unknown) I/O type specified
> [E 14:55:54.533262] PVFS_isys_io call: Invalid argument
> failed during MPI_File_(read or write)
> [E 12:37:37.743449] invalid (unknown) I/O type specified
> [E 12:37:37.745183] PVFS_isys_io call: Invalid argument
> failed during MPI_File_(read or write)
> 
> After that, the program just hung.
> 
> Any ideas?

When you upgraded pvfs2 to CVS HEAD, did you also rebuild MPICH2 (at
least the src/mpi/romio part, if nothing else)?  After you rebuilt
MPICH2, did you re-link mpi-tile-io?

I think if you do that, this error should go away.  The "invalid
(unknown) I/O type specified" message likely means a ROMIO built against
the old PVFS2 headers is passing a stale io_type value into
PVFS_isys_io, which rebuilding against the new headers and libraries
should fix.
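
For reference, here's a rough sketch of the rebuild (the install
prefixes, source directory names, and the file-system list are
assumptions about your layout; adjust them to match your setup):

  # Rebuild MPICH2 so ROMIO picks up the new PVFS2 headers and library.
  # /usr/local/pvfs2 and /usr/local/mpich2 are example prefixes.
  cd mpich2-1.0.3
  export CFLAGS="-I/usr/local/pvfs2/include"
  export LDFLAGS="-L/usr/local/pvfs2/lib"
  export LIBS="-lpvfs2"
  ./configure --prefix=/usr/local/mpich2 \
      --enable-romio --with-file-system=pvfs2+nfs+ufs
  make && make install

  # Re-link the benchmark against the rebuilt MPI so it stops carrying
  # the old ROMIO objects.  mpi-tile-io's Makefile may use CC or MPICC;
  # point whichever it uses at the new mpicc.
  cd ../mpi-tile-io
  make clean && make CC=/usr/local/mpich2/bin/mpicc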

==rob

-- 
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Labs, IL USA                B29D F333 664A 4280 315B
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
