Hi,

Are there any known problems with purely userspace access to PVFS2?
By this I mean going directly through MPI-IO & ROMIO by specifying a
file name like pvfs2://dfdf/file.
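In other words, the access path I'm testing is just ROMIO's userspace
PVFS2 driver, roughly like this (a minimal sketch using the placeholder
name from above, not the actual test code):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rank, value, rc;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        value = rank;

        /* The pvfs2:// prefix makes ROMIO use its PVFS2 driver directly
           from userspace, without going through a kernel mount.
           The path itself is just the placeholder from above. */
        rc = MPI_File_open(MPI_COMM_WORLD, "pvfs2://dfdf/file",
                           MPI_MODE_CREATE | MPI_MODE_RDWR,
                           MPI_INFO_NULL, &fh);
        if (rc != MPI_SUCCESS) {
            fprintf(stderr, "MPI_File_open failed (%d)\n", rc);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* Each rank writes one int at its own offset. */
        MPI_File_write_at(fh, (MPI_Offset)(rank * sizeof(int)),
                          &value, 1, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }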

I've been trying to get the HDF5 tests working, and they always fail
when using pvfs2://... file names (as opposed to not specifying any
prefix at all). The data corruption does not occur when specifying
ufs:// or nfs:// (and actually running on NFS or a local filesystem).
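As far as I understand it, the tests essentially just hand such a
prefixed name to HDF5 with the MPI-IO driver enabled, something along
these lines (a simplified sketch, not the literal testpar code; the
path is again just a placeholder):

    #include <hdf5.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* File access property list: route HDF5 I/O through MPI-IO
           (and therefore ROMIO). */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        /* The pvfs2:// prefix ends up in the name ROMIO sees; with no
           prefix, or with ufs:// / nfs://, the verification passes. */
        hid_t file = H5Fcreate("pvfs2://dfdf/ParaTest.h5",
                               H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* ... collective dataset writes/reads happen here ... */

        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }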

I've tested both the current CVS version (1.4.1pre1-2006-05-10-134208)
and the latest release (1.4.0), both on native IB and TCP, with MPICH2
and Open MPI...

The behaviour was reproducible on two completely different systems
(one is part of the cluster; the other is a local workstation where I
did a complete reinstall of PVFS2, MPICH and HDF5).

I'm using HDF5 1.6.5.

Testing  -- extendible dataset collective write (ecdsetw)
Testing  -- extendible dataset collective read (ecdsetr)
Testing  -- extendible dataset collective read (ecdsetr)
Testing  -- extendible dataset collective read (ecdsetr)
Dataset Verify failed at [0][0](row 0, col 800): expect 801, got 0
Dataset Verify failed at [0][1](row 0, col 801): expect 802, got 0
Dataset Verify failed at [0][2](row 0, col 802): expect 803, got 0
Dataset Verify failed at [0][3](row 0, col 803): expect 804, got 0
Dataset Verify failed at [0][4](row 0, col 804): expect 805, got 0
Dataset Verify failed at [0][5](row 0, col 805): expect 806, got 0
Dataset Verify failed at [0][6](row 0, col 806): expect 807, got 0
Dataset Verify failed at [0][7](row 0, col 807): expect 808, got 0
Dataset Verify failed at [0][8](row 0, col 808): expect 809, got 0
Dataset Verify failed at [0][9](row 0, col 809): expect 810, got 0
[more errors ...]
40 errors found in dataset_vrfy
Proc 2: Dataset Verify failed at [0][0](row 0, col 400): expect 401, got 0
Dataset Verify failed at [0][1](row 0, col 401): expect 402, got 0
Dataset Verify failed at [0][2](row 0, col 402): expect 403, got 0
Dataset Verify failed at [0][3](row 0, col 403): expect 404, got 0
Dataset Verify failed at [0][4](row 0, col 404): expect 405, got 0
Dataset Verify failed at [0][5](row 0, col 405): expect 406, got 0
Dataset Verify failed at [0][6](row 0, col 406): expect 407, got 0
Dataset Verify failed at [0][7](row 0, col 407): expect 408, got 0
Dataset Verify failed at [0][8](row 0, col 408): expect 409, got 0
Dataset Verify failed at [0][9](row 0, col 409): expect 410, got 0
[more errors ...]
80 errors found in dataset_vrfy


PVFS2 itself seems to work:
0 [lo-03-02]~/work/hdf5/hdf5-1.6.5/build/mpich2/testpar|> pvfs2-ls -al
drwxrwxrwx    1 dries    users           4096 2006-05-10 14:31 .
drwxrwxrwx    1 dries    users           4096 2006-05-10 14:31 .. (faked)
-rw-r--r--    1 dries    users     4300210176 2006-05-10 14:31 MPItest.h5
-rw-r--r--    1 dries    users        5777744 2006-05-10 14:31 ParaTest.h5
drwxrwxrwx    1 dries    users           4096 2006-05-10 14:25 lost+found

0 [lo-03-02]~/work/hdf5/hdf5-1.6.5/build/mpich2/testpar|> pvfs2-ping -m $HOME/pvfs2

(1) Parsing tab file...

(2) Initializing system interface...

(3) Initializing each file system found in tab file: 
/data/home/dries/pvfs2-config/pvfs2tab-ng...

   /data/home/dries/pvfs2: Ok

(4) Searching for /data/home/dries/pvfs2 in pvfstab...

   PVFS2 servers: tcp://ni-01-01:3334
   Storage name: pvfs2-fs
   Local mount point: /data/home/dries/pvfs2

   meta servers:
   tcp://ni-01-01:3334
   tcp://ni-01-03:3334

   data servers:
   tcp://ni-01-01:3334
   tcp://ni-01-03:3334
   tcp://ni-01-02:3334
   tcp://ni-01-04:3334

(5) Verifying that all servers are responding...

   meta servers:
   tcp://ni-01-01:3334 Ok
   tcp://ni-01-03:3334 Ok

   data servers:
   tcp://ni-01-01:3334 Ok
   tcp://ni-01-03:3334 Ok
   tcp://ni-01-02:3334 Ok
   tcp://ni-01-04:3334 Ok

(6) Verifying that fsid 1790750689 is acceptable to all servers...

   Ok; all servers understand fs_id 1790750689

(7) Verifying that root handle is owned by one server...

   Root handle: 1048576
   Ok; root handle is owned by exactly one server.

=============================================================

The PVFS filesystem at /data/home/dries/pvfs2 appears to be correctly 
configured.



    Greetings,
    Dries


