Hi Xuechen. I think both your messages are related, so I'm going to respond to them together below:
On Fri, Nov 16, 2007 at 08:46:08PM -0500, xuechen zhang wrote:
> I just configured my cluster according to the quick start
> instructions, and everything went well until I ran my first MPI-IO
> code. I got the unexpected results listed below.

Let's see what the debugging tools suggest:

- What does 'pvfs2-ping' say? (You might need to fix up your /etc/fstab
  entry; pvfs2-ping will tell you for sure.)
- Do 'pvfs2-ls' and 'pvfs2-cp' work for you?

> [EMAIL PROTECTED] exp1]$ mpiexec -np 1 sample
> ...
>
> MPI_File_open(MPI_COMM_WORLD, "/mnt/pvfs2/matrix",
>               MPI_MODE_RDWR|MPI_MODE_CREATE, MPI_INFO_NULL, &fh);

Do you have the file system mounted yet? You don't have to if you are
using MPI-IO (I run tests that way quite often), but you will have to
add the 'pvfs2:' prefix to your file name if you don't have the kernel
interface up yet.

On Mon, Nov 19, 2007 at 02:40:08PM -0500, xuechen zhang wrote:
> I followed the PVFS2-install quick start instructions exactly.
> However, when I use MPI-IO to access PVFS2, it still cannot find the
> mount point. Below is what I got after running the test code "simple".
>
> [EMAIL PROTECTED] exp1]$ mpiexec -n 1 simple -fname /mnt/pvfs2/matrix

It's pretty important to check error codes: 'simple' does not, though
it really should (I'll fix that right now). Your test code doesn't
either. I suspect that in both cases the MPI_File_open call is failing.

> Can anyone help me find the reason?

Can you try a different test? How about 'noncontig_coll2', which *does*
check the return value of every MPI call?

I'm happy to see another user of MPI-IO: I hope we can get you back on
track.

==rob

-- 
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
