Thanks Kyle, I actually posted this question on the mpich-discuss mailing
list as well. Rajeev Thakur replied: "You need to give the full
path to the file, /mnt/pvfs2/testfile. The pvfs2: prefix should not be
necessary, but you can try with and without the prefix (and give the
full path)."

The problem is that when I give the full path (with and without the
pvfs2: prefix), the results are the same and seem to make sense, as I am
not on very fast hardware at all. I am just using a Linux VM on a
Windows desktop with an ordinary 1 TB hard drive.

The results with the local file system (ext4):

ammar@ammar-vbox1:~/romio_test$ mpirun -np 2 ./a.out -fname testfile

Access size per process = 4194304 bytes, ntimes = 5
Write bandwidth without file sync = 4415.056842 Mbytes/sec
Read bandwidth without prior file sync = 4466.777423 Mbytes/sec
Write bandwidth including file sync = 191.602752 Mbytes/sec
Read bandwidth after file sync = 3695.015086 Mbytes/sec

The results when I give the path where PVFS2 is mounted:

ammar@ammar-vbox1:~/romio_test$ mpirun -np 2 ./a.out -fname pvfs2:/mnt/pvfs2/testfile
Access size per process = 4194304 bytes, ntimes = 5
Write bandwidth without file sync = 743.357895 Mbytes/sec
Read bandwidth without prior file sync = 981.611678 Mbytes/sec
Write bandwidth including file sync = 114.261305 Mbytes/sec
Read bandwidth after file sync = 1213.585735 Mbytes/sec

ammar@ammar-vbox1:~/romio_test$ mpirun -np 2 ./a.out -fname /mnt/pvfs2/testfile
Access size per process = 4194304 bytes, ntimes = 5
Write bandwidth without file sync = 882.129239 Mbytes/sec
Read bandwidth without prior file sync = 834.376029 Mbytes/sec
Write bandwidth including file sync = 123.615020 Mbytes/sec
Read bandwidth after file sync = 1250.398062 Mbytes/sec

I am not sure why the performance on the local file system is better
than PVFS2. Secondly, should I even expect meaningful results when I am
running the two processes on the same machine?
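
In case it helps others, here is a minimal sketch of what I understand
perf.c to be timing. This is my own simplification, NOT the actual ROMIO
test: the 4 MB block size is just taken from the output above, and my
guess is that the "without file sync" numbers can be satisfied from a
cache, while the numbers including MPI_File_sync have to reach the file
system.

/*
 * Minimal sketch (my own simplification, NOT the actual ROMIO perf.c):
 * each rank writes one 4 MB block at its own offset and times the write
 * alone versus the write plus MPI_File_sync. The filename, prefix and
 * all, goes straight to MPI_File_open; ROMIO uses an optional "pvfs2:"
 * prefix to choose its driver.
 */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define BLKSIZE (4 * 1024 * 1024)   /* 4 MB, matching the test output */

int main(int argc, char **argv)
{
    /* default path is just an example; pass your own as argv[1] */
    char *fname = (argc > 1) ? argv[1] : "pvfs2:/mnt/pvfs2/testfile";
    char *buf = calloc(BLKSIZE, 1);
    MPI_File fh;
    int rank;
    double t0, t_write, t_sync;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (MPI_File_open(MPI_COMM_WORLD, fname,
                      MPI_MODE_CREATE | MPI_MODE_RDWR,
                      MPI_INFO_NULL, &fh) != MPI_SUCCESS) {
        if (rank == 0) fprintf(stderr, "open of %s failed\n", fname);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    t0 = MPI_Wtime();
    MPI_File_write_at(fh, (MPI_Offset)rank * BLKSIZE, buf, BLKSIZE,
                      MPI_BYTE, MPI_STATUS_IGNORE);
    t_write = MPI_Wtime() - t0;   /* write only: may just hit a cache */

    MPI_File_sync(fh);            /* force the data out to the file system */
    t_sync = MPI_Wtime() - t0;

    /* each rank times only its own 4 MB block; perf.c aggregates
     * across ranks, this sketch does not */
    if (rank == 0)
        printf("write: %.1f MB/s, write+sync: %.1f MB/s\n",
               BLKSIZE / t_write / 1.0e6, BLKSIZE / t_sync / 1.0e6);

    MPI_File_close(&fh);
    MPI_Finalize();
    free(buf);
    return 0;
}

It should compile and run the same way as perf.c, e.g. "mpicc sketch.c"
and then "mpirun -np 2 ./a.out /mnt/pvfs2/testfile".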

Regards
-- Ammar


On Wed, Jul 18, 2012 at 11:12 PM, Kyle Schochenmaier <[email protected]> wrote:
> Hi Ammar, something looks broken here, as the second set of numbers you
> listed shows ~600 GB/s, which is probably an order of magnitude more than
> CPU cache speeds.
> I think you may need to specify pvfs2:/path/to/fs.
>
> If you specify the actual mounted directory, you're probably going to be
> using the VFS kernel interface, which you'll want to avoid if you have
> really fast hardware (I'm assuming you're on InfiniBand?). If you use the
> ROMIO interface, it should use the libpvfs APIs, which are much faster.
> Is your hardware really capable of 6 GB/s on a single node? If not, maybe
> all of your numbers are wrong.
>
> To answer your questions more directly:
> Using pvfs2:/path should make use of libpvfs and bypass the kernel interface.
> Using the direct path to the mount without the pvfs2: prefix will use the
> regular POSIX filesystem calls via the PVFS kernel module.
> I would expect the pvfs2: version to significantly outperform the other, but
> not by 100x like you're reporting; maybe 2-3x on very fast hardware.
> I don't have much to say about the PVFS_util_resolve problem you reported;
> maybe something went wrong with the way you built ROMIO support.
>
> Regards
>
> On Jul 18, 2012 1:38 AM, "Ammar Ahmad Awan" <[email protected]>
> wrote:
>>
>> Dear All,
>>
>> I am very new to parallel file systems. I am trying to learn how to
>> use PVFS through ROMIO. The ROMIO source has a test directory which
>> contains the file perf.c.
>>
>> I am compiling perf.c as follows:
>>
>> mpicc perf.c -o perf.o
>>
>> I run it like this:
>>
>> mpirun -np 2 ./perf.o -fname tesfile
>>
>> This works fine and gives me these results:
>>
>> Access size per process = 4194304 bytes, ntimes = 5
>> Write bandwidth without file sync = 2751.942262 Mbytes/sec
>> Read bandwidth without prior file sync = 5796.239765 Mbytes/sec
>> Write bandwidth including file sync = 198.921236 Mbytes/sec
>> Read bandwidth after file sync = 5482.750327 Mbytes/sec
>>
>> I want to run it to test the performance of my PVFS2 installation.
>> PVFS2 is mounted at /mnt/pvfs2 per the quick start guide, with the
>> kernel module loaded.
>>
>> My question is about what happens when I execute the same perf.c with
>> "pvfs2:" before the filename, like this:
>>
>> mpirun -np 2 ./perf.o -fname pvfs2:tesfile
>>
>> The results that I get are strange: very high bandwidth, but the file
>> is not created in /mnt/pvfs2. Why does the test execute but give a
>> PVFS_util_resolve error, and why can I not see any file being created
>> in /mnt/pvfs2?
>>
>> Access size per process = 4194304 bytes, ntimes = 5
>> PVFS_util_resolve: No such file or directory (error class: 0)
>> Write bandwidth without file sync = 657930.039216 Mbytes/sec
>> Read bandwidth without prior file sync = 657930.039216 Mbytes/sec
>> PVFS_util_resolve: No such file or directory (error class: 0)
>>
>> Write bandwidth including file sync = 532610.031746 Mbytes/sec
>> Read bandwidth after file sync = 729444.173913 Mbytes/sec
>>
>> If I give the full path to the file in the same test, like this:
>>
>> mpirun -np 2 ./perf.o -fname /mnt/pvfs2/testfile
>>
>> The performance drops and the results are:
>>
>> Access size per process = 4194304 bytes, ntimes = 5
>> Write bandwidth without file sync = 974.060381 Mbytes/sec
>> Read bandwidth without prior file sync = 1306.128143 Mbytes/sec
>> Write bandwidth including file sync = 103.729221 Mbytes/sec
>> Read bandwidth after file sync = 1261.634532 Mbytes/sec
>>
>> Can anyone tell me what the difference is between these two methods of
>> executing ROMIO programs?
>>
>> 1. using the pvfs2: prefix before the filename
>> 2. using the full path to specify the filename, without the pvfs2: prefix
>>
>> Which one is correct? Are they both the same?
>>
>> Kind Regards
>> -- Ammar
>> ----------------------------------------------------------------------
>> Masters Student
>> Dept. Of Computer Engineering,
>> Kyung Hee University, South Korea
>> ----------------------------------------------------------------------



-- 

Kind Regards
-- Ammar
----------------------------------------------------------------------
Masters Student
Dept. Of Computer Engineering,
Kyung Hee University
----------------------------------------------------------------------