Hi Kshitij -

It's possible your I/O buffers are getting thrashed by small data accesses
while serving out the 32 MB requests. I would usually get around this by
setting FlowBufferSizeBytes to 1 MB in the config files, restarting the
servers/clients, and rerunning the test.
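
For reference, here is a rough sketch of what that might look like in the
server's fs.conf (the exact section placement and the default value can
differ between PVFS2 versions, so treat this as illustrative only):

    <Defaults>
        # Hypothetical example: raise the flow buffer size from the
        # default (commonly 256 KB) to 1 MB to better match large writes.
        FlowBufferSizeBytes 1048576
    </Defaults>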

It might be worth a try, if you're in a testing phase.

Kyle Schochenmaier


On Wed, Dec 14, 2011 at 1:38 AM, Cs <[email protected]> wrote:

> I am using a transfer size of 32 MB, which should have shown much better
> performance (my apologies for not mentioning this before). The total file
> size being written is 8 GB.
>
> - Kshitij
>
> On Dec 14, 2011, at 1:34 AM, Kyle Schochenmaier <[email protected]>
> wrote:
>
> Hi Kshitij -
>
> This is the expected behaviour: PVFS2 is not highly optimized for small
> writes/reads, which is what IOR typically performs, so you will always
> see degraded performance here compared to the underlying filesystem's
> base performance.
>
> There are ways to tune to help optimize for this type of access.
>
> If you set your IOR block accesses to something larger, such as 64K instead
> of the default (4K?), I think you would see performance that is closer.
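>
> As an illustrative sketch only (flag names and defaults can differ between
> IOR releases, so check ior -h on your build), the access size is set with
> something like:
>
>   ior -a POSIX -t 64k -b 1g -o /mnt/pvfs2/ior.testfile
>
> where -t is the per-access transfer size and -b is the amount written per
> task; the path under /mnt/pvfs2 is just a placeholder for your mount point.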
>
> This used to be pretty well documented in the FAQ documents for PVFS; I'm
> not sure where the links are now.
>
> Cheers,
> Kyle Schochenmaier
>
>
> On Wed, Dec 14, 2011 at 1:09 AM, Kshitij Mehta <[email protected]> wrote:
>
>> Well, here's why I wanted to trace in the first place.
>>
>> I have a test configuration where we have set up PVFS2 over SSD storage.
>> There are two I/O servers that talk to the SSD storage through InfiniBand
>> (there are 2 IB channels going into the SSD, and each storage server can
>> 'see' one half of the SSD).
>>
>> I used the IOR benchmark to test the write bandwidth. First, I spawn a
>> process on the I/O server that writes data to the underlying ext4 file
>> system on the SSD instead of PVFS2; I see a bandwidth of ~350 MB/s. Then I
>> spawn a process on the same I/O server and write data to the PVFS2 file
>> system configured over the SSD, and I see a write bandwidth of ~180 MB/s.
>>
>> This seems to represent some kind of overhead from PVFS2, but it seems too
>> large. Has anybody else seen similar results? Is the overhead of PVFS2
>> documented?
>>
>> Do let me know if something is not clear or if you have additional
>> questions about the above setup.
>>
>> Here are some other details:
>> I/O servers: dual core with 2 GB main memory each.
>> PVFS 2.8.2
>>
>> Thanks,
>> Kshitij
>>
>> -----Original Message-----
>> From: Julian Kunkel [mailto:[email protected]]
>> Sent: Tuesday, December 13, 2011 3:10 AM
>> To: Kshitij Mehta
>> Cc: [email protected]
>> Subject: Re: [Pvfs2-users] Tracing pvfs2 internals
>>
>> Dear Kshitij,
>> we have a version of OrangeFS that is instrumented with HDTrace, with which
>> you can record detailed information about the activity of state machines
>> and I/O.
>> For a description see the thesis:
>>
>> http://wr.informatik.uni-hamburg.de/_media/research:theses:Tien%20Duc%20Tien_Tracing%20Internal%20Behavior%20in%20PVFS.pdf
>>
>> The code is available in our redmine (here is a link to the wiki):
>> http://redmine.wr.informatik.uni-hamburg.de/projects/piosimhd/wiki
>>
>> I consider the tracing implemented in PVFS rather robust, since it is our
>> second implementation using PVFS_hints.
>> However, you might encounter some issues with the build system.
>> If you want to try it and need help, just ask.
>>
>> Regards,
>> Julian Kunkel
>>
>>
>>
>> 2011/12/13 Kshitij Mehta <[email protected]>:
>> > Hello,
>> >
>> > Is there a way I can trace/measure the internal behavior of PVFS2?
>> > Suppose I have a simple I/O code that writes to PVFS2; I would like to
>> > find out exactly how much time the various internal operations of PVFS2
>> > take (metadata lookup, creating iovecs, etc.) before data is finally
>> > pushed to disk.
>> >
>> >
>> >
>> > Is there a configure option (what does `enabletracing` do in the config
>> > file)? Or is there any other way to determine this?
>> >
>> >
>> >
>> > Thanks,
>> > Kshitij
>> >
>> >
>> >
>> >
>>
>>
>
>
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
