I could give you the size of the data written by each "write" statement in our 
Fortran code if that is helpful. It is a Fortran code with write statements 
inside a do-loop, so I can also give you the number of writes along with the 
size of each one. 

> You could also enable the io-stats xlator on the client side just below FUSE 
> (before reaching write-behind), and extract data using setfattr.

Happy to do any testing you want. I have no idea how to do the above. If you 
can tell me what to do, I will test when I get back Monday. 

David  (Sent from mobile)

===============================
David F. Robinson, Ph.D. 
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310      [cell]
704.799.7974      [fax]
[email protected]
http://www.corvidtechnologies.com

> On Aug 7, 2014, at 2:05 PM, Anand Avati <[email protected]> wrote:
> 
> David,
> Is it possible to profile the app to understand the block sizes used for 
> performing write() (using strace, source code inspection, etc.)? The block 
> sizes reported by gluster volume profile are measured on the server side and 
> are subject to some aggregation by the client-side write-behind xlator. 
> Typically the biggest hurdle for small block writes is FUSE context switches, 
> which happen even before reaching the client-side write-behind xlator.
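> 
> For example (a rough sketch: the pid is one of the MPI ranks and the paths 
> are placeholders), attaching strace to a rank and tallying the byte counts 
> returned by write() would look something like:
> 
>     # attach to one MPI rank (pid is a placeholder) and log every write()
>     strace -f -e trace=write -o /tmp/writes.log -p <pid>
>     # tally the byte counts returned by write()
>     awk '/write\(.*= [0-9]+$/ {print $NF}' /tmp/writes.log | sort -n | uniq -c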
> 
> You could also enable the io-stats xlator on the client side just below FUSE 
> (before reaching write-behind), and extract data using setfattr.
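> 
> For reference, once io-stats is present in the client volfile the dump is 
> usually triggered with an extended attribute on the mount point; a sketch of 
> the extraction step (the output path is just an example, and exact behavior 
> varies by version) would be:
> 
>     # dump io-stats counters for the mount; output path is an example
>     setfattr -n trusted.io-stats-dump -v /tmp/homegfs-io-stats.txt /homegfs
> 
> Placing io-stats directly below FUSE means hand-editing the client-side 
> volfile, so the above only covers reading the data back out.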
> 
> 
> 
>> On Wed, Aug 6, 2014 at 10:00 AM, David F. Robinson 
>> <[email protected]> wrote:
>> My apologies.  I did some additional testing and realized that my timing 
>> wasn't right.  I believe NFS caches the data after the write, so the timing 
>> isn't correct until I close and flush the file.
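>> 
>> For example, a timing that includes the flush (file names here are just 
>> placeholders) would look something like:
>> 
>>     # time the copy plus the flush; restart.dat and the target dir are placeholders
>>     time sh -c 'cp restart.dat /homegfs/test/restart.dat && sync'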
>> I believe the appropriate timing is now 38 seconds for NFS and 60 seconds 
>> for gluster.  I played around with some of the parameters and got it down to 
>> 52 seconds with gluster by setting:
>> 
>> performance.write-behind-window-size: 128MB
>> performance.cache-size: 128MB
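>> 
>> Assuming the volume name homegfs from the fstab entries quoted below, these 
>> would be applied with something like:
>> 
>>     # set the write-behind window and cache size on the homegfs volume
>>     gluster volume set homegfs performance.write-behind-window-size 128MB
>>     gluster volume set homegfs performance.cache-size 128MB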
>> 
>> I couldn't get it closer to the NFS timing on the writes, although the read 
>> speeds were slightly better than NFS.  I am not sure if this is reasonable, 
>> or if I should be able to get write speeds that are more comparable to the 
>> NFS mount...
>> 
>> Sorry for the confusion I might have caused with my first email... It isn't 
>> 25x slower.  It is roughly 30% slower for the writes...
>> 
>> 
>> David
>> 
>> 
>> ------ Original Message ------
>> From: "Vijay Bellur" <[email protected]>
>> To: "David F. Robinson" <[email protected]>; 
>> [email protected]
>> Sent: 8/6/2014 12:48:09 PM
>> Subject: Re: [Gluster-devel] Fw: Re: Corvid gluster testing
>> 
>>> On 08/06/2014 12:11 AM, David F. Robinson wrote:
>>>> I have been testing some of the fixes that Pranith incorporated into the
>>>> 3.5.2-beta to see how they performed for moderate levels of i/o. All of
>>>> the stability issues that I had seen in previous versions seem to have
>>>> been fixed in 3.5.2; however, there still seem to be some significant
>>>> performance issues. Pranith suggested that I send this to the
>>>> gluster-devel email list, so here goes:
>>>> I am running an MPI job that saves a restart file to the gluster file
>>>> system. When I use the following in my fstab to mount the gluster
>>>> volume, the I/O time for the 2.5GB file is roughly 45 seconds:
>>>>
>>>>     gfsib01a.corvidtec.com:/homegfs /homegfs glusterfs transport=tcp,_netdev 0 0
>>>>
>>>> When I switch this to use the NFS protocol (see below), the I/O time is
>>>> 2.5 seconds:
>>>>
>>>>     gfsib01a.corvidtec.com:/homegfs /homegfs nfs vers=3,intr,bg,rsize=32768,wsize=32768 0 0
>>>> The read times for gluster are 10-20% faster than NFS, but the write
>>>> times are almost 20x slower.
>>> 
>>> What is the block size of the writes that are being performed? You can 
>>> expect better throughput and lower latency with block sizes that are close 
>>> to or greater than 128KB.
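>>> 
>>> As a rough illustration (the target path and sizes are placeholders), the 
>>> difference shows up when writing the same amount of data to the mount with 
>>> small versus large blocks:
>>> 
>>>     # both commands write 256MB total; the target path is a placeholder
>>>     dd if=/dev/zero of=/homegfs/ddtest bs=4k count=65536 conv=fdatasync
>>>     dd if=/dev/zero of=/homegfs/ddtest bs=128k count=2048 conv=fdatasync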
>>> 
>>> -Vijay
>> 
> 
_______________________________________________
Gluster-devel mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
