Thanks a lot Soumya. I’ve figured it out, and I’m now getting performance
roughly 10x faster than the glfs_open/glfs_write/glfs_close sequence.

It’s exactly what I was looking for.
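For the archives, the loop I ended up with looks roughly like the sketch below. This is not my exact code: volume name, server, and paths are placeholders, error handling is trimmed, and the glfs_h_lookupat signature has changed between Gluster releases (newer versions take an extra `follow` argument), so check the glfs.h/glfs-handles.h shipped with your version.

```c
/* Sketch: per-file anonymous writes via libgfapi, avoiding the
 * open/write/close RPC round trips. Volume "testvol" on host
 * "server1" is a placeholder -- adjust for your setup. */
#include <string.h>
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");
    glfs_set_volfile_server(fs, "tcp", "server1", 24007);
    if (glfs_init(fs) != 0)
        return 1;

    const char *msg = "payload";
    struct stat st;

    /* One lookup per existing file instead of open + close. */
    struct glfs_object *obj = glfs_h_lookupat(fs, NULL, "/dir/file1", &st);
    if (obj) {
        /* Anonymous fd write: no glfs_h_open/glfs_h_close pair needed. */
        glfs_h_anonymous_write(fs, obj, msg, strlen(msg), 0);
        glfs_h_close(obj); /* release the handle */
    }

    glfs_fini(fs);
    return 0;
}
```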

Ivica

> On 25 Aug 2015, at 12:16, Soumya Koduri <[email protected]> wrote:
> 
> 
> 
> On 08/25/2015 12:16 AM, Ivica Siladic wrote:
>> Thanks for the answer. I had already tried that earlier and got a 2x 
>> performance boost compared to the glfs_creat/glfs_write/glfs_close sequence 
>> (which itself gives a 4x boost over a mounted volume).
>> 
>> But within my loop, the glfs_h_open/glfs_h_close calls alone take twice as 
>> long as glfs_h_anonymous_write itself. So I was guessing there are some RPC 
>> calls behind glfs_h_open/glfs_h_close that could be avoided.
>> 
> But you do not need glfs_h_open/glfs_h_close to use anonymous write fops. If 
> the file is already present, the APIs below are sufficient:
> 'glfs_h_lookupat' - gets a 'glfs_object' handle for a given file path.
> 'glfs_h_anonymous_write' - takes the above handle as one of its inputs.
> 
> Thanks,
> Soumya
> 
>> Ivica
>> 
>>> On 24 Aug 2015, at 20:02, Soumya Koduri <[email protected]> wrote:
>>> 
>>> 
>>> 
>>> On 08/24/2015 11:24 PM, Ivica Siladic wrote:
>>>> Hi,
>>>> 
>>>> I'm doing a lot of small writes to a distributed/replicated Gluster volume. 
>>>> The performance I'm getting is not acceptable. Interestingly, I get double 
>>>> the speed if I use libgfapi instead of a kernel volume mount.
>>>> 
>>>> My guess is that I could get a significant boost if I could somehow reduce 
>>>> RPC round trips. So, instead of an open->write->close sequence, I'd like to 
>>>> use a single write(filename, ...) call.
>>>> 
>>>> Can someone point me to the relevant places in the Gluster source code and 
>>>> briefly explain how to accomplish that? I really need just some rough ideas.
>>>> 
>>> This can be done using an anonymous fd write. libgfapi exports APIs for 
>>> anonymous writes (glfs_h_anonymous_write). Please refer to 
>>> 'tests/basic/gfapi/anonymous_fd_read_write.c' for example usage.
>>> 
>>> Thanks,
>>> Soumya
>>> 
>>>> Ivica
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> [email protected]
>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>> 
>> 

