I tried disabling write-behind for the volume, but I am still facing the same issue. The return code in the callback reports that all of the writes have in fact succeeded.
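For reference, the pattern in question is roughly the following. This is a minimal sketch only, not the actual program: the host name, port, file path and I/O sizes are placeholders, and it assumes the pre-6.0 libgfapi signatures where the I/O callback and glfs_fsync take no prestat/poststat arguments, plus the header path shipped by the 3.x packages.

/*
 * Minimal sketch of the async-write path being described -- not the real
 * program.  Assumes pre-6.0 libgfapi signatures (glfs_io_cbk and glfs_fsync
 * without stat arguments); host, port, path and sizes are placeholders.
 * Build roughly as: gcc sketch.c -lgfapi -lpthread
 */
#include <glusterfs/api/glfs.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  all_done = PTHREAD_COND_INITIALIZER;
static int outstanding;                 /* writes issued but not yet acked */

/* I/O callback: ret is the byte count on success, -1 on error. */
static void write_cbk(glfs_fd_t *fd, ssize_t ret, void *cookie)
{
        if (ret < 0)
                fprintf(stderr, "write %ld failed\n", (long)(intptr_t)cookie);

        pthread_mutex_lock(&lock);
        if (--outstanding == 0)
                pthread_cond_signal(&all_done);
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        static char buf[4096];
        glfs_t     *fs;
        glfs_fd_t  *fd;
        int         i;

        fs = glfs_new("dispersevol");
        glfs_set_volfile_server(fs, "tcp", "pbbaglusterfs1", 24007);
        if (glfs_init(fs) != 0) {
                perror("glfs_init");
                return 1;
        }

        fd = glfs_creat(fs, "/async-test.dat", O_WRONLY, 0644);
        if (fd == NULL) {
                perror("glfs_creat");
                return 1;
        }

        memset(buf, 'A', sizeof(buf));
        for (i = 0; i < 1024; i++) {
                pthread_mutex_lock(&lock);
                outstanding++;
                pthread_mutex_unlock(&lock);

                if (glfs_pwrite_async(fd, buf, sizeof(buf),
                                      (off_t)i * sizeof(buf), 0,
                                      write_cbk, (void *)(intptr_t)i) < 0) {
                        fprintf(stderr, "could not queue write %d\n", i);
                        pthread_mutex_lock(&lock);
                        outstanding--;
                        pthread_mutex_unlock(&lock);
                }
        }

        /* Wait for every callback before syncing and closing; tearing the
         * fd (or the whole glfs_t) down with writes still in flight is one
         * way data can silently go missing. */
        pthread_mutex_lock(&lock);
        while (outstanding > 0)
                pthread_cond_wait(&all_done, &lock);
        pthread_mutex_unlock(&lock);

        glfs_fsync(fd);
        glfs_close(fd);
        glfs_fini(fs);
        return 0;
}

The point of the sketch is that nothing is closed or torn down until every callback has fired and glfs_fsync has returned, which is where I would expect the data to be safely on the bricks with write-behind off.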
Volume Name: dispersevol
Type: Distributed-Disperse
Volume ID: 5ae61550-51b1-4e72-875e-2e0f1f206882
Status: Started
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: pbbaglusterfs1:/ws/disk/ws_brick
Brick2: pbbaglusterfs2:/ws/disk/ws_brick
Brick3: pbbaglusterfs3:/ws/disk/ws_brick
Brick4: pbbaglusterfs1:/ws/disk2/brick
Brick5: pbbaglusterfs2:/ws/disk2/brick
Brick6: pbbaglusterfs3:/ws/disk2/brick
Options Reconfigured:
performance.write-behind: off
performance.io-thread-count: 32

On Wed, Nov 11, 2015 at 11:58 AM, Vijay Bellur <[email protected]> wrote:
> On Wednesday 11 November 2015 10:22 PM, Ramachandra Reddy Ankireddypalle wrote:
>
>> Hi,
>>       I am trying to write data using libgfapi async write. The write
>> returns successful and call back is also getting invoked. But some of
>> the data is not making it to gluster volume. If I put a sleep in the
>> code after each write then all the writes are making it to the gluster
>> volume. This makes me feel that libgfapi is dropping some of the requests.
>>
> This could be related to write-behind in gluster. If you disable
> write-behind for the volume, do you observe similar results?
>
> Regards,
> Vijay
>
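(For completeness, write-behind was turned off with gluster volume set dispersevol performance.write-behind off, which is why it now appears under "Options Reconfigured" in the volume info above.)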
