On 01/27/2014 03:40 PM, Fabio Rosati wrote:
> Vijay,
>
>     I can confirm that enabling network.remote-dio on the volume solves
> the problem.
> Do you think this is the best option also for performance?

Thanks for the confirmation. Yes, network.remote-dio is recommended to be
on for better performance.

Regards,
Vijay

> Thanks a lot
> Fabio

----- Original Message -----
From: "Vijay Bellur" <[email protected]>
To: "Pranith Kumar Karampuri" <[email protected]>
Cc: "Fabio Rosati" <[email protected]>, "[email protected] List" 
<[email protected]>
Sent: Saturday, 25 January 2014 11:25:01
Subject: Re: [Gluster-users] Replication delay

On 01/25/2014 03:36 PM, Pranith Kumar Karampuri wrote:


----- Original Message -----
From: "Vijay Bellur" <[email protected]>
To: "Pranith Kumar Karampuri" <[email protected]>
Cc: "Fabio Rosati" <[email protected]>, "[email protected] List" 
<[email protected]>
Sent: Saturday, January 25, 2014 3:32:24 PM
Subject: Re: [Gluster-users] Replication delay

On 01/25/2014 02:28 PM, Pranith Kumar Karampuri wrote:
Vijay,
        But it seems like self-heal's fd is able to perform 'writes'.
        Shouldn't it be uniform if it is the problem with xfs?

The problem is not with xfs alone. It is due to a combination of several
factors, including the disk sector size, the xfs sector size and the nature
of the writes being performed. With cache=none, qemu opens the image with
O_DIRECT, which requires proper alignment for write operations to succeed.
Self-heal does not open() with O_DIRECT, so the writes it initiates go
through.
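
As a rough illustration of that alignment requirement, here is a minimal
C sketch (the 4096-byte sector size and the "testfile" name are only
assumptions for the example, not details from this setup):

    /* With O_DIRECT, the buffer address, file offset and I/O length
     * generally must be multiples of the logical sector size;
     * otherwise write() fails with EINVAL. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t align = 4096;   /* assumed logical sector size */
        const size_t len   = 4096;
        void *buf;

        int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (posix_memalign(&buf, align, len) != 0) return 1;
        memset(buf, 0, len);

        /* Aligned buffer and length: expected to succeed. */
        if (write(fd, buf, len) < 0)
            perror("aligned write");

        /* Length not a multiple of the sector size: typically
         * fails with EINVAL under O_DIRECT. */
        if (write(fd, buf, len - 1) < 0)
            perror("misaligned write");

        free(buf);
        close(fd);
        return 0;
    }

Run on a filesystem whose sector size does not permit the smaller write,
the second write() would be the one failing, which is the kind of failure
qemu runs into with cache=none.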

I was also guessing it could be related to O_DIRECT. Any way to fix that?

One option might be to enable the "network.remote-dio" option on the
glusterfs volume. Fabio - can you please check if this works?
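
For reference, the option can usually be set from the gluster CLI; the
volume name below is just a placeholder:

    gluster volume set <VOLNAME> network.remote-dio enable

As far as I understand, with remote-dio enabled the client protocol layer
filters out the O_DIRECT open flag, so the bricks are no longer held to
the strict alignment rules of direct I/O.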

Wonder why it has to happen only on one of the bricks.

I suspect that the bricks are not completely identical. Hence it does go
through on one and fails on the other.
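
One way to compare the bricks (the commands here are only a suggestion,
not something already tried in this thread) is to check the logical
sector size reported by "blockdev --getss" for each brick's block device
and the sectsz values printed by "xfs_info" for each brick's mount point;
a 512 vs 4096 mismatch would explain direct writes succeeding on one
brick and failing on the other.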

-Vijay




_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
