On 01/25/2014 02:28 PM, Pranith Kumar Karampuri wrote:
Vijay,
But it seems like self-heal's fd is able to perform writes. Shouldn't the behaviour be uniform if the problem is with xfs?

The problem is not with xfs alone. It is due to a combination of several factors, including the disk sector size, the xfs sector size and the nature of the writes being performed. With cache=none, qemu opens the image with O_DIRECT, which requires proper alignment for write operations to succeed. Self-heal does not open() with O_DIRECT, and hence write operations initiated by self-heal go through.
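
To illustrate, here is a rough, untested C sketch (the file name and the hardcoded 4096-byte sector size are just placeholders): under O_DIRECT, a write whose buffer address, length and offset are not all multiples of the backing store's sector size fails with EINVAL, while an aligned write on the same fd goes through. Without O_DIRECT, as in the self-heal case, both writes would succeed.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder file on the brick; 4096 stands in for the sector size. */
    int fd = open("scratch.img", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* 1024-byte write from a plain malloc'd buffer: neither the address
     * nor the length is a multiple of 4096, so this fails with EINVAL
     * when the backing store uses 4K sectors. */
    char *unaligned = malloc(1024);
    memset(unaligned, 0, 1024);
    if (pwrite(fd, unaligned, 1024, 0) < 0)
        fprintf(stderr, "unaligned write: %s\n", strerror(errno));

    /* Address, length and offset are all multiples of 4096, so the same
     * O_DIRECT fd accepts this write. */
    void *aligned = NULL;
    if (posix_memalign(&aligned, 4096, 4096) == 0) {
        memset(aligned, 0, 4096);
        if (pwrite(fd, aligned, 4096, 0) < 0)
            fprintf(stderr, "aligned write: %s\n", strerror(errno));
        free(aligned);
    }

    free(unaligned);
    close(fd);
    return 0;
}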

-Vijay


Pranith
----- Original Message -----
From: "Vijay Bellur" <[email protected]>
To: "Fabio Rosati" <[email protected]>, "Pranith Kumar Karampuri" 
<[email protected]>
Cc: "[email protected] List" <[email protected]>
Sent: Saturday, January 25, 2014 1:23:52 PM
Subject: Re: [Gluster-users] Replication delay

On 01/24/2014 09:24 PM, Fabio Rosati wrote:



The block size is the same, 4096 bytes.
I did some further investigation and it seems the problem happens only with VM disk images internally formatted with a block size of 1024 bytes. There are no problems with disk images formatted with a block size of 4096 bytes. Anyway, I don't know whether this is a coincidence.

Do you think this could be the origin of the problem? If so, how can I solve it?
In the links posted by Vijay, someone suggests starting the VM with cache != none, but this will prevent live migration, AFAIK.
Another solution may be to recreate the volume, backing it with XFS partitions formatted with a different block size (smaller? 1024 bytes?). That would be a painful option, but if it solves the problem I'll go for it.


A lower sector size (512) for xfs has been observed to be useful in overcoming this problem, since the guest's 1024-byte writes are then still multiples of the backing filesystem's sector size.
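
As a quick check before reformatting a brick, something like the following untested sketch (the device path is just an example) reports the logical and physical sector sizes of the backing device; as far as I know, mkfs.xfs will not accept a sector size smaller than the device's logical sector size.

#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* Example device path; pass the real brick device as argv[1]. */
    const char *dev = argc > 1 ? argv[1] : "/dev/sdb1";
    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int logical = 0;
    unsigned int physical = 0;
    if (ioctl(fd, BLKSSZGET, &logical) == 0)
        printf("logical sector size:  %d bytes\n", logical);
    if (ioctl(fd, BLKPBSZGET, &physical) == 0)
        printf("physical sector size: %u bytes\n", physical);

    close(fd);
    return 0;
}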

Another solution might be to use the logical_block_size=4096 option, as referred to here [1].

-Vijay

[1] https://bugzilla.redhat.com/show_bug.cgi?id=997839#c7





_______________________________________________
Gluster-users mailing list
[email protected]
http://supercolony.gluster.org/mailman/listinfo/gluster-users
