Hi, 

Although I do not have experience with VM live migration, IIUC it has to do 
with a different server (and as a result a new glusterfs client process) 
taking over the operation and management of the VM. 
If this is a correct assumption, then I think this could be the result of the 
same caching bug in 3.7.5 that I mentioned some time back, which is fixed in 
3.7.6. 
The issue could cause the new client to not see the correct size and block 
count of the file, leading to errors in reads (perhaps triggered by the 
restart of the VM) and writes on the image. 

-Krutika 
----- Original Message -----

> From: "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> To: "gluster-users" <gluster-users@gluster.org>
> Sent: Thursday, November 5, 2015 3:53:25 AM
> Subject: [Gluster-users] File Corruption with shards - 100% reproducable

> Gluster 3.7.5, gluster repos, on proxmox (debian 8)

> I have an issue with VM images (qcow2) being corrupted.

> - gluster replica 3, shards on, shard size = 256MB
> - Gluster nodes are all also VM host nodes
> - VM image mounted from qemu via gfapi

> To reproduce
> - Start VM
> - live migrate it to another node
> - VM will rapidly become unresponsive and have to be stopped
> - attempting to restart the VM results in a "qcow2: Image is corrupt; cannot
> be opened read/write" error.

> I have never seen this before. 100% reproducible with shards on, never
> happens with shards off.

> I don't think this happens when using NFS to access the shard volume; I
> suspect this is because with NFS it is still accessing the one node, whereas
> with gfapi it's handed off to the node the VM is running on.

> --
> Lindsay

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
