The logs are at /var/log/glusterfs/<hyphenated-path-to-the-mountpoint>.log
OK. So what do you observe when you set group virt to on?

# gluster volume set <VOL> group virt

-Krutika

----- Original Message -----
> From: "Lindsay Mathieson" <[email protected]>
> To: "Krutika Dhananjay" <[email protected]>
> Cc: "gluster-users" <[email protected]>
> Sent: Friday, November 13, 2015 11:57:15 AM
> Subject: Re: [Gluster-users] File Corruption with shards - 100% reproducable
>
> On 12 November 2015 at 15:46, Krutika Dhananjay < [email protected] > wrote:
>
> > OK. What do the client logs say?
>
> Dumb question - which logs are those?
>
> > Could you share the exact steps to recreate this, and I will try it locally
> > on my setup?
>
> I'm running this on a 3-node Proxmox cluster, which makes the VM creation &
> migration easy to test.
>
> Steps:
> - Create a 3-node gluster datastore using the Proxmox VM host nodes
> - Add the gluster datastore as a storage device to Proxmox
>   * qemu VMs use gfapi to access the datastore
>   * Proxmox also adds a FUSE mount for easy access
> - Create a VM on the gluster storage, QCOW2 format. I just created a simple
>   Debian MATE VM
> - Start the VM, open a console to it
> - Live-migrate the VM to another node
> - It will rapidly barf itself with disk errors
> - Stop the VM
> - qemu will show file corruption (many, many errors)
>   * qemu-img check <vm disk image>
>   * qemu-img info <vm disk image>
>
> Repeating the process with sharding off produces no errors.
>
> > Also, want to see the output of 'gluster volume info'.
>
> I've trimmed settings down to a bare minimum. This is a test gluster cluster
> so I can do with it as I wish.
> gluster volume info
>
> Volume Name: datastore1
> Type: Replicate
> Volume ID: 238fddd0-a88c-4edb-8ac5-ef87c58682bf
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: vnb.proxmox.softlog:/mnt/ext4
> Brick2: vng.proxmox.softlog:/mnt/ext4
> Brick3: vna.proxmox.softlog:/mnt/ext4
> Options Reconfigured:
> performance.strict-write-ordering: on
> performance.readdir-ahead: off
> cluster.quorum-type: auto
> features.shard: on
>
> --
> Lindsay
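[For readers following the thread, the commands referenced above collected in one place. `<VOL>` and `<vm disk image>` are placeholders for the actual volume name and image path; `group virt` applies GlusterFS's predefined "virt" option group, and `qemu-img check` should only be run against an image whose VM is stopped.]

```shell
# Apply the predefined "virt" option group (Krutika's suggestion)
gluster volume set <VOL> group virt

# Confirm the resulting settings
gluster volume info <VOL>

# After reproducing the live-migration failure, stop the VM and
# inspect the qcow2 image for corruption
qemu-img check <vm disk image>
qemu-img info <vm disk image>
```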
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
