Hi,

We fixed this (thanks to Satheesaran for recreating the issue and to Raghavendra G and Pranith for the RCA) as recently as last week. The bug was in the DHT-shard interaction.
The patches are https://review.gluster.org/#/c/16709/ followed by https://review.gluster.org/#/c/14419, to be applied in that order (a rough sketch of one way to apply them follows below the quoted message). Do you mind giving these a try before they make it into the next .x releases of 3.8, 3.9 and 3.10? I could make a src tarball with these patches applied if you like.

-Krutika

On Sat, Feb 25, 2017 at 8:56 PM, Mahdi Adnan <[email protected]> wrote:

> Hi,
>
> We have a volume of 4 servers, 8x2 bricks (Distributed-Replicate), hosting
> VMs for ESXi. I tried expanding the volume with 8 more bricks, and after
> rebalancing the volume, the VMs got corrupted.
>
> The Gluster version is 3.8.9 and the volume is using the default parameters
> of group "virt" plus sharding.
>
> I created a new volume without sharding and got the same issue after the
> rebalance.
>
> I checked the reported bugs and the mailing list, and I noticed it's a bug
> in Gluster.
>
> Does it affect all Gluster versions? Is there any workaround or a
> volume setup that is not affected by this issue?
>
> Thank you.
>
> --
> Respectfully
> Mahdi A. Mahdi
>
> _______________________________________________
> Gluster-users mailing list
> [email protected]
> http://lists.gluster.org/mailman/listinfo/gluster-users
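For anyone who wants to try the two changes before the releases land, a rough
sketch of applying them on top of a source checkout (assumptions: a glusterfs
git checkout at the v3.8.9 tag, and Gerrit's standard
refs/changes/<last-two-digits>/<change>/<patchset> fetch convention; the
trailing /1 patchset numbers are placeholders, so check each review page for
the current patchset):

    # check out the release the reporter is running
    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    git checkout v3.8.9

    # fetch and apply the two changes, in the order given above
    git fetch https://review.gluster.org/glusterfs refs/changes/09/16709/1 && git cherry-pick FETCH_HEAD
    git fetch https://review.gluster.org/glusterfs refs/changes/19/14419/1 && git cherry-pick FETCH_HEAD

Re-testing the reported scenario is then the usual expand-and-rebalance
sequence (volume and brick names below are examples, not from the report):

    # add new bricks in multiples of the replica count, then rebalance
    gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status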
