How did you replicate the issue?
Next week I'll spin up a Gluster storage setup and try the same steps,
both to observe the corruption and to test any patches from the Gluster team.

On 25 Feb 2017, 4:31 PM, "Mahdi Adnan" <[email protected]> wrote:

Hi,


We have a volume across 4 servers with 8x2 bricks (Distributed-Replicate)
hosting VMs for ESXi. I tried expanding the volume with 8 more bricks, and
after rebalancing the volume, the VMs got corrupted.
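For reference, the expansion described above would typically be done with the Gluster CLI along these lines. This is only a sketch: the volume name, server hostnames, and brick paths below are hypothetical, as the original post does not give them.

```shell
# Hypothetical names: "myvol", server5-8, /bricks/b1 and /bricks/b2.
# Add 8 new bricks (4 replica-2 pairs) to an existing
# distributed-replicate volume:
gluster volume add-brick myvol replica 2 \
  server5:/bricks/b1 server6:/bricks/b1 \
  server5:/bricks/b2 server6:/bricks/b2 \
  server7:/bricks/b1 server8:/bricks/b1 \
  server7:/bricks/b2 server8:/bricks/b2

# Trigger the rebalance so existing data is redistributed
# onto the new bricks (the step after which corruption appeared):
gluster volume rebalance myvol start

# Check rebalance progress:
gluster volume rebalance myvol status
```

These commands must run against a live Gluster cluster, so they are shown here only to pin down which operation is meant by "expanding" and "rebalancing".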

Gluster version is 3.8.9 and the volume is using the default parameters of
group "virt" plus sharding.

I created a new volume without sharding and got the same issue after the
rebalance.

I checked the reported bugs and the mailing list, and I noticed it's a bug
in Gluster.

Does it affect all Gluster versions? Is there any workaround, or a
volume setup that is not affected by this issue?


Thank you.

-- 

Respectfully
*Mahdi A. Mahdi*


_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users