On 7/12/2015 9:03 PM, Udo Giacomozzi wrote:
> All VMs were running on machine #1 - the two other machines (#2 and
> #3) were *idle*.
> Gluster was fully operating (no healing) when I rebooted machine #2.
> For other reasons I had to reboot machines #2 and #3 a few times, but
> since all VMs were running on machine #1 and nothing on the other
> machines was accessing Gluster files, I was confident that this
> wouldn't disturb Gluster.
> In any case, this means that I rebooted Gluster nodes during a healing
> process.
> After a few minutes, Gluster files began showing corruption - up to
> the point that the qcow2 files became unreadable and all VMs stopped
> working.
Udo, it occurs to me that if your VMs were running on #2 & #3 and you
live-migrated them to #1 prior to rebooting #2/#3, then you would indeed
get rapid, progressive VM corruption.
However, it wouldn't be due to the heal process, but rather to the live
migration with "performance.stat-prefetch" enabled. In my experience that
combination always leads to qcow2 files becoming corrupted and unusable.
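
For anyone hitting this, a workaround sketch: disable the option on the
volume before live-migrating VMs. The volume name "gv0" below is a
placeholder — substitute your own volume name.

```shell
# Show the volume's current configuration, including any
# reconfigured options ("gv0" is a placeholder volume name).
gluster volume info gv0

# Disable stat-prefetch before live migration; it can be
# re-enabled afterwards if desired.
gluster volume set gv0 performance.stat-prefetch off
```

Note this changes behavior for all clients of the volume, so plan it
for a window when the brief cache behavior change is acceptable.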
--
Lindsay Mathieson
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users