Hello from the Philippines!

Apparently we're the first company here to use Gluster and Enomaly, and
we've hit the same well-documented race-condition lock-up. I read that
Gluster 3.2.2 does not suffer from this problem, so we upgraded.

Catch: We lost 1 of 4 storage nodes a week before the upgrade.
Issue: It seems that bringing that node back online is interfering with
self-heal on a number of files (VM images, really).

How do we resolve the "cannot self-heal" problem?
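(For context, we've been triggering self-heal the way the pre-3.3 docs describe, with a recursive stat from a client mount. A minimal sketch is below; /mnt/glustervol is an example path, not our actual mount point.)

```shell
#!/bin/sh
# Force a lookup on every file from a FUSE client mount, which in
# Gluster 3.2.x is what triggers self-heal on each file it touches.
# /mnt/glustervol is an example path -- substitute your own mount.
MOUNT="${1:-/mnt/glustervol}"

# Recursively stat the whole tree; any file needing heal gets checked
# (and repaired, if possible) as it is looked up.
find "$MOUNT" -noleaf -print0 | xargs -0 stat >/dev/null
```

This walks the full tree, so it is slow on large volumes, but it is the mechanism the 3.2 documentation points at; the files that still log "cannot self-heal" after a full crawl are the ones we're stuck on.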


Regards,
Andro Mauricio

_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users