On Sat, Apr 22, 2017 at 12:48 PM, Pranith Kumar Karampuri <[email protected]> wrote:
> We are evolving a document to answer this question; the WIP document is at
> https://github.com/karthik-us/glusterdocs/blob/1c97001d482923c2e6a9c566b3faf89d0c32b269/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it.md
>
> Let us know if you still have doubts after reading this.
>
> On Tue, Apr 18, 2017 at 8:41 AM, Mahdi Adnan <[email protected]> wrote:
>
>> Hi,
>>
>> We have a replica 2 volume and we have an issue with setting a proper quorum.
>>
>> The volume is used as a datastore for VMware/oVirt. The current quorum settings are:
>>
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.server-quorum-ratio: 51%
>>
>> Losing the first node, which hosts the first bricks, takes the storage domain in oVirt offline, but the FUSE mount point keeps working fine (read/write).
>>
>> Losing the second node, or any other node that hosts only the second bricks of the replica pairs (i.e. the 2nd or 4th nodes), does not affect the oVirt storage domain.
>>
>> As I understand it, losing the first brick in a replica 2 volume should make the volume read-only, so how does the FUSE mount keep working read/write?
>>
>> Also, can we add an arbiter node to the current replica 2 volume without losing data? If yes, does the rebalance bug "Bug 1440635" affect this process?
>
> I remember we had one user in Red Hat who wanted to do the same procedure on his production workload with 3.8.x upstream bits and it worked fine. I forget which version it was. Ravi, do you remember?
>
>> And what happens if we set "cluster.quorum-type: none" and the first node goes offline?
>
> cluster.quorum-type none is equivalent to no quorum, so files can go into split-brain and the volume can serve stale reads. Please don't do this on 3-way replica/arbiter volumes unless you know what you are doing.
>
>> Thank you.
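For anyone following along, the client-quorum behaviour discussed above can be inspected and adjusted with the gluster CLI. A minimal sketch, assuming a volume named `vmstore` (a placeholder), run from any node in the trusted pool:

```shell
# Inspect the current quorum-related options on the volume
gluster volume get vmstore cluster.quorum-type
gluster volume get vmstore cluster.server-quorum-type

# With cluster.quorum-type "auto" on a replica 2 volume, writes are allowed
# only while the first brick of each replica pair is up; losing that brick
# makes the affected files read-only. An explicit fixed quorum of 1 keeps
# the volume writable with a single brick up, at the cost of split-brain risk:
gluster volume set vmstore cluster.quorum-type fixed
gluster volume set vmstore cluster.quorum-count 1
```

Whether a fixed quorum of 1 is acceptable depends entirely on how much split-brain exposure the workload can tolerate; for VM images it is usually not recommended.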
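On the arbiter question, a replica 2 volume can be converted in place by adding one arbiter brick per replica pair with `add-brick`. A hedged sketch, where the volume name `vmstore`, host `arb1`, and brick paths are all placeholders (a 2x2 distributed-replicate volume needs one arbiter brick per pair):

```shell
# Convert replica 2 -> replica 3 arbiter 1 by adding one arbiter brick
# per replica pair (hostnames and paths below are illustrative only)
gluster volume add-brick vmstore replica 3 arbiter 1 \
    arb1:/bricks/vmstore-arb1 arb1:/bricks/vmstore-arb2

# Self-heal then populates the arbiter bricks (metadata only); watch progress:
gluster volume heal vmstore info
```

The arbiter brick stores only file metadata, not data, so it can live on a much smaller disk than the data bricks.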
>>
>> _______________________________________________
>> Gluster-users mailing list
>> [email protected]
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
> Pranith
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
