The Quorum buster node :P
-- Lauz
On 12/11/2016 20:39, Sobey, Richard A wrote:
Sorry... one of the quorum nodes.
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of
[email protected]
Sent: 12 November 2016 20:24
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] How to clear stale entries in GUI log
On Fri, 11 Nov 2016 08:50:00 +0000, "Sobey, Richard A" said:
Question: when I upgrade to the new PTF when it's available, can I
install it first on just the GUI node (which happens to be the quorum
server for the cluster)
*the* quorum server, not "one of the quorum nodes"?
Best practice is to have enough nodes designated as quorum nodes so even if one
of them is taken down for upgrade or maintenance, the cluster as a whole
remains up and serving data. That way, you can do rolling installs of patches
without taking an outage.
The number to pick depends on your config - we have one cluster with 4 NSD
servers, where we've defined all 4 as quorum nodes. That way, as long as 3 of
them (half plus 1) are up, the cluster stays up. We have another stretch
cluster with 10 servers (5 at each site), and we defined 3 quorum nodes at our
main site, and 2 at the remote site, specifically so that if we did lose the
10G link between sites, the main site would retain quorum and stay up.
(Losing the remote site is, in our setup, *much* less critical than ensuring
the main site stays up. We replicate between the two, and if the remote is
down, and thus falls behind, mmrestripefs is available for cleaning up)
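The "half plus 1" majority rule above is easy to sanity-check for any quorum count. A minimal sketch (not GPFS code, just the arithmetic the layouts above rely on):

```python
def quorum_threshold(num_quorum_nodes: int) -> int:
    """Minimum number of quorum nodes that must stay up
    for the cluster to retain quorum: floor(n/2) + 1."""
    return num_quorum_nodes // 2 + 1

# 4 quorum nodes (the 4-NSD-server cluster): need 3 up
print(quorum_threshold(4))  # 3

# Stretch cluster with 5 quorum nodes (3 main site + 2 remote):
# if the inter-site link drops, the main site's 3 nodes still
# meet the threshold, so the main site keeps quorum
print(quorum_threshold(5))  # 3
```

This is why the stretch layout puts 3 of the 5 quorum nodes at the main site: a site split leaves the majority exactly where you want it.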
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss