Yes - it is a single cluster. The sites should NOT be farther apart than a MAN or campus network allows. If you're looking to do this over a larger distance, it would be best to choose another GPFS solution (multi-cluster, AFM, etc.).
Regards,

Ken Hill
Technical Sales Specialist | Software Defined Solution Sales, IBM Systems
Phone: 1-540-207-7270 | E-mail: [email protected]
2300 Dulles Station Blvd, Herndon, VA 20171-6133, United States


From: "[email protected]" <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 07/20/2016 07:33 PM
Subject: Re: [gpfsug-discuss] NDS in Two Site scenario
Sent by: [email protected]

So in this scenario, Ken, can server3 see any disks in site1?

From: <[email protected]> on behalf of Ken Hill <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Wednesday, July 20, 2016 at 4:15 PM
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] NDS in Two Site scenario

    Site1                 Site2
    Server1 (quorum 1)    Server3 (quorum 2)
    Server2               Server4

    SiteX
    Server5 (quorum 3)

You need to set up another site (or server) that is at least power isolated (if not completely infrastructure isolated) from Site1 and Site2. You would then set up a quorum node at that site/location. This ensures you can still access your data even if one of your sites goes down.

You can further isolate failures by increasing the number of quorum nodes (always an odd number). The way quorum works is: a majority of the quorum nodes must be up to survive an outage.

- With 3 quorum nodes you can have 1 quorum node failure and continue filesystem operations.
- With 5 quorum nodes you can have 2 quorum node failures and continue filesystem operations.
- With 7 quorum nodes you can have 3 quorum node failures and continue filesystem operations.
- etc.

Please see http://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/ibmspectrumscale42_content.html?view=kc for more information about quorum and tiebreaker disks.

Ken Hill
Technical Sales Specialist | Software Defined Solution Sales, IBM Systems
Phone: 1-540-207-7270 | E-mail: [email protected]
2300 Dulles Station Blvd, Herndon, VA 20171-6133, United States


From: "[email protected]" <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 07/20/2016 04:47 PM
Subject: [gpfsug-discuss] NDS in Two Site scenario
Sent by: [email protected]

For some reason this concept is a round peg that doesn't fit the square hole inside my brain. Can someone please explain the best practice for setting up two sites in the same cluster? I get that I would likely have two NSD nodes in site 1 and two NSD nodes in site 2. What I don't understand are the failure scenarios and what would happen if I lose one node or, worse, a whole site goes down. Do I solve this by having Scale replication set to 2 for all my files? I mean, a single site I think I get; it's the two-datacenter case, where I typically don't want two clusters, that trips me up.

Mark R. Bush | Solutions Architect
Mobile: 210.237.8415 | [email protected]
Sirius Computer Solutions | www.siriuscom.com
10100 Reunion Place, Suite 500, San Antonio, TX 78216
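To make the quorum arithmetic from Ken's reply above concrete, here is a minimal Python sketch of the majority rule; it is an illustration only, not GPFS code, and the (n - 1) // 2 tolerance simply restates the 3/5/7-node bullets.

```python
# Minimal illustration of the GPFS quorum majority rule described above:
# the cluster stays up only while a majority of its quorum nodes are up.

def tolerable_failures(quorum_nodes: int) -> int:
    """How many quorum nodes can fail while a majority remains up."""
    return (quorum_nodes - 1) // 2

def has_quorum(quorum_nodes: int, nodes_up: int) -> bool:
    """True if the surviving quorum nodes still form a majority."""
    return nodes_up > quorum_nodes // 2

for n in (3, 5, 7):
    print(f"{n} quorum nodes -> tolerates {tolerable_failures(n)} failure(s)")

# The three-location layout from the thread: one quorum node each in Site1,
# Site2, and the isolated SiteX.  Losing an entire data site still leaves
# 2 of 3 quorum nodes up, so the cluster keeps quorum.
print(has_quorum(3, nodes_up=2))   # True  - one site down
print(has_quorum(3, nodes_up=1))   # False - two locations down
```

This is also why an even quorum count buys nothing: 4 quorum nodes still need 3 up, so they tolerate the same single failure as 3.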
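Mark's other question, whether setting Scale replication to 2 covers the loss of a whole site, is separate from quorum: quorum decides whether the cluster stays up, while replication decides whether a copy of the data is still readable. The sketch below is a conceptual model only, not GPFS commands, and it assumes (an assumption added here, not stated in the thread) that each data site's NSDs sit in their own failure group, so a replication factor of 2 places one copy per site.

```python
# Conceptual model of the two-site failure scenarios discussed in this thread.
# Assumption for illustration only: replication is set to 2 and each data
# site's disks form their own failure group, giving one copy per site.

SITES = {
    "Site1": {"quorum_nodes": 1, "holds_data": True},    # Server1 / Server2
    "Site2": {"quorum_nodes": 1, "holds_data": True},    # Server3 / Server4
    "SiteX": {"quorum_nodes": 1, "holds_data": False},   # tiebreaker only
}

def cluster_survives(failed_site: str) -> bool:
    """Cluster stays up if a majority of quorum nodes is still reachable."""
    total = sum(s["quorum_nodes"] for s in SITES.values())
    up = total - SITES[failed_site]["quorum_nodes"]
    return up > total // 2

def data_readable(failed_site: str, replicas: int = 2) -> bool:
    """Data stays readable if the cluster kept quorum and at least one
    copy of every block lives outside the failed site."""
    if not cluster_survives(failed_site):
        return False
    if not SITES[failed_site]["holds_data"]:
        return True                 # only the tiebreaker location was lost
    return replicas >= 2            # second copy lives at the other data site

for site in SITES:
    print(f"{site} down: "
          f"{'cluster up' if cluster_survives(site) else 'cluster down'}, "
          f"{'data readable' if data_readable(site) else 'data at risk'}")
```

Under that model, losing either data site leaves the cluster up (SiteX plus the surviving site hold 2 of 3 quorum nodes) and the data readable from the surviving copy; with replication left at 1, the cluster would stay up but any blocks stored only at the failed site would be unreachable.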
