Thanks Vic & Simon, I’m totally cool with “it depends.” The solution guidance is 
to achieve a highly available file system, and there is dark fibre between the two 
locations.  FileNet is the application, and they want two things: the ability to 
write in both locations (maybe close to the same time, though not necessarily to the 
same files) and protection against any site failure.  So in my mind my 
Scenario 1 would work as long as copies=2 and a restripe are acceptable.  
In my Scenario 2 I would still have to restripe if the SAN in Site 1 went down.

I’m looking for the simplest approach that provides the greatest availability.
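
For reference, here is a minimal sketch of the copies=2 piece of Scenario 1. The 
filesystem name gpfs0 is just a placeholder, and this assumes the filesystem was 
created with a maximum replication of 2:

    # Check the current default and maximum replication settings
    mmlsfs gpfs0 -m -M -r -R

    # Raise the default metadata/data replication to 2
    mmchfs gpfs0 -m 2 -r 2

    # Re-replicate existing files to match the new defaults
    mmrestripefs gpfs0 -R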



From: <[email protected]> on behalf of "Simon Thompson 
(Research Computing - IT Services)" <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Thursday, July 21, 2016 at 8:02 AM
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] NDS in Two Site scenario

It depends.

What are you protecting against?

Either will work depending on your acceptable failure modes. I'm assuming here 
that you are using copies=2 to replicate the data, and that the NSD devices 
have different failure groups per site.
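
As an illustration only, NSD stanzas with one failure group per site might look 
like this (the NSD names, device paths, and server names are made up):

    %nsd:
      nsd=site1nsd1
      device=/dev/dm-1
      servers=server1,server2
      usage=dataAndMetadata
      failureGroup=1
      pool=system

    %nsd:
      nsd=site2nsd1
      device=/dev/dm-2
      servers=server3,server4
      usage=dataAndMetadata
      failureGroup=2
      pool=system

The stanza file would be fed to mmcrnsd -F, and with copies=2 GPFS places the two 
copies of each block in different failure groups, i.e. one per site.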

In the second example, if you were to lose the NSD servers in Site 1, but not 
the SAN, you would continue to have 2 copies of data written as the NSD servers 
in Site 2 could write to the SAN in Site 1.

In the first example you would need to restripe the file-system when bringing 
Site 1 back online to ensure data is replicated.
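
As a sketch (again assuming a filesystem called gpfs0), the recovery when Site 1 
comes back might look like:

    # Bring the Site 1 disks back up once the site is online again
    mmchdisk gpfs0 start -a

    # Restore full replication for anything written while Site 1 was down
    mmrestripefs gpfs0 -r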

Simon

From: <[email protected]> on behalf of "[email protected]" <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Thursday, 21 July 2016 at 13:45
To: "[email protected]" <[email protected]>
Subject: Re: [gpfsug-discuss] NDS in Two Site scenario

This is where my confusion sits.  So if I have two sites, and two NSD nodes per 
site with one NSD each (to keep it simple), do I just present the physical LUN in 
Site 1 to the Site 1 NSD nodes and the physical LUN in Site 2 to the Site 2 NSD 
nodes?  Or do I present the physical LUN in Site 1 to all four NSD nodes and the 
same for Site 2?  (Assuming SAN and not direct-attached in this case.)  I know I’m 
being persistent, but this for some reason confuses me.

Site 1
  NSD Node1
      --- NSD1 --- Physical LUN1 from SAN1
  NSD Node2

Site 2
  NSD Node3
      --- NSD2 --- Physical LUN2 from SAN2
  NSD Node4


Or


Site 1
  NSD Node1
      --- NSD1 --- Physical LUN1 from SAN1
      --- NSD2 --- Physical LUN2 from SAN2
  NSD Node2

Site 2
  NSD Node3
      --- NSD2 --- Physical LUN2 from SAN2
      --- NSD1 --- Physical LUN1 from SAN1
  NSD Node4

Site 3
  Node5 (quorum)
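
Put another way, the difference between the two layouts is just the servers= list 
on each NSD; this is only a sketch with made-up NSD, device, and node names:

    # Layout 1: each NSD is served only by its local site's nodes
    %nsd: nsd=nsd1 device=/dev/lun1 servers=node1,node2 failureGroup=1
    %nsd: nsd=nsd2 device=/dev/lun2 servers=node3,node4 failureGroup=2

    # Layout 2: LUNs are zoned to all four nodes, so every node is listed,
    # with the local pair first as the preferred servers
    %nsd: nsd=nsd1 device=/dev/lun1 servers=node1,node2,node3,node4 failureGroup=1
    %nsd: nsd=nsd2 device=/dev/lun2 servers=node3,node4,node1,node2 failureGroup=2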



From: <[email protected]> on behalf of Ken Hill <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Wednesday, July 20, 2016 at 7:02 PM
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] NDS in Two Site scenario

Yes - it is a cluster.

The sites should NOT be further apart than a MAN or campus network. If you're 
looking to do this over a larger distance, it would be best to choose another 
GPFS solution (Multi-Cluster, AFM, etc).

Regards,

Ken Hill
Technical Sales Specialist | Software Defined Solution Sales
IBM Systems
________________________________

Phone: 1-540-207-7270
E-mail: [email protected]


2300 Dulles Station Blvd
Herndon, VA 20171-6133
United States







From:        "[email protected]" <[email protected]>
To:        gpfsug main discussion list <[email protected]>
Date:        07/20/2016 07:33 PM
Subject:        Re: [gpfsug-discuss] NDS in Two Site scenario
Sent by:        [email protected]

________________________________



So in this scenario, Ken, can Server3 see any disks in Site1?

From: <[email protected]> on behalf of Ken Hill <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Wednesday, July 20, 2016 at 4:15 PM
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] NDS in Two Site scenario


Site1                           Site2
Server1 (quorum 1)              Server3 (quorum 2)
Server2                         Server4


              SiteX
              Server5 (quorum 3)




You need to set up another site (or server) that is at least power-isolated (if 
not completely infrastructure-isolated) from Site1 and Site2. You would then set 
up a quorum node at that site/location. This ensures you can still access your 
data even if one of your sites goes down.

You can further isolate failure by increasing the number of quorum nodes (odd numbers).

The way quorum works is: a majority of the quorum nodes must be up to survive an 
outage.

- With 3 quorum nodes you can tolerate 1 quorum node failure and continue 
filesystem operations.
- With 5 quorum nodes you can tolerate 2 quorum node failures and continue 
filesystem operations.
- With 7 quorum nodes you can tolerate 3 quorum node failures and continue 
filesystem operations.
- etc

Please see 
http://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/ibmspectrumscale42_content.html?view=kc 
for more information about quorum and tiebreaker disks.
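
As a rough sketch, designating a tiebreaker quorum node at the third site could 
look like this (server5 is the placeholder name from the diagram above):

    # Add the third-site node and mark it as a quorum node
    mmaddnode -N server5
    mmchnode --quorum -N server5

    # Verify the quorum designations
    mmlscluster

    # (With only two quorum nodes, tiebreaker disks are an alternative, e.g.
    #  mmchconfig tiebreakerDisks="nsd1")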








From:        "[email protected]" <[email protected]>
To:        gpfsug main discussion list <[email protected]>
Date:        07/20/2016 04:47 PM
Subject:        [gpfsug-discuss] NDS in Two Site scenario
Sent by:        [email protected]

________________________________




For some reason this concept is a round peg that doesn’t fit the square hole 
inside my brain.  Can someone please explain the best practice for setting up 
two sites in the same cluster?  I get that I would likely have two NSD nodes in 
Site 1 and two NSD nodes in Site 2.  What I don’t understand are the failure 
scenarios and what would happen if I lose one node or, worse, a whole site goes 
down.  Do I solve this by having Scale replication set to 2 for all my files?  I 
mean, with a single site I think I get it; it’s when there are two datacenters 
that it gets confusing, and I typically don’t want two clusters.
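
To make the "replication set to 2" idea concrete, here is a hedged sketch (the 
filesystem name, stanza file, and mount point are placeholders):

    # NSDs described in nsd.stanza, with failureGroup=1 for Site 1
    # and failureGroup=2 for Site 2
    mmcrnsd -F nsd.stanza

    # Create the filesystem with default and maximum replication of 2,
    # so each block and its copy land in different failure groups (sites)
    mmcrfs gpfs0 -F nsd.stanza -m 2 -M 2 -r 2 -R 2 -T /gpfs/gpfs0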



Mark R. Bush | Solutions Architect
Mobile: 210.237.8415 | [email protected]
Sirius Computer Solutions | www.siriuscom.com
10100 Reunion Place, Suite 500, San Antonio, TX 78216





_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
