Yes, we do this when we really need to take a remote FS offline, which we try to avoid at all costs unless we have a maintenance window.

Note that if you only export via SMB, you don't see the same effect (unless 
something has changed recently).

Simon

From: <[email protected]> on behalf of 
"[email protected]" <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Thursday, 3 May 2018 at 15:41
To: "[email protected]" <[email protected]>
Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Thanks Mathias,
Yes, I do understand the concern: if one of the remote file systems goes down 
abruptly, the others will go down too.

However, I suppose we could bring down one of the file systems before a planned 
downtime?
For example, by unexporting that file system over NFS/SMB before the downtime?
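A rough sketch of what that unexport step could look like, assuming the CES `mmnfs` and `mmsmb` commands are used for export management; the paths and export names below are hypothetical, and the exact `--client` syntax should be checked against your release's documentation:

```shell
# Hypothetical path/export name, for illustration only.
# Remove the NFS export for the file system going into maintenance:
mmnfs export remove /gpfs/remote-fs1

# Remove the matching SMB export:
mmsmb export remove remote-fs1-share

# After the maintenance window, re-create the exports:
mmnfs export add /gpfs/remote-fs1 --client "*(Access_Type=RO)"
mmsmb export add remote-fs1-share /gpfs/remote-fs1
```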

I would not want to be in a situation where I have to bring down all the 
remote file systems because of a planned downtime of one of the remote clusters.

Regards,
Lohit

On May 3, 2018, 7:41 AM -0400, Mathias Dietz <[email protected]>, wrote:

Hi Lohit,

>I am thinking of using a single CES protocol cluster, with remote mounts from 
>3 storage clusters.
Technically this should work fine (assuming all 3 clusters use the same 
uids/gids). However, this has not been tested in our test lab.


>One thing to watch, be careful if your CES root is on a remote fs, as if that 
>goes away, so do all CES exports.
The CES root file system is not the only concern: the whole CES cluster will go 
down if any remote file system with NFS exports becomes unavailable.
e.g. if remote cluster 1 is not available, the CES cluster will unmount the 
corresponding file system, which will lead to an NFS failure on all CES nodes.
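To see which file systems currently back NFS exports (and would therefore take CES down if they disappeared), something like the following could be used — a sketch assuming the standard CES and remote-mount admin commands:

```shell
# List all NFS exports and the paths they serve:
mmnfs export list

# Show CES service and node state across the cluster:
mmces state show -a

# Show which remote file systems this cluster has defined:
mmremotefs show all
```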


Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: [email protected]
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Chairman of the Supervisory Board: Martina Koederitz; Management: Dirk 
Wittkopp; Registered office: Böblingen / Registration court: Amtsgericht 
Stuttgart, HRB 243294



From:        [email protected]
To:        gpfsug main discussion list <[email protected]>
Date:        01/05/2018 16:34
Subject:        Re: [gpfsug-discuss] Spectrum Scale CES and remote file system 
mounts
Sent by:        [email protected]
________________________________



Thanks Simon.
I will make sure I am careful about the CES root and test NFS exporting more 
than 2 remote file systems.

Regards,
Lohit

On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) 
<[email protected]>, wrote:
You have been able to do this for some time, though I think it has only 
recently become officially supported.

We've been exporting remote mounts since CES was added.

At some point we've had two storage clusters supplying data and at least 3 
remote file-systems exported over NFS and SMB.

One thing to watch, be careful if your CES root is on a remote fs, as if that 
goes away, so do all CES exports. We do have CES root on a remote fs and it 
works, just be aware...
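One way to check where the CES shared root lives, and whether it sits on a remotely mounted file system — a sketch assuming the `cesSharedRoot` configuration attribute:

```shell
# Display the configured CES shared root path:
mmlsconfig cesSharedRoot

# List remotely owned file systems and where they are mounted,
# to see whether the CES root path falls on one of them:
mmlsmount all_remote -L
```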

Simon
________________________________________
From: [email protected] 
[[email protected]] on behalf of [email protected] 
[[email protected]]
Sent: 30 April 2018 22:11
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts

Hello All,

I read in the link below that it is now possible to export remote mounts 
over NFS/SMB.

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm

I am thinking of using a single CES protocol cluster, with remote mounts from 3 
storage clusters.
May I know if I will be able to export the 3 remote mounts (from 3 storage 
clusters) over NFS/SMB from a single CES protocol cluster?
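For context, mounting a file system from a remote storage cluster into the CES protocol cluster would look roughly like this; the cluster names, key file, device names, and mount points are hypothetical, and the exact option syntax should be verified against the command reference:

```shell
# On the protocol cluster: register remote storage cluster 1
# (key file previously exchanged between the clusters):
mmremotecluster add storage1.example.com -k storage1.pub

# Define the remote file system fs1 with a local device name and mount point:
mmremotefs add rfs1 -f fs1 -C storage1.example.com -T /gpfs/rfs1

# Mount it on all nodes, including the CES nodes:
mmmount rfs1 -a
```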

According to the limitations mentioned in the link below:

https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm

It says “You can configure one storage cluster and up to five protocol clusters 
(current limit).”


Regards,
Lohit
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



