Thanks Bryan,
May I know if you could explain a bit more about the metadata updates issue?
I am not sure I understand exactly why the metadata updates would fail 
between file systems/between clusters, since every remote cluster will have its 
own metadata pool/servers.
I suppose the metadata updates for the respective remote file systems should go 
to the respective remote clusters/metadata servers and should not depend on the 
metadata servers of other remote clusters?
Please do correct me if I am wrong.
As of now, our workload is to use NFS/SMB to read and update files from 
different remote servers. It is not for running heavy parallel read/write 
workloads across different servers.

Thanks,
Lohit

On May 3, 2018, 10:25 AM -0400, Bryan Banister <[email protected]>, 
wrote:
> Hi Lohit,
>
> Just another thought: you also have to consider that metadata updates will 
> have to fail between nodes in the CES cluster and those in other clusters, 
> because nodes in separate remote clusters do not communicate directly for 
> metadata updates; depending on your workload, that could be an issue.
>
> Cheers,
> -Bryan
>
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of Mathias Dietz
> Sent: Thursday, May 03, 2018 6:41 AM
> To: gpfsug main discussion list <[email protected]>
> Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts
>
> Note: External Email
> Hi Lohit,
>
> >I am thinking of using a single CES protocol cluster, with remote mounts 
> >from 3 storage clusters.
> Technically this should work fine (assuming all 3 clusters use the same 
> uids/gids). However, this has not been tested in our test lab.
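>
> For illustration only, a rough (untested) sketch of such a pre-check is 
> below; the node names and user names are placeholders, and it assumes 
> passwordless ssh to one node in each cluster:
>
>     #!/usr/bin/env python3
>     # Untested sketch: compare the uid/gid that a few test accounts resolve
>     # to on one node of the CES cluster and one node of each storage cluster.
>     import subprocess
>
>     NODES = ["ces1", "storage1", "storage2", "storage3"]   # placeholders
>     USERS = ["testuser1", "testuser2"]                      # placeholders
>
>     def lookup(node, user):
>         """Return (uid, gid) for user as resolved on node, via ssh + id."""
>         uid = subprocess.check_output(
>             ["ssh", node, "id", "-u", user], text=True).strip()
>         gid = subprocess.check_output(
>             ["ssh", node, "id", "-g", user], text=True).strip()
>         return uid, gid
>
>     for user in USERS:
>         ids = {node: lookup(node, user) for node in NODES}
>         if len(set(ids.values())) != 1:
>             print(f"MISMATCH for {user}: {ids}")
>         else:
>             print(f"OK {user}: uid/gid {ids[NODES[0]][0]}/{ids[NODES[0]][1]}"
>                   " on every node")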
>
>
> >One thing to watch, be careful if your CES root is on a remote fs: if that 
> >goes away, so do all CES exports.
> Not only the CES root file system is a concern; the whole CES cluster will go 
> down if any remote file system with NFS exports is not available.
> E.g. if remote cluster 1 is not available, the CES cluster will unmount the 
> corresponding file system, which will lead to an NFS failure on all CES nodes.
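>
> As a very small illustration (again untested), each CES node could 
> periodically verify that the remotely mounted file systems backing its 
> exports are still mounted; the mount points below are placeholders:
>
>     #!/usr/bin/env python3
>     # Untested sketch: warn if any of the remote file systems that back
>     # NFS/SMB exports is no longer mounted on this CES node.
>     import os
>     import sys
>
>     # placeholders for the three remotely mounted file systems
>     REMOTE_MOUNTS = ["/gpfs/remote1", "/gpfs/remote2", "/gpfs/remote3"]
>
>     missing = [m for m in REMOTE_MOUNTS if not os.path.ismount(m)]
>     if missing:
>         print("WARNING: remote file system(s) not mounted:",
>               ", ".join(missing))
>         sys.exit(1)
>     print("All remote file systems are mounted.")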
>
>
> Mit freundlichen Grüßen / Kind regards
>
> Mathias Dietz
>
> Spectrum Scale Development - Release Lead Architect (4.2.x)
> Spectrum Scale RAS Architect
> ---------------------------------------------------------------------------
> IBM Deutschland
> Am Weiher 24
> 65451 Kelsterbach
> Phone: +49 70342744105
> Mobile: +49-15152801035
> E-Mail: [email protected]
> -----------------------------------------------------------------------------
> IBM Deutschland Research & Development GmbH
> Chair of the Supervisory Board: Martina Koederitz; Management: Dirk Wittkopp 
> Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, 
> HRB 243294
>
>
>
> From:        [email protected]
> To:        gpfsug main discussion list <[email protected]>
> Date:        01/05/2018 16:34
> Subject:        Re: [gpfsug-discuss] Spectrum Scale CES and remote file 
> system mounts
> Sent by:        [email protected]
>
>
>
> Thanks Simon.
> I will make sure I am careful about the CES root, and I will test NFS exporting 
> of more than 2 remote file systems.
>
> Regards,
> Lohit
>
> On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) 
> <[email protected]>, wrote:
> You have been able to do this for some time, though I think it has only just 
> become supported.
>
> We've been exporting remote mounts since CES was added.
>
> At one point we had two storage clusters supplying data and at least 3 
> remote file systems exported over NFS and SMB.
>
> One thing to watch, be careful if your CES root is on a remote fs: if that 
> goes away, so do all CES exports. We do have CES root on a remote fs and it 
> works, just be aware...
>
> Simon
> ________________________________________
> From: [email protected] 
> [[email protected]] on behalf of 
> [email protected] [[email protected]]
> Sent: 30 April 2018 22:11
> To: gpfsug main discussion list
> Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts
>
> Hello All,
>
> I read from the link below that it is now possible to export remote mounts 
> over NFS/SMB.
>
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm
>
> I am thinking of using a single CES protocol cluster, with remote mounts from 
> 3 storage clusters.
> May I know if I will be able to export the 3 remote mounts (from 3 storage 
> clusters) over NFS/SMB from a single CES protocol cluster?
>
> I ask because of the limitations mentioned in the link below:
>
> https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm
>
> It says “You can configure one storage cluster and up to five protocol 
> clusters (current limit).”
>
>
> Regards,
> Lohit
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
