On 10/12/2020 21:59, Andrew Beattie wrote:
Thanks Ed,
The UQ team are well aware of the current limits published in the FAQ.
However, the issue is not the number of physical nodes or of concurrent
user sessions; it is that the number of SMB / NFS exports Spectrum Scale
supports from a single cluster, or even from remote-mount protocol
clusters, is no longer enough for their research environment.
The total number of exports cannot exceed 1,000, which is a problem when
they have many thousands of research project IDs and users need access
to every project ID under its relevant security permissions.
Grouping project IDs under a single export isn't a viable option: there
is no simple way to identify which research group or user will request a
new project ID next, because new project IDs are created and allocated
automatically whenever a request for a storage allocation is fulfilled.
Projects ID’s (independent file sets) are published not only as SMB
exports, but are also mounted using multiple AFM cache clusters to high
performance instrument clusters, multiple HPC clusters or up to 5
different campus access points, including remote universities.
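For concreteness, a rough sketch of what publishing one project fileset
to an AFM cache cluster looks like. The device name, target path and AFM
mode here are illustrative, not UQ's actual configuration, and the
mmcrfileset parameters follow the documented syntax but should be
checked against your release:

import subprocess

def create_afm_cache(project_id: str, cache_device: str = "gpfs1") -> None:
    """On an AFM cache cluster (e.g. beside an instrument or HPC
    cluster), create a cache fileset whose target is the project
    fileset back in the central data fabric."""
    # Hypothetical target path; NFS targets (nfs://host/path) are the
    # other documented option besides the GPFS protocol shown here.
    target = f"gpfs:///gpfs/remotefs0/projects/{project_id}"
    subprocess.run(
        ["mmcrfileset", cache_device, project_id,
         "--inode-space", "new",
         "-p", f"afmMode=independent-writer,afmTarget={target}"],
        check=True,
    )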
The data workflow is not a simple linear one, and the mix of different
user types requesting storage, and of storage provisioning, has led the
University to build its own provisioning portal. The portal interacts
with the Spectrum Scale data fabric (multiple Spectrum Scale clusters in
a single global namespace, connected via AFM over 100 Gb Ethernet) at
multiple points, to deliver project ID provisioning at the locations
specified by the user / research group.
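To make the portal's interaction with the fabric concrete, here is a
minimal sketch of the provisioning step on the protocols cluster. It
assumes the standard mmcrfileset / mmlinkfileset / mmsmb admin commands;
the device name, mount point and export naming are hypothetical, not
UQ's actual portal code:

import subprocess

def run(cmd: list[str]) -> None:
    # Illustrative helper: echo and run a Scale admin command.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def provision_project(project_id: str, device: str = "gpfs0",
                      mount: str = "/gpfs/gpfs0") -> None:
    """Create an independent fileset for a new project ID, link it
    into the namespace, and publish it as an SMB export."""
    junction = f"{mount}/projects/{project_id}"
    # Independent fileset: its own inode space, one per project ID.
    run(["mmcrfileset", device, project_id, "--inode-space", "new"])
    run(["mmlinkfileset", device, project_id, "-J", junction])
    # One CES SMB export per project ID -- this is the step that
    # runs into the export ceiling described above.
    run(["mmsmb", "export", "add", project_id, junction])

It is that last step, repeated once per project ID, that cannot scale
past the 1,000-export limit.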
One point of data surfacing in this data fabric is the Spectrum Scale
protocols cluster that Les manages, which provides the central user
access point via SMB or NFS: every research user across the university
who wants to access one or more of their storage allocations does so via
the SMB / NFS mount points on this specific storage cluster.
I am not sure thousands of SMB exports is ever a good idea. I suspect
Windows Server would keel over and die in that scenario too.
My suggestion would be to look into consolidating the SMB exports and
then masking it all with DFS.
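Roughly what I have in mind, as a sketch: hash each project ID into a
bounded set of real exports on the CES cluster, and publish one DFS link
per project so users never see the consolidation. The namespace, server
name and New-DfsnFolder usage below are illustrative, not a tested
configuration:

import hashlib

N_SHARES = 32  # bounded set of real SMB exports, well under the limit

def consolidated_share(project_id: str) -> str:
    # Stable hash so a given project always lands in the same export.
    h = int(hashlib.sha1(project_id.encode()).hexdigest(), 16)
    return f"research{h % N_SHARES:02d}"

def dfs_link_cmd(project_id: str) -> str:
    share = consolidated_share(project_id)
    # Users only ever see \\uq.edu.au\research\<projectID>; the DFS
    # link hides which consolidated export actually backs it.
    return (f'New-DfsnFolder -Path "\\\\uq.edu.au\\research\\{project_id}" '
            f'-TargetPath "\\\\ces-cluster\\{share}\\{project_id}"')

for pid in ["PROJ0001", "PROJ0002", "PROJ4242"]:
    print(dfs_link_cmd(pid))

Per-project security then lives in the filesystem ACLs on each project
directory rather than in per-export settings.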
Though this presumes that they are not handing out "project" security
credentials that are shared between multiple users. That would be very
bad...
JAB.
--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG