Dear Marc,

at least that is what your documentation says:

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1hlp_filesfilesets.htm

>>> User group and user quotas can be tracked at the file system level or per 
>>> independent fileset.

But obviously, as a customer, I don't know whether that really depends on 
independence.
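
For what it's worth, the kind of thing I mean looks roughly like this 
(fs0, proj42 and alice are just placeholder names):

    # Enable per-fileset user/group quota accounting for the file system
    mmchfs fs0 --perfileset-quota
    # Set a per-user block quota inside one particular fileset
    mmsetquota fs0:proj42 --user alice --block 1T:2T
    # Report user quotas for that fileset only
    mmrepquota -u fs0:proj42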


Currently, about 70% of the filesets in our Data Science Storage systems are 
backed up to ISP, but that number may change over time, as it depends on the 
requirements of our projects. For them it is just a matter of selecting 
"Protect this DSS Container by ISP" in a web form; our portal then 
automatically does all the provisioning of the ISP node on one of our ISP 
servers, rolls out the new dsm config files to the backup workers, and so on.
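
Per protected fileset, the backup workers then end up running something like 
the following (path and server name are placeholders; the --tsm-servers value 
matches a server stanza in the rolled-out dsm.sys):

    # Incremental backup of one independent fileset, scoped to its own
    # inode space, sent to the ISP server provisioned for that project
    mmbackup /dss/fs0/proj42 -t incremental --scope inodespace \
        --tsm-servers ISPSERVER07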

Best Regards,
Stephan Peinkofer
________________________________
From: gpfsug-discuss-boun...@spectrumscale.org 
<gpfsug-discuss-boun...@spectrumscale.org> on behalf of Marc A Kaplan 
<makap...@us.ibm.com>
Sent: Friday, August 10, 2018 7:15 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

I know quota stuff was cooked into GPFS before we even had "independent 
filesets"...
So which particular quota features or commands or options now depend on 
"independence"?! Really?

Yes, independent fileset performance for mmapplypolicy and mmbackup scales with 
the inode space sizes. But I'm curious to know how many of those independent 
filesets actually get backed up with mmbackup.

Appreciate your elaborations, 'cause even though I've worked on some of this 
code, I don't know how/when/if customers push which limits.

---------------------

Dear Marc,

Well, the primary reasons for us are:

- Per-fileset quotas (as far as I know, this also works for dependent filesets)

- Per-user, per-fileset quotas (this seems to work only for independent filesets)

- The dedicated inode space, to speed up mmapplypolicy runs that only have to 
be applied to a specific subpart of the file system (see the sketch after this 
list)

- Scaling mmbackup economically, by backing up different filesets to different 
TSM servers
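
As a minimal sketch of the inode space point (all names are placeholders): 
each project gets an independent fileset with its own inode space, and policy 
runs are then confined to that inode space instead of scanning the whole file 
system:

    # Independent fileset = dedicated inode space
    mmcrfileset fs0 proj42 --inode-space new --inode-limit 1000000
    mmlinkfileset fs0 proj42 -J /dss/fs0/proj42
    # Scan only this fileset's inode space
    mmapplypolicy /dss/fs0/proj42 -P rules.pol -I defer --scope inodespace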

We currently have more than 1,000 projects on our HPC machines and several 
different existing and planned file systems (use cases):


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
