Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-17 Thread Marc A Kaplan
My idea, not completely thought out, is that before you hit the 1000 
limit, you start putting new customers or projects into dependent filesets 
and define those new dependent filesets within either a lesser number of 
independent filesets expressly created to receive the new customers OR 
perhaps even lump them with already existing independent filesets that 
have matching backup requirements.

I would NOT try to create backups for each dependent fileset.  But stick 
with the supported facilities to manage backup per independent...
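
To sketch the idea (file system, fileset and path names here are 
invented, adjust to your environment): a "backup group" would be an 
independent fileset with its own inode space, and each new customer or 
project becomes a dependent fileset inside it, sharing that inode space: 

  mmcrfileset fs1 backup_grp01 --inode-space new        # independent fileset, own inode space, own mmbackup run
  mmlinkfileset fs1 backup_grp01 -J /gpfs/fs1/backup_grp01
  mmcrfileset fs1 proj_new --inode-space backup_grp01   # dependent fileset, shares backup_grp01's inode space
  mmlinkfileset fs1 proj_new -J /gpfs/fs1/backup_grp01/proj_new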

Having said that, if you'd still like to do backup per dependent fileset 
-- then have at it -- but test, test and retest. And measure 
performance...
My GUESS is that IF you can hack mmbackup or similar to use 
mmapplypolicy /path-to-dependent-fileset --scope fileset 
instead of mmapplypolicy /path-to-independent-fileset --scope inodespace, 
you'll be okay, because the inodescan overhead of reading some extra 
inodes is probably a tiny fraction of all the other IO you'll be doing!
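
Purely as a sketch (paths, rules file and node class below are made up, 
not from your system), the per-dependent-fileset scan would then look 
something like 

  mmapplypolicy /gpfs/fs1/backup_grp01/proj_new --scope fileset -P backup.rules -N backupNodes

instead of the usual per-independent-fileset run 

  mmapplypolicy /gpfs/fs1/backup_grp01 --scope inodespace -P backup.rules -N backupNodes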

BUT I don't think IBM is in a position to encourage you to hack mmbackup 
-- it's already very complicated!





From:   "Peinkofer, Stephan" 
To: gpfsug main discussion list 
Date:   08/17/2018 07:40 AM
Subject:    Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs 
Quotas?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Dear Marc,

Well, since I cannot simply "move" dependent filesets between 
independent ones, and our customers must have the opportunity to change 
the data protection policy for their containers at any given time, I 
cannot map them to a "backed up" or "not backed up" independent fileset.

So how much performance impact would, let's say, 1-10 exclude.dir 
directives per independent fileset have?
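
For what it's worth, I am only thinking of a handful of entries in the 
Spectrum Protect include-exclude options, something like this (the paths 
are just examples, not our real containers): 

  exclude.dir /dss/dsstestfs01/backup_grp01/proj_without_backup
  exclude.dir /dss/dsstestfs01/backup_grp01/other_proj_without_backup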

Many thanks in advance.
Best Regards,
Stephan Peinkofer



From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Marc A Kaplan 

Sent: Tuesday, August 14, 2018 5:31 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas? 
 
True, mmbackup is designed to work best backing up either a single 
independent fileset or the entire file system.  So if you know some 
filesets do not need to be backed up, map them to one or more independent 
filesets that will not be backed up.

mmapplypolicy is happy to scan a single dependent fileset, use option 
--scope fileset and make the primary argument the path to the root of the 
fileset you wish to scan.   The overhead is not simply described.   The 
directory scan phase will explore or walk the (sub)tree in parallel with 
multiple threads on multiple nodes, reading just the directory blocks that 
need to be read.

The inodescan phase will read blocks of inodes from the given inodespace 
...  since the inodes of dependent filesets may be "mixed" into the same 
blocks as other dependent filesets that are in the same independent 
fileset, mmapplypolicy will incur what you might consider "extra" 
overhead.




From:"Peinkofer, Stephan" 
To:gpfsug main discussion list 
Date:08/14/2018 12:50 AM
Subject:Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs 
Quotas?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Dear Marc,


If you "must" exceed 1000 filesets because you are assigning each project 
to its own fileset, my suggestion is this:

Yes, there are scaling/performance/manageability benefits to using 
mmbackup over independent filesets.

But maybe you don't need 10,000 independent filesets -- 
maybe you can hash or otherwise randomly assign projects that each have 
their own (dependent) fileset name to a lesser number of independent 
filesets that will serve as management groups for (mm)backup...

OK, if that might be doable, what's then the performance impact of having 
to specify Include/Exclude lists for each independent fileset in order to 
specify which dependent filesets should be backed up and which not?
I don't remember exactly, but I think I've heard at some time that 
Include/Exclude and mmbackup have to be used with caution. And the same 
question holds true for running mmapplypolicy for a "job" on a single 
dependent fileset: is the scan runtime linear in the size of the 
underlying independent fileset, or are there some optimisations when I 
just want to scan a subfolder/dependent fileset of an independent one?

Like many things in life, sometimes compromises are necessary!

Hmm, can I reference this next time, when we negotiate Scale License 
pricing with the ISS sales people? ;)

Best Regards,
Stephan Peinkofer
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-14 Thread Marc A Kaplan
True, mmbackup is designed to work best backing up either a single 
independent fileset or the entire file system.  So if you know some 
filesets do not need to be backed up, map them to one or more independent 
filesets that will not be backed up. 
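
For the filesets that do get backed up, a per-independent-fileset run 
would then look roughly like the following sketch (path and server name 
are only placeholders, not a tested setup): 

  mmbackup /gpfs/fs1/backup_grp01 --scope inodespace -t incremental --tsm-servers TSMSERVER1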

mmapplypolicy is happy to scan a single dependent fileset, use option 
--scope fileset and make the primary argument the path to the root of the 
fileset you wish to scan.   The overhead is not simply described.   The 
directory scan phase will explore or walk the (sub)tree in parallel with 
multiple threads on multiple nodes, reading just the directory blocks that 
need to be read.

The inodescan phase will read blocks of inodes from the given inodespace 
...  since the inodes of dependent filesets may be "mixed" into the same 
blocks as other dependent filesets that are in the same independent 
fileset, mmapplypolicy will incur what you might consider "extra" 
overhead.
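
For completeness, a minimal rules file for such a scan could be as simple 
as the following sketch (just a file list, not a complete backup policy); 
running it with -I defer and -f with some path prefix only writes the 
list to files instead of executing anything: 

  RULE EXTERNAL LIST 'allfiles' EXEC ''
  RULE 'listall' LIST 'allfiles' SHOW(VARCHAR(FILE_SIZE))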




From:   "Peinkofer, Stephan" 
To: gpfsug main discussion list 
Date:   08/14/2018 12:50 AM
Subject:    Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs 
Quotas?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Dear Marc,


If you "must" exceed 1000 filesets because you are assigning each project 
to its own fileset, my suggestion is this:

Yes, there are scaling/performance/manageability benefits to using 
mmbackup over independent filesets.

But maybe you don't need 10,000 independent filesets -- 
maybe you can hash or otherwise randomly assign projects that each have 
their own (dependent) fileset name to a lesser number of independent 
filesets that will serve as management groups for (mm)backup...

OK, if that might be doable, what's then the performance impact of having 
to specify Include/Exclude lists for each independent fileset in order to 
specify which dependent filesets should be backed up and which not?
I don't remember exactly, but I think I've heard at some time that 
Include/Exclude and mmbackup have to be used with caution. And the same 
question holds true for running mmapplypolicy for a "job" on a single 
dependent fileset: is the scan runtime linear in the size of the 
underlying independent fileset, or are there some optimisations when I 
just want to scan a subfolder/dependent fileset of an independent one?

Like many things in life, sometimes compromises are necessary!

Hmm, can I reference this next time, when we negotiate Scale License 
pricing with the ISS sales people? ;)

Best Regards,
Stephan Peinkofer
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss







Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-14 Thread Grunenberg, Renar
+1, great answer Stephan. We also don't understand why functions exist, 
but every time we want to use one, the first step is to file a requirement.

Sent from my iPhone


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.



On 14.08.2018 at 06:50, Peinkofer, Stephan <stephan.peinko...@lrz.de> wrote:

Dear Marc,


If you "must" exceed 1000 filesets because you are assigning each project to 
its own fileset, my suggestion is this:

Yes, there are scaling/performance/manageability benefits to using mmbackup 
over independent filesets.

But maybe you don't need 10,000 independent filesets --
maybe you can hash or otherwise randomly assign projects that each have their 
own (dependent) fileset name to a lesser number of independent filesets that 
will serve as management groups for (mm)backup...

OK, if that might be doable, what's then the performance impact of having to 
specify Include/Exclude lists for each independent fileset in order to specify 
which dependent filesets should be backed up and which not?
I don't remember exactly, but I think I've heard at some time that 
Include/Exclude and mmbackup have to be used with caution. And the same 
question holds true for running mmapplypolicy for a "job" on a single dependent 
fileset: is the scan runtime linear in the size of the underlying independent 
fileset, or are there some optimisations when I just want to scan a 
subfolder/dependent fileset of an independent one?

Like many things in life, sometimes compromises are necessary!

Hmm, can I reference this next time, when we negotiate Scale License pricing 
with the ISS sales people? ;)

Best Regards,
Stephan Peinkofer

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-13 Thread Peinkofer, Stephan
Dear Marc,


If you "must" exceed 1000 filesets because you are assigning each project to 
its own fileset, my suggestion is this:

Yes, there are scaling/performance/manageability benefits to using mmbackup 
over independent filesets.

But maybe you don't need 10,000 independent filesets --
maybe you can hash or otherwise randomly assign projects that each have their 
own (dependent) fileset name to a lesser number of independent filesets that 
will serve as management groups for (mm)backup...

OK, if that might be doable, what's then the performance impact of having to 
specify Include/Exclude lists for each independent fileset in order to specify 
which dependent filesets should be backed up and which not?
I don't remember exactly, but I think I've heard at some time that 
Include/Exclude and mmbackup have to be used with caution. And the same 
question holds true for running mmapplypolicy for a "job" on a single dependent 
fileset: is the scan runtime linear in the size of the underlying independent 
fileset, or are there some optimisations when I just want to scan a 
subfolder/dependent fileset of an independent one?

Like many things in life, sometimes compromises are necessary!

Hmm, can I reference this next time, when we negotiate Scale License pricing 
with the ISS sales people? ;)

Best Regards,
Stephan Peinkofer

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-13 Thread Marc A Kaplan
If you "must" exceed 1000 filesets because you are assigning each project 
to its own fileset, my suggestion is this:

Yes, there are scaling/performance/manageability benefits to using 
mmbackup over independent filesets.

But maybe you don't need 10,000 independent filesets -- 
maybe you can hash or otherwise randomly assign projects that each have 
their own (dependent) fileset name to a lesser number of independent 
filesets that will serve as management groups for (mm)backup...

Like many things in life, sometimes compromises are necessary!



From:   "Peinkofer, Stephan" 
To: gpfsug main discussion list 
Date:   08/13/2018 03:26 AM
Subject:    Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs 
Quotas?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Dear Marc,

OK, so let’s give it a try:

[root@datdsst100 pr74qo]# mmlsfileset dsstestfs01
Filesets in file system 'dsstestfs01':
Name StatusPath 
root Linked/dss/dsstestfs01 
...
quota_test_independent   Linked/dss/dsstestfs01/quota_test_independent 

quota_test_dependent Linked 
/dss/dsstestfs01/quota_test_independent/quota_test_dependent

[root@datdsst100 pr74qo]# mmsetquota dsstestfs01:quota_test_independent 
--user a2822bp --block 1G:1G --files 10:10
[root@datdsst100 pr74qo]# mmsetquota dsstestfs01:quota_test_dependent 
--user a2822bp --block 10G:10G --files 100:100

[root@datdsst100 pr74qo]# mmrepquota -u -v dsstestfs01:quota_test_independent
*** Report for USR quotas on dsstestfs01
                                        Block Limits                               |     File Limits
Name      fileset                type      KB     quota     limit  in_doubt  grace |  files  quota  limit  in_doubt  grace  entryType
a2822bp   quota_test_independent USR        0   1048576   1048576         0   none |      0     10     10         0   none  e
root      quota_test_independent USR        0         0         0         0   none |      1      0      0         0   none  i

[root@datdsst100 pr74qo]# mmrepquota -u -v dsstestfs01:quota_test_dependent
*** Report for USR quotas on dsstestfs01
                                        Block Limits                               |     File Limits
Name      fileset                type      KB     quota     limit  in_doubt  grace |  files  quota  limit  in_doubt  grace  entryType
a2822bp   quota_test_dependent   USR        0  10485760  10485760         0   none |      0    100    100         0   none  e
root      quota_test_dependent   USR        0         0         0         0   none |      1      0      0         0   none  i

Looks good …

[root@datdsst100 pr74qo]# cd /dss/dsstestfs01/quota_test_independent/quota_test_dependent/
[root@datdsst100 quota_test_dependent]# for foo in `seq 1 99`; do touch file${foo}; chown a2822bp:pr28fa file${foo}; done

[root@datdsst100 quota_test_dependent]# mmrepquota -u -v dsstestfs01:quota_test_dependent
*** Report for USR quotas on dsstestfs01
                                        Block Limits                               |     File Limits
Name      fileset                type      KB     quota     limit  in_doubt  grace |  files  quota  limit  in_doubt  grace  entryType
a2822bp   quota_test_dependent   USR        0  10485760  10485760         0   none |     99    100    100         0   none  e
root      quota_test_dependent   USR        0         0         0         0   none |      1      0      0         0   none  i

[root@datdsst100 quota_test_dependent]# mmrepquota -u -v dsstestfs01:quota_test_independent
*** Report for USR quotas on dsstestfs01
                                        Block Limits                               |     File Limits
Name      fileset                type      KB     quota     limit  in_doubt  grace |  files  quota  limit  in_doubt  grace  entryType
a2822bp   quota_test_independent USR        0   1048576   1048576         0   none |      0     10     10         0   none  e
root      quota_test_independent USR        0         0         0         0   none |      1      0      0         0   none  i

So it seems that per fileset per user quota is really not depending on 
independence. But what is the documentation then meaning with:
>>> User group and user quotas can be tracked at the file system level or 
per independent fileset.
???

However, there still remains the problem with mmbackup and mmapplypolicy …
And if you look at some of the RFEs, like the one from DESY, they want 
even more than 10k independent filesets …


Best Regards,
Stephan Peinkofer
-- 
Stephan Peinkofer
Dipl. Inf. (FH), M. Sc. (TUM)
 
Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
Tel: +49(0)89 35831-8715 Fax: +49(0)89 35831-9700
URL: http://www.lrz.de

On

Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-13 Thread Olaf Weiser
as Dominic said .. you are absolutely right .. for mmbackup you need 
dedicated inode spaces .. so "independent" filesets .. (in case you wanna 
be able to mmbackup on a fileset level or run multiple mmbackup's in 
parallel .. )



From:   "Peinkofer, Stephan" 
To: gpfsug main discussion list 
Date:   08/13/2018 09:26 AM
Subject:    Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?
Sent by:    gpfsug-discuss-boun...@spectrumscale.org

[The quoted mail is Stephan Peinkofer's per-fileset quota test 
(mmlsfileset / mmsetquota / mmrepquota) and his question about the 
documentation statement on per-independent-fileset quota tracking; it is 
identical to the quote shown in Marc A Kaplan's mail of 2018-08-13 above 
and ends with the truncated beginning of Marc's reply of 12. Aug 2018: 
"That's interesting, I confess I never read that piece of documentation. 
What's also interesting, is that if you look at this doc for quotas: 
https://www.ibm.com/supp]

Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-13 Thread Peinkofer, Stephan
ependent" nor "independent"...




From:"Peinkofer, Stephan" 
mailto:stephan.peinko...@lrz.de>>
To:gpfsug main discussion list 
mailto:gpfsug-discuss@spectrumscale.org>>
Date:    08/11/2018 03:03 AM
Subject:Re: [gpfsug-discuss] GPFS Independent Fileset Limit
Sent by:
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>



Dear Marc,

so at least your documentation says:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1hlp_filesfilesets.htm
>>> User group and user quotas can be tracked at the file system level or per 
>>> independent fileset.
But obviously as a customer I don't know if that "Really" depends on 
independence.

Currently about 70% of our filesets in the Data Science Storage systems get 
backed up to ISP. But that number may change over time as it depends on the 
requirements of our projects. For them it is just selecting "Protect this DSS 
Container by ISP" in a Web form an our portal then automatically does all the 
provisioning of the ISP Node to one of our ISP servers, rolling out the new dsm 
config files to the backup workers and so on.


Best Regards,
Stephan Peinkofer


From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Marc A Kaplan 
Sent: Friday, August 10, 2018 7:15 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

I know quota stuff was cooked into GPFS before we even had "independent 
filesets"...
So which particular quota features or commands or options now depend on 
"independence"?! Really?

Yes, independent fileset performance for mmapplypolicy and mmbackup scales with 
the inodespace sizes. But I'm curious to know how many of those indy filesets 
are mmback-ed-up.

Appreciate your elaborations, 'cause even though I've worked on some of this 
code, I don't know how/when/if customers push which limits.

-
Dear Marc,
well the primary reasons for us are:
- Per fileset quota (this seems to work also for dependent filesets as far as I 
know)
- Per user per fileset quota (this seems only to work for independent filesets)
- The dedicated inode space to speedup mmpolicy runs which only have to be 
applied to a specific subpart of the file system
- Scaling mmbackup by backing up different filesets to different TSM Servers 
economically
We have currently more than 1000 projects on our HPC machines and several 
different existing and planned file systems (use cases):


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org<http://spectrumscale.org>
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-12 Thread Marc A Kaplan
That's interesting, I confess I never read that piece of documentation.
What's also interesting, is that if you look at this doc for quotas:

https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adm_change_quota_anynum_users_onproject_basis_acrs_protocols.htm

The word independent appears only once in a "Note": It is recommended to 
create an independent fileset for the project. 

AND if you look at the mmchfs or mmcrfs command you see:

--perfileset-quota 
 Sets the scope of user and group quota limit checks to the individual 
fileset level, rather than to the entire file system. 

With no mention of "dependent" nor "independent"...
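
Purely as an illustration (device, fileset and user names are invented 
here, not taken from any real system), enabling that scope and then 
setting a per-user limit inside a single fileset would look something 
like: 

  mmchfs fs1 --perfileset-quota
  mmcheckquota fs1
  mmsetquota fs1:projA --user alice --block 100G:120G --files 100000:100000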




From:   "Peinkofer, Stephan" 
To: gpfsug main discussion list 
Date:   08/11/2018 03:03 AM
Subject:    Re: [gpfsug-discuss] GPFS Independent Fileset Limit
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Dear Marc,

so at least your documentation says:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1hlp_filesfilesets.htm
>>> User group and user quotas can be tracked at the file system level or 
per independent fileset.
But obviously as a customer I don't know if that "Really" depends on 
independence.

Currently about 70% of our filesets in the Data Science Storage systems 
get backed up to ISP. But that number may change over time as it depends 
on the requirements of our projects. For them it is just selecting 
"Protect this DSS Container by ISP" in a Web form an our portal then 
automatically does all the provisioning of the ISP Node to one of our ISP 
servers, rolling out the new dsm config files to the backup workers and so 
on.

Best Regards, 
Stephan Peinkofer

From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Marc A Kaplan 

Sent: Friday, August 10, 2018 7:15 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit 
 
I know quota stuff was cooked into GPFS before we even had "independent 
filesets"...
So which particular quota features or commands or options now depend on 
"independence"?! Really?

Yes, independent fileset performance for mmapplypolicy and mmbackup scales 
with the inodespace sizes. But I'm curious to know how many of those indy 
filesets are mmback-ed-up.

Appreciate your elaborations, 'cause even though I've worked on some of 
this code, I don't know how/when/if customers push which limits.

- 
Dear Marc,
well the primary reasons for us are:
- Per fileset quota (this seems to work also for dependent filesets as far 
as I know) 
- Per user per fileset quota (this seems only to work for independent 
filesets)
- The dedicated inode space to speedup mmpolicy runs which only have to be 
applied to a specific subpart of the file system
- Scaling mmbackup by backing up different filesets to different TSM 
Servers economically
We have currently more than 1000 projects on our HPC machines and several 
different existing and planned file systems (use cases):

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss







Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-11 Thread Peinkofer, Stephan
Dear Marc,


so at least your documentation says:

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1hlp_filesfilesets.htm

>>> User group and user quotas can be tracked at the file system level or per 
>>> independent fileset.

But obviously as a customer I don't know if that "Really" depends on 
independence.


Currently about 70% of our filesets in the Data Science Storage systems get 
backed up to ISP. But that number may change over time as it depends on the 
requirements of our projects. For them it is just selecting "Protect this DSS 
Container by ISP" in a Web form an our portal then automatically does all the 
provisioning of the ISP Node to one of our ISP servers, rolling out the new dsm 
config files to the backup workers and so on.

Best Regards,
Stephan Peinkofer

From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Marc A Kaplan 

Sent: Friday, August 10, 2018 7:15 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

I know quota stuff was cooked into GPFS before we even had "independent 
filesets"...
So which particular quota features or commands or options now depend on 
"independence"?! Really?

Yes, independent fileset performance for mmapplypolicy and mmbackup scales with 
the inodespace sizes. But I'm curious to know how many of those indy filesets 
are mmback-ed-up.

Appreciate your elaborations, 'cause even though I've worked on some of this 
code, I don't know how/when/if customers push which limits.

-

Dear Marc,

well the primary reasons for us are:

- Per fileset quota (this seems to work also for dependent filesets as far as I 
know)

- Per user per fileset quota (this seems only to work for independent filesets)

- The dedicated inode space to speedup mmpolicy runs which only have to be 
applied to a specific subpart of the file system

- Scaling mmbackup by backing up different filesets to different TSM Servers 
economically

We have currently more than 1000 projects on our HPC machines and several 
different existing and planned file systems (use cases):


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Jake Carroll
troducing it - was as also just that one had to pick a  number...
>
>  ... turns out.. that a general commitment to support > 1000 
> ind.fileset is more or less hard.. because what  uses cases should we 
> test / support  I think , there might be a good chance for you , that 
> for your specific workload, one would allow and support  more than 
> 1000
>
>  do you still have a PMR for your side for this ? - if not - I know .. 
> open PMRs is an additional ...but could you  please ..
>  then we can decide .. if raising the limit is an option for you ..
>
>  Mit freundlichen Grüßen / Kind regards
>
>  Olaf Weiser
>
>  EMEA Storage Competence Center Mainz, German / IBM Systems, Storage 
> Platform,
>  
> --
> -
>  IBM Deutschland
>  IBM Allee 1
>  71139 Ehningen
>  Phone: +49-170-579-44-66
>  E-Mail: olaf.wei...@de.ibm.com
>  
> --
> -
>  IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
>  Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, 
> Norbert Janzen, Dr. Christian Keller, Ivo  Koerner, Markus Koerner  
> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht 
> Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE
>  99369940
>
>  From: "Peinkofer, Stephan" 
>  To: gpfsug main discussion list 
>  Cc: Doris Franke , Uwe Tron 
> , Dorian Krause  
>  Date: 08/10/2018 01:29 PM
>  Subject: [gpfsug-discuss] GPFS Independent Fileset Limit  Sent by: 
> gpfsug-discuss-boun...@spectrumscale.org
> --
> -
>
>  Dear IBM and GPFS List,
>
>  we at the Leibniz Supercomputing Centre and our GCS Partners from the 
> Jülich Supercomputing Centre will soon be hitting the current Independent 
> Fileset Limit of 1000 on a number of our GPFS Filesystems.
>
>  There are also a number of RFEs from other users open, that target this 
> limitation:
>  
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=
> 56780
>  
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=
> 120534
>  
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=
> 106530
>  
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=
> 85282
>
>  I know GPFS Development was very busy fulfilling the CORAL 
> requirements but maybe now there is again  some time to improve something 
> else.
>
>  If there are any other users on the list that are approaching the 
> current limitation in independent filesets,  please take some time and vote 
> for the RFEs above.
>
>  Many thanks in advance and have a nice weekend.
>  Best Regards,
>  Stephan Peinkofer
>
>  ___________________
>  gpfsug-discuss mailing list
>  gpfsug-discuss at spectrumscale.org
>  http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Date: Fri, 10 Aug 2018 16:01:17 +
From: Bryan Banister 
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

Just as a follow up to my own note, Stephan, already provided a list of 
existing RFEs from which to vote through the IBM RFE site, cheers, -Bryan

From: gpfsug-discuss-boun...@spectrumscale.org 
 On Behalf Of Bryan Banister
Sent: Friday, August 10, 2018 10:51 AM
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

Note: External Email

This is definitely a great candidate for a RFE, if one does not already exist.

Not to try and contradict my friend Olaf here, but I have been talking a lot 
with those internal to IBM, and the PMR process is for finding and correcting 
operational problems with the code level you are running, and closing out the 
PMR as quickly as possible.  PMRs are not the vehicle for getting substantive 
changes and enhancements made to the product in general, which the RFE process 
is really the main way to do this.

I just got off a call with Kristie and Carl about the RFE process and those on 
the list may know that we are working to improve this overall process.  More 
will be sent out about this in the near future!!  So I thought I would chime i

Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Marc A Kaplan
I know quota stuff was cooked into GPFS before we even had "independent 
filesets"...
So which particular quota features or commands or options now depend on 
"independence"?! Really?

Yes, independent fileset performance for mmapplypolicy and mmbackup scales 
with the inodespace sizes. But I'm curious to know how many of those indy 
filesets are mmback-ed-up.

Appreciate your elaborations, 'cause even though I've worked on some of 
this code, I don't know how/when/if customers push which limits.

-
Dear Marc,

well the primary reasons for us are:
- Per fileset quota (this seems to work also for dependent filesets as far 
as I know) 
- Per user per fileset quota (this seems only to work for independent 
filesets)
- The dedicated inode space to speedup mmpolicy runs which only have to be 
applied to a specific subpart of the file system
- Scaling mmbackup by backing up different filesets to different TSM 
Servers economically

We have currently more than 1000 projects on our HPC machines and several 
different existing and planned file systems (use cases):



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Bryan Banister
Just as a follow up to my own note, Stephan, already provided a list of 
existing RFEs from which to vote through the IBM RFE site, cheers,
-Bryan

From: gpfsug-discuss-boun...@spectrumscale.org 
 On Behalf Of Bryan Banister
Sent: Friday, August 10, 2018 10:51 AM
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

Note: External Email

This is definitely a great candidate for a RFE, if one does not already exist.

Not to try and contradict my friend Olaf here, but I have been talking a lot 
with those internal to IBM, and the PMR process is for finding and correcting 
operational problems with the code level you are running, and closing out the 
PMR as quickly as possible.  PMRs are not the vehicle for getting substantive 
changes and enhancements made to the product in general, which the RFE process 
is really the main way to do this.

I just got off a call with Kristie and Carl about the RFE process and those on 
the list may know that we are working to improve this overall process.  More 
will be sent out about this in the near future!!  So I thought I would chime in 
on this discussion here to hopefully help us understand how important the RFE 
(admittedly currently not great) process really is and will be a great way to 
work together on these common goals and needs for the product we rely so 
heavily upon!

Cheers!!
-Bryan

From: gpfsug-discuss-boun...@spectrumscale.org 
 On Behalf Of Peinkofer, Stephan
Sent: Friday, August 10, 2018 10:40 AM
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

Note: External Email


Dear Olaf,



I know that this is "just" a "support" limit. However Sven some day on a UG 
meeting in Ehningen told me that there is more to this than just

adjusting your QA qualification tests since the way it is implemented today 
does not really scale ;).

That's probably the reason why you said you see sometimes problems when you are 
not even close to the limit.



So if you look at the 250PB Alpine file system of Summit today, that is what's 
going to be deployed at more than one site world wide in 2-4 years, and

imho independent filesets are a great way to make these large systems much more 
manageable while still maintaining a unified namespace.

So I really think it would be beneficial if the architectural limit that 
prevents scaling the number of independent filesets could be removed altogether.


Best Regards,
Stephan Peinkofer

From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Olaf Weiser 
Sent: Friday, August 10, 2018 2:51 PM
To: gpfsug main discussion list
Cc: gpfsug-discuss-boun...@spectrumscale.org; Doris Franke; Uwe Tron; Dorian 
Krause
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

Hallo Stephan,
the limit is not a hard coded limit  - technically spoken, you can raise it 
easily.
But as always, it is a question of test 'n support ..

I've seen customer cases, where the use of much smaller amount of independent 
filesets generates a lot performance issues, hangs ... at least noise and 
partial trouble ..
it might be not the case with your specific workload, because due to the fact, 
that you 're running already  close to 1000 ...

I suspect , this number of 1000 file sets  - at the time of introducing it - 
was as also just that one had to pick a number...

... turns out.. that a general commitment to support > 1000 ind.fileset is more 
or less hard.. because what uses cases should we test / support
I think , there might be a good chance for you , that for your specific 
workload, one would allow and support more than 1000

do you still have a PMR for your side for this ?  - if not - I know .. open 
PMRs is an additional ...but could you please ..
then we can decide .. if raising the limit is an option for you ..





Mit freundlichen Grüßen / Kind regards


Olaf Weiser

EMEA Storage Competence Center Mainz, German / IBM Systems, Storage Platform,
---
IBM Deutschland
IBM Allee 1
71139 Ehningen
Phone: +49-170-579-44-66
E-Mail: olaf.wei...@de.ibm.com<mailto:olaf.wei...@de.ibm.com>
---
IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norb

Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Doug Johnson
Hi all,

I want to chime in because this is precisely what we have done at OSC
due to the same motivations Janell described.  Our design was based in
part on the guidelines in the "Petascale Data Protection" white paper
from IBM.  We only have ~200 filesets and 250M inodes today, but expect
to grow.

We are also very interested in details about performance issues and
independent filesets.  Can IBM elaborate?

Best,
Doug


Martin Lischewski  writes:

> Hello Olaf, hello Marc,
>
> we in Jülich are in the middle of migrating/copying all our old filesystems 
> which were created with filesystem
> version: 13.23 (3.5.0.7) to new filesystems created with GPFS 5.0.1.
>
> We move to new filesystems mainly for two reasons: 1. We want to use the new 
> increased number of subblocks.
> 2. We have to change our quota from normal "group-quota per filesystem" to 
> "fileset-quota".
>
> The idea is to create a separate fileset for each group/project. For the 
> users the quota-computation should be
> much more transparent. From now on all data which is stored inside of their 
> directory (fileset) counts for their
> quota independent of the ownership.
>
> Right now we have round about 900 groups which means we will create round 
> about 900 filesets per filesystem.
> In one filesystem we will have about 400million inodes (with rising tendency).
>
> This filesystem we will back up with "mmbackup" so we talked with Dominic 
> Mueller-Wicke and he recommended
> us to use independent filesets. Because then the policy-runs can be 
> parallelized and we can increase the backup
> performance. We believe that we require these parallelized policy runs to 
> meet our backup performance targets.
>
> But there are even more features we enable by using independent filesets. E.g. 
> "Fileset level snapshots" and "user
> and group quotas inside of a fileset".
>
> I did not know about performance issues regarding independent filesets... Can 
> you give us some more
> information about this?
>
> All in all we are strongly supporting the idea of increasing this limit.
>
> Do I understand correctly that by opening a PMR IBM allows to increase this 
> limit on special sides? I would rather
> like to increase the limit and make it official public available and 
> supported.
>
> Regards,
>
> Martin
>
>  On 10.08.2018 at 14:51, Olaf Weiser wrote:
>
>  Hallo Stephan,
>  the limit is not a hard coded limit - technically spoken, you can raise it 
> easily.
>  But as always, it is a question of test 'n support ..
>
>  I've seen customer cases, where the use of much smaller amount of 
> independent filesets generates a lot
>  performance issues, hangs ... at least noise and partial trouble ..
>  it might be not the case with your specific workload, because due to the 
> fact, that you 're running already
>  close to 1000 ...
>
>  I suspect , this number of 1000 file sets - at the time of introducing it - 
> was as also just that one had to pick a
>  number...
>
>  ... turns out.. that a general commitment to support > 1000 ind.fileset is 
> more or less hard.. because what
>  uses cases should we test / support
>  I think , there might be a good chance for you , that for your specific 
> workload, one would allow and support
>  more than 1000
>
>  do you still have a PMR for your side for this ? - if not - I know .. open 
> PMRs is an additional ...but could you
>  please ..
>  then we can decide .. if raising the limit is an option for you ..
>
>  Mit freundlichen Grüßen / Kind regards
>
>  Olaf Weiser
>
>  EMEA Storage Competence Center Mainz, German / IBM Systems, Storage Platform,
>  
> ---
>  IBM Deutschland
>  IBM Allee 1
>  71139 Ehningen
>  Phone: +49-170-579-44-66
>  E-Mail: olaf.wei...@de.ibm.com
>  
> ---
>  IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
>  Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert 
> Janzen, Dr. Christian Keller, Ivo
>  Koerner, Markus Koerner
>  Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, 
> HRB 14562 / WEEE-Reg.-Nr. DE
>  99369940
>
>  From: "Peinkofer, Stephan" 
>  To: gpfsug main discussion list 
>  Cc: Doris Franke , Uwe Tron , 
> Dorian Krause
>  
>  Date: 08/10/2018 01:29 PM
>  Subject: [gpfsug-discuss] GPFS Independent Fileset Limit
>  Sent by: gpfsug-discuss-boun...@spectrum

Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Peinkofer, Stephan
Dear Olaf,


I know that this is "just" a "support" limit. However Sven some day on a UG 
meeting in Ehningen told me that there is more to this than just

adjusting your QA qualification tests since the way it is implemented today 
does not really scale ;).

That's probably the reason why you said you see sometimes problems when you are 
not even close to the limit.


So if you look at the 250PB Alpine file system of Summit today, that is what's 
going to be deployed at more than one site world wide in 2-4 years, and

imho independent filesets are a great way to make these large systems much more 
manageable while still maintaining a unified namespace.

So I really think it would be beneficial if the architectural limit that 
prevents scaling the number of independent filesets could be removed altogether.


Best Regards,
Stephan Peinkofer


From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Olaf Weiser 

Sent: Friday, August 10, 2018 2:51 PM
To: gpfsug main discussion list
Cc: gpfsug-discuss-boun...@spectrumscale.org; Doris Franke; Uwe Tron; Dorian 
Krause
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

Hallo Stephan,
the limit is not a hard coded limit  - technically spoken, you can raise it 
easily.
But as always, it is a question of test 'n support ..

I've seen customer cases, where the use of much smaller amount of independent 
filesets generates a lot performance issues, hangs ... at least noise and 
partial trouble ..
it might be not the case with your specific workload, because due to the fact, 
that you 're running already  close to 1000 ...

I suspect , this number of 1000 file sets  - at the time of introducing it - 
was as also just that one had to pick a number...

... turns out.. that a general commitment to support > 1000 ind.fileset is more 
or less hard.. because what uses cases should we test / support
I think , there might be a good chance for you , that for your specific 
workload, one would allow and support more than 1000

do you still have a PMR for your side for this ?  - if not - I know .. open 
PMRs is an additional ...but could you please ..
then we can decide .. if raising the limit is an option for you ..





Mit freundlichen Grüßen / Kind regards


Olaf Weiser

EMEA Storage Competence Center Mainz, German / IBM Systems, Storage Platform,
---
IBM Deutschland
IBM Allee 1
71139 Ehningen
Phone: +49-170-579-44-66
E-Mail: olaf.wei...@de.ibm.com
---
IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert 
Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 
14562 / WEEE-Reg.-Nr. DE 99369940



From:"Peinkofer, Stephan" 
To:gpfsug main discussion list 
Cc:Doris Franke , Uwe Tron , 
Dorian Krause 
Date:08/10/2018 01:29 PM
Subject:[gpfsug-discuss] GPFS Independent Fileset Limit
Sent by:gpfsug-discuss-boun...@spectrumscale.org




Dear IBM and GPFS List,

we at the Leibniz Supercomputing Centre and our GCS Partners from the Jülich 
Supercomputing Centre will soon be hitting the current Independent Fileset 
Limit of 1000 on a number of our GPFS Filesystems.

There are also a number of RFEs from other users open, that target this 
limitation:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=56780
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=120534
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=106530
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=85282

I know GPFS Development was very busy fulfilling the CORAL requirements but 
maybe now there is again some time to improve something else.

If there are any other users on the list that are approaching the current 
limitation in independent filesets, please take some time and vote for the RFEs 
above.

Many thanks in advance and have a nice weekend.
Best Regards,
Stephan Peinkofer

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Martin Lischewski

Hello Olaf, hello Marc,

we in Jülich are in the middle of migrating/copying all our old 
filesystems which were created with filesystem version: 13.23 (3.5.0.7) 
to new filesystems created with GPFS 5.0.1.


We move to new filesystems mainly for two reasons: 1. We want to use the 
new increased number of subblocks. 2. We have to change our quota from 
normal "group-quota per filesystem" to "fileset-quota".


The idea is to create a separate fileset for each group/project. For the 
users the quota-computation should be much more transparent. From now on 
all data which is stored inside of their directory (fileset) counts for 
their quota independent of the ownership.
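
Roughly, per group/project fileset that would amount to something like the 
following sketch (fileset name, paths and limits are of course just 
placeholders, not our real configuration): 

  mmcrfileset fs1 group_abc --inode-space new
  mmlinkfileset fs1 group_abc -J /gpfs/fs1/group_abc
  mmsetquota fs1:group_abc --block 50T:55T --files 1000000:1100000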


Right now we have roughly 900 groups, which means we will create roughly 
900 filesets per filesystem. In one filesystem we will have about 
400 million inodes (with a rising tendency).


This filesystem we will back up with "mmbackup", so we talked with 
Dominic Mueller-Wicke and he recommended that we use independent filesets, 
because then the policy runs can be parallelized and we can increase the 
backup performance. We believe that we require these parallelized 
policy runs to meet our backup performance targets.
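
The parallel backup we have in mind is then basically several concurrent 
mmbackup runs, one per independent fileset, each potentially pointed at 
its own TSM server -- a rough sketch with invented names, not our actual 
setup: 

  mmbackup /gpfs/fs1/group_abc --scope inodespace -t incremental --tsm-servers TSMSERVER_A &
  mmbackup /gpfs/fs1/group_def --scope inodespace -t incremental --tsm-servers TSMSERVER_B &
  wait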


But there are even more features we enable by using independent filesets, 
e.g. "fileset level snapshots" and "user and group quotas inside of a 
fileset".


I did not know about performance issues regarding independent 
filesets... Can you give us some more information about this?


All in all we are strongly supporting the idea of increasing this limit.

Do I understand correctly that by opening a PMR, IBM allows this limit to 
be increased for specific sites? I would rather like to increase the limit 
and make it officially, publicly available and supported.


Regards,

Martin


On 10.08.2018 at 14:51, Olaf Weiser wrote:

Hallo Stephan,
the limit is not a hard coded limit  - technically spoken, you can 
raise it easily.

But as always, it is a question of test 'n support ..

I've seen customer cases, where the use of much smaller amount of 
independent filesets generates a lot performance issues, hangs ... at 
least noise and partial trouble ..
it might be not the case with your specific workload, because due to 
the fact, that you 're running already  close to 1000 ...


I suspect , this number of 1000 file sets  - at the time of 
introducing it - was as also just that one had to pick a number...


... turns out.. that a general commitment to support > 1000 
ind.fileset is more or less hard.. because what uses cases should we 
test / support
I think , there might be a good chance for you , that for your 
specific workload, one would allow and support more than 1000


do you still have a PMR for your side for this ?  - if not - I know .. 
open PMRs is an additional ...but could you please ..

then we can decide .. if raising the limit is an option for you ..





Mit freundlichen Grüßen / Kind regards


Olaf Weiser

EMEA Storage Competence Center Mainz, German / IBM Systems, Storage 
Platform,

---
IBM Deutschland
IBM Allee 1
71139 Ehningen
Phone: +49-170-579-44-66
E-Mail: olaf.wei...@de.ibm.com
---
IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, 
Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht 
Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940




From: "Peinkofer, Stephan" 
To: gpfsug main discussion list 
Cc: Doris Franke , Uwe Tron 
, Dorian Krause 

Date: 08/10/2018 01:29 PM
Subject: [gpfsug-discuss] GPFS Independent Fileset Limit
Sent by: gpfsug-discuss-boun...@spectrumscale.org




Dear IBM and GPFS List,

we at the Leibniz Supercomputing Centre and our GCS Partners from the 
Jülich Supercomputing Centre will soon be hitting the current 
Independent Fileset Limit of 1000 on a number of our GPFS Filesystems.


There are also a number of RFEs from other users open, that target 
this limitation:

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=56780
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=120534
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=106530
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=85282

I know GPFS Development was very busy fulfilling the CORAL 
requirements but maybe now there is again some time to improve 
something else.


If there are any other users on the list that are approaching the 
current limitation in independent filesets, please take some time 

Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Marc A Kaplan
Questions:  How/why was the decision made to use a large number (~1000) of 
independent filesets ?
What functions/features/commands are being used that work with independent 
filesets, that do not also work with "dependent" filesets?


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Olaf Weiser
Hallo Stephan,
the limit is not a hard coded limit - technically spoken, you can raise it 
easily.
But as always, it is a question of test 'n support ..

I've seen customer cases, where the use of much smaller amount of 
independent filesets generates a lot performance issues, hangs ... at 
least noise and partial trouble ..
it might be not the case with your specific workload, because due to the 
fact, that you're running already close to 1000 ...

I suspect, this number of 1000 file sets - at the time of introducing it - 
was also just that one had to pick a number...

... turns out.. that a general commitment to support > 1000 ind. filesets 
is more or less hard.. because which use cases should we test / support
I think, there might be a good chance for you, that for your specific 
workload, one would allow and support more than 1000

do you still have a PMR for your side for this ? - if not - I know .. open 
PMRs is an additional ...but could you please ..
then we can decide .. if raising the limit is an option for you ..

Mit freundlichen Grüßen / Kind regards

Olaf Weiser

EMEA Storage Competence Center Mainz, German / IBM Systems, Storage Platform,
---
IBM Deutschland
IBM Allee 1
71139 Ehningen
Phone: +49-170-579-44-66
E-Mail: olaf.wei...@de.ibm.com
---
IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert 
Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, 
HRB 14562 / WEEE-Reg.-Nr. DE 99369940

From:   "Peinkofer, Stephan" 
To: gpfsug main discussion list 
Cc: Doris Franke, Uwe Tron, Dorian Krause
Date:   08/10/2018 01:29 PM
Subject:    [gpfsug-discuss] GPFS Independent Fileset Limit
Sent by:    gpfsug-discuss-boun...@spectrumscale.org

Dear IBM and GPFS List,

we at the Leibniz Supercomputing Centre and our GCS Partners from the 
Jülich Supercomputing Centre will soon be hitting the current Independent 
Fileset Limit of 1000 on a number of our GPFS Filesystems.

There are also a number of RFEs from other users open, that target this 
limitation:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=56780
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=120534
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=106530
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=85282

I know GPFS Development was very busy fulfilling the CORAL requirements 
but maybe now there is again some time to improve something else.

If there are any other users on the list that are approaching the current 
limitation in independent filesets, please take some time and vote for the 
RFEs above.

Many thanks in advance and have a nice weekend.
Best Regards,
Stephan Peinkofer

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Peinkofer, Stephan
Dear IBM and GPFS List,


we at the Leibniz Supercomputing Centre and our GCS Partners from the Jülich 
Supercomputing Centre will soon be hitting the current Independent Fileset 
Limit of 1000 on a number of our GPFS Filesystems.


There are also a number of RFEs from other users open, that target this 
limitation:

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=56780

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=120534
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=106530

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=85282


I know GPFS Development was very busy fulfilling the CORAL requirements but 
maybe now there is again some time to improve something else.


If there are any other users on the list that are approaching the current 
limitation in independent filesets, please take some time and vote for the RFEs 
above.


Many thanks in advance and have a nice weekend.

Best Regards,

Stephan Peinkofer


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss