Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Jake Carroll
There are also a number of open RFEs from other users that target this limitation:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=56780
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=120534
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=106530
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=85282

I know GPFS Development was very busy fulfilling the CORAL requirements but 
maybe now there is again some time to improve something else.

If there are any other users on the list that are approaching the current 
limitation in independent filesets, please take some time and vote for the RFEs 
above.

Many thanks in advance and have a nice weekend.
Best Regards,
Stephan Peinkofer


[gpfsug-discuss] Top files on GPFS filesystem

2018-08-10 Thread Anderson Ferreira Nobre
Hi all,
 
Does anyone know how to list the top files by throughput and IOPS in a single GPFS filesystem like filemon in AIX?
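
As far as I know there is no direct filemon equivalent in GPFS that ranks individual files; the closest built-in counters are per filesystem (and per node), via mmpmon. A rough Python sketch, assuming the machine-readable fs_io_s output format as I recall it from the mmpmon documentation (the 5-second window and field selection are arbitrary):

#!/usr/bin/env python3
# Hedged sketch: sample mmpmon "fs_io_s" twice and print per-filesystem
# rates. Note this is per filesystem and per node, NOT per file.
import subprocess
import tempfile
import time

def fs_io_sample():
    # mmpmon reads its commands from a file given with -i; -p selects
    # the machine-readable "_key_ value" output format
    with tempfile.NamedTemporaryFile("w", suffix=".mmpmon") as f:
        f.write("fs_io_s\n")
        f.flush()
        out = subprocess.run(["mmpmon", "-p", "-i", f.name],
                             capture_output=True, text=True, check=True).stdout
    stats = {}
    for line in out.splitlines():
        if not line.startswith("_fs_io_s_"):
            continue
        tok = line.split()
        kv = dict(zip(tok[1::2], tok[2::2]))  # pair up "_key_ value" tokens
        stats[kv["_fs_"]] = {k: int(kv[k]) for k in ("_br_", "_bw_", "_rdc_", "_wc_")}
    return stats

INTERVAL = 5  # seconds between the two samples
before = fs_io_sample()
time.sleep(INTERVAL)
after = fs_io_sample()
for fs, cur in after.items():
    delta = {k: cur[k] - before[fs][k] for k in cur}  # counters are cumulative
    print(f"{fs}: read {delta['_br_'] / INTERVAL / 2**20:.1f} MiB/s, "
          f"write {delta['_bw_'] / INTERVAL / 2**20:.1f} MiB/s, "
          f"{(delta['_rdc_'] + delta['_wc_']) / INTERVAL:.0f} IOPS")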
 
 
Abraços / Regards / Saludos,
 
Anderson Nobre
AIX & Power Consultant
Master Certified IT Specialist
IBM Systems Hardware Client Technical Team – IBM Systems Lab Services
Phone: 55-19-2132-4317
E-mail: ano...@br.ibm.com



Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Marc A Kaplan
I know quota stuff was cooked into GPFS before we even had "independent 
filesets"...
So which particular quota features or commands or options now depend on 
"independence"?! Really?

Yes, independent fileset performance for mmapplypolicy and mmbackup scales 
with the inodespace sizes. But I'm curious to know how many of those indy 
filesets are mmback-ed-up.
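
For reference, a policy run restricted to a single independent fileset's inode space might look roughly like the sketch below; the junction path, rule file, node class, and work directory are hypothetical:

#!/usr/bin/env python3
# Hedged sketch: run mmapplypolicy against one fileset junction with
# --scope inodespace, so the parallel scan covers only that inode space.
import subprocess

subprocess.run([
    "mmapplypolicy", "/gpfs/fs1/proj042",  # junction of an independent fileset
    "-P", "/var/mmfs/etc/list.rules",      # hypothetical policy rule file
    "--scope", "inodespace",               # scan only this fileset's inode space
    "-N", "managerNodes",                  # node class doing the parallel scan
    "-g", "/gpfs/fs1/.ptmp",               # shared global work directory
], check=True)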

Appreciate your elaborations, 'cause even though I've worked on some of 
this code, I don't know how/when/if customers push which limits.

-----
Dear Marc,

well, the primary reasons for us are:
- Per-fileset quota (this seems to work also for dependent filesets, as far 
as I know)
- Per-user per-fileset quota (this seems to work only for independent 
filesets)
- The dedicated inode space, to speed up policy (mmapplypolicy) runs that 
only have to be applied to a specific subpart of the file system
- Scaling mmbackup economically by backing up different filesets to 
different TSM servers (see the sketch below)
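
A rough sketch of that last point, assuming mmbackup's documented --scope and --tsm-servers options; the fileset junctions and TSM server stanzas are made up, and in practice you would run these in parallel rather than sequentially:

#!/usr/bin/env python3
# Hedged sketch: one mmbackup per independent fileset, spread round-robin
# over several TSM servers so the backups scale out economically.
import itertools
import subprocess

FILESETS = ["proj001", "proj002", "proj003", "proj004"]  # hypothetical
TSM_SERVERS = ["TSM1", "TSM2"]                           # hypothetical stanzas

for fileset, server in zip(FILESETS, itertools.cycle(TSM_SERVERS)):
    subprocess.run([
        "mmbackup", f"/gpfs/fs1/{fileset}",  # fileset junction path
        "--scope", "inodespace",             # limit the scan to its inode space
        "--tsm-servers", server,
    ], check=True)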

We currently have more than 1000 projects on our HPC machines and several 
different existing and planned file systems (use cases):





Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Bryan Banister
Just as a follow-up to my own note: Stephan already provided a list of 
existing RFEs to vote for through the IBM RFE site, cheers,
-Bryan

From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Bryan Banister
Sent: Friday, August 10, 2018 10:51 AM
To: gpfsug main discussion list 
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit


This is definitely a great candidate for an RFE, if one does not already exist.

Not to contradict my friend Olaf here, but I have been talking a lot with 
those internal to IBM, and the PMR process is for finding and correcting 
operational problems with the code level you are running, and for closing out 
the PMR as quickly as possible.  PMRs are not the vehicle for getting 
substantive changes and enhancements made to the product; the RFE process is 
really the main way to do that.

I just got off a call with Kristie and Carl about the RFE process, and those 
on the list may know that we are working to improve this overall process.  
More will be sent out about this in the near future!!  So I thought I would 
chime in on this discussion to hopefully help us understand how important the 
RFE process (admittedly not currently great) really is; it will be a great 
way to work together on these common goals and needs for the product we rely 
so heavily upon!

Cheers!!
-Bryan


Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Doug Johnson
Hi all,

I want to chime in because this is precisely what we have done at OSC
due to the same motivations Janell described.  Our design was based in
part on the guidelines in the "Petascale Data Protection" white paper
from IBM.  We only have ~200 filesets and 250M inodes today, but expect
to grow.

We are also very interested in details about performance issues and
independent filesets.  Can IBM elaborate?

Best,
Doug



Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Peinkofer, Stephan
Dear Olaf,


I know that this is "just" a "support" limit. However, Sven once told me at a 
UG meeting in Ehningen that there is more to this than just adjusting your QA 
qualification tests, since the way it is implemented today does not really 
scale ;).

That's probably the reason why you said you sometimes see problems even when 
you are not close to the limit.


So if you look at the 250PB Alpine file system of Summit today: that is what 
is going to be deployed at more than one site worldwide in 2-4 years, and 
imho independent filesets are a great way to make such large systems much 
more manageable while still maintaining a unified namespace.

So I really think it would be beneficial if the architectural limit that 
prevents scaling the number of independent filesets could be removed 
altogether.


Best Regards,
Stephan Peinkofer




Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Martin Lischewski

Hello Olaf, hello Marc,

we in Jülich are in the middle of migrating/copying all our old filesystems, 
which were created with filesystem version 13.23 (3.5.0.7), to new 
filesystems created with GPFS 5.0.1.


We are moving to new filesystems mainly for two reasons: 1. We want to use 
the new increased number of subblocks. 2. We have to change our quota from a 
normal "group quota per filesystem" to "fileset quota".


The idea is to create a separate fileset for each group/project. For the 
users, the quota computation should be much more transparent: from now on, 
all data stored inside their directory (fileset) counts toward their quota, 
independent of ownership.


Right now we have roughly 900 groups, which means we will create roughly 900 
filesets per filesystem. In one filesystem we will have about 400 million 
inodes (with a rising tendency).


We will back up this filesystem with "mmbackup", so we talked to Dominic 
Mueller-Wicke and he recommended that we use independent filesets, because 
then the policy runs can be parallelized and we can increase the backup 
performance. We believe that we require these parallelized policy runs to 
meet our backup performance targets.


But there are even more features we enable by using independent filesets, 
e.g. "fileset-level snapshots" and "user and group quotas inside of a 
fileset".


I did not know about performance issues regarding independent 
filesets... Can you give us some more information about this?


All in all we are strongly supporting the idea of increasing this limit.

Do I understand correctly that, by opening a PMR, IBM allows this limit to be 
increased at specific sites? I would rather increase the limit and make it 
officially publicly available and supported.


Regards,

Martin



Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Marc A Kaplan
Questions: How/why was the decision made to use a large number (~1000) of 
independent filesets?
What functions/features/commands are being used that work with independent 
filesets but do not also work with "dependent" filesets?




Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Olaf Weiser
Hallo Stephan,
the limit is not a hard-coded limit - technically speaking, you can raise it 
easily.
But as always, it is a question of test and support ..

I've seen customer cases where the use of a much smaller number of 
independent filesets generated a lot of performance issues, hangs ... at 
least noise and partial trouble ..
It might not be the case with your specific workload, because you're already 
running close to 1000 ...

I suspect this number of 1000 filesets - at the time it was introduced - was 
also just that: one had to pick a number ...

... it turns out that a general commitment to support > 1000 independent 
filesets is more or less hard, because which use cases should we test / 
support?
I think there might be a good chance for you that, for your specific 
workload, one would allow and support more than 1000.

Do you still have a PMR open on your side for this? If not - I know, open 
PMRs are additional effort - but could you please open one?
Then we can decide if raising the limit is an option for you.


Mit freundlichen Grüßen / Kind regards

Olaf Weiser

EMEA Storage Competence Center Mainz, Germany / IBM Systems, Storage Platform
---
IBM Deutschland
IBM Allee 1
71139 Ehningen
Phone: +49-170-579-44-66
E-Mail: olaf.wei...@de.ibm.com
---
IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert 
Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, 
HRB 14562 / WEEE-Reg.-Nr. DE 99369940



[gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Peinkofer, Stephan
Dear IBM and GPFS List,


we at the Leibniz Supercomputing Centre and our GCS Partners from the Jülich 
Supercomputing Centre will soon be hitting the current Independent Fileset 
Limit of 1000 on a number of our GPFS Filesystems.


There are also a number of open RFEs from other users that target this 
limitation:

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=56780

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=120534
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=106530

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=85282


I know GPFS Development was very busy fulfilling the CORAL requirements but 
maybe now there is again some time to improve something else.


If there are any other users on the list that are approaching the current 
limitation in independent filesets, please take some time and vote for the RFEs 
above.


Many thanks in advance and have a nice weekend.

Best Regards,

Stephan Peinkofer


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss