Re: [gpfsug-discuss] More information about CVE-2019-4715

2019-12-20 Thread Peinkofer, Stephan
Dear Leonardo,

I had the same issue as you today. After some time (after I already opened a 
case for this) I noticed that they referenced the APAR numbers in the second 
link you posted.

A Google search for these APAR numbers turns up this:
https://www-01.ibm.com/support/docview.wss?uid=isg1IJ20901

So it seems to be SMB related.

Best,
Stephan Peinkofer

Sent from my iPhone

On 20.12.2019 at 16:33, Avila, Leandro wrote:

Good morning,

I am looking for additional information related to CVE-2019-4715
to try to determine the applicability and impact of this vulnerability
in our environment.

https://exchange.xforce.ibmcloud.com/vulnerabilities/172093
and
https://www.ibm.com/support/pages/node/1118913

From the documents above it is not very clear whether the issue affects mmfsd
or just one of the protocol components (NFS, SMB).

Thank you very much for your attention and help


--

Leandro Avila | NCSA


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] AMD Rome support?

2019-10-22 Thread Peinkofer, Stephan
Dear Jon,

we run a bunch of AMD EPYC Naples dual-socket servers with GPFS in our TSM
server cluster. From what I can say it runs stably, but IO performance in
general, and GPFS performance in particular, is rather poor even compared to a
Xeon E5 v3 system. To put that into perspective: on the Xeon systems with two
EDR IB links we easily get 20 GB/s read and write performance to GPFS using
iozone. On the AMD systems - with all the AMD EPYC tuning suggestions you can
find on the internet applied - we get around 15 GB/s write but only 6 GB/s
read. We also opened a ticket at IBM for this but never found out anything,
probably because not many people are running GPFS on AMD EPYC right now? The
answer from AMD basically was that the poor IO performance is expected in
dual-socket systems because the socket interconnect is the bottleneck. (See
also the IB tests Dell did:
https://www.dell.com/support/article/de/de/debsdt1/sln313856/amd-epyc-stream-hpl-infiniband-and-wrf-performance-study?lang=en
As soon as you have to cross the socket boundary, you get only half of the IB
performance.)

Of course, with Rome everything gets better (that's what AMD told us through
our vendor), but if you have the chance I would recommend benchmarking AMD
vs. Xeon with your particular IO workloads before buying.
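
For reference, a streaming run of the kind behind such numbers might look roughly
like the following iozone invocation (paths, record size, file size and thread
count are purely illustrative, not the exact parameters used):

# sequential write (-i 0) and read (-i 1) throughput with 16 threads,
# one 64 GB file per thread on the GPFS file system under test
# (bash brace expansion supplies one file name per thread)
iozone -i 0 -i 1 -r 16m -s 64g -t 16 -F /gpfs/fs0/bench/f{1..16}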

Best Regards,
Stephan Peinkofer
--
Stephan Peinkofer
Dipl. Inf. (FH), M. Sc. (TUM)

Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
URL: http://www.lrz.de

On 22. Oct 2019, at 11:12, Jon Diprose <j...@well.ox.ac.uk> wrote:

Dear GPFSUG,

I see the FAQ says Spectrum Scale is supported on "AMD Opteron based servers".

Does anyone know if/when support will be officially extended to cover AMD Epyc, 
especially the new 7002 (Rome) series?

Does anyone have any experience of running Spectrum Scale on Rome they could 
share, in particular for protocol nodes and for plain clients?

Thanks,

Jon

--
Dr. Jonathan Diprose <j...@well.ox.ac.uk>
Tel: 01865 287837
Research Computing Manager
Henry Wellcome Building for Genomic Medicine, Roosevelt Drive, Headington,
Oxford OX3 7BN

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-13 Thread Peinkofer, Stephan
Dear Marc,


If you "must" exceed 1000 filesets because you are assigning each project to 
its own fileset, my suggestion is this:

Yes, there are scaling/performance/manageability benefits to using mmbackup 
over independent filesets.

But maybe you don't need 10,000 independent filesets --
maybe you can hash or otherwise randomly assign projects that each have their 
own (dependent) fileset name to a lesser number of independent filesets that 
will serve as management groups for (mm)backup...
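
For illustration, a minimal sketch of such a hashing scheme (the file system
name, fileset naming, junction paths and project list are all hypothetical):

#!/bin/bash
# Map many projects onto a fixed number of independent "management group"
# filesets and give each project its own dependent fileset inside one of them.
FS=fs0
NGROUPS=100

while read -r PROJECT; do
  # stable bucket derived from the project name
  BUCKET=$(( $(printf '%s' "$PROJECT" | cksum | cut -d' ' -f1) % NGROUPS ))
  INDEP="mgmt${BUCKET}"

  # create the independent fileset (own inode space) once per bucket
  if ! mmlsfileset $FS $INDEP >/dev/null 2>&1; then
    mmcrfileset $FS $INDEP --inode-space new
    mmlinkfileset $FS $INDEP -J /gpfs/$FS/$INDEP
  fi

  # the project becomes a dependent fileset sharing that inode space
  mmcrfileset $FS $PROJECT --inode-space $INDEP
  mmlinkfileset $FS $PROJECT -J /gpfs/$FS/$INDEP/$PROJECT
done < projects.txt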

OK, if that might be doable, what's then the performance impact of having to
specify Include/Exclude lists for each independent fileset in order to specify
which dependent filesets should be backed up and which not?
I don't remember exactly, but I think I've heard at some point that
Include/Exclude and mmbackup have to be used with caution. And the same
question holds true for running mmapplypolicy for a "job" on a single dependent
fileset: is the scan runtime linear in the size of the underlying independent
fileset, or are there some optimisations when I just want to scan a
subfolder/dependent fileset of an independent one?
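
For illustration, the two kinds of scan in question might be invoked roughly
like this (the paths, fileset names and policy file are hypothetical):

# scan the whole inode space of the independent fileset that hosts the project
mmapplypolicy /dss/dsstestfs01/mgmt17 -P project_rules.pol --scope inodespace -I defer
# versus pointing the scan at the dependent fileset's junction directory only
mmapplypolicy /dss/dsstestfs01/mgmt17/projectX -P project_rules.pol -I defer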

Like many things in life, sometimes compromises are necessary!

Hmm, can I reference this next time, when we negotiate Scale License pricing 
with the ISS sales people? ;)

Best Regards,
Stephan Peinkofer

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-13 Thread Peinkofer, Stephan
Dear Marc,

OK, so let’s give it a try:

[root@datdsst100 pr74qo]# mmlsfileset dsstestfs01
Filesets in file system 'dsstestfs01':
Name                     Status    Path
root                     Linked    /dss/dsstestfs01
...
quota_test_independent   Linked    /dss/dsstestfs01/quota_test_independent
quota_test_dependent     Linked    /dss/dsstestfs01/quota_test_independent/quota_test_dependent

[root@datdsst100 pr74qo]# mmsetquota dsstestfs01:quota_test_independent --user a2822bp --block 1G:1G --files 10:10
[root@datdsst100 pr74qo]# mmsetquota dsstestfs01:quota_test_dependent --user a2822bp --block 10G:10G --files 100:100

[root@datdsst100 pr74qo]# mmrepquota -u -v dsstestfs01:quota_test_independent
*** Report for USR quotas on dsstestfs01
                                        Block Limits                            |           File Limits
Name     fileset                 type     KB     quota     limit  in_doubt  grace |  files  quota  limit  in_doubt  grace  entryType
a2822bp  quota_test_independent  USR       0   1048576   1048576         0   none |      0     10     10         0   none  e
root     quota_test_independent  USR       0         0         0         0   none |      1      0      0         0   none  i

[root@datdsst100 pr74qo]# mmrepquota -u -v dsstestfs01:quota_test_dependent
*** Report for USR quotas on dsstestfs01
                                        Block Limits                            |           File Limits
Name     fileset                 type     KB     quota     limit  in_doubt  grace |  files  quota  limit  in_doubt  grace  entryType
a2822bp  quota_test_dependent    USR       0  10485760  10485760         0   none |      0    100    100         0   none  e
root     quota_test_dependent    USR       0         0         0         0   none |      1      0      0         0   none  i

Looks good …

[root@datdsst100 pr74qo]# cd /dss/dsstestfs01/quota_test_independent/quota_test_dependent/
[root@datdsst100 quota_test_dependent]# for foo in `seq 1 99`; do touch file${foo}; chown a2822bp:pr28fa file${foo}; done

[root@datdsst100 quota_test_dependent]# mmrepquota -u -v dsstestfs01:quota_test_dependent
*** Report for USR quotas on dsstestfs01
                                        Block Limits                            |           File Limits
Name     fileset                 type     KB     quota     limit  in_doubt  grace |  files  quota  limit  in_doubt  grace  entryType
a2822bp  quota_test_dependent    USR       0  10485760  10485760         0   none |     99    100    100         0   none  e
root     quota_test_dependent    USR       0         0         0         0   none |      1      0      0         0   none  i

[root@datdsst100 quota_test_dependent]# mmrepquota -u -v dsstestfs01:quota_test_independent
*** Report for USR quotas on dsstestfs01
                                        Block Limits                            |           File Limits
Name     fileset                 type     KB     quota     limit  in_doubt  grace |  files  quota  limit  in_doubt  grace  entryType
a2822bp  quota_test_independent  USR       0   1048576   1048576         0   none |      0     10     10         0   none  e
root     quota_test_independent  USR       0         0         0         0   none |      1      0      0         0   none  i

So it seems that per-fileset per-user quotas really do not depend on
independence. But what does the documentation mean, then, with:
>>> User group and user quotas can be tracked at the file system level or per 
>>> independent fileset.
???

However, there still remains the problem with mmbackup and mmapplypolicy …
And if you look at some of the RFEs, like the one from DESY, they want even 
more than 10k independent filesets …


Best Regards,
Stephan Peinkofer
--
Stephan Peinkofer
Dipl. Inf. (FH), M. Sc. (TUM)

Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
Tel: +49(0)89 35831-8715 Fax: +49(0)89 35831-9700
URL: http://www.lrz.de

On 12. Aug 2018, at 15:05, Marc A Kaplan <makap...@us.ibm.com> wrote:

That's interesting, I confess I never read that piece of documentation.
What's also interesting, is that if you look at this doc for quotas:

https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adm_change_quota_anynum_users_onproject_basis_acrs_protocols.htm

The word independent appears only once in a "Note": It is recommended to create 
an independent fileset for the project.

AND if you look at the mmchfs or mmcrfs command you see:
--perfileset-quota

 Sets the scope of user and group quota limit checks to the individual fileset 
level, rather than to the entire file system.
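
For reference, enabling that scope on an existing file system would presumably
look like this (the file system name is hypothetical):

# switch quota enforcement to per-fileset scope and verify the setting
mmchfs fs0 --perfileset-quota
mmlsfs fs0 --perfileset-quota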

With no mention of "d

Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-11 Thread Peinkofer, Stephan
Dear Marc,


so at least your documentation says:

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1hlp_filesfilesets.htm

>>> User group and user quotas can be tracked at the file system level or per 
>>> independent fileset.

But obviously as a customer I don't know if that "Really" depends on 
independence.


Currently, about 70% of our filesets in the Data Science Storage systems get
backed up to ISP, but that number may change over time as it depends on the
requirements of our projects. For them it is just a matter of selecting
"Protect this DSS Container by ISP" in a web form; our portal then
automatically does all the provisioning of the ISP node on one of our ISP
servers, rolls out the new dsm config files to the backup workers, and so on.

Best Regards,
Stephan Peinkofer

From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Marc A Kaplan 

Sent: Friday, August 10, 2018 7:15 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

I know quota stuff was cooked into GPFS before we even had "independent 
filesets"...
So which particular quota features or commands or options now depend on 
"independence"?! Really?

Yes, independent fileset performance for mmapplypolicy and mmbackup scales with 
the inodespace sizes. But I'm curious to know how many of those indy filesets 
are mmback-ed-up.

Appreciate your elaborations, 'cause even though I've worked on some of this 
code, I don't know how/when/if customers push which limits.

-

Dear Marc,

well the primary reasons for us are:

- Per-fileset quota (this seems to work for dependent filesets as well, as far
as I know)

- Per-user per-fileset quota (this seems to work only for independent filesets)

- The dedicated inode space to speed up mmapplypolicy runs that only have to be
applied to a specific subpart of the file system

- Scaling mmbackup by backing up different filesets to different TSM Servers
economically (see the sketch after this list)
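
For illustration, such a split might look roughly like this (the device,
junction paths and TSM server stanza names are hypothetical):

# back up one independent fileset (inode space) per TSM/ISP server
mmbackup /gpfs/fs0/projgroup1 --scope inodespace -t incremental --tsm-servers TSMSRV1
mmbackup /gpfs/fs0/projgroup2 --scope inodespace -t incremental --tsm-servers TSMSRV2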

We currently have more than 1000 projects on our HPC machines and several
different existing and planned file systems (use cases):


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Peinkofer, Stephan
Dear Olaf,


I know that this is "just" a "support" limit. However, Sven once told me at a
UG meeting in Ehningen that there is more to this than just adjusting your QA
qualification tests, since the way it is implemented today does not really
scale ;).

That's probably the reason why you said you sometimes see problems even when
you are not close to the limit.


So if you look at the 250PB Alpine file system of Summit today, that is what's
going to be deployed at more than one site worldwide in 2-4 years, and imho
independent filesets are a great way to make such large systems much more
manageable while still maintaining a unified namespace.

So I really think it would be beneficial if the architectural limit that
prevents scaling the number of independent filesets could be removed
altogether.


Best Regards,
Stephan Peinkofer


From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Olaf Weiser 

Sent: Friday, August 10, 2018 2:51 PM
To: gpfsug main discussion list
Cc: gpfsug-discuss-boun...@spectrumscale.org; Doris Franke; Uwe Tron; Dorian 
Krause
Subject: Re: [gpfsug-discuss] GPFS Independent Fileset Limit

Hallo Stephan,
the limit is not a hard-coded limit - technically speaking, you can raise it
easily.
But as always, it is a question of testing and support ...

I've seen customer cases where the use of a much smaller number of independent
filesets generated a lot of performance issues, hangs ... at least noise and
partial trouble ...
It might not be the case with your specific workload, given that you're
already running close to 1000 ...

I suspect this number of 1000 filesets - at the time it was introduced - was
also just a matter of having to pick some number ...

... it turns out that a general commitment to support > 1000 independent
filesets is more or less hard, because which use cases should we test /
support?
I think there might be a good chance for you that, for your specific workload,
more than 1000 would be allowed and supported.

Do you still have a PMR open on your side for this? If not - I know, opening
PMRs is additional effort - but could you please? Then we can decide if
raising the limit is an option for you.





Mit freundlichen Grüßen / Kind regards


Olaf Weiser

EMEA Storage Competence Center Mainz, German / IBM Systems, Storage Platform,
---
IBM Deutschland
IBM Allee 1
71139 Ehningen
Phone: +49-170-579-44-66
E-Mail: olaf.wei...@de.ibm.com
---
IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert 
Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 
14562 / WEEE-Reg.-Nr. DE 99369940



From: "Peinkofer, Stephan" 
To: gpfsug main discussion list 
Cc: Doris Franke , Uwe Tron , Dorian Krause 
Date: 08/10/2018 01:29 PM
Subject: [gpfsug-discuss] GPFS Independent Fileset Limit
Sent by: gpfsug-discuss-boun...@spectrumscale.org




Dear IBM and GPFS List,

we at the Leibniz Supercomputing Centre and our GCS Partners from the Jülich 
Supercomputing Centre will soon be hitting the current Independent Fileset 
Limit of 1000 on a number of our GPFS Filesystems.

There are also a number of open RFEs from other users that target this
limitation:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=56780
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=120534
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=106530
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=85282

I know GPFS Development was very busy fulfilling the CORAL requirements but 
maybe now there is again some time to improve something else.

If there are any other users on the list that are approaching the current 
limitation in independent filesets, please take some time and vote for the RFEs 
above.

Many thanks in advance and have a nice weekend.
Best Regards,
Stephan Peinkofer

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] GPFS Independent Fileset Limit

2018-08-10 Thread Peinkofer, Stephan
Dear IBM and GPFS List,


we at the Leibniz Supercomputing Centre and our GCS Partners from the Jülich 
Supercomputing Centre will soon be hitting the current Independent Fileset 
Limit of 1000 on a number of our GPFS Filesystems.


There are also a number of open RFEs from other users that target this
limitation:

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=56780

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=120534
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=106530

https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=85282


I know GPFS Development was very busy fulfilling the CORAL requirements but 
maybe now there is again some time to improve something else.


If there are any other users on the list that are approaching the current 
limitation in independent filesets, please take some time and vote for the RFEs 
above.


Many thanks in advance and have a nice weekend.

Best Regards,

Stephan Peinkofer


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Mixing RDMA Client Fabrics for a single NSD Cluster

2018-07-20 Thread Peinkofer, Stephan
Dear Simon and List,


Thanks. That was exactly what I was looking for.


Best Regards,

Stephan Peinkofer



From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Simon Thompson 

Sent: Thursday, July 19, 2018 5:42 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Mixing RDMA Client Fabrics for a single NSD 
Cluster

I think what you want is to use fabric numbers with verbsPorts, e.g. we have
two IB fabrics and in the config we do things like:

[nodeclass1]
verbsPorts mlx4_0/1/1
[nodeclass2]
verbsPorts mlx5_0/1/3

GPFS recognises the /1 or /3 at the end as a fabric number, knows they are
separate, and will use Ethernet between those nodes instead.
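
For reference, a minimal sketch of how such per-node-class settings could be
applied (the node class names, adapters and port numbers are hypothetical):

# enable RDMA and assign each node class its HCA port plus a fabric number
mmchconfig verbsRdma=enable -N nodeclass1,nodeclass2
mmchconfig verbsPorts="mlx4_0/1/1" -N nodeclass1
mmchconfig verbsPorts="mlx5_0/1/3" -N nodeclass2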

Simon

From:  on behalf of 
"stephan.peinko...@lrz.de" 
Reply-To: "gpfsug-discuss@spectrumscale.org" 
Date: Thursday, 19 July 2018 at 15:13
To: "gpfsug-discuss@spectrumscale.org" 
Subject: [gpfsug-discuss] Mixing RDMA Client Fabrics for a single NSD Cluster

Dear GPFS List,

Does any of you know whether it is possible to have multiple file systems in a
GPFS cluster that are all served primarily via Ethernet, but for which
different "booster" connections to various IB/OPA fabrics exist?

For example let’s say in my central Storage/NSD Cluster, I implement two file 
systems FS1 and FS2. FS1 is served by NSD-A and NSD-B and FS2 is served by 
NSD-C and NSD-D.
Now I have two client Clusters C1 and C2 which have different OPA fabrics. Both 
Clusters can mount the two file systems via Ethernet, but I now add OPA 
connections for NSD-A and NSD-B to C1’s fabric and OPA connections for NSD-C 
and NSD-D to  C2’s fabric and just switch on RDMA.
As far as I understood, GPFS will use RDMA if it is available between two nodes 
but switch to Ethernet if RDMA is not available between the two nodes. So given 
just this, the above scenario could work in principle. But will it work in 
reality and will it be supported by IBM?

Many thanks in advance.
Best Regards,
Stephan Peinkofer
--
Stephan Peinkofer
Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
URL: http://www.lrz.de



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Mixing RDMA Client Fabrics for a single NSD Cluster

2018-07-19 Thread Peinkofer, Stephan
Dear GPFS List,

Does any of you know whether it is possible to have multiple file systems in a
GPFS cluster that are all served primarily via Ethernet, but for which
different "booster" connections to various IB/OPA fabrics exist?

For example let’s say in my central Storage/NSD Cluster, I implement two file 
systems FS1 and FS2. FS1 is served by NSD-A and NSD-B and FS2 is served by 
NSD-C and NSD-D.
Now I have two client Clusters C1 and C2 which have different OPA fabrics. Both 
Clusters can mount the two file systems via Ethernet, but I now add OPA 
connections for NSD-A and NSD-B to C1’s fabric and OPA connections for NSD-C 
and NSD-D to  C2’s fabric and just switch on RDMA.
As far as I understood, GPFS will use RDMA if it is available between two nodes 
but switch to Ethernet if RDMA is not available between the two nodes. So given 
just this, the above scenario could work in principle. But will it work in 
reality and will it be supported by IBM?

Many thanks in advance.
Best Regards,
Stephan Peinkofer
--
Stephan Peinkofer
Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
URL: http://www.lrz.de

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

2018-01-08 Thread Peinkofer, Stephan
Dear List,


my very personal experience today, using the patched kernel for SLES 12.1 LTS
(3.12.74-60.64.69.1) on a single VM, was that GPFS (4.2.3-4) did not even
start (the kernel modules seemed to compile fine using mmbuildgpl).
Interestingly, even when I disabled PTI explicitly using the nopti kernel
option, GPFS refused to start with the same error!?


mmfs.log always showed something like this:

...

/usr/lpp/mmfs/bin/runmmfs[336]: .[213]: loadKernelExt[674]: InsModWrapper[95]: 
eval: line 1: 3915: Memory fault

...

2018-01-08_09:01:27.520+0100 runmmfs: error in loading or unloading the mmfs 
kernel extension

...

Since I had no time to investigate the issue further and raise a ticket right
now, I just downgraded to the previous kernel and everything worked again.

As we have to patch at least the login nodes of our HPC clusters asap, I would
also appreciate it if we could get a statement from IBM on how the KPTI patches
are expected to interact with GPFS, whether there are any (general) problems,
and when we can expect updated GPFS packages.

Many thanks in advance.
Best Regards,
Stephan Peinkofer


From: gpfsug-discuss-boun...@spectrumscale.org 
 on behalf of Buterbaugh, Kevin L 

Sent: Monday, January 8, 2018 5:52 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS

Hi GPFS Team,

Thanks for this response.  If it is at all possible, I know that we (and I
suspect many others are in this same boat) would greatly appreciate an update
from IBM on how a patched kernel impacts GPFS functionality.  Yes, we'd love to
know the performance impact of the patches on GPFS, but that pales in
significance compared to knowing whether GPFS version 4.x.x.x will even *start*
with the patched kernel(s).

Thanks again…

Kevin

On Jan 4, 2018, at 4:55 PM, IBM Spectrum Scale wrote:


Kevin,

The team is aware of Meltdown and Spectre. Due to the late availability of 
production-ready test patches (they became available today) we started today 
working on evaluating the impact of applying these patches. The focus would be 
both on any potential functional impacts (especially to the kernel modules 
shipped with GPFS) and on the performance degradation which affects user/kernel 
mode transitions. Performance characterization will be complex, as some system 
calls which may get invoked often by the mmfsd daemon will suddenly become 
significantly more expensive because of the kernel changes. Depending on the 
main areas affected, code changes might be possible to alleviate the impact, by 
reducing frequency of certain calls, etc. Any such changes will be deployed 
over time.

At this point, we can't say what impact this will have on stability or
performance on systems running GPFS until IBM issues an official statement on
this topic. We hope to have some basic answers soon.



Regards, The Spectrum Scale (GPFS) team

--
If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=----0479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and 
you have an IBM software maintenance contract please contact 1-800-237-5511 in 
the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for 
priority messages to the Spectrum Scale (GPFS) team.

From: "Buterbaugh, Kevin L" 
To: gpfsug main discussion list 
Date: 01/04/2018 01:11 PM
Subject: [gpfsug-discuss] Meltdown, Spectre, and impacts on GPFS
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org





Happy New Year everyone,

I’m sure that everyone is aware of Meltdown and Spectre by now … we, like many 
other institutions, will be patching for it at the earliest possible 
opportunity.

Our understanding is that the most serious of the 

Re: [gpfsug-discuss] Experience with CES NFS export management

2017-10-23 Thread Peinkofer, Stephan
Dear Chetan,

interesting. I’m running ISS 4.2.3-4 and it seems to ship with 
nfs-ganesha-2.3.2. So are you already using a future ISS version?

Here is what I see:
[root@datdsst102 pr74cu-dss-0002]# mmnfs export list
Path                              Delegations  Clients
------------------------------------------------------------
/dss/dsstestfs01/pr74cu-dss-0002  NONE         10.156.29.73
/dss/dsstestfs01/pr74cu-dss-0002  NONE         10.156.29.72

[root@datdsst102 pr74cu-dss-0002]# mmnfs export change /dss/dsstestfs01/pr74cu-dss-0002 --nfschange "10.156.29.72(access_type=RW,squash=no_root_squash,protocols=4,transports=tcp,sectype=sys,manage_gids=true)"
datdsst102.dss.lrz.de:  Redirecting to /bin/systemctl stop nfs-ganesha.service
datdsst102.dss.lrz.de:  Redirecting to /bin/systemctl start nfs-ganesha.service
NFS Configuration successfully changed. NFS server restarted on all NFS nodes on which NFS server is running.

[root@datdsst102 pr74cu-dss-0002]# mmnfs export change /dss/dsstestfs01/pr74cu-dss-0002 --nfschange "10.156.29.72(access_type=RW,squash=no_root_squash,protocols=4,transports=tcp,sectype=sys,manage_gids=true)"
datdsst102.dss.lrz.de:  Redirecting to /bin/systemctl stop nfs-ganesha.service
datdsst102.dss.lrz.de:  Redirecting to /bin/systemctl start nfs-ganesha.service
NFS Configuration successfully changed. NFS server restarted on all NFS nodes on which NFS server is running.

[root@datdsst102 pr74cu-dss-0002]# mmnfs export change /dss/dsstestfs01/pr74cu-dss-0002 --nfsadd "10.156.29.74(access_type=RW,squash=no_root_squash,protocols=4,transports=tcp,sectype=sys,manage_gids=true)"
datdsst102.dss.lrz.de:  Redirecting to /bin/systemctl stop nfs-ganesha.service
datdsst102.dss.lrz.de:  Redirecting to /bin/systemctl start nfs-ganesha.service
NFS Configuration successfully changed. NFS server restarted on all NFS nodes on which NFS server is running.

[root@datdsst102 ~]# mmnfs export change /dss/dsstestfs01/pr74cu-dss-0002 --nfsremove 10.156.29.74
datdsst102.dss.lrz.de:  Redirecting to /bin/systemctl stop nfs-ganesha.service
datdsst102.dss.lrz.de:  Redirecting to /bin/systemctl start nfs-ganesha.service
NFS Configuration successfully changed. NFS server restarted on all NFS nodes on which NFS server is running.

Best Regards,
Stephan Peinkofer
--
Stephan Peinkofer
Dipl. Inf. (FH), M. Sc. (TUM)

Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
Tel: +49(0)89 35831-8715 Fax: +49(0)89 35831-9700
URL: http://www.lrz.de

On 23. Oct 2017, at 13:56, Chetan R Kulkarni <chetk...@in.ibm.com> wrote:


Hi Stephan,

I observed the ganesha service getting restarted only after adding the first NFS export.
For the rest of the operations (e.g. adding more NFS exports, changing NFS
exports, removing NFS exports), the ganesha service doesn't restart.

My observations are based on the following simple tests. I ran them against a
RHEL 7.3 test cluster running nfs-ganesha-2.5.2.

tests:
1. created 1st nfs export - ganesha service was restarted
2. created 4 more nfs exports (mmnfs export add path)
3. changed 2 nfs exports (mmnfs export change path --nfschange);
4. removed all 5 exports one by one (mmnfs export remove path)
5. no nfs exports after step 4 on my test system. So, created a new nfs export 
(which will be the 1st nfs export).
6. change nfs export created in step 5

results observed:
ganesha service restarted for test 1 and test 5.
For the rest of the tests (2, 3, 4, 6), the ganesha service didn't restart.

Thanks,
Chetan.

From: "Peinkofer, Stephan" <stephan.peinko...@lrz.de>
To: "gpfsug-discuss@spectrumscale.org" <gpfsug-discuss@spectrumscale.org>
Date: 10/23/2017 04:11 PM
Subject: [gpfsug-discuss] Experience with CES NFS export management
Sent by: gpfsug-discuss-boun...@spectrumscale.org





Dear List,

I’m currently working on a self-service portal for managing NFS exports of ISS,
basically something very similar to OpenStack Manila but tailored to our
specific needs.
While it was very easy to do this using the great REST API of ISS, I stumbled
across a fact that may even be a show stopper: according to the documentation
for mmnfs, each time we create/change/delete an NFS export via mmnfs, the
ganesha service is restarted