Re: [gpfsug-discuss] IO sizes

2022-02-28 Thread Grunenberg, Renar
Hello Uwe,
is numactl already installed on the affected node? If it is missing, the NUMA-related
Scale functionality does not work.
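
A quick, hedged way to check this (package names are the usual RHEL ones and may differ on other distributions):

# rpm -q numactl numactl-libs
# numactl --hardware
# mmfsadm dump config | grep -i numaMemoryInterleave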


Renar Grunenberg
Abteilung Informatik - Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de


From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Uwe Falke
Sent: Monday, 28 February 2022 10:17
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] IO sizes


Hi, Kumaran,

that would explain the smaller IOs before the reboot, but not the 
larger-than-4MiB IOs afterwards on that machine.

Also, I have already seen that the numaMemoryInterleave setting seems to have no
effect (on that very installation); I just have not yet opened a PMR for it.
I had checked memory usage, of course, and saw that regardless of this setting
one socket's memory is always almost completely consumed while the other one's
is rather empty - that looks like a bug to me, but it needs further investigation.

Uwe

On 24.02.22 15:32, Kumaran Rajaram wrote:
Hi Uwe,

>> But what puzzles me even more: one of the servers compiles IOs even smaller,
>> varying between 3.2MiB and 3.6MiB mostly - both for reads and writes ... I
>> just cannot see why.

IMHO, if GPFS on this particular NSD server was restarted often during the
setup, then it is possible that the GPFS pagepool may not be contiguous. As a
result, the GPFS 8MiB buffer in the pagepool might be a scatter-gather (SG) list
with many small entries (in memory), resulting in smaller I/Os when these
buffers are issued to the disks. The fix would be to reboot the server and
start GPFS so that the pagepool is contiguous, resulting in the 8MiB buffer
being comprised of one (or only a few) SG entries.

>> In the current situation (i.e. with IOs bit larger than 4MiB) setting
>> max_sectors_kB to 4096 might do the trick, but as I do not know the cause for
>> that behaviour it might well start to issue IOs smaller than 4MiB again at
>> some point, so that is not a nice solution.
It is advised not to restart GPFS often on the NSD servers (in production)
in order to keep the pagepool contiguous. Ensure that there is enough free memory in the NSD
server and do not run any memory-intensive jobs, so that the pagepool is not impacted
(e.g. swapped out).
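
As an aside, the max_sectors_kB limit mentioned in the quote above lives in sysfs and can be inspected or changed per block device; a hedged sketch, where sdX is a placeholder for the actual NSD device and the echo does not persist across reboots:

# cat /sys/block/sdX/queue/max_hw_sectors_kb
# cat /sys/block/sdX/queue/max_sectors_kb
# echo 4096 > /sys/block/sdX/queue/max_sectors_kb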

Also, enable GPFS numaMemoryInterleave=yes and verify that pagepool is equally 
distributed across the NUMA domains for good performance. GPFS 
numaMemoryInterleave=yes requires that numactl packages are installed and then 
GPFS restarted.

# mmfsadm dump config | egrep "numaMemory|pagepool "
! numaMemoryInterleave yes
! pagepool 282394099712

# pgrep mmfsd | xargs numastat -p

Per-node process memory usage (in MBs) for PID 2120821 (mmfsd)
                       Node 0          Node 1           Total
              --------------- --------------- ---------------
Huge                     0.00            0.00            0.00
Heap                     1.26            3.26            4.52
Stack                    0.01            0.01            0.02
Private             137710.43       137709.96       275420.39
              --------------- --------------- ---------------
Total               137711.70       137713.23       275424.92

My two cents,
-Kums

Kumaran Rajaram

From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Uwe Falke
Sent: Wednesday, February 23, 2022 8:04 PM
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] IO sizes


Hi,

the test bench is gpfsperf running on up to 12 clients with 1...64 threads 
doing 

Re: [gpfsug-discuss] WAS: alternative path; Now: RDMA

2021-12-10 Thread Grunenberg, Renar
Hello Walter,
we now have a lot of experience with moving the storage systems in our
backup environment to RDMA over InfiniBand with HDR and EDR connections. Coming from a
16 Gbit FC infrastructure, we increased our throughput from 7 GB/s to 30 GB/s. The main
reasons are the elimination of the driver layers in the client systems and the
buffer-to-buffer communication enabled by RDMA. The latency reduction is significant.
Regards Renar.
We now use ESS3k and ESS5k systems at the 6.1.1.2 code level.
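
For reference, RDMA over InfiniBand is controlled in Scale through the verbs settings; a hedged sketch of checking and enabling them (port names such as mlx5_0/1 are placeholders for your HCA ports, and the change takes effect after GPFS is restarted on the affected nodes):

# mmlsconfig verbsRdma verbsPorts
# mmchconfig verbsRdma=enable
# mmchconfig verbsPorts="mlx5_0/1 mlx5_1/1"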



From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Walter Sklenka
Sent: Friday, 10 December 2021 11:17
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] WAS: alternative path; Now: RDMA

Hello Douglas!
May I ask a basic question regarding GPUDirect Storage, or locally attached
storage such as NVMe disks in general? Do you think it outperforms "classical" shared
storage systems attached via FC to NSD servers that are themselves HDR-attached?
With FC you also have bounce copies and more delay, don't you?
There are solutions around which work with local NVMe disks, building some
protection level with RAID (or replication). I am curious whether that would be a
better approach than shared storage, which has its limitations (cost-intensive
scale-out, extra infrastructure, max 64Gb at this time ...).

Best regards
Walter

From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Douglas O'Flaherty
Sent: Friday, 10 December 2021 05:24
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] WAS: alternative path; Now: RDMA

Jonathan:

You posed a reasonable question, which was "when is RDMA worth the hassle?" I
agree with part of your premise, which is that it only matters when the
bottleneck isn't somewhere else. With a parallel file system like Scale/GPFS,
the absolute performance bottleneck is not the throughput of a single drive. In
the majority of Scale/GPFS clusters the network data path is the performance
limitation. Once they deploy HDR or 100/200/400 Gbps Ethernet, the buffer copy
time inside the server matters.

When the device is an accelerator, like a GPU, the benefit of RDMA (GDS) is
easily demonstrated because it eliminates the bounce copy through the system
memory. In our NVIDIA DGX A100 server testing we were able to get
around 2x the per-system throughput by using RDMA direct to GPU (GPUDirect
Storage). (Tested on 2 DGX systems with 4x HDR links per storage node.)

However, your question remains. Synthetic benchmarks are good indicators of 
technical benefit, but do your users and applications need that extra 
performance?

There are probably only a handful of codes in organizations that need this.
However, they are high-value use cases. We have client applications that either
read a lot of data semi-randomly and not cached - think mini-Epics for scaling
ML training - or demand the lowest response time, like production inference for
voice recognition and NLP.

If anyone has use cases for GPU-accelerated codes with truly demanding data
needs, please reach out directly. We are looking for more use cases to
characterize the benefit for a new paper. If you can provide some code examples,
we can help test whether RDMA direct to GPU (GPUDirect Storage) is a benefit.

Thanks,

doug

Douglas O'Flaherty
dougla...@us.ibm.com






- Message from Jonathan Buzzard <jonathan.buzz...@strath.ac.uk> on Fri,

Re: [gpfsug-discuss] GUI does not work after upgrade from 4.X to 5.1.1

2021-08-04 Thread Grunenberg, Renar
Hello Iban,
this is already fixed in 5.1.1.1.




-----Original Message-----
From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Iban Cabrillo
Sent: Wednesday, 4 August 2021 13:34
To: gpfsug-discuss
Subject: Re: [gpfsug-discuss] GUI does not work after upgrade from 4.X to 5.1.1

Hi

   Changing this line in /usr/lpp/mmfs/gui/bin-sudo/check4sudoers:

   msg=$(echo "$ii" | egrep "env_keep=\"PATH SUDO_USER REMOTE_USER GPFS_GUI_USER ANSIBLE_\*\"")

   to

   msg=$(echo "$ii" | egrep "env_keep=\"PATH SUDO_USER REMOTE_USER GPFS_GUI_USER\"")

does the trick.
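
A hedged sketch of applying that one-line change from the shell (untested; keep a backup, since a later GUI update may overwrite the file again):

# cp /usr/lpp/mmfs/gui/bin-sudo/check4sudoers /usr/lpp/mmfs/gui/bin-sudo/check4sudoers.bak
# sed -i 's/ ANSIBLE_\\\*//' /usr/lpp/mmfs/gui/bin-sudo/check4sudoers
# grep -n 'env_keep=' /usr/lpp/mmfs/gui/bin-sudo/check4sudoers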

Regards I

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] TSM errors restoring files with ACL's

2021-03-05 Thread Grunenberg, Renar
Hello All,
the mentioned problem with Protect was this:
https://www.ibm.com/support/pages/node/6415985?myns=s033=OCSTXKQY=E_sp=s033-_-OCSTXKQY-_-E
Regards Renar




-----Original Message-----
From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Jonathan Buzzard
Sent: Friday, 5 March 2021 14:08
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] TSM errors restoring files with ACL's

On 05/03/2021 12:15, Frederick Stock wrote:
> Have you checked to see if Spectrum Protect (TSM) has addressed this
> problem.  There recently was an issue with Protect and how it used the
> GPFS API for ACLs.  If I recall Protect was not properly handling a
> return code.  I do not know if it is relevant to your problem but  it
> seemed worth mentioning.

As far as I am aware 8.1.11.0 is the most recent version of the Spectrum
Protect/TSM client. There is nothing newer showing on the IBM FTP site

ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance/client/v8r1/Linux/LinuxX86/BA/

Checking on fix central also seems to show that 8.1.11.0 is the latest
version, and the only fix over 8.1.10.0 is a security update to do with
the client web user interface.


JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Disk in unrecovered state

2021-01-12 Thread Grunenberg, Renar
Hello Iban,
first you should check the path to the disk (mmlsnsd -m). It seems to be broken
from the OS point of view, and that should be fixed first. If you see no /dev entry you have a
hardware problem. Once that is fixed, you can start each disk individually to see
whether it comes up. Which Scale version are you on?
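
A hedged sketch of that sequence, using the filesystem and NSD names from the output below (adjust to your own disks):

# mmlsnsd -m
# mmchdisk gpfs2 start -d nsd18jbod1
# mmlsdisk gpfs2 -e

mmchdisk ... start -d can also take a semicolon-separated list of disks once the device paths are healthy again.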



From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Iban Cabrillo
Sent: Tuesday, 12 January 2021 15:32
To: gpfsug-discuss
Subject: [gpfsug-discuss] Disk in unrecovered state

Dear all,
   Since this morning I have a couple of disks (7) in down state. I have tried to
start them again, but after that they change to unrecovered.
   These "failed" disks are data-only. Both the data and metadata pools have two
failure groups, with replication set to 2. The metadata disks are in two different
enclosures, one for each failure group. The filesystem has been unmounted, but
when I tried to run mmfsck it told me that I should remove the down disks first.

   [root@gpfs06 ~]# mmlsdisk gpfs2 -L | grep -v up
disk         driver   sector     failure holds    holds                                    storage
name         type       size       group metadata data  status        availability disk id pool
------------ -------- ------ ----------- -------- ----- ------------- ------------ ------- ------------
.
nsd18jbod1   nsd         512           2 No       Yes   to be emptied unrecovered       26 data
nsd19jbod1   nsd         512           2 No       Yes   ready         unrecovered       27 data
nsd19jbod2   nsd         512           3 No       Yes   ready         down              46 data
nsd24jbod2   nsd         512           3 No       Yes   ready         down              51 data
nsd57jbod1   nsd         512           2 No       Yes   ready         down             109 data
nsd61jbod1   nsd         512           2 No       Yes   ready         down             113 data
nsd71jbod1   nsd         512           2 No       Yes   ready         down             123 data
.

Any help is welcomed.
Regards, I

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Introduction: IBM Elastic Storage System (ESS) 3000 (Spectrum Scale)

2020-01-07 Thread Grunenberg, Renar
Hello Farida,
can you check your links? It seems they do not work for people outside
the IBM network.



From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Farida Yaragatti1
Sent: Tuesday, 7 January 2020 11:48
To: gpfsug-discuss@spectrumscale.org
Cc: Wesley Jones; Mohsin A Inamdar; Sumit Kumar43; Ricardo Daniel Zamora Ruvalcaba; Rajan Mishra1; Pramod T Achutha; Rezaul Islam; Ravindra Sure
Subject: [gpfsug-discuss] Introduction: IBM Elastic Storage System (ESS) 3000 (Spectrum Scale)


Hello All,

My name is Farida Yaragatti and I am part of the IBM Elastic Storage System (ESS) 3000
team, India Systems Development Lab, IBM India Pvt. Ltd.

IBM Elastic Storage System (ESS) 3000 installs and upgrades GPFS using containerization.
For more details, please go through the following links, which were published
and released recently on December 9th, 2019.

The IBM Lab Services team can install an Elastic Storage Server 3000 as an
included service as part of the acquisition. Alternatively, the customer's IT team can
do the installation.

> The ESS 3000 quick deployment documentation is at the following web page:
https://ibm.biz/Bdz7qb

The following documents provide information that you need for proper deployment,
installation, and upgrade procedures for an IBM ESS 3000:

> IBM ESS 3000: Planning for the system, service maintenance packages, and 
> service procedures:
https://ibm.biz/Bdz7qp

Our team would like to participate in the Spectrum Scale user group events
happening across the world in 2020, as we are using Spectrum Scale. Please let
us know how we can initiate or post our submission for these events.


Regards,
Farida Yaragatti
ESS Deployment (Testing Team), India Systems Development Lab
IBM India Pvt. Ltd., EGL D Block, 6th Floor, Bangalore, Karnataka, 560071, India


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full

2019-12-10 Thread Grunenberg, Renar
Hello Juanma,
it is safe; the only changes happen when you change the filesystem version
with mmchfs <device> -V full.
As a tip, you should update to 5.0.3.3; it is a very stable level for us.
Regards Renar
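
A hedged sketch of the usual sequence (fs1 is a placeholder filesystem name; checking the current values first makes the change visible):

# mmlsconfig minReleaseLevel
# mmchconfig release=LATEST
# mmlsfs fs1 -V
# mmchfs fs1 -V full

Note that mmchfs -V full is one-way: once the filesystem format is upgraded, nodes running older Scale releases can no longer mount that filesystem.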



From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of FUENTES DIAZ, JUAN MANUEL
Sent: Tuesday, 10 December 2019 10:45
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full

Hi,

Recently our group migrated Spectrum Scale from 4.2.3.9 to 5.0.3.0.
According to the documentation, to finish and consolidate the migration we
should also update the config and the filesystems to the latest version with
the commands above. Our cluster is a single cluster and all the nodes have the
same version. My question is whether we can run those commands safely
without compromising the data and metadata.

Thanks Juanma
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Spectrum Scale Technote mmap

2019-08-20 Thread Grunenberg, Renar
Hello All,
can someone clarify the affected levels - in which PTF is the problem present and in
which is it not?
The abstract says v5.0.3.0 to 5.0.3.2, but the content says 5.0.3.0
to 5.0.3.3?

https://www-01.ibm.com/support/docview.wss?uid=ibm10960396

Regards Renar



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD failed with EIO, switching to access the disk remotely."

2019-06-25 Thread Grunenberg, Renar
Hello Son,
please run mmnsddiscover -a -N all. Do all NSDs have their server stanza
definitions?





> On 25.06.2019 at 17:02, Son Truong wrote:
>
>
> Hello Renar,
>
> Thanks for that command, very useful and I can now see the problematic NSDs 
> are all served remotely.
>
> I have double checked the multipath and devices and I can see these NSDs are 
> available locally.
>
> How do I get GPFS to recognise this and serve them out via 'localhost'?
>
> mmnsddiscover -d  seemed to have brought two of the four problematic 
> NSDs back to being served locally, but the other two are not behaving. I have 
> double checked the availability of these devices and their multipaths but 
> everything on that side seems fine.
>
> Any more ideas?
>
> Regards,
> Son
>
>
> ---
>
> Message: 2
> Date: Tue, 25 Jun 2019 12:10:53 +
> From: "Grunenberg, Renar" 
> To: "gpfsug-discuss@spectrumscale.org"
>
> Subject: Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to
>NSD failed with EIO, switching to access the disk remotely."
> Message-ID: 
> Content-Type: text/plain; charset="utf-8"
>
> Hello Son,
>
> you can check the access to the NSD with mmlsdisk <fsname> -m. This gives you
> a column like 'IO performed on node'. On an NSD server you should see localhost,
> on an NSD client you see the hosting NSD server per device.
>
> Regards Renar
>
>
> From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Son Truong
> Sent: Tuesday, 25 June 2019 13:38
> To: gpfsug-discuss@spectrumscale.org
> Subject: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD failed
> with EIO, switching to access the disk remotely."
>
> Hello,
>

Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD failed with EIO, switching to access the disk remotely."

2019-06-25 Thread Grunenberg, Renar
Hello Son,

you can check the access to the NSD with mmlsdisk <fsname> -m. This gives you a
column like 'IO performed on node'. On an NSD server you should see localhost; on an
NSD client you see the hosting NSD server per device.

Regards Renar
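
A hedged sketch of checking the access path and nudging a node back to local access (filesystem, NSD and node names are placeholders):

# mmlsdisk <fsname> -m
# mmlsnsd -m
# mmnsddiscover -d <nsdname> -N <nodename>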



From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Son Truong
Sent: Tuesday, 25 June 2019 13:38
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD failed
with EIO, switching to access the disk remotely."

Hello,

I wonder if anyone has seen this… I am (not) having fun with the 
rescan-scsi-bus.sh command especially with the -r switch. Even though there are 
no devices removed the script seems to interrupt currently working NSDs and 
these messages appear in the mmfs.logs:

2019-06-25_06:30:48.706+0100: [I] Connected to   
2019-06-25_06:30:48.764+0100: [E] Local access to  failed with EIO, 
switching to access the disk remotely.
2019-06-25_06:30:51.187+0100: [E] Local access to  failed with EIO, 
switching to access the disk remotely.
2019-06-25_06:30:51.188+0100: [E] Local access to  failed with EIO, 
switching to access the disk remotely.
2019-06-25_06:30:51.188+0100: [N] Connecting to   
2019-06-25_06:30:51.195+0100: [I] Connected to   
2019-06-25_06:30:59.857+0100: [N] Connecting to   
2019-06-25_06:30:59.863+0100: [I] Connected to   
2019-06-25_06:33:30.134+0100: [E] Local access to  failed with EIO, 
switching to access the disk remotely.
2019-06-25_06:33:30.151+0100: [E] Local access to  failed with EIO, 
switching to access the disk remotely.

These messages appear roughly at the same time each day and I’ve checked the 
NSDs via mmlsnsd and mmlsdisk commands and they are all ‘ready’ and ‘up’. The 
multipaths to these NSDs are all fine too.

Is there a way of finding out what 'access' (local or remote) a particular node
has to an NSD? And is there a command to force it to switch to local access -
'mmnsdrediscover' returns nothing and runs really fast (contrary to the
statement 'This may take a while' when it runs)?

Any ideas appreciated!

Regards,
Son

Son V Truong - Senior Storage Administrator
Advanced Computing Research Centre
IT Services, University of Bristol
Email: son.tru...@bristol.ac.uk
Tel: Mobile: +44 (0) 7732 257 232
Address: 31 Great George Street, Bristol, BS1 5QD

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] WG: Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-11 Thread Grunenberg, Renar
Hello Felipe,
can you explain whether this is a generic problem in RHEL or only Scale-related? Are
there any circumventions already available? We asked Red Hat but have no indication
that this is known to them.

Regards Renar



From: gpfsug-discuss-boun...@spectrumscale.org On Behalf Of Felipe Knop
Sent: Monday, 10 June 2019 15:43
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] WG: Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2


Renar,

Thanks. Of the changes below, it appears that

* security: double-free attempted in security_inode_init_security() (BZ#1702286)

was the one that ended up triggering the problem. Our investigations now show 
that RHEL kernels >= 3.10.0-957.19.1 are impacted.
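
A quick, hedged check for whether a node is running an impacted kernel (compare the output against the 3.10.0-957.19.1 threshold stated above):

# uname -r
# rpm -qa kernel | sort -V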


Felipe


Felipe Knop k...@us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314




From: "Grunenberg, Renar" 
mailto:renar.grunenb...@huk-coburg.de>>
To: "'gpfsug-discuss@spectrumscale.org'" 
mailto:gpfsug-discuss@spectrumscale.org>>
Date: 06/10/2019 08:43 AM
Subject: [EXTERNAL] [gpfsug-discuss] WG: Spectrum Scale with RHEL7.6 kernel 
3.10.0-957.21.2
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>





Hello Felipe,

here is the change list:
RHBA-2019:1337 kernel bug fix update


Summary:

Updated kernel packages that fix various bugs are now available for Red Hat 
Enterprise Linux 7.

The kernel packages contain the Linux kernel, the core of any Linux operating 
system.

This update fixes the following bugs:

* Mellanox CX-5 MAC learning with OVS H/W offload not working (BZ#1686292)

* RHEL7.4 NFS4.1 client and server repeated SEQUENCE / TEST_STATEIDs with 
SEQUENCE Reply has SEQ4_STATUS_RECALLABLE_STATE_REVOKED set - NFS server should 
return NFS4ERR_DELEG_REVOKED or NFS4ERR_BAD_STATEID for revoked delegations 
(BZ#1689811)

* PANIC: "BUG: unable to handle kernel paging request" in the mtip32xx 
mtip_init_cmd_header routine (BZ#1689929)

* The nvme cli delete-ns command hangs indefinitely. (BZ#1690519)

* drm/nouveau: nv50 - Graphics become sluggish or frozen for nvidia Pascal 
cards (Regression from 1584963) - Need to flush fb writes when rewinding push 
buffer (BZ#1690761)

* [CEE/SD] Ceph+NFS server crashed and rebooted due to CephFS kernel client 
issue (BZ#1692266)

* [Mellanox OVS offload] tc fails to calculate the checksum in case vlan trunk 
and header rewrite (BZ#1693110)

* aio O_DIRECT writes to non-page-aligned file locations on ext4 can result in 
the overlapped portion of the page containing zeros (BZ#1693561)

* [HP WS 7.6 bug] Audio driver does not recognize multi function audio jack 
microphone input (BZ#1693562)

* XFS returns ENOSPC when using extent size hint with space still available 
(BZ#1693796)

* OVN requires IPv6 to be enabled (BZ#1694981)

* breaks DMA API for non-GPL drivers (BZ#1695511)

* ovl_create can return positive retval and crash the host (BZ#1696292)

* ceph: append mode is broken for sync/direct write (BZ#1696595)

* Problem building module due to -EXPORT_SYMBOL_GPL/-EXPORT_SYMBOL (BZ#1697241)

* Failed to load kpatch module after install the rpm package occasionally on 
ppc64le (BZ#1697867)

* [Hyper-V][RHEL7]

[gpfsug-discuss] WG: Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-10 Thread Grunenberg, Renar
Hello Felipe,

here is the change list:
RHBA-2019:1337 kernel bug fix update


Summary:

Updated kernel packages that fix various bugs are now available for Red Hat 
Enterprise Linux 7.

The kernel packages contain the Linux kernel, the core of any Linux operating 
system.

This update fixes the following bugs:

* Mellanox CX-5 MAC learning with OVS H/W offload not working (BZ#1686292)

* RHEL7.4 NFS4.1 client and server repeated SEQUENCE / TEST_STATEIDs with 
SEQUENCE Reply has SEQ4_STATUS_RECALLABLE_STATE_REVOKED set - NFS server should 
return NFS4ERR_DELEG_REVOKED or NFS4ERR_BAD_STATEID for revoked delegations 
(BZ#1689811)

* PANIC: "BUG: unable to handle kernel paging request" in the mtip32xx 
mtip_init_cmd_header routine (BZ#1689929)

* The nvme cli delete-ns command hangs indefinitely. (BZ#1690519)

* drm/nouveau: nv50 - Graphics become sluggish or frozen for nvidia Pascal 
cards (Regression from 1584963) - Need to flush fb writes when rewinding push 
buffer (BZ#1690761)

* [CEE/SD] Ceph+NFS server crashed and rebooted due to CephFS kernel client 
issue (BZ#1692266)

* [Mellanox OVS offload] tc fails to calculate the checksum in case vlan trunk 
and header rewrite (BZ#1693110)

* aio O_DIRECT writes to non-page-aligned file locations on ext4 can result in 
the overlapped portion of the page containing zeros (BZ#1693561)

* [HP WS 7.6 bug]  Audio driver does not recognize multi function audio jack 
microphone input (BZ#1693562)

* XFS returns ENOSPC when using extent size hint with  space still available 
(BZ#1693796)

* OVN requires IPv6 to be enabled (BZ#1694981)

* breaks DMA API for non-GPL drivers (BZ#1695511)

* ovl_create can return positive retval and crash the host (BZ#1696292)

* ceph: append mode is broken for sync/direct write (BZ#1696595)

* Problem building module due to -EXPORT_SYMBOL_GPL/-EXPORT_SYMBOL (BZ#1697241)

* Failed to load kpatch module after install the rpm package occasionally on 
ppc64le (BZ#1697867)

* [Hyper-V][RHEL7] Stop suppressing PCID bit (BZ#1697940)

* Resizing an online EXT4 filesystem on a loopback device hangs (BZ#1698110)

* dm table: propagate BDI_CAP_STABLE_WRITES (BZ#1699722)

* [ESXi][RHEL7.6]After upgrade to kernel-3.10.0-957.el7, system is unable to 
discover newly added VMware LSI Logic SAS virtual disks without a reboot. 
(BZ#1699723)

* kernel: zcrypt: fix specification exception on z196 at ap probe (BZ#1700706)

* XFS: Metadata corruption detected at xfs_attr3_leaf_write_verify() 
(BZ#1701293)

* stime showed huge values related to wrong calculation of time deltas (L3:) 
(BZ#1701743)

* Kernel panic due to NULL pointer dereference at 
sysfs_do_create_link_sd.isra.2+0x34 while loading [ipmi_si] module using 
hard-coded device (BZ#1701991)

* IPv6 ECMP modulo N hashing inefficient when X^2 rt6i_nsiblings (BZ#1702282)

* security: double-free attempted in security_inode_init_security() (BZ#1702286)

* Missing wakeup leaves task stuck waiting in blk_queue_enter() (BZ#1702921)

* Satellite Capsule sync triggers several XFS corruptions (BZ#1702922)

* BUG: SELinux doesn't handle NFS crossmnt well (BZ#1702923)

* md_clear flag missing from /proc/cpuinfo on late microcode update (BZ#1712993)

* MDS mitigations are not enabled after double microcode update (BZ#1712998)

* WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:90 
__static_key_slow_dec+0xa6/0xb0 (BZ#1713004)

Users of kernel are advised to upgrade to these updated packages, which fix 
these bugs.

Full details and references:

https://access.redhat.com/errata/RHBA-2019:1337?sc_cid=70160006NHXAA2

Revision History:

Issue Date: 2019-06-04
Updated:2019-06-04

Regards Renar



Re: [gpfsug-discuss] Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-10 Thread Grunenberg, Renar
Hello Felipe,

here is the change list:
RHBA-2019:1337 kernel bug fix update


Summary:

Updated kernel packages that fix various bugs are now available for Red Hat 
Enterprise Linux 7.

The kernel packages contain the Linux kernel, the core of any Linux operating 
system.

This update fixes the following bugs:

* Mellanox CX-5 MAC learning with OVS H/W offload not working (BZ#1686292)

* RHEL7.4 NFS4.1 client and server repeated SEQUENCE / TEST_STATEIDs with 
SEQUENCE Reply has SEQ4_STATUS_RECALLABLE_STATE_REVOKED set - NFS server should 
return NFS4ERR_DELEG_REVOKED or NFS4ERR_BAD_STATEID for revoked delegations 
(BZ#1689811)

* PANIC: "BUG: unable to handle kernel paging request" in the mtip32xx 
mtip_init_cmd_header routine (BZ#1689929)

* The nvme cli delete-ns command hangs indefinitely. (BZ#1690519)

* drm/nouveau: nv50 - Graphics become sluggish or frozen for nvidia Pascal 
cards (Regression from 1584963) - Need to flush fb writes when rewinding push 
buffer (BZ#1690761)

* [CEE/SD] Ceph+NFS server crashed and rebooted due to CephFS kernel client 
issue (BZ#1692266)

* [Mellanox OVS offload] tc fails to calculate the checksum in case vlan trunk 
and header rewrite (BZ#1693110)

* aio O_DIRECT writes to non-page-aligned file locations on ext4 can result in 
the overlapped portion of the page containing zeros (BZ#1693561)

* [HP WS 7.6 bug]  Audio driver does not recognize multi function audio jack 
microphone input (BZ#1693562)

* XFS returns ENOSPC when using extent size hint with  space still available 
(BZ#1693796)

* OVN requires IPv6 to be enabled (BZ#1694981)

* breaks DMA API for non-GPL drivers (BZ#1695511)

* ovl_create can return positive retval and crash the host (BZ#1696292)

* ceph: append mode is broken for sync/direct write (BZ#1696595)

* Problem building module due to -EXPORT_SYMBOL_GPL/-EXPORT_SYMBOL (BZ#1697241)

* Failed to load kpatch module after install the rpm package occasionally on 
ppc64le (BZ#1697867)

* [Hyper-V][RHEL7] Stop suppressing PCID bit (BZ#1697940)

* Resizing an online EXT4 filesystem on a loopback device hangs (BZ#1698110)

* dm table: propagate BDI_CAP_STABLE_WRITES (BZ#1699722)

* [ESXi][RHEL7.6]After upgrade to kernel-3.10.0-957.el7, system is unable to 
discover newly added VMware LSI Logic SAS virtual disks without a reboot. 
(BZ#1699723)

* kernel: zcrypt: fix specification exception on z196 at ap probe (BZ#1700706)

* XFS: Metadata corruption detected at xfs_attr3_leaf_write_verify() 
(BZ#1701293)

* stime showed huge values related to wrong calculation of time deltas (L3:) 
(BZ#1701743)

* Kernel panic due to NULL pointer dereference at 
sysfs_do_create_link_sd.isra.2+0x34 while loading [ipmi_si] module using 
hard-coded device (BZ#1701991)

* IPv6 ECMP modulo N hashing inefficient when X^2 rt6i_nsiblings (BZ#1702282)

* security: double-free attempted in security_inode_init_security() (BZ#1702286)

* Missing wakeup leaves task stuck waiting in blk_queue_enter() (BZ#1702921)

* Satellite Capsule sync triggers several XFS corruptions (BZ#1702922)

* BUG: SELinux doesn't handle NFS crossmnt well (BZ#1702923)

* md_clear flag missing from /proc/cpuinfo on late microcode update (BZ#1712993)

* MDS mitigations are not enabled after double microcode update (BZ#1712998)

* WARNING: CPU: 0 PID: 0 at kernel/jump_label.c:90 
__static_key_slow_dec+0xa6/0xb0 (BZ#1713004)

Users of kernel are advised to upgrade to these updated packages, which fix 
these bugs.

Full details and references:

https://access.redhat.com/errata/RHBA-2019:1337?sc_cid=70160006NHXAA2

Revision History:

Issue Date: 2019-06-04
Updated:2019-06-04

Regards Renar



Re: [gpfsug-discuss] Identifiable groups of disks?

2019-05-14 Thread Grunenberg, Renar
Hello Aaron,
the granularity for handling storage capacity in Scale is the disk, at the time the
filesystem is created. These disks are created as NSDs that represent your
physical LUNs. Each filesystem has its own distinct set of NSDs (disks). What you
want is possible, no problem.
Regards Renar
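
A hedged sketch of how that looks in practice (device paths, NSD names, server names and the filesystem name are placeholders; only the common stanza options are shown):

%nsd: device=/dev/mapper/a0 nsd=nsd_a0 servers=server1,server2 usage=dataAndMetadata failureGroup=1 pool=system
%nsd: device=/dev/mapper/a1 nsd=nsd_a1 servers=server2,server1 usage=dataAndMetadata failureGroup=2 pool=system

# mmcrnsd -F stanza.fsA
# mmcrfs fsA -F stanza.fsA

A second stanza file listing only the b0..bn-1 devices then builds file system B; deleting B later (mmdelfs fsB followed by mmdelnsd for its NSDs) leaves A's disks untouched.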



From: gpfsug-discuss-boun...@spectrumscale.org [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Aaron Turner
Sent: Tuesday, 14 May 2019 10:47
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] Identifiable groups of disks?

Scenario:


  *   one set of JBODS
  *   want to create two GPFS file systems
  *   want to ensure that file system A uses physical disks a0, a1... an-1 and 
file system B uses physical disks b0, b1... bn-1
  *   want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on 
creation
  *   Potentially allows all disks b0..bn-1 to be destroyed if required whilst 
not affecting a0..an-1
Is this possible in GPFS?

Regards

______________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.tur...@lboro.ac.uk
01509 226185
__


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-28 Thread Grunenberg, Renar
Hello Truong Vu,

unfortunately the results are the same; the command response is not what we want.
OK, we want to analyze this with the trace facility and came to the following
link in the Knowledge Center:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1ins_instracsupp.htm

The documentation says that we must copy two Windows files, tracefmt.exe and
tracelog.exe, but the first one is only available in DDK version 7.1
(W2K3), not in WDK version 8 or 10. We use W2K12. Can you clarify where I
can find the mentioned files?

Regards Renar.



From: gpfsug-discuss-boun...@spectrumscale.org [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Truong Vu
Sent: Thursday, 24 January 2019 19:18
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays


Hi Renar,

Let's see if /bin/rm really is the problem here. Can you run the
command again without cleaning up the temp files, as follows:

DEBUG=1 keepTempFiles=1 mmgetstate -a

Thanks,
Tru.



From: gpfsug-discuss-requ...@spectrumscale.org
To: gpfsug-discuss@spectrumscale.org
Date: 01/23/2019 07:46 AM
Subject: gpfsug-discuss Digest, Vol 84, Issue 32
Sent by: gpfsug-discuss-boun...@spectrumscale.org






Message: 1
Date: Wed, 23 Jan 2019 12:45:39 +
From: "Grunenberg, Renar" <renar.grunenb...@huk-coburg.de>
To: 'gpfsug main discussion list' <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays
Message-ID: <349cb338583a4c1d996677837fc65...@smxrf105.msg.hukrf.de>
Content-Type: text/plain; charset="utf-8"

Hello All,

as a point to the problem, it seems that all the delays are happening here:

DEBUG=1 mmgetstate -a

……..
/bin/rm -f /var/mmfs/gen/mmsdrfs.1256 
/var/mmfs/tmp/allClusterNodes.mmgetstate.1256 
/var/mmfs/tmp/allQuorumNodes.mmgetstate.1256 
/var/mmfs/tmp/allNonQuorumNodes.mmgetstate.1256 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.pub 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.priv 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.cert 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.keysto

Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-23 Thread Grunenberg, Renar
Hello All,

as a point to the problem, it seems that all the delays are happening here:

DEBUG=1 mmgetstate -a

……..
/bin/rm -f /var/mmfs/gen/mmsdrfs.1256 
/var/mmfs/tmp/allClusterNodes.mmgetstate.1256 
/var/mmfs/tmp/allQuorumNodes.mmgetstate.1256 
/var/mmfs/tmp/allNonQuorumNodes.mmgetstate.1256 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.pub 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.priv 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.cert 
/var/mmfs/ssl/stage/tmpKeyData.mmgetstate.1256.keystore 
/var/mmfs/tmp/nodefile.mmgetstate.1256 /var/mmfs/tmp/diskfile.mmgetstate.1256 
/var/mmfs/tmp/diskNamesFile.mmgetstate.1256

Any pointers as to whether this will be fixed in the near future are welcome.

Regards Renar



Von: Grunenberg, Renar
Gesendet: Dienstag, 22. Januar 2019 18:10
An: 'gpfsug main discussion list' 
Betreff: AW: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

Hallo Roger,
first, thanks for the tip. But we decided to separate the Linux I/O cluster from
the Windows client-only cluster because of security requirements and ssh
management requirements. This way we can use locally named admins on Windows, and
on Linux a daemon and a separate admin-interface network for password-less root
ssh. Your hint seems to be CCR-related; or is this a Cygwin problem?
@Spectrum Scale Team:
Point 1: IPv6 can't be disabled because applications want to use it. But the
mmcmi cmd already gives us the right IPv4 addresses.
Point 2: There are no DNS issues.
Point 3: We must check this.
Any recommendations on Roger's statements?

Regards Renar

Von: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>
 [mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Roger Moye
Gesendet: Dienstag, 22. Januar 2019 16:43
An: gpfsug main discussion list 
mailto:gpfsug-discuss@spectrumscale.org>>
Betreff: Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

We experienced the same issue and were advised not to use Windows for quorum 
nodes.   We moved our Windows nodes into the storage cluster which was entirely 
Linux and that solved it.   If this is not an option, perhaps adding some Linux 
nodes to your remote cluster as quorum nodes would help.

-Roger


From: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>
 [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of IBM Spectrum 
Scale
Sent: Monday, January 21, 2019 5:35 PM
To: gpfsug main discussion list 
mailto:gpfsug-discuss@spectrumscale.org>>
Subject: Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

Hello Renar,

A few things to try:

  1.  Make sure IPv6 is disabled. On each Windows node, run "mmcmi host <hostname>",
with <hostname> being itself and each and every node in the cluster. Make sure
mmcmi prints a valid IPv4 address.

  2.  To eliminate DNS issues, try adding IPv4 entries for each cluster node in
"c:\windows\system32\drivers\etc\hosts".

  3.  If any anti-virus is active, disable realtime scanning on c:\cygwin64
(wherever you installed cygwin 64-bit).

You can also try debugging a script, say: (from GPFS ksh):  DEBUG=1  
mmlscluster, and see what takes time.

Regards, The Spectrum Scale (GPFS) team

--
If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=--000

Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-22 Thread Grunenberg, Renar
Hallo Roger,
first, thanks for the tip. But we decided to separate the Linux I/O cluster from
the Windows client-only cluster because of security requirements and ssh
management requirements. This way we can use locally named admins on Windows, and
on Linux a daemon and a separate admin-interface network for password-less root
ssh. Your hint seems to be CCR-related; or is this a Cygwin problem?
@Spectrum Scale Team:
Point 1: IPv6 can't be disabled because applications want to use it. But the
mmcmi cmd already gives us the right IPv4 addresses.
Point 2: There are no DNS issues.
Point 3: We must check this.
Any recommendations on Roger's statements?

Regards Renar


Renar Grunenberg
Abteilung Informatik - Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Roger Moye
Gesendet: Dienstag, 22. Januar 2019 16:43
An: gpfsug main discussion list 
Betreff: Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

We experienced the same issue and were advised not to use Windows for quorum 
nodes.   We moved our Windows nodes into the storage cluster which was entirely 
Linux and that solved it.   If this is not an option, perhaps adding some Linux 
nodes to your remote cluster as quorum nodes would help.

-Roger


From: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>
 [mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of IBM Spectrum 
Scale
Sent: Monday, January 21, 2019 5:35 PM
To: gpfsug main discussion list 
mailto:gpfsug-discuss@spectrumscale.org>>
Subject: Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

Hello Renar,

A few things to try:

  1.  Make sure IPv6 is disabled. On each Windows node, run "mmcmi host <hostname>",
with <hostname> being itself and each and every node in the cluster. Make sure
mmcmi prints a valid IPv4 address.

  2.  To eliminate DNS issues, try adding IPv4 entries for each cluster node in
"c:\windows\system32\drivers\etc\hosts".

  3.  If any anti-virus is active, disable realtime scanning on c:\cygwin64
(wherever you installed cygwin 64-bit).

You can also try debugging a script, say: (from GPFS ksh):  DEBUG=1  
mmlscluster, and see what takes time.
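For convenience, the per-node check from point 1 can be looped over all cluster
nodes, roughly like this (a sketch; the node names are placeholders):

for n in winnode01 winnode02 linuxnsd01; do
    printf '%-20s ' "$n"
    mmcmi host "$n"     # should print a valid IPv4 address for every node
done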

Regards, The Spectrum Scale (GPFS) team

--
If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=----0479.

If your query concerns a potential software error in Spectrum Scale (GPFS) and 
you have an IBM software maintenance contract please contact  1-800-237-5511 in 
the United States or your local IBM Service Center in other countries.

The forum is informally monitored as time permits and should not be used for 
priority messages to the Spectrum Scale (GPFS) team.



From:"Grunenberg, Renar" 
mailto:renar.grunenb...@huk-coburg.de>>
To:"'gpfsug-discuss@spectrumscale.org'" 
mailto:gpfsug-discuss@spectrumscale.org>>
Date:01/21/2019 08:01 AM
Subject:[gpfsug-discuss] Spectrum Scale Cygwin cmd delays
Sent by:
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>



Hello All,

We test Spectrum Scale on a Windows-only client cluster (remote mounted to a
Linux cluster), but

the execution of mm commands in Cygwin is very slow.

[gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-21 Thread Grunenberg, Renar
Hello All,
We are testing Spectrum Scale on a Windows-only client cluster (remote mounted to a
Linux cluster), but
the execution of mm commands in Cygwin is very slow.
We have tried the following adjustments to increase the execution speed.


  *   We installed the Cygwin server as a service (cygserver-config).
Unfortunately, this resulted in no faster execution.
  *   Adaptation of the hosts file: 127.0.0.1 localhost cygdrive wpad,
to prevent any DNS problems when accessing „/cygdrive/...“
  *   Started the commands as Administrator.

All adjustments have so far not led to any improvement. Are there any hints to
improve the command execution time on Windows (W2K12 is currently in use)?
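(As a simple way to quantify "very slow" and to compare before/after each
adjustment, timing a read-only command from the Cygwin shell is enough; a sketch:

time mmlscluster > /dev/null
time mmgetstate -a > /dev/null
)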

Regards Renar


Renar Grunenberg
Abteilung Informatik - Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] A cautionary tale of upgrades

2019-01-11 Thread Grunenberg, Renar
Hallo Simon,
Welcome to the club. This behavior is a bug in tsctl that mangles the DNS names.
We had this already 4 weeks ago. The fix was to update to 5.0.2.1.
Regards Renar


Von meinem iPhone gesendet


Renar Grunenberg
Abteilung Informatik - Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.


Am 11.01.2019 um 15:19 schrieb Simon Thompson 
mailto:s.j.thomp...@bham.ac.uk>>:


I’ll start by saying this is our experience, maybe we did something stupid 
along the way, but just in case others see similar issues …

We have a cluster which contains protocol nodes; these were all happily running
GPFS 5.0.1-2 code. But the cluster was only 4 nodes + 1 quorum node – manager
and quorum functions were handled by the 4 protocol nodes.

Then one day we needed to reboot a protocol node. We did so and its disk 
controller appeared to have failed. Oh well, we thought we’ll fix that another 
day, we still have three other quorum nodes.

As they are all getting a little long in the tooth and were starting to 
struggle, we thought, well we have DME, lets add some new nodes for quorum and 
token functions. Being shiny and new they were all installed with GPFS 5.0.2-1 
code.

All was well.

Then, some time later, we needed to restart another of the CES nodes. When we
started GPFS on the node, it was causing havoc in our cluster – CES IPs were
constantly being assigned, then removed from the remaining nodes in the
cluster. Crap, we thought, and disabled the node in the cluster. This made things
stabilise, and as we'd been having other GPFS issues, we didn't want service to
be interrupted whilst we dug into this. Besides, it was nearly Christmas and we
had conferences and other work to contend with.

More time passes and we're about to cut over all our backend storage to some
shiny new DSS-G kit, so we plan a whole-system maintenance window. We finish
all our data syncs and then try to start our protocol nodes to test them. No
dice … we can't get any of the nodes to bring up IPs; the logs look like they
start the assignment process, but then give up.

After a lot of digging in the mm Korn shell scripts, and some studious use of DEBUG=1
when testing, we find that mmcesnetmvaddress is calling "tsctl shownodes up".
On our protocol nodes, we find output of the form:
bear-er-dtn01.bb2.cluster.cluster,rds-aw-ctdb01-data.bb2.cluster.cluster,rds-er-ctdb01-data.bb2.cluster.cluster,bber-irods-ires01-data.bb2.cluster.cluster,bber-irods-icat01-data.bb2.cluster.cluster,bbaw-irods-icat01-data.bb2.cluster.cluster,proto-pg-mgr01.bear.cluster.cluster,proto-pg-pf01.bear.cluster.cluster,proto-pg-dtn01.bear.cluster.cluster,proto-er-mgr01.bear.cluster.cluster,proto-er-pf01.bear.cluster.cluster,proto-aw-mgr01.bear.cluster.cluster,proto-aw-pf01.bear.cluster.cluster

Now our DNS name for these nodes is bb2.cluster … something is repeating the 
DNS name.
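(A quick way to spot this duplicated-suffix symptom is to split the node list and
look for the doubled domain; a sketch, with the suffix obviously being site-specific:

tsctl shownodes up | tr ',' '\n' | grep '\.cluster\.cluster' | sort -u
)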

So we dig around, resolv.conf, /etc/hosts etc all look good and name resolution 
seems fine.

We look around on the manager/quorum nodes and they don’t do this 
cluster.cluster thing. We can’t find anything else Linux config wise that looks 
bad. In fact the only difference is that our CES nodes are running 5.0.1-2 and 
the manager nodes 5.0.2-1. Given we’re changing the whole storage hardware, we 
didn’t want to change the GPFS/NFS/SMB code on the CES nodes, (we’ve been 
bitten before with SMB packages not working properly in our environment), but 
we go ahead and do GPFS and NFS packages.

Suddenly, magically all is working again. CES starts fine and IPs get assigned 
OK. And tsctl gives the correct output.

So, my supposition is that there is some incompatibility between 5.0.1-2 and 
5.0.2-1 when 

[gpfsug-discuss] Spectrum Scale for Windows Domain -User Requirements

2018-12-06 Thread Grunenberg, Renar
Hallo All,

I have a question about the domain-user root account on Windows. We have some
requirements to restrict this level of authorization and found no information on
what can be changed here.
Two questions:
1. Is it possible to define a domain account other than root for this?
2. If not, is it possible to define a local account as root on Windows clients?

Any hints are appreciate.
Thanks Renar

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] Fwd: Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-12-01 Thread Grunenberg, Renar

Hallo All,
Today we updated our owning cluster to 5.0.2.1. After that we retested our
case, and our problem seems to be fixed. Thanks to all for the hints.
Regards Renar



Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:  09561 96-44110
Telefax:  09561 96-44104
E-Mail:   renar.grunenb...@huk-coburg.de
Internet: www.huk.de
===
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
===
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.
===

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-29 Thread Grunenberg, Renar
Hallo Tomer,
thanks for this info, but can you explain in which release all of these points
are fixed now?


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Tomer Perry
Gesendet: Donnerstag, 29. November 2018 08:45
An: gpfsug main discussion list ; Olaf Weiser 

Betreff: Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem 
panic on accessing cluster after upgrading the owning cluster first

Hi,

I remember there was some defect around tsctl and mixed domains - not sure if
it was fixed and in what version.
A workaround in the past was to "wrap" tsctl with a script that would strip
those duplicated domain parts.

Olaf might be able to provide more info ( I believe he had some sample script).
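For illustration only, such a wrapper could look roughly like the following. This is
a sketch of the idea, not the actual sample script: it assumes the real binary has
been moved aside to tsctl.bin under the default /usr/lpp/mmfs/bin path and that the
duplicated suffix is ".cluster.cluster" as in the earlier post:

#!/bin/ksh
# installed in place of /usr/lpp/mmfs/bin/tsctl (hypothetical)
/usr/lpp/mmfs/bin/tsctl.bin "$@" | sed 's/\.cluster\.cluster/.cluster/g'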


Regards,

Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: t...@il.ibm.com<mailto:t...@il.ibm.com>
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel:+1 720 3422758
Israel Tel:  +972 3 9188625
Mobile: +972 52 2554625




From:        "Grunenberg, Renar" 
mailto:renar.grunenb...@huk-coburg.de>>
To:'gpfsug main discussion list' 
mailto:gpfsug-discuss@spectrumscale.org>>
Date:29/11/2018 09:29
Subject:Re: [gpfsug-discuss] Status for Alert: remotely mounted 
filesystem panic on accessing cluster after upgrading the owning cluster first
Sent by:
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>




Hallo All,
in relation to this alert, I have some questions about experiences with
establishing remote clusters with different FQDNs.
What we see here is that the owning cluster (5.0.1.1) and the local cluster
(5.0.2.1) have different domain names, and both are connected through a firewall.
ICMP, port 1191 and the ephemeral ports are open.
If we dump the tscomm component of both daemons, we see connections to nodes that
are named [hostname + FQDN of the local cluster + FQDN of the remote cluster].
We analyzed nscd and DNS, took some tcpdumps and so on, and came to the conclusion
that tsctl generates this wrong node name; then, if a cluster manager takeover
happens because of a shutdown of that daemon (on the owning cluster side), the
join protocol rejects the connection.
Are there any comparable experiences in the field, and if yes, what is the
solution?
Thanks Renar

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:

09561 96-44110

Telefax:

09561 96-44104

E-Mail:

renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>

Internet:

www.huk.de


HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received th

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-28 Thread Grunenberg, Renar
Hallo All,
in relation to this alert, I have some questions about experiences with
establishing remote clusters with different FQDNs.
What we see here is that the owning cluster (5.0.1.1) and the local cluster
(5.0.2.1) have different domain names, and both are connected through a firewall.
ICMP, port 1191 and the ephemeral ports are open.
If we dump the tscomm component of both daemons, we see connections to nodes that
are named [hostname + FQDN of the local cluster + FQDN of the remote cluster].
We analyzed nscd and DNS, took some tcpdumps and so on, and came to the conclusion
that tsctl generates this wrong node name; then, if a cluster manager takeover
happens because of a shutdown of that daemon (on the owning cluster side), the
join protocol rejects the connection.
Are there any comparable experiences in the field, and if yes, what is the
solution?
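(For reference, the tscomm dump mentioned above can be taken on both cluster sides
with something like the following - a sketch:

mmfsadm dump tscomm | grep -i connect   # shows node names/addresses of the daemon connections
)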
Thanks Renar

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von John T Olson
Gesendet: Montag, 26. November 2018 15:31
An: gpfsug main discussion list 
Betreff: Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem 
panic on accessing cluster after upgrading the owning cluster first


This sounds like a different issue because the alert was only for clusters with 
file audit logging enabled due to an incompatibility with the policy rules that 
are used in file audit logging. I would suggest opening a problem ticket.


Thanks,

John

John T. Olson, Ph.D., MI.C., K.EY.
Master Inventor, Software Defined Storage
957/9032-1 Tucson, AZ, 85744
(520) 799-5185, tie 321-5185 (FAX: 520-799-4237)
Email: jtol...@us.ibm.com<mailto:jtol...@us.ibm.com>
"Do or do not. There is no try." - Yoda

Olson's Razor:
Any situation that we, as humans, can encounter in life
can be modeled by either an episode of The Simpsons
or Seinfeld.

[Inactive hide details for "Grunenberg, Renar" ---11/23/2018 01:22:05 
AM---Hallo All, are there any news about these Alert in wi]"Grunenberg, Renar" 
---11/23/2018 01:22:05 AM---Hallo All, are there any news about these Alert in 
witch Version will it be fixed. We had yesterday

From: "Grunenberg, Renar" 
mailto:renar.grunenb...@huk-coburg.de>>
To: "gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>" 
mailto:gpfsug-discuss@spectrumscale.org>>
Date: 11/23/2018 01:22 AM
Subject: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic 
on accessing cluster after upgrading the owning cluster first
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>





Hallo All,
are there any news about this alert, and in which version will it be fixed? We had
this problem yesterday, but in a reciprocal scenario.
The owning cluster is on 5.0.1.1 and the mounting cluster is on 5.0.2.1. On the
owning cluster (a 3-node, 3-site cluster) we did a shutdown of the daemon. But the
remote mount was panicked because of: "A node join was rejected. This could be
due to incompatible daemon versions, failure to find the node in the
configuration database, or no configuration manager found." We had no FAL (file
audit logging) active, which is what the alert is about, and the owning cluster is
not on the affected version.
Any Hint, please.
Regards Renar


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:  09561 96-44110
Telefax:  09561 96-44104
E-Mail:   renar.grunenb...@huk-coburg.

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-26 Thread Grunenberg, Renar
Hallo John,

record is open, TS001631590.



Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von John T Olson
Gesendet: Montag, 26. November 2018 15:31
An: gpfsug main discussion list 
Betreff: Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem 
panic on accessing cluster after upgrading the owning cluster first


This sounds like a different issue because the alert was only for clusters with 
file audit logging enabled due to an incompatibility with the policy rules that 
are used in file audit logging. I would suggest opening a problem ticket.


Thanks,

John

John T. Olson, Ph.D., MI.C., K.EY.
Master Inventor, Software Defined Storage
957/9032-1 Tucson, AZ, 85744
(520) 799-5185, tie 321-5185 (FAX: 520-799-4237)
Email: jtol...@us.ibm.com<mailto:jtol...@us.ibm.com>
"Do or do not. There is no try." - Yoda

Olson's Razor:
Any situation that we, as humans, can encounter in life
can be modeled by either an episode of The Simpsons
or Seinfeld.


From: "Grunenberg, Renar" 
mailto:renar.grunenb...@huk-coburg.de>>
To: "gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>" 
mailto:gpfsug-discuss@spectrumscale.org>>
Date: 11/23/2018 01:22 AM
Subject: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic 
on accessing cluster after upgrading the owning cluster first
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>





Hallo All,
are there any news about this alert, and in which version will it be fixed? We had
this problem yesterday, but in a reciprocal scenario.
The owning cluster is on 5.0.1.1 and the mounting cluster is on 5.0.2.1. On the
owning cluster (a 3-node, 3-site cluster) we did a shutdown of the daemon. But the
remote mount was panicked because of: "A node join was rejected. This could be
due to incompatible daemon versions, failure to find the node in the
configuration database, or no configuration manager found." We had no FAL (file
audit logging) active, which is what the alert is about, and the owning cluster is
not on the affected version.
Any Hint, please.
Regards Renar


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:  09561 96-44110
Telefax:  09561 96-44104
E-Mail:   renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>
Internet: www.huk.de<http://www.huk.de>
===
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
===
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese N

[gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-23 Thread Grunenberg, Renar
Hallo All,
are there any news about this alert, and in which version will it be fixed? We had
this problem yesterday, but in a reciprocal scenario.
The owning cluster is on 5.0.1.1 and the mounting cluster is on 5.0.2.1. On the
owning cluster (a 3-node, 3-site cluster) we did a shutdown of the daemon. But the
remote mount was panicked because of: "A node join was rejected. This could be
due to incompatible daemon versions, failure to find the node in the
configuration database, or no configuration manager found." We had no FAL (file
audit logging) active, which is what the alert is about, and the owning cluster is
not on the affected version.
Any Hint, please.
Regards Renar


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:  09561 96-44110
Telefax:  09561 96-44104
E-Mail:   renar.grunenb...@huk-coburg.de
Internet: www.huk.de
===
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
===
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.
===

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] V5.0.2 and maxblocksize

2018-10-04 Thread Grunenberg, Renar
Hallo All,
I have filed a requirement (RFE) for this gap. The link is here:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=125603

Please Vote.
Regards Renar

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-14 Thread Grunenberg, Renar
+1, great answer Stephan. We also don't understand why functions exist but,
every time we want to use one, the first step is to file a requirement.

Von meinem iPhone gesendet


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.


Am 14.08.2018 um 06:50 schrieb Peinkofer, Stephan 
mailto:stephan.peinko...@lrz.de>>:

Dear Marc,


If you "must" exceed 1000 filesets because you are assigning each project to 
its own fileset, my suggestion is this:

Yes, there are scaling/performance/manageability benefits to using mmbackup 
over independent filesets.

But maybe you don't need 10,000 independent filesets --
maybe you can hash or otherwise randomly assign projects that each have their 
own (dependent) fileset name to a lesser number of independent filesets that 
will serve as management groups for (mm)backup...

OK, if that might be doable, what is then the performance impact of having to
specify include/exclude lists for each independent fileset in order to specify
which dependent fileset should be backed up and which one not?
I don't remember exactly, but I think I've heard at some point that
include/exclude and mmbackup have to be used with caution. And the same
question holds true for running mmapplypolicy for a "job" on a single dependent
fileset: is the scan runtime linear in the size of the underlying independent
fileset, or are there some optimisations when I just want to scan a
subfolder/dependent fileset of an independent one?
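For context, scoping a scan to a single dependent fileset is expressed in the policy
rules themselves, roughly like this (a sketch only; device, fileset and rule names
are invented examples, not taken from this thread). Policy file /tmp/list_projA.pol:

RULE EXTERNAL LIST 'projA_files' EXEC ''
RULE 'listProjA' LIST 'projA_files' FOR FILESET ('projectA')

and then a run that only generates the file list for that fileset:

mmapplypolicy gpfs01 -P /tmp/list_projA.pol -I defer -f /tmp/projA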

Like many things in life, sometimes compromises are necessary!

Hmm, can I reference this next time, when we negotiate Scale License pricing 
with the ISS sales people? ;)

Best Regards,
Stephan Peinkofer

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] mmdf vs. df

2018-07-31 Thread Grunenberg, Renar
Hallo All,
a question about what is happening here:

We are on GPFS 5.0.1.1 and host a TSM server cluster. A colleague of mine wanted
to add new NSDs to grow his TSM storage pool (file-device-class volumes).
The tsmpool filesystem had 45 TB of space before and 128 TB afterwards. We create
new 50 GB TSM volumes with the DEFINE VOLUME command, but the command fails after
allocating 89 TB.
The outputs follow here:
[root@node_a tsmpool]# df -hT
Filesystem   Type  Size  Used  Avail  Use%  Mounted on
tsmpool      gpfs  128T  128T    44G  100%  /gpfs/tsmpool

[root@node_a tsmpool]# mmdf tsmpool --block-size auto
disk                   disk size  failure  holds     holds              free               free
name                              group    metadata  data       in full blocks      in fragments
---------------------  ---------  -------  --------  -----  -------------------  -----------------
Disks in storage pool: system (Maximum disk size allowed is 839.99 GB)
nsd_r2g8f_tsmpool_001       100G        0  Yes       No           88G ( 88%)       10.4M ( 0%)
nsd_c4g8f_tsmpool_001       100G        1  Yes       No           88G ( 88%)       10.4M ( 0%)
nsd_g4_tsmpool              256M        2  No        No             0 (  0%)           0 ( 0%)
                        ---------                          -------------------  -----------------
(pool total)               200.2G                              176G ( 88%)       20.8M ( 0%)

Disks in storage pool: data01 (Maximum disk size allowed is 133.50 TB)
nsd_r2g8d_tsmpool_016         8T        0  No        Yes       3.208T ( 40%)      7.867M ( 0%)
nsd_r2g8d_tsmpool_015         8T        0  No        Yes       3.205T ( 40%)      7.867M ( 0%)
nsd_r2g8d_tsmpool_014         8T        0  No        Yes       3.208T ( 40%)      7.867M ( 0%)
nsd_r2g8d_tsmpool_013         8T        0  No        Yes       3.206T ( 40%)      7.867M ( 0%)
nsd_r2g8d_tsmpool_012         8T        0  No        Yes       3.208T ( 40%)      7.867M ( 0%)
nsd_r2g8d_tsmpool_011         8T        0  No        Yes       3.205T ( 40%)      7.867M ( 0%)
nsd_r2g8d_tsmpool_001         8T        0  No        Yes        1.48G (  0%)      14.49M ( 0%)
nsd_r2g8d_tsmpool_002         8T        0  No        Yes       1.582G (  0%)      16.12M ( 0%)
nsd_r2g8d_tsmpool_003         8T        0  No        Yes       1.801G (  0%)       14.7M ( 0%)
nsd_r2g8d_tsmpool_004         8T        0  No        Yes       1.629G (  0%)      15.21M ( 0%)
nsd_r2g8d_tsmpool_005         8T        0  No        Yes       1.609G (  0%)      14.22M ( 0%)
nsd_r2g8d_tsmpool_006         8T        0  No        Yes       1.453G (  0%)       17.4M ( 0%)
nsd_r2g8d_tsmpool_010         8T        0  No        Yes       3.208T ( 40%)      7.867M ( 0%)
nsd_r2g8d_tsmpool_009         8T        0  No        Yes       3.197T ( 40%)      7.867M ( 0%)
nsd_r2g8d_tsmpool_007         8T        0  No        Yes       3.194T ( 40%)      7.875M ( 0%)
nsd_r2g8d_tsmpool_008         8T        0  No        Yes       3.195T ( 40%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_016         8T        1  No        Yes       3.195T ( 40%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_006         8T        1  No        Yes         888M (  0%)      21.63M ( 0%)
nsd_c4g8d_tsmpool_005         8T        1  No        Yes         996M (  0%)      18.22M ( 0%)
nsd_c4g8d_tsmpool_004         8T        1  No        Yes         920M (  0%)      11.21M ( 0%)
nsd_c4g8d_tsmpool_003         8T        1  No        Yes         984M (  0%)       14.7M ( 0%)
nsd_c4g8d_tsmpool_002         8T        1  No        Yes       1.082G (  0%)      11.89M ( 0%)
nsd_c4g8d_tsmpool_001         8T        1  No        Yes       1.035G (  0%)      14.49M ( 0%)
nsd_c4g8d_tsmpool_007         8T        1  No        Yes       3.281T ( 41%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_008         8T        1  No        Yes       3.199T ( 40%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_009         8T        1  No        Yes       3.195T ( 40%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_010         8T        1  No        Yes       3.195T ( 40%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_011         8T        1  No        Yes       3.195T ( 40%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_012         8T        1  No        Yes       3.195T ( 40%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_013         8T        1  No        Yes       3.195T ( 40%)      7.867M ( 0%)
nsd_c4g8d_tsmpool_014         8T        1  No        Yes       3.195T ( 40%)      7.875M ( 0%)
nsd_c4g8d_tsmpool_015         8T        1  No        Yes       3.194T ( 40%)      7.867M ( 0%)
                        ---------                          -------------------  -----------------
(pool total)                 256T                            64.09T (

[gpfsug-discuss] Question about mmsdrrestore

2018-07-31 Thread Grunenberg, Renar
Hallo All,

Are there any experiences with upgrading some existing nodes in a GPFS 4.2.3.x
cluster (OS RHEL 6.7) by doing a fresh OS install of RHEL 7.5, then installing the
new GPFS 5.0.1.1 code and running mmsdrrestore on these nodes from a 4.2.3 node?
Is that possible, or must we install the 4.2.3 code first, do the mmsdrrestore
step and then update to 5.0.1.1?
Any hints are appreciated.
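(For reference, the restore step in question would be roughly the following - a
sketch; the node name is a placeholder, and the exact options should be checked
against the code level that ends up being installed:

mmsdrrestore -p oldnode01 -R /usr/bin/scp   # pull the cluster configuration from an existing node
)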

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Analyse steps if disk are down after reboot

2018-07-12 Thread Grunenberg, Renar
Hallo Achim, hallo Simon,
first, thanks for your answers. I think Achim's answer matches our case best. The
NSD servers (only 2) for these disks were mistakenly restarted in the same time
window.


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Achim Rehor
Gesendet: Donnerstag, 12. Juli 2018 11:47
An: gpfsug main discussion list 
Betreff: Re: [gpfsug-discuss] Analyse steps if disk are down after reboot

Hi Renar,

whenever an access to an NSD happens, there is a potential that the node cannot
access the disk. So if the (only) NSD server is down, there will be no chance
to access the disk, and the disk will be set down.
If you have twin-tailed disks, the 'second' (or possibly further) NSD servers
will be asked, switching to networked access, and only if that also fails will
the disk be set to down as well.

Not sure how your setup is, but if you reboot 2 NSD servers, and some client
possibly did I/O to a file served by just these 2, then the 'down' state would
be explainable.

Rebooting an NSD server should never set a disk to down, except when it was the
only one serving that NSD.
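Once the NSD servers are reachable again, the usual follow-up is along these lines
(a sketch; the filesystem name is an example):

mmlsdisk gpfs01 -e          # list the disks that are not in ready/up state
mmchdisk gpfs01 start -a    # try to bring the down disks back up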


Mit freundlichen Grüßen / Kind regards

Achim Rehor





Software Technical Support Specialist AIX/ Emea HPC Support

IBM Certified Advanced Technical Expert - Power Systems with AIX
TSCC Software Service, Dept. 7922
Global Technology Services

Phone:  +49-7034-274-7862         IBM Deutschland
E-Mail: achim.re...@de.ibm.com    Am Weiher 24
                                  65451 Kelsterbach
                                  Germany

IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz, 
Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 
14562 WEEE-Reg.-Nr. DE 99369940








From:    "Grunenberg, Renar" 
mailto:renar.grunenb...@huk-coburg.de>>
To:"'gpfsug-discuss@spectrumscale.org'" 
mailto:gpfsug-discuss@spectrumscale.org>>
Date:12/07/2018 10:17
Subject:[gpfsug-discuss] Analyse steps if disk are down after reboot
Sent by:
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>





Hallo All,

after a reboot of two NSD servers we see that some disks in different filesystems
are down, and we don't see why.
The logs (messages, dmesg, kern, ...) say nothing. We are on RHEL 7.4 and
SS 5.0.1.1.
The question now: are there any logs or structures in the GPFS daemon that record
this situation? What was the reason why the daemon had no access to the disks
during that startup phase?
Any hints are appreciated.

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:

09561 96-44110

Telefax:

09561 96-44104

E-Mail:

renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>

Internet:

www.huk.de


HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jör

[gpfsug-discuss] Analyse steps if disk are down after reboot

2018-07-12 Thread Grunenberg, Renar
Hallo All,

after a reboot of two NSD servers we see that some disks in different filesystems
are down, and we don't see why.
The logs (messages, dmesg, kern, ...) say nothing. We are on RHEL 7.4 and
SS 5.0.1.1.
The question now: are there any logs or structures in the GPFS daemon that record
this situation? What was the reason why the daemon had no access to the disks
during that startup phase?
Any hints are appreciated.
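(One place that usually records why a disk was marked down is the GPFS daemon log
itself; a rough first check, assuming the default log location, would be something
like:

grep -iE 'down|err' /var/adm/ras/mmfs.log.latest
)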

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Filesystem Operation error

2018-07-05 Thread Grunenberg, Renar
Hallo All,
we fixed our problem here with Spectrum Scale Support. The fixing commands were
'mmcommon recoverfs tsmconf' and "tsdeldisk tsmconf -d 'nsd_g4_tsmconf'". The
final reason for this problem: if I want to delete a disk in a filesystem, all
disks must be reachable from the requesting host. In our config there were no
NSD server definitions for the disks, and the quorum buster node had no access
to the SAN-attached disks.
A recommendation from my side:
this should be documented for a highly available config with a 3-site
implementation, or the commands that want to update the NSD descriptors for each
disk should check whether every disk is reachable and not trigger an SG panic.
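As an illustration of the missing NSD server definitions mentioned above, adding a
server list to a disk looks roughly like this (a sketch; the server names are
placeholders, and depending on the release the filesystem may need to be unmounted
for mmchnsd):

mmchnsd "nsd_g4_tsmconf:nodeA,nodeB"
mmlsnsd -d nsd_g4_tsmconf    # verify the new NSD server list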

Regards Renar


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: Grunenberg, Renar
Gesendet: Mittwoch, 4. Juli 2018 07:47
An: 'gpfsug-discuss@spectrumscale.org' 
Betreff: Filesystem Operation error

Hallo All,
here follows a short story from yesterday on version 5.0.1.1. We had a 3-node
cluster (2 nodes for I/O and the third for a quorum buster function).
An admin made a mistake and deleted the third node (a VM). We restored it
from a VM snapshot, no problem. The only point is that we completely lost
7 descOnly disks. We defined new ones and wanted to delete the old disks with
mmdeldisk. On 6 filesystems no problem, but one now has a problem.
We finally deleted that disk with mmdeldisk fsname -p. And now, after a
successful mmdelnsd, we still see the old disk in the following display.

mmlsdisk tsmconf -L
disk                  driver  sector  failure  holds     holds                                               storage
name                  type    size    group    metadata  data   status         availability  disk id  pool    remarks
--------------------  ------  ------  -------  --------  -----  -------------  ------------  -------  ------  -------
nsd_tsmconf001_DSK20  nsd     512     0        Yes       Yes    ready          up            1        system  desc
nsd_g4_tsmconf        nsd     512     2        No        No     removing refs  down          2        system
nsd_tsmconf001_DSK70  nsd     512     1        Yes       Yes    ready          up            3        system  desc
nsd_g4_tsmconf1       nsd     512     2        No        No     ready          up            4        system  desc

After that, all filesystem commands generate a filesystem operation error like this:
Error=MMFS_SYSTEM_UNMOUNT, ID=0xC954F85D, Tag=3882673:   Unrecoverable file
system operation error.  Status code 65536.   Volume tsmconf
Questions:
1. What does 'removing refs' mean? We now have no way to handle this disk. The
disk itself no longer exists, but a reference to it is still present in the stripe
group.
nsd_g4_tsmconf: uid 0A885085:577BB637, status ReferencesBeingRemoved, 
availability Unavailable,
 created on node 10.136.80.133, Tue Jul  5 15:29:27 2016
 type 'nsd', sector size 512, failureConfigVersion 424
 quorum weight {0,0}, failure group: id 2, fg index 1
 locality group: id 2, lg index 1
 failureGroupStrP: (2), rackId 2, locationId 0, extLgId 0
 nSectors 528384 (0:81000) (258 MB), inode0Sector 131072
 alloc region: no of bits 0, seg num -1, offset 0, len 72
 suballocator 0x18015B8A7A4 type 0 nBits 32 subSize 0 dataOffset 4
   nRows 0 len/off:
 storage pool: 0
 holds nothing
 sectors past efficient device boundary: 0
 isFenced: 1
 start Region No: -1 end Region No:-1
 start AllocMap Record: -1
2. Are there any cmd to handl

[gpfsug-discuss] Filesystem Operation error

2018-07-03 Thread Grunenberg, Renar
Hallo All,
here is a short story from yesterday on version 5.0.1.1. We had a 3-node 
cluster (2 nodes for IO and a third node acting as quorum buster).
An admin made a mistake and deleted that third node (a VM). We restored it 
from a VM snapshot, no problem. The only issue: we completely lost
7 descOnly disks. We defined new ones and wanted to delete the old disks with 
mmdeldisk. On 6 filesystems this was no problem, but one filesystem now has a problem.
We finally deleted the disk with mmdeldisk fsname -p, and now, after a 
successful mmdelnsd, we still see the old disk in the following display.

mmlsdisk tsmconf -L
disk                 driver sector failure holds    holds                                         storage
name                 type   size   group   metadata data  status        availability disk id pool    remarks
-------------------- ------ ------ ------- -------- ----- ------------- ------------ ------- ------- -------
nsd_tsmconf001_DSK20 nsd    512    0       Yes      Yes   ready         up           1       system  desc
nsd_g4_tsmconf       nsd    512    2       No       No    removing refs down         2       system
nsd_tsmconf001_DSK70 nsd    512    1       Yes      Yes   ready         up           3       system  desc
nsd_g4_tsmconf1      nsd    512    2       No       No    ready         up           4       system  desc

After that, every filesystem command generates a filesystem operation error like this:
Error=MMFS_SYSTEM_UNMOUNT, ID=0xC954F85D, Tag=3882673:   Unrecoverable file 
system operation error.  Status code 65536.   Volume tsmconf
Questions:
1. What does 'removing refs' mean? We now have no way to handle this disk. 
The disk itself no longer exists, but a reference to it is still present in the stripe group.
nsd_g4_tsmconf: uid 0A885085:577BB637, status ReferencesBeingRemoved, 
availability Unavailable,
 created on node 10.136.80.133, Tue Jul  5 15:29:27 2016
 type 'nsd', sector size 512, failureConfigVersion 424
 quorum weight {0,0}, failure group: id 2, fg index 1
 locality group: id 2, lg index 1
 failureGroupStrP: (2), rackId 2, locationId 0, extLgId 0
 nSectors 528384 (0:81000) (258 MB), inode0Sector 131072
 alloc region: no of bits 0, seg num -1, offset 0, len 72
 suballocator 0x18015B8A7A4 type 0 nBits 32 subSize 0 dataOffset 4
   nRows 0 len/off:
 storage pool: 0
 holds nothing
 sectors past efficient device boundary: 0
 isFenced: 1
 start Region No: -1 end Region No:-1
 start AllocMap Record: -1
2. Is there any command to handle this?
3. Where can I find the meaning of status code 65536?

A PMR is also open.

Any Hints?

Regards Renar

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] PM_MONITOR refresh task failed

2018-06-27 Thread Grunenberg, Renar
Hallo Richard,
do you have a private admin-interface LAN in your cluster? If yes, then the logic that 
queries the collector node, and the corresponding CCR value, are wrong. Can you run 
‘mmperfmon query cpu’? If not, then you hit a problem that I had yesterday.


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Sobey, Richard 
A
Gesendet: Mittwoch, 27. Juni 2018 12:47
An: 'gpfsug-discuss@spectrumscale.org' 
Betreff: [gpfsug-discuss] PM_MONITOR refresh task failed

Hi all,

I’m getting the following error in the GUI, running 5.0.1:

“The following GUI refresh task(s) failed: PM_MONITOR”.

As yet, this is the only node I’ve upgraded to 5.0.1 – the rest are running 
(healthily, according to the GUI) 4.2.3.7. I’m not sure if this version 
mismatch is relevant to reporting this particular error.

All the usual steps of restarting gpfsgui / pmcollector / pmsensors have been 
done.

Will the error go away when I’ve completed the cluster upgrade, or is there 
some other foul play at work here?

Cheers
Richard
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmbackup issue

2018-06-25 Thread Grunenberg, Renar
Hallo All,
here is the request for enhancement (RFE) for mmbackup:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=121687

Please vote.



Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:  09561 96-44110
Telefax:  09561 96-44104
E-Mail:   renar.grunenb...@huk-coburg.de
Internet: www.huk.de
===
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
===
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.
===

-Ursprüngliche Nachricht-
Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Jonathan 
Buzzard
Gesendet: Donnerstag, 21. Juni 2018 09:33
An: gpfsug-discuss@spectrumscale.org
Betreff: Re: [gpfsug-discuss] mmbackup issue

On 20/06/18 17:00, Grunenberg, Renar wrote:
> Hallo Valdis, first thanks for the explanation we understand that,
> but this problem generate only 2 Version at tsm server for the same
> file, in the same directory. This mean that mmbackup and the
> .shadow... has no possibility to have for the same file in the same
> directory more then 2 backup versions with tsm. The native ba-client
> manage this. (Here are there already different inode numbers
> existent.) But at TSM-Server side the file that are selected at 'ba
> incr' are merged to the right filespace and will be binded to the
> mcclass >2 Version exist.
>

I think what you are saying is that mmbackup is only keeping two
versions of the file in the backup, the current version and a single
previous version. Normally in TSM you can control how many previous
versions of the file that you can keep, for both active and inactive
(aka deleted). You can also define how long these version are kept for.

It sounds like you are saying that mmbackup is ignoring the policy that
you have set for this in TSM (q copy) and doing its own thing?

JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmbackup issue

2018-06-20 Thread Grunenberg, Renar
Hallo Valdis,
first, thanks for the explanation, we understand that. But this problem produces 
only 2 versions on the TSM server for the same file in the same directory. This 
means that mmbackup and the .shadow... file have no way to keep more than 2 backup 
versions in TSM for the same file in the same directory. The native BA client 
manages this correctly (there, too, different inode numbers already exist). On the 
TSM server side, the files selected by 'ba incr' are merged into the right filespace 
and bound to the management class, where more than 2 versions can exist.
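
For reference, the 'versions exist' limit that actually applies can be checked on both sides; the domain and management-class names below are only placeholders for your own setup:

  # on the TSM server (dsmadmc), copy group of the active policy set:
  q copygroup YOURDOMAIN ACTIVE YOURMGMTCLASS type=backup format=detailed
  # on the node running mmbackup, with the same dsm.opt/dsm.sys that mmbackup uses:
  dsmc query mgmtclass -detail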




Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:  09561 96-44110
Telefax:  09561 96-44104
E-Mail:   renar.grunenb...@huk-coburg.de
Internet: www.huk.de
===
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
===
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.
===

-Ursprüngliche Nachricht-
Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von 
valdis.kletni...@vt.edu
Gesendet: Mittwoch, 20. Juni 2018 16:45
An: gpfsug main discussion list 
Betreff: Re: [gpfsug-discuss] mmbackup issue

On Wed, 20 Jun 2018 14:08:09 -, "Grunenberg, Renar" said:

> There are after each test (change of the content) the file became every time
> a new inode number. This behavior is the reason why the shadowfile think(or 
> the
> policyengine) the old file is never existent

That's because as far as the system is concerned, this is a new file that 
happens
to have the same name.

> At SAS the data file will updated with a xx.data.new file and after the close
> the xx.data.new will be renamed to the original name xx.data again. And the
> miss interpretation of different inodes happen again.

Note that all the interesting information about a file is contained in the
inode (the size, the owner/group, the permissions, creation time, disk blocks
allocated, and so on).  The *name* of the file is pretty much the only thing
about a file that isn't in the inode - and that's because it's not a unique
value for the file (there can be more than one link to a file).  The name(s) of
the file are stored in the parent directory as inode/name pairs.

So here's what happens.

You have the original file xx.data.  It has an inode number 9934 or whatever.
In the parent directory, there's an entry "name xx.data -> inode 9934".

SAS creates a new file xx.data.new with inode number 83425 or whatever.
Different file - the creation time, blocks allocated on disk, etc are all
different than the file described by inode 9934. The directory now has
"name xx.data -> 9934" "name xx.data.new -> inode 83425".

SAS then renames xx.data.new - and rename is defined as "change the name entry
for this inode, removing any old mappings for the same name" .  So...

0) 'rename xx.data.new xx.data'.
1) Find 'xx.data.new' in this directory. "xx.data.new -> 83425" . So we're 
working with that inode.
2) Check for occurrences of the new name. Aha.  There's 'xx.data -> 9934'.  
Remove it.
(2a) This may or may not actually make the file go away, as there may be other 
links and/or
open file references to it.)
3) The directory now only has '83425 xx.data.new -> 83425'.
4) We now change the name. The directory now has 'xx.data -> 83425'.

And your backup program quite rightly concludes that this is a new file by a 
name that
was previously used - because it *is* a new file.  Created at a different time, 
different blocks
on disk, and so on.
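
You can watch the same effect on any filesystem with ls -i; the inode numbers below are made up for the example:

  $ echo one > xx.data; ls -i xx.data
  9934 xx.data
  $ echo two > xx.data.new; ls -i xx.data.new
  83425 xx.data.new
  $ mv xx.data.new xx.data; ls -i xx.data
  83425 xx.data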

The only time that writing a "new" file keeps the same inode number is if the 
program
actually opens the old file for writing and overwrites the old contents.  
However, this
isn't a

[gpfsug-discuss] mmbackup issue

2018-06-20 Thread Grunenberg, Renar
Hallo All,

we have been working for two weeks (or more) on a PMR because mmbackup has problems 
with the management class (MC) in TSM. We have defined a 'versions exist' value of 5, 
but with each run the policy engine generates an expire list (in which the mentioned 
files are already selected), and in the end we see only 2 backup versions of a file, 
in every case.
We are at:
GPFS 5.0.1.1
TSM-Server 8.1.1.0
TSM-Client 7.1.6.2

After some testing we found the reason:
Our mmbackup test is performed with vi, changing a file's content and then starting 
the next mmbackup test cycle. The problem we found lies in the vi defaults 
(set backupcopy=no; note that with 'no' a backup copy is generated):
after each test (change of the content) the file gets a new inode number every time. 
This behaviour is the reason why the shadow file (or the policy engine) thinks the old 
file no longer exists and generates a delete request in the expire policy files for dsmc 
(correct me if I am wrong here). OK, vi itself is not the problem, but we also have 
applications with the same dataset handling (e.g. SAS): SAS updates the data file by 
writing an xx.data.new file, and after the close, xx.data.new is renamed back to the 
original name xx.data. And the misinterpretation of different inodes happens again.

The question now: is there code in mmbackup or in GPFS for the shadow file to detect 
or ignore the inode change for the same file?

Regards Renar



Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-18 Thread Grunenberg, Renar
Hallo Smita,
thanks, that sounds good.


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Smita J Raut
Gesendet: Freitag, 18. Mai 2018 18:10
An: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

Hi Renar,

Yes we plan to include newer pyOpenSSL in 5.0.1.1

Thanks,
Smita



From:    "Grunenberg, Renar" 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>>
To:'gpfsug main discussion list' 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date:05/17/2018 09:44 PM
Subject:Re: [gpfsug-discuss] 5.0.1.0 Update issue with python 
dependencies
Sent by:
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>




Hallo Smita,

I checked this now; today there is no real way to get these packages from a 
RHEL channel, all are at 0.13.1. I checked the Pike repository and see that the 
following packages are available:
python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm
python2-cryptography-1.7.2-1.el7.x86_64.rpm
python2-urllib3-1.21.1-1.el7.noarch.rpm

My request and question here: why are these packages not in the Pike release 
that IBM shipped? Is it possible to include and test these packages for the 
next PTF, 5.0.1.1?
Regards Renar.





Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:    09561 96-44110
Telefax:    09561 96-44104
E-Mail:     renar.grunenb...@huk-coburg.de
Internet:   www.huk.de


HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.


Von: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>
 [mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Smita J Raut
Gesendet: Mittwoch, 16. Mai 2018 12:23
An: gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

You are right Simon, that rpm comes from object. Below two are the new 
dependencies that were added with Pike support in 5.0.1
pyOpenSSL-0.14-1.ibm.el7.noarch.rpm
python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm

From RHEL 7.0 to 7.5 the pyOpenSSL packag

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-17 Thread Grunenberg, Renar
Hallo Smita,

I checked this now; today there is no real way to get these packages from a 
RHEL channel, all are at 0.13.1. I checked the Pike repository and see that the 
following packages are available:
python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm
python2-cryptography-1.7.2-1.el7.x86_64.rpm
python2-urllib3-1.21.1-1.el7.noarch.rpm

My request and question here: why are these packages not in the Pike release 
that IBM shipped? Is it possible to include and test these packages for the 
next PTF, 5.0.1.1?
Regards Renar.





Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Smita J Raut
Gesendet: Mittwoch, 16. Mai 2018 12:23
An: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

You are right Simon, that rpm comes from object. Below two are the new 
dependencies that were added with Pike support in 5.0.1
pyOpenSSL-0.14-1.ibm.el7.noarch.rpm
python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm

From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is 
pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 
was packaged since it was not available.

One possible cause of the problem could be that the yum certs may have Unicode 
characters.  If so, then the SSL code may be rendering the cert as chars 
instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to 
unicode handling that are fixed in 0.15. Renar, could you try upgrading this 
package to 0.15?

Thanks,
Smita



From:"Simon Thompson (IT Research Support)" 
<s.j.thomp...@bham.ac.uk<mailto:s.j.thomp...@bham.ac.uk>>
To:gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date:05/16/2018 01:44 PM
Subject:Re: [gpfsug-discuss] 5.0.1.0 Update issue with python 
dependencies
Sent by:
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>




I wondered if it came from the object RPMs maybe… I haven’t actually checked, 
but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I 
think!) and that typically requires newer RPMs if using RDO packages so maybe 
it came that route?

Simon

From: 
<gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>>
 on behalf of "olaf.wei...@de.ibm.com<mailto:olaf.wei...@de.ibm.com>" 
<olaf.wei...@de.ibm.com<mailto:olaf.wei...@de.ibm.com>>
Reply-To: 
"gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>" 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date: Tuesday, 15 May 2018 at 08:10
To: "gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>" 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

Renar,
can you share , what gpfs packages you tried to install
I just did a fresh 5.0.1 install and it works fine for me... even though, I 
don't see this ibm python rpm

[root@tlinc04 ~]# rpm -qa | grep -i openssl
openssl-1.0.2k-12.el7.x86_64
openssl-libs-1.0.2k-12.el7.x86_64
pyOpenSSL-0.13.1-3.el7.x86_64
openssl-devel-1.0.2k-12.el7.x86_64
xmlsec1-openssl-1.2.20-7.el7_4.x86_64

So I assume, you installed GUI, or scale mgmt .. let us know -
thx




From: 

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-16 Thread Grunenberg, Renar
Hallo Smita,

I will check in which RHEL release the 0.15 version is available. If we find 
one I will install it and give feedback.
Regards Renar.


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104

+++ Bitte beachten Sie die neuen Telefonnummern +++
+++ Siehe auch: https://www.huk.de/presse/pressekontakt/ansprechpartner.html +++

E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Smita J Raut
Gesendet: Mittwoch, 16. Mai 2018 12:23
An: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

You are right Simon, that rpm comes from object. Below two are the new 
dependencies that were added with Pike support in 5.0.1
pyOpenSSL-0.14-1.ibm.el7.noarch.rpm
python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm

From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is 
pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 
was packaged since it was not available.

One possible cause of the problem could be that the yum certs may have Unicode 
characters.  If so, then the SSL code may be rendering the cert as chars 
instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to 
unicode handling that are fixed in 0.15. Renar, could you try upgrading this 
package to 0.15?

Thanks,
Smita



From:"Simon Thompson (IT Research Support)" 
<s.j.thomp...@bham.ac.uk<mailto:s.j.thomp...@bham.ac.uk>>
To:gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date:05/16/2018 01:44 PM
Subject:Re: [gpfsug-discuss] 5.0.1.0 Update issue with python 
dependencies
Sent by:
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>




I wondered if it came from the object RPMs maybe… I haven’t actually checked, 
but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I 
think!) and that typically requires newer RPMs if using RDO packages so maybe 
it came that route?

Simon

From: 
<gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>>
 on behalf of "olaf.wei...@de.ibm.com<mailto:olaf.wei...@de.ibm.com>" 
<olaf.wei...@de.ibm.com<mailto:olaf.wei...@de.ibm.com>>
Reply-To: 
"gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>" 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date: Tuesday, 15 May 2018 at 08:10
To: "gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>" 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

Renar,
can you share , what gpfs packages you tried to install
I just did a fresh 5.0.1 install and it works fine for me... even though, I 
don't see this ibm python rpm

[root@tlinc04 ~]# rpm -qa | grep -i openssl
openssl-1.0.2k-12.el7.x86_64
openssl-libs-1.0.2k-12.el7.x86_64
pyOpenSSL-0.13.1-3.el7.x86_64
openssl-devel-1.0.2k-12.el7.x86_64
xmlsec1-openssl-1.2.20-7.el7_4.x86_64

So I assume, you installed GUI, or scale mgmt .. let us know -
thx




From:"Grunenberg, Renar" 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>>
To:"'gpfsug-discuss@spectrumscale.org'" 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsu

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-15 Thread Grunenberg, Renar
Hallo All,
here are some experiences with the update to 5.0.1.0 (from 5.0.0.2) on RHEL 7.4. 
After the complete yum update to this version, the yum command no longer worked.
The reason is the following package: pyOpenSSL-0.14-1.ibm.el7.noarch. This 
package breaks the yum commands.
The error is:
Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos
Traceback (most recent call last):
  File "/bin/yum", line 29, in <module>
    yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 370, in user_main
    errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 165, in main
    base.getOptionsConfig(args)
  File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig
    self.conf
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in <lambda>
    conf = property(fget=lambda self: self._getConfig(),
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig
    self.plugins.run('init')
  File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run
    func(conduitcls(self, self.base, conf, **kwargs))
  File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook
    svrChannels = rhnChannel.getChannelDetails(timeout=timeout)
  File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails
    sourceChannels = getChannels(timeout=timeout)
  File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels
    up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId())
  File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__
    return rpcServer.doCall(method, *args, **kwargs)
  File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall
    ret = method(*args, **kwargs)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1
    ret = self._request(methodname, params)
  File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request
    self._handler, request, verbose=self._verbose)
  File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request
    headers, fd = req.send_http(host, handler)
  File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http
    self._connection.connect()
  File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect
    self.sock.init_ssl()
  File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl
    self._ctx.load_verify_locations(f)
  File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations
    raise TypeError("cafile must be None or a byte string")
TypeError: cafile must be None or a byte string
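
The failing call is pyOpenSSL's Context.load_verify_locations() being handed a unicode path where the 0.14 code only accepts bytes (or None). A minimal reproduction outside of yum, with the CA bundle path only as an example, looks like this:

  python -c 'from OpenSSL import SSL; SSL.Context(SSL.TLSv1_METHOD).load_verify_locations(u"/etc/pki/tls/certs/ca-bundle.crt")'
  # -> ends in: TypeError: cafile must be None or a byte string
  python -c 'from OpenSSL import SSL; SSL.Context(SSL.TLSv1_METHOD).load_verify_locations(b"/etc/pki/tls/certs/ca-bundle.crt")'
  # -> succeeds, because a byte string is passed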

My question now: why does IBM patch RHEL python libraries here? This leads 
straight into update nirvana.

The dependencies look like this:
rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch
error: Failed dependencies:
pyOpenSSL is needed by (installed) 
redhat-access-insights-0:1.0.13-2.el7_3.noarch
pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch
pyOpenSSL >= 0.14 is needed by (installed) 
python2-urllib3-1.21.1-1.ibm.el7.noarch

It's PMR time.

Regards Renar



Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] problems with collector 5 and grafana bridge 3

2018-04-25 Thread Grunenberg, Renar
Hallo Ivano,

we changed the bridge port to query2port, which is the multithreaded query port. 
The version 3 bridge selects this port automatically if the pmcollector config 
(/opt/IBM/zimon/ZIMonCollector.cfg) is updated accordingly:
# The query port number defaults to 9084.
queryport = "9084"
query2port = "9094"
We use 5.0.0.2 here. What we also changed was the openTSDB version in the Grafana 
datasource panel for the bridge, to ==2.3. Hope this helps.
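
After changing ZIMonCollector.cfg the collector has to be restarted; you can then verify that the multithreaded port is listening (port number as configured above):

  systemctl restart pmcollector
  ss -tlnp | grep 9094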

Regards Renar



Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:  09561 96-44110
Telefax:  09561 96-44104
E-Mail:   renar.grunenb...@huk-coburg.de
Internet: www.huk.de
===
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
===
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.
===

-Ursprüngliche Nachricht-
Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Ivano Talamo
Gesendet: Mittwoch, 25. April 2018 10:47
An: gpfsug main discussion list 
Betreff: [gpfsug-discuss] problems with collector 5 and grafana bridge 3

Hi all,

I am actually testing the collector shipped with Spectrum Scale 5.0.0-1
together with the latest grafana bridge (version 3). At the UK UG
meeting I learned that this is the multi-threaded setup, so hopefully we
can get better performances.

But we are having a problem. Our existing grafana dashboard have metrics
like eg. "hostname|CPU|cpu_user". It was working and it also had a very
helpful completion when creating new graphs.
After the upgrade these metrics are not recognized anymore, and we are
getting the following errors in the grafana bridge log file:

2018-04-25 09:35:24,999 - zimonGrafanaIntf - ERROR - Metric
hostnameNetwork|team0|netdev_drops_s cannot be found. Please check if
the corresponding sensor is configured

The only way I found to make them work is using only the real metric
name, eg "cpu_user" and then use filter to restrict to a host
('node'='hostname'). The problem is that in many cases the metric is
complex, eg. you want to restrict to a filesystem, to a fileset, to a
network interface. And is not easy to get the field names to be used in
the filters.

So my questions are:
- is this supposed to be like that or the old metrics name can be
enabled somehow?
- if it has to be like that, how can I get the available field names to
use in the filters?


And then I saw in the new collector config file this:

queryport = "9084"
query2port = "9094"

Which one should be used by the bridge?

Thank you,
Ivano
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] UK Meeting - tooling Spectrum Scale

2018-04-20 Thread Grunenberg, Renar
Hallo Simon,
is there any reason why the presentation from Yong ZY Zheng (Cognitive, ML, 
Hortonworks) is not linked?

Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] GPFS autoload - wait for IB portstobecomeactive

2018-03-27 Thread Grunenberg, Renar
Hallo Jeff,
you can check this with the following command:

mmfsadm dump nsdcksum

Your in-memory info is inconsistent with the descriptor structure on disk. I have 
no idea what the reason for this is.
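
To narrow the dump down to the stripe group / NSD from your log, something like this should work (NSD name taken from your message, line count is arbitrary):

  mmfsadm dump nsdcksum | grep -A 20 dcs3800u31b_lun7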


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Jeffrey R. Lang
Gesendet: Montag, 26. März 2018 23:14
An: gpfsug main discussion list 
Betreff: Re: [gpfsug-discuss] GPFS autoload - wait for IB portstobecomeactive

Can someone provide some clarification to this error message in my system logs:
mmfs: [E] The on-disk StripeGroup descriptor of dcs3800u31b_lun7 sgId 0x0B00620A:9C84DF56 is not valid because of bad checksum:
Mar 26 12:25:50 mmmnsd2 mmfs: 'mmfsadm writeDesc <device> sg 0B00620A:9C84DF56 2 /var/mmfs/tmp/sg_gscratch_dcs3800u31b_lun7', where <device> is the device name of that NSD.
I’ve been unable to find anything while googling that provides any details 
about the error.   Anyone have any thoughts or commands?
We are using GPFS 4.2.3.-6, under RedHat 6 and 7.  The NSD nodes are all RHEL 6.
Any help appreciated.

Thanks
Jeff
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar
Hallo Sven,
thanks, it‘s clear now. You have work now ;-)
Happy Weekend from Coburg.


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas (stv.).

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Sven Oehme
Gesendet: Freitag, 9. Februar 2018 16:09
An: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Betreff: Re: [gpfsug-discuss] V5 Experience -- maxblocksize

you can only create a filesystem with a blocksize up to whatever maxblocksize is 
currently set to. Let me discuss with Felipe what/if we can share here to 
solve this.

sven

On Fri, Feb 9, 2018 at 6:59 AM Grunenberg, Renar 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>> wrote:
Hallo Sven,
does that mean that an mmcrfs ‘newfs’ -B 4M is possible if maxblocksize is 1M (from 
the upgrade), without the requirement to change this parameter first? Correct 
or not?
Regards




Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:    09561 96-44110
Telefax:    09561 96-44104
E-Mail:     renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas (stv.).
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>
 
[mailto:gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>]
 Im Auftrag von Sven Oehme
Gesendet: Freitag, 9. Februar 2018 15:48

An: gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Betreff: Re: [gpfsug-discuss] V5 Experience -- maxblocksize

Renar,

if you specify a filesystem blocksize of 1M during mmcrfs you don't have to 
restart anything. Scale 5 didn't change anything in the behaviour of changing 
maxblocksize while the cluster is online; it only changed the default passed to 
the blocksize parameter when creating a new filesystem. One thing we might 
consider doing is changing the command to use the currently active maxblocksize 
as input for mmcrfs if maxblocksize is below the current default.

Sven


On Fri, Feb 9, 2018 at 6:30 AM Grunenberg, Renar 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>> wrote:
Felipe, all,
first, thanks for the clarification, but what was the reason for this logic? If I 
upgrade to version 5 and want to create new filesystems, and maxblocksize 
is at 1M, we must shut down the whole cluster

Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar
Hallo Sven,
does that mean that an mmcrfs ‘newfs’ -B 4M is possible if maxblocksize is 1M (from 
the upgrade), without the requirement to change this parameter first? Correct 
or not?
Regards




Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas (stv.).

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Sven Oehme
Gesendet: Freitag, 9. Februar 2018 15:48
An: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Betreff: Re: [gpfsug-discuss] V5 Experience -- maxblocksize

Renar,

if you specify a filesystem blocksize of 1M during mmcrfs you don't have to 
restart anything. Scale 5 didn't change anything in the behaviour of changing 
maxblocksize while the cluster is online; it only changed the default passed to 
the blocksize parameter when creating a new filesystem. One thing we might 
consider doing is changing the command to use the currently active maxblocksize 
as input for mmcrfs if maxblocksize is below the current default.

Sven


On Fri, Feb 9, 2018 at 6:30 AM Grunenberg, Renar 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>> wrote:
Felipe, all,
first, thanks for the clarification, but what was the reason for this logic? If I 
upgrade to version 5 and want to create new filesystems, and maxblocksize is at 1M, 
we must shut down the whole cluster to raise it to the new default. I do not 
understand that decision. We run our cluster at 7x24h availability today; we have 
no real maintenance window here! Any workaround is welcome.

Regards Renar


Renar Grunenberg
Abteilung Informatik – Betrieb


HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:    09561 96-44110
Telefax:    09561 96-44104
E-Mail:     renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas (stv.).
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>
 
[mailto:gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>]
 Im Auftrag von Felipe Knop
Gesendet: Freitag, 9. Februar 2018 14:59
An: gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Betreff: Re: [gpfsug-discuss] V5 Experience -- maxblocksize


All,

Correct. There is no need to change the value of 'maxblocksize' for existing 
clusters which are upgraded to the 5.0.0 level. If a new file system needs to

Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar
Felipe, all,
first, thanks for the clarification, but what was the reason for this logic? If I 
upgrade to version 5 and want to create new filesystems, and maxblocksize is at 1M, 
we must shut down the whole cluster to raise it to the new default. I do not 
understand that decision. We run our cluster at 7x24h availability today; we have 
no real maintenance window here! Any workaround is welcome.

Regards Renar


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:09561 96-44110
Telefax:09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:   www.huk.de

HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas (stv.).

Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte 
Informationen.
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich 
erhalten haben,
informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht.
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist 
nicht gestattet.

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.

Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Felipe Knop
Gesendet: Freitag, 9. Februar 2018 14:59
An: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Betreff: Re: [gpfsug-discuss] V5 Experience -- maxblocksize


All,

Correct. There is no need to change the value of 'maxblocksize' for existing 
clusters which are upgraded to the 5.0.0 level. If a new file system needs to 
be created with a block size which exceeds the value of maxblocksize then the 
mmchconfig needs to be issued to increase the value of maxblocksize (which 
requires the entire cluster to be stopped).

For clusters newly created with 5.0.0, the value of maxblocksize is set to 4MB. 
See the references to maxblocksize in the mmchconfig and mmcrfs man pages in 
5.0.0 .
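
For an existing cluster the sequence would look roughly like this (illustrative only, the stanza file name is a placeholder; the mmshutdown/mmstartup pair is exactly the outage discussed above):

  mmlsconfig maxblocksize        # currently active value
  mmshutdown -a                  # GPFS must be down on all nodes for the change
  mmchconfig maxblocksize=4M
  mmstartup -a
  mmcrfs newfs -F newfs.stanza -B 4M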

Felipe


Felipe Knop k...@us.ibm.com<mailto:k...@us.ibm.com>
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314
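
As an illustration of the procedure Felipe describes, here is a minimal sketch, 
assuming a target block size of 4 MiB; the device name and stanza file path are 
made up, so check the 5.0.0 mmchconfig and mmcrfs man pages for the exact syntax 
on your level:

   # check the current cluster-wide limit (read-only, safe while GPFS is up)
   mmlsconfig maxblocksize

   # raising maxblocksize requires GPFS to be down on the whole cluster
   mmshutdown -a
   mmchconfig maxblocksize=4M
   mmstartup -a

   # only afterwards can a file system with a larger block size be created
   mmcrfs fs5 -F /tmp/nsd.stanza -B 4M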




From: "Uwe Falke" <uwefa...@de.ibm.com<mailto:uwefa...@de.ibm.com>>
To: gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date: 02/09/2018 06:54 AM
Subject: Re: [gpfsug-discuss] V5 Experience
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>





I suppose the new maxBlockSize default is <> 1 MB, i.e. not 1 MB, so your config 
parameter was properly translated (carried over) during the upgrade. I'd see no 
need to change anything.



Mit freundlichen Grüßen / Kind regards


Dr. Uwe Falke

IT Specialist
High Performance Computing Services / Integrated Technology Services /
Data Center Services
---
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefa...@de.ibm.com<mailto:uwefa...@de.ibm.com>
---
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung:
Thomas Wolter, Sven Schooß
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart,
HRB 17122




From:   "Grunenberg, Renar" 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>>
To: "'gpfsug-discuss@spectrumscale.org'"
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date:   02/09/2018 10:16 AM
Subject:[gpfsug-discuss] V5 Experience
Sent by:
gpfsug-discuss-boun...@spectrumscale.org

Re: [gpfsug-discuss] mm'add|del'node with ccr enabled

2017-12-09 Thread Grunenberg, Renar
Hallo Eric,
our experience is that adding and deleting nodes works without problems in a CCR 
cluster as long as the node is not a quorum node; no mmshutdown steps are 
necessary. We are on 4.2.3.6, and I think this has been available since >4.2. If 
you want to add a new quorum node, you must first add it as a (non-quorum) client 
node and afterwards change it to a quorum node.
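
A minimal sketch of that two-step approach, with a hypothetical node name 
newnode1 (syntax as in the 4.2.x mmaddnode/mmchnode man pages):

   # add the node as an ordinary non-quorum client first -- GPFS can stay up
   mmaddnode -N newnode1
   # once GPFS is installed and started on newnode1, promote it
   mmchnode --quorum -N newnode1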



Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von J. Eric 
Wonderley
Gesendet: Freitag, 8. Dezember 2017 17:10
An: gpfsug main discussion list 
Betreff: [gpfsug-discuss] mm'add|del'node with ccr enabled

Hello:

If I recall correctly this does not work...correct?  I think the last time I 
attempted this was gpfs version <=4.1.  I think I attempted to add a quorum 
node.

The process I remember doing was mmshutdown -a, mmchcluster --ccr-disable, 
mmaddnode yadayada, mmchcluster --ccr-enable, mmstartup.
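
For reference, a sketch of that older, CCR-disabled sequence (node name made up); 
per the reply above it should not be needed for non-quorum nodes on 4.2.3+:

   mmshutdown -a
   mmchcluster --ccr-disable
   mmaddnode -N newnode1
   mmchcluster --ccr-enable
   mmstartup -a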

I think with ccr disabled mmaddnode can be run with gpfs up.  We would like to 
run with ccr enabled but it does make adding/removing nodes unpleasant.

Would this be required of a non-quorum node?

Any changes concerning this with gpfs version >=4.2?


Re: [gpfsug-discuss] Systemd will not allow the mount of a filesystem

2017-08-02 Thread Grunenberg, Renar
Hallo John,
you are on a backlevel Spectrum Scale Release and a backlevel Systemd package.
Please see here: 
https://www.ibm.com/developerworks/community/forums/html/topic?id=00104bb5-acf5-4036-93ba-29ea7b1d43b7=25
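
A quick sketch of how to confirm both levels on the affected node; only mmdiag is 
Scale-specific, the rest is standard RHEL-family tooling:

   mmdiag --version    # running Spectrum Scale daemon version and build level
   rpm -q systemd      # installed systemd package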




Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von John Hearns
Gesendet: Mittwoch, 2. August 2017 11:50
An: gpfsug main discussion list 
Betreff: [gpfsug-discuss] Systemd will not allow the mount of a filesystem

I am setting up a filesystem for some tests, so this is not mission critical. 
This is on an OS with systemd.
When I create a new filesystem, named gpfstest, and then mmmount it, the 
filesystem is logged as being mounted and then immediately unmounted.
Having fought with this for several hours, I now find this in the system 
messages file:

Aug  2 10:36:56 tosmn001 systemd: Unit hpc-gpfstest.mount is bound to inactive 
unit dev-gpfstest.device. Stopping, too.

I stopped and then started GPFS, and I have run a systemctl daemon-reload.
I then created a new filesystem on the same physical disk, with a new filesystem 
name, testtest, and a new mount point.

Aug  2 11:03:50 tosmn001 systemd: Unit hpc-testtest.mount is bound to inactive 
unit dev-testtest.device. Stopping, too.

GPFS itself logs:
Wed Aug  2 11:03:50.824 2017: [I] Command: successful mount testtest
Wed Aug  2 11:03:50.837 2017: [I] Command: unmount testtest
Wed Aug  2 11:03:51.192 2017: [I] Command: successful unmount testtest

If anyone else has seen this behavior please let me know. I found this issue 
https://github.com/systemd/systemd/issues/1741
However, the only suggested fix there is a systemd daemon-reload, and as noted 
above that did not help here.
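
A sketch of how to inspect the binding systemd complains about, using the unit 
names from the log lines above:

   systemctl show -p BindsTo,After hpc-gpfstest.mount
   systemctl status dev-gpfstest.device
   systemctl --version    # relevant given the back-level-systemd hint in the reply above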

Also if this is a stupid mistake on my part, I pro-actively hang my head in 
shame.




Re: [gpfsug-discuss] Quota and hardlimit enforcement

2017-07-31 Thread Grunenberg, Renar
Hallo Kevin,
thanks for your hint, I will check these tomorrow, and yes, as root, lol.
Regards Renar



Von: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von Buterbaugh, 
Kevin L
Gesendet: Montag, 31. Juli 2017 21:22
An: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Betreff: Re: [gpfsug-discuss] Quota and hardlimit enforcement

Hi Renar,

I’m sure this is the case, but I don’t see anywhere in this thread where this 
is explicitly stated … you’re not doing your tests as root, are you?  root, of 
course, is not bound by any quotas.

Kevin
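
For what it is worth, a sketch of repeating the write test as an ordinary user 
instead of root (user name and path are made up):

   su - testuser -c 'dd if=/dev/zero of=/gpfs01/appdata/quota_probe bs=1M count=1024'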

On Jul 31, 2017, at 2:04 PM, Grunenberg, Renar 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>> wrote:


Hallo J. Eric, hallo Jaime,
OK, after we hit the soft limit we see that the grace period goes to 7 days; I 
think that is the default. But what does it mean?
After we reach the hard limit we additionally see GBytes in_doubt.
My interpretation is that we can still write many GB into the filesystem before 
the no-space-left event.
But our intention is to restrict some applications to writing only up to the 
hard limit in the fileset. Any hints on how to accomplish this?
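
A sketch of how a fileset block limit is usually pinned down, with made-up 
filesystem and fileset names (quota enforcement must already be enabled on the 
filesystem):

   # 200 GiB soft and hard block limits on fileset appdata in filesystem gpfs01
   mmsetquota gpfs01:appdata --block 200G:200G
   # verify what is in effect, including the in_doubt column
   mmlsquota -j appdata gpfs01

With soft and hard set equal there is no grace window, which matches the 
observation further down in this thread that grace only shows 'none' when 
hardlimit=softlimit.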







Von: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>
 [mailto:gpfsug-discuss-boun...@spectrumscale.org] Im Auftrag von J. Eric 
Wonderley
Gesendet: Montag, 31. Juli 2017 19:55
An: gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Betreff: Re: [gpfsug-discuss] Quota and hardlimit enforcement

Hi Renar:
What does 'mmlsquota -j fileset filesystem' report?
I did not think you would get a grace period of none unless the 
hardlimit=softlimit.

On Mon, Jul 31, 2017 at 1:44 PM, Grunenberg, Renar 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>> wrote:
Hallo All,
we are on Version 4.2.3.2 and are seeing some misunderstanding in the enforcement 
of hard-limit definitions on a fileset quota. What we see is: we put some 200 GB 
files on following quo