[gpfsug-discuss] Quota and hardlimit enforcement

2017-07-31 Thread Grunenberg, Renar
Hello All, we are on Version 4.2.3.2 and see some misunderstanding in the enforcement of hard limit definitions on a fileset quota. What we see: we put some 200 GB files on the following quota definitions: quota 150 GB, limit 250 GB, grace none. After creating one 200 GB file we hit the soft quota l

Re: [gpfsug-discuss] Quota and hardlimit enforcement

2017-07-31 Thread Grunenberg, Renar
and hardlimit enforcement Hi Renar: What does 'mmlsquota -j fileset filesystem' report? I did not think you would get a grace period of none unless the hardlimit=softlimit. On Mon, Jul 31, 2017 at 1:44 PM, Grunenberg, Renar <renar.grunenb...@huk-coburg.de> wrote: Hall
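
A minimal sketch of how such a fileset quota could be inspected and set, assuming a hypothetical filesystem gpfs01 and fileset fs1 (names not from the thread):

    # Report soft limit, hard limit, in-doubt and grace for the fileset
    mmlsquota -j fs1 gpfs01

    # Set a 150 GB soft / 250 GB hard block limit, matching the scenario above
    mmsetquota gpfs01:fs1 --block 150G:250G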

Re: [gpfsug-discuss] Quota and hardlimit enforcement

2017-07-31 Thread Grunenberg, Renar
this is the case, but I don’t see anywhere in this thread where this is explicitly stated … you’re not doing your tests as root, are you? root, of course, is not bound by any quotas. Kevin On Jul 31, 2017, at 2:04 PM, Grunenberg, Renar <renar.grunenb...@huk-coburg.de> wrote: H

Re: [gpfsug-discuss] Systemd will not allow the mount of a filesystem

2017-08-02 Thread Grunenberg, Renar
Hello John, you are on a backlevel Spectrum Scale release and a backlevel systemd package. Please see here: https://www.ibm.com/developerworks/community/forums/html/topic?id=00104bb5-acf5-4036-93ba-29ea7b1d43b7&ps=25 Renar Grunenberg Abteilung Informatik – Betrieb HUK-COBURG Bahnhofsplatz 9644

Re: [gpfsug-discuss] mm'add|del'node with ccr enabled

2017-12-09 Thread Grunenberg, Renar
Hello Eric, in our experience, adding and deleting new/old nodes works without problems in a CCR cluster as long as the node is not a quorum node. No mmshutdown steps are necessary. We are on 4.2.3.6. I think this has been available since >4.2. If you want to add a new quorum node, then you must
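
A hedged sketch of the sequence described above, with hypothetical node names; the point is that quorum membership is changed separately from adding or deleting the node:

    # Add the new node as a non-quorum client node (no mmshutdown needed)
    mmaddnode -N newnode01

    # Promote it to a quorum node afterwards, so CCR can update its servers
    mmchnode --quorum -N newnode01

    # To remove a quorum node: demote it first, then delete it
    mmchnode --nonquorum -N oldnode01
    mmdelnode -N oldnode01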

[gpfsug-discuss] V5 Experience

2018-02-09 Thread Grunenberg, Renar
Hello All, we updated our test cluster from 4.2.3.6 to V5.0.0.1. So far so good, but after the mmchconfig release=LATEST I see a new common parameter ‘maxblocksize 1M’ (our filesystems use this blocksize). OK, but if I want to change this parameter, the whole cluster is requested to: ro
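
A sketch of the checks involved, assuming the parameter is queried and raised as below; on this code level changing maxblocksize still required the daemon to be stopped cluster-wide, which is the complaint in this thread:

    # Show the currently active value (1M after the release=LATEST step above)
    mmlsconfig maxblocksize

    # Attempting to raise it; GPFS then asks for the daemon to be down on all nodes
    mmchconfig maxblocksize=4M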

Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar
- IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: Thomas Wolter, Sven Schooß Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: "

Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar
the default passed to the blocksize parameter for creating a new filesystem. One thing we might consider doing is changing the command to use the current active maxblocksize as input for mmcrfs if maxblocksize is below the current default. Sven On Fri, Feb 9, 2018 at 6:30 AM Grunenberg, Renar <ren

Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar
blocksize of whatever the current maxblocksize is set to. let me discuss with felipe what/if we can share here to solve this. sven On Fri, Feb 9, 2018 at 6:59 AM Grunenberg, Renar <renar.grunenb...@huk-coburg.de> wrote: Hello Sven, that means a mmcrfs ‘newfs’ -B 4M is possible
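
For reference, the command under discussion as a hedged example, with a hypothetical NSD stanza file:

    # Create a filesystem with a 4 MiB block size; this only succeeds if the
    # cluster's maxblocksize is at least 4M
    mmcrfs newfs -F nsd.stanza -B 4M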

Re: [gpfsug-discuss] GPFS autoload - wait for IB portstobecomeactive

2018-03-26 Thread Grunenberg, Renar
Hello Jeff, you can check this with the following cmd: mmfsadm dump nsdcksum. Your in-memory info is inconsistent with your descriptor structure on disk. Why this happens, I have no idea. Renar Grunenberg Abteilung Informatik – Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon:09561 9
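
The check mentioned above as a short sketch; note that mmfsadm is a service/debug command and its output format is not a documented interface:

    # Dump the NSD checksum/descriptor state known to the local daemon
    mmfsadm dump nsdcksum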

Re: [gpfsug-discuss] UK Meeting - tooling Spectrum Scale

2018-04-20 Thread Grunenberg, Renar
Hello Simon, is there any reason why the presentation from Yong ZY Zheng (Cognitive, ML, Hortonworks) is not linked? Renar Grunenberg Abteilung Informatik – Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon:09561 96-44110 Telefax:09561 96-44104 E-Mail: renar.grune

Re: [gpfsug-discuss] problems with collector 5 and grafana bridge 3

2018-04-25 Thread Grunenberg, Renar
Hello Ivano, we changed the bridge port to query2port; this is the multithreaded query port. The bridge in version 3 selects this port automatically if the pmcollector config is updated (/opt/IBM/zimon/ZIMonCollector.cfg). # The query port number defaults to 9084. queryport = "9084" query2port = "90
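
A small sketch of verifying the collector's port configuration, assuming the default config path quoted above:

    # Show the single-threaded and multithreaded query ports on the pmcollector node;
    # the version 3 bridge picks up query2port automatically when it is set here
    grep -E '(queryport|query2port)' /opt/IBM/zimon/ZIMonCollector.cfg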

Re: [gpfsug-discuss] problems with collector 5 and grafana bridge 3

2018-04-25 Thread Grunenberg, Renar
his information is strictly forbidden. === ----- Original Message ----- From: Ivano Talamo [mailto:ivano.tal...@psi.ch] Sent: Wednesday, 25 April 2018 13:37 To: gpfsug main discussion list ; Grunenberg, Renar Subject: Re: [gpf

[gpfsug-discuss] mmlsnsd -m or -M

2018-05-09 Thread Grunenberg, Renar
Hello All, we are experiencing some difficulties using mmlsnsd -m on 4.2.3.8 and 5.0.0.2. Are there any known bugs or changes here, so that this function no longer does what it should? The outputs for the suboptions -m and -M are now the same!? Regards Renar Renar Grunenberg Abteilung Informat
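
For reference, the two invocations being compared, as a hedged sketch:

    # -m: NSD-to-device mapping as seen from the local node
    mmlsnsd -m

    # -M: NSD-to-device mapping gathered from all nodes; on the releases named
    # above both options reportedly produced the same output
    mmlsnsd -M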

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-14 Thread Grunenberg, Renar
Hello All, here are some experiences with the update to 5.0.1.0 (from 5.0.0.2) on RHEL 7.4. After the complete yum update to this version, we had a non-functional yum cmd. The reason for this is the following package: pyOpenSSL-0.14-1.ibm.el7.noarch. This package breaks the yum cmds. The error is: Loaded plu

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-16 Thread Grunenberg, Renar
hon rpm [root@tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From:

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-17 Thread Grunenberg, Renar
1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From:"Grunenberg, Renar" mailto:renar.grunenb...@huk-

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-18 Thread Grunenberg, Renar
From:"Grunenberg, Renar" mailto:renar.grunenb...@huk-coburg.de>> To:'gpfsug main discussion list' mailto:gpfsug-discuss@spectrumscale.org>> Date:05/17/2018 09:44 PM Subject:Re: [gpfsug-discuss] 5.0.1.0 Update issue with python depe

[gpfsug-discuss] mmbackup issue

2018-06-20 Thread Grunenberg, Renar
Hello All, we have been working for two weeks (or more) on a PMR because mmbackup has problems with the MC class in TSM. We have defined a ‘versions exist’ of 5, but with each run the policy engine generates an expire list (where the mentioned files are already selected) and at the end we s

Re: [gpfsug-discuss] mmbackup issue

2018-06-20 Thread Grunenberg, Renar
4:08:09 -, "Grunenberg, Renar" said: > There are after each test (change of the content) the file became every time > a new inode number. This behavior is the reason why the shadowfile think(or > the > policyengine) the old file is never existent That's because as f

Re: [gpfsug-discuss] mmbackup issue

2018-06-21 Thread Grunenberg, Renar
Buzzard Sent: Thursday, 21 June 2018 09:33 To: gpfsug-discuss@spectrumscale.org Subject: Re: [gpfsug-discuss] mmbackup issue On 20/06/18 17:00, Grunenberg, Renar wrote: > Hello Valdis, first thanks for the explanation, we understand that, > but this problem generates only 2 versions at tsm

Re: [gpfsug-discuss] mmbackup issue

2018-06-25 Thread Grunenberg, Renar
] On behalf of Jonathan Buzzard Sent: Thursday, 21 June 2018 09:33 To: gpfsug-discuss@spectrumscale.org Subject: Re: [gpfsug-discuss] mmbackup issue On 20/06/18 17:00, Grunenberg, Renar wrote: > Hello Valdis, first thanks for the explanation, we understand that, > but this problem gener

Re: [gpfsug-discuss] PM_MONITOR refresh task failed

2018-06-27 Thread Grunenberg, Renar
Hello Richard, do you have a private admin interface LAN in your cluster? If yes, then the logic of querying the collector node and the corresponding CCR value is wrong. Can you run ‘mmperfmon query cpu’? If not, then you have hit a problem that I had yesterday. Renar Grunenberg Abteilung Informatik – Betrieb H
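
The quick health check suggested above, as a sketch:

    # Query recent CPU metrics from the collector; if this fails, the perfmon
    # configuration (e.g. a private admin LAN address recorded in CCR) may be
    # the culprit, as described in the thread
    mmperfmon query cpu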

[gpfsug-discuss] Filesystem Operation error

2018-07-03 Thread Grunenberg, Renar
Hello All, here is a short story from yesterday on Version 5.0.1.1. We had a 3-node cluster (2 nodes for I/O and the third for a quorum buster function). An admin made a mistake and deleted the third node (a VM). We restored it with a VM snapshot, no problem. The only point here: we lost the complete

Re: [gpfsug-discuss] Filesystem Operation error

2018-07-05 Thread Grunenberg, Renar
__ From: Grunenberg, Renar Sent: Wednesday, 4 July 2018 07:47 To: 'gpfsug-discuss@spectrumscale.org' Subject: Filesystem Operation error Hello All, here is a short story from yesterday on Version 5.0.1.1. We had a 3-node cluster (2 nodes for I/O and the third for a quorum buste

[gpfsug-discuss] Analyse steps if disk are down after reboot

2018-07-12 Thread Grunenberg, Renar
Hello All, after a reboot of two NSD servers we see that some disks in different filesystems are down, and we don’t see why. The logs (messages, dmesg, kern, ...) say nothing. We are on RHEL 7.4 and SS 5.0.1.1. The question now: are there any logs or structures in the GPFS daemon that record these situ
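
A hedged sketch of the usual follow-up once disks show as down, assuming a hypothetical filesystem gpfs01:

    # Check the availability column of all disks in the filesystem
    mmlsdisk gpfs01

    # Start all down disks of the filesystem again
    mmchdisk gpfs01 start -a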

Re: [gpfsug-discuss] Analyse steps if disk are down after reboot

2018-07-12 Thread Grunenberg, Renar
tefan Lutz, Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 WEEE-Reg.-Nr. DE 99369940 From:"Grunenberg, Renar" mailto:renar.grunenb...@huk-coburg.de>> To:"'gpfsu

[gpfsug-discuss] Question about mmsdrrestore

2018-07-31 Thread Grunenberg, Renar
Hello All, are there any experiences with the possibility of installing/upgrading existing nodes in a GPFS 4.2.3.x cluster (OS RHEL 6.7) with a fresh OS install of RHEL 7.5, then reinstalling the new GPFS code 5.0.1.1 and doing an mmsdrrestore on these nodes from a 4.2.3 node? Is it possible, or must we
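
A sketch of the restore step in question, with a hypothetical name for a node that still holds a valid configuration:

    # On the freshly reinstalled node, pull the cluster configuration data
    # from a healthy node (here the hypothetical goodnode01)
    mmsdrrestore -p goodnode01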

[gpfsug-discuss] mmdf vs. df

2018-07-31 Thread Grunenberg, Renar
Hello All, a question about what is happening here: We are on GPFS 5.0.1.1 and host a TSM server cluster. A colleague of mine wants to add new NSDs to grow his TSM storage pool (file device class volumes). The tsmpool fs had 45 TB of space before, 128 TB afterwards. We create new 50 GB TSM volumes with defin
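
The comparison in question, as a short sketch (tsmpool is the filesystem named above; the mount point is an assumption):

    # Capacity as the GPFS allocation manager sees it
    mmdf tsmpool

    # Capacity as the POSIX layer reports it; with many preallocated TSM file
    # volumes the two views can diverge
    df -h /tsmpool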

Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-13 Thread Grunenberg, Renar
+1, great answer Stephan. We also don’t understand why functions exist but, every time we want to use one, the first step is to file a requirement. Sent from my iPhone Renar Grunenberg Abteilung Informatik – Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon:09561 96-44110

[gpfsug-discuss] V5.0.2 and Maxblocksize

2018-10-02 Thread Grunenberg, Renar
Hello Spectrum Scale team, we installed the new version 5.0.2 and had hoped that the maxblocksize parameter would be changeable online. But it is not. Is there a timeframe for when this 24/7 gap will be fixed? The problem here: we cannot shut down the complete cluster. Regards Renar Sent from my iPhone

[gpfsug-discuss] V5.0.2 and maxblocksize

2018-10-04 Thread Grunenberg, Renar
Hello All, I filed a requirement for this gap. The link is here: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=125603 Please vote. Regards Renar Renar Grunenberg Abteilung Informatik – Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon:09561 96-44110 Telefax:

[gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-23 Thread Grunenberg, Renar
Hello All, is there any news about this alert, and in which version will it be fixed? We had this problem yesterday, but with a reciprocal scenario: the owning cluster is on 5.0.1.1 and the mounting cluster has 5.0.2.1. On the owning cluster (3-node, 3-site cluster) we did a shutdown of the daemon. But th

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-26 Thread Grunenberg, Renar
modeled by either an episode of The Simpsons or Seinfeld. "Grunenberg, Renar" ---11/23/2018 01:22:05 AM---Hallo All, are there any news about these

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-28 Thread Grunenberg, Renar
-5185 (FAX: 520-799-4237) Email: jtol...@us.ibm.com "Do or do not. There is no try." - Yoda Olson's Razor: Any situation that we, as humans, can encounter in life can be modeled by either an episode of The Simpsons or Seinfeld.

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-29 Thread Grunenberg, Renar
+972 52 2554625 From: "Grunenberg, Renar" <renar.grunenb...@huk-coburg.de> To: 'gpfsug main discussion list' <gpfsug-discuss@spectrumscale.org> Date: 29/11/2018 09:29 Subject: Re: [gpfsug-discuss] Status for Alert:

[gpfsug-discuss] Fwd: Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-12-01 Thread Grunenberg, Renar
Hello All, we updated our owning cluster today to 5.0.2.1. After that we tested our case, and our problem seems to be fixed. Thanks to all for the hints. Regards Renar Renar Grunenberg Abteilung Informatik – Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09

[gpfsug-discuss] Spectrum Scale for Windows Domain -User Requirements

2018-12-06 Thread Grunenberg, Renar
Hello All, I have a question about the domain user root account on Windows. We have some requirements to restrict this level of authorization and found no info on what can be changed here. Two questions: 1. Is it possible to define another domain account, other than root, for this? 2. If no

Re: [gpfsug-discuss] A cautionary tale of upgrades

2019-01-11 Thread Grunenberg, Renar
Hello Simon, welcome to the club. This behavior is a bug in tsctl when DNS names change. We had this 4 weeks ago already. The fix was to update to 5.0.2.1. Regards Renar Sent from my iPhone Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefo

[gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-21 Thread Grunenberg, Renar
Hello All, we are testing Spectrum Scale on a Windows-only client cluster (remote mounted to a Linux cluster), but the execution of mm commands in Cygwin is very slow. We have tried the following adjustments to increase the execution speed. * We have installed Cygwin Server as a service (cygserver-

Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-22 Thread Grunenberg, Renar
States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: "Grunenberg, Renar" <renar.grunenb...@huk-coburg.de> To: "

Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-23 Thread Grunenberg, Renar
. From: Grunenberg, Renar Sent: Tuesday, 22 January 2019 18:10 To: 'gpfsug main discussion list' Subject: RE: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays Hello Roger, first, thanks for the tip. But we decided to separate the linux-io-Cluster

Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-28 Thread Grunenberg, Renar
ou can reach the person managing the list at gpfsug-discuss-ow...@spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today'

Re: [gpfsug-discuss] Identifiable groups of disks?

2019-05-14 Thread Grunenberg, Renar
Hello Aaron, the granularity for handling storage capacity in Scale is the disk, at filesystem creation time. These disks are created as NSDs that represent your physical LUNs. Per filesystem there is a distinct set of NSDs == disks per filesystem. What you want is possible, no problem. Regards Renar

Re: [gpfsug-discuss] Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-10 Thread Grunenberg, Renar
Hello Felipe, here is the change list: RHBA-2019:1337 kernel bug fix update Summary: Updated kernel packages that fix various bugs are now available for Red Hat Enterprise Linux 7. The kernel packages contain the Linux kernel, the core of any Linux operating system. This update fixes the

[gpfsug-discuss] WG: Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-10 Thread Grunenberg, Renar
Hello Felipe, here is the change list: RHBA-2019:1337 kernel bug fix update Summary: Updated kernel packages that fix various bugs are now available for Red Hat Enterprise Linux 7. The kernel packages contain the Linux kernel, the core of any Linux operating system. This update fixes the f

Re: [gpfsug-discuss] WG: Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-11 Thread Grunenberg, Renar
0.0-957.19.1 are impacted. Felipe Felipe Knop k...@us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314

Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD failed with EIO, switching to access the disk remotely."

2019-06-25 Thread Grunenberg, Renar
Hello Son, you can check access to the NSDs with mmlsdisk -m. This gives you a column ‘IO performed on node’. On an NSD server you should see localhost; on an NSD client you see the hosting NSD server per device. Regards Renar Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnho
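
A hedged sketch of the check, assuming a hypothetical filesystem gpfs01:

    # Shows per disk on which node the I/O is actually performed: 'localhost'
    # means local (SAN/direct) access, a server name means the I/O travels
    # over the network to that NSD server
    mmlsdisk gpfs01 -m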

Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD failed with EIO, switching to access the disk remotely."

2019-06-25 Thread Grunenberg, Renar
----------- > > Message: 2 > Date: Tue, 25 Jun 2019 12:10:53 + > From: "Grunenberg, Renar" > To: "gpfsug-discuss@spectrumscale.org" > > Subject: Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to >NSD failed with EIO, s

[gpfsug-discuss] Spectrum Scale Technote mmap

2019-08-20 Thread Grunenberg, Renar
Hello All, can anyone clarify the affected levels: in which PTF is the problem present and in which is it not? The abstract says v5.0.3.0 to 5.0.3.2, but the content says 5.0.3.0 to 5.0.3.3? https://www-01.ibm.com/support/docview.wss?uid=ibm10960396 Regards Renar Renar Grunenberg Abteilung

Re: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full

2019-12-10 Thread Grunenberg, Renar
Hello Juanma, it is safe; changes only happen if you change the filesystem version with mmchfs device -V full. As a tip, you should update to 5.0.3.3; it is a very stable level for us. Regards Renar Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Tele
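
The two steps being distinguished above, as a hedged sketch with a hypothetical device name:

    # Step 1: raise the cluster's minimum release level (safe, per the thread)
    mmchconfig release=LATEST

    # Step 2: only this command migrates the on-disk filesystem format, and it
    # is the step that actually changes compatibility
    mmchfs gpfs01 -V full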

Re: [gpfsug-discuss] Introduction: IBM Elastic Storage System (ESS) 3000 (Spectrum Scale)

2020-01-07 Thread Grunenberg, Renar
Hello Farida, can you check your links? It seems they don’t work for people outside the IBM network. Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon:09561 96-44110 Telefax:09561 96-44104 E-Mail: renar.grunenb...@huk-coburg.de

Re: [gpfsug-discuss] Disk in unrecovered state

2021-01-12 Thread Grunenberg, Renar
Hello Iban, first you should check the path to the disk (mmlsnsd -m). It seems to be broken from the OS view; this should be fixed first. If you see no dev entry, you have a HW problem. Once this is fixed, you can start each disk individually to see whether anything starts here. On which Scale ver

Re: [gpfsug-discuss] TSM errors restoring files with ACL's

2021-03-05 Thread Grunenberg, Renar
Hello All, the mentioned problem with Protect was this: https://www.ibm.com/support/pages/node/6415985?myns=s033&mynp=OCSTXKQY&mync=E&cm_sp=s033-_-OCSTXKQY-_-E Regards Renar Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax:

Re: [gpfsug-discuss] GUI does not work after upgrade from 4.X to 5.1.1

2021-08-04 Thread Grunenberg, Renar
Hello Iban, this is already fixed in 5.1.1.1. Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: renar.grunenb...@huk-coburg.de Internet: www.huk.de ===

Re: [gpfsug-discuss] WAS: alternative path; Now: RDMA

2021-12-10 Thread Grunenberg, Renar
Hello Walter, we now have a lot of experience with converting the storage systems in our backup environment to RDMA over IB with HDR and EDR connections. What we see now (coming from a 16 Gbit FC infrastructure): we increased our throughput from 7 GB/s to 30 GB/s. The main reason is the elimination of the driver lay

Re: [gpfsug-discuss] IO sizes

2022-02-28 Thread Grunenberg, Renar
Hello Uwe, is numactl already installed on the affected node? If it is missing, the NUMA-related parts of Scale do not work. Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon:09561 96-44110 Telefax:09561 96-44104 E-Mail: renar.grunenb...@huk-cobu
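
A quick check corresponding to the hint above:

    # Verify that the package is installed and the NUMA topology is readable
    rpm -q numactl
    numactl --hardware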