Re: [gpfsug-discuss] IO sizes

2022-02-28 Thread Grunenberg, Renar
Hello Uwe, is numactl already installed on the affected node? If it is missing, the NUMA support in Scale will not work.
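A quick check for the prerequisite mentioned above, as a sketch (numaMemoryInterleave is shown only as the related Scale setting worth verifying, an assumption not taken from the thread):

    # is the numactl package installed, and does the node expose its NUMA topology?
    rpm -q numactl
    numactl --hardware
    # show the Scale setting that relies on it
    mmlsconfig numaMemoryInterleave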

Re: [gpfsug-discuss] WAS: alternative path; Now: RDMA

2021-12-10 Thread Grunenberg, Renar
Hello Walter, we now have plenty of experience with moving the storage systems in our backup environment to RDMA over InfiniBand with HDR and EDR connections. Coming from a 16 Gbit FC infrastructure, we raised our throughput from 7 GB/s to 30 GB/s. The main reason is the elimination of the
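For reference, the cluster-side RDMA switch looks roughly like this (a hedged sketch, not quoted from the thread; the port name mlx5_0/1 is a placeholder for your HCA and port):

    # enable verbs RDMA and name the HCA ports to use; takes effect after
    # a daemon restart
    mmchconfig verbsRdma=enable
    mmchconfig verbsPorts="mlx5_0/1"
    mmlsconfig verbsRdma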

Re: [gpfsug-discuss] GUI does not work after upgrade from 4.X to 5.1.1

2021-08-04 Thread Grunenberg, Renar
Hello Iban, this is already fixed in 5.1.1.1.

Re: [gpfsug-discuss] TSM errors restoring files with ACL's

2021-03-05 Thread Grunenberg, Renar
Hello all, the mentioned problem with Protect was this: https://www.ibm.com/support/pages/node/6415985?myns=s033=OCSTXKQY=E_sp=s033-_-OCSTXKQY-_-E Regards, Renar

Re: [gpfsug-discuss] Disk in unrecovered state

2021-01-12 Thread Grunenberg, Renar
Hello Iban, first you should check the path to the disk (mmlsnsd -m); it seems to be broken from the OS point of view, and this should be fixed first. If you see no dev entry, you have a hardware problem. Once that is fixed, you can start each disk individually to see whether anything comes up. On which Scale
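The recovery steps described translate to something like this (a sketch; fs1 and nsd1 are placeholder names):

    # check the OS device path behind every NSD
    mmlsnsd -m
    # after the path is repaired, start the disks one at a time
    mmchdisk fs1 start -d nsd1
    # list any disks that are still not up
    mmlsdisk fs1 -e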

Re: [gpfsug-discuss] Introduction: IBM Elastic Storage System (ESS) 3000 (Spectrum Scale)

2020-01-07 Thread Grunenberg, Renar
Hello Farida, can you check your links? They do not seem to work for people outside the IBM network.

Re: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full

2019-12-10 Thread Grunenberg, Renar
Hello Juanma, it is safe; a change only happens if you change the filesystem version with mmchfs device -V full. As a tip, you should update to 5.0.3.3; it has been a very stable level for us. Regards, Renar
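The two steps from the subject line, as a sketch (fs1 is a placeholder; note that -V full changes the on-disk format and cannot be undone, so back-level nodes can no longer mount the filesystem afterwards):

    # step 1: raise the cluster release level; no on-disk change
    mmchconfig release=LATEST
    # step 2: only this step changes the filesystem format itself
    mmchfs fs1 -V full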

[gpfsug-discuss] Spectrum Scale Technote mmap

2019-08-20 Thread Grunenberg, Renar
Hello all, can anyone clarify the affected levels, i.e. in which PTFs the problem exists and in which it does not? The abstract says v5.0.3.0 to 5.0.3.2, but the content says 5.0.3.0 to 5.0.3.3? https://www-01.ibm.com/support/docview.wss?uid=ibm10960396 Regards, Renar

Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD failed with EIO, switching to access the disk remotely."

2019-06-25 Thread Grunenberg, Renar

Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to NSD failed with EIO, switching to access the disk remotely."

2019-06-25 Thread Grunenberg, Renar
Hello Son, you can check access to the NSDs with mmlsdisk -m. This gives you a column like 'IO performed on node'. On an NSD server you should see localhost; on an NSD client you see the serving NSD server per device. Regards, Renar
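The check Renar describes, sketched with a placeholder filesystem name:

    # 'IO performed on node' should read 'localhost' on an NSD server and
    # name the serving NSD server on an NSD client
    mmlsdisk fs1 -m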

Re: [gpfsug-discuss] WG: Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-11 Thread Grunenberg, Renar
0.0-957.19.1 are impacted. Felipe

[gpfsug-discuss] WG: Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-10 Thread Grunenberg, Renar
Hello Felipe, here is the change list: RHBA-2019:1337 kernel bug fix update. Summary: Updated kernel packages that fix various bugs are now available for Red Hat Enterprise Linux 7. The kernel packages contain the Linux kernel, the core of any Linux operating system. This update fixes the

Re: [gpfsug-discuss] Spectrum Scale with RHEL7.6 kernel 3.10.0-957.21.2

2019-06-10 Thread Grunenberg, Renar
Hello Felipe, here is the change list: RHBA-2019:1337 kernel bug fix update. Summary: Updated kernel packages that fix various bugs are now available for Red Hat Enterprise Linux 7. The kernel packages contain the Linux kernel, the core of any Linux operating system. This update fixes the

Re: [gpfsug-discuss] Identifiable groups of disks?

2019-05-14 Thread Grunenberg, Renar
Hello Aaron, the granularity for managing storage capacity in Scale is the disk, fixed when the filesystem is created. These disks are created as NSDs, which represent your physical LUNs. Each filesystem has its own distinct set of NSDs (disks per filesystem). What you want is possible, no problem. Regards, Renar

Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-28 Thread Grunenberg, Renar

Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-23 Thread Grunenberg, Renar
From: Grunenberg, Renar; Sent: Tuesday, 22 January 2019 18:10; To: 'gpfsug main discussion list'; Subject: RE: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays. Hello Roger, first, thanks for the tip. But we decided to separate the Linux I/O cluster from

Re: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-22 Thread Grunenberg, Renar

[gpfsug-discuss] Spectrum Scale Cygwin cmd delays

2019-01-21 Thread Grunenberg, Renar
Hello all, we are testing Spectrum Scale on a Windows-only client cluster (remote-mounted to a Linux cluster), but the execution of mm commands in Cygwin is very slow. We have tried the following adjustments to increase execution speed. * We have installed the Cygwin server as a service

Re: [gpfsug-discuss] A cautionary tale of upgrades

2019-01-11 Thread Grunenberg, Renar
Hello Simon, welcome to the club. This behavior is a bug in tsctl when DNS names change; we hit it four weeks ago, and the fix was to update to 5.0.2.1. Regards, Renar. Sent from my iPhone

[gpfsug-discuss] Spectrum Scale for Windows Domain -User Requirements

2018-12-06 Thread Grunenberg, Renar
Hello all, I have a question about the domain root user account on Windows. We have requirements to restrict this level of authorization and found no information on what can be changed here. Two questions: 1. Is it possible to define a domain account other than root for this? 2. If

[gpfsug-discuss] Fwd: Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-12-01 Thread Grunenberg, Renar
Hello all, we updated our owning cluster to 5.0.2.1 today. After that we retested our case, and our problem seems to be fixed. Thanks to all for the hints. Regards, Renar

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-29 Thread Grunenberg, Renar

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-28 Thread Grunenberg, Renar

Re: [gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-26 Thread Grunenberg, Renar

[gpfsug-discuss] Status for Alert: remotely mounted filesystem panic on accessing cluster after upgrading the owning cluster first

2018-11-23 Thread Grunenberg, Renar
Hello all, is there any news about this alert, i.e. in which version will it be fixed? We hit this problem yesterday, but in the reverse scenario: the owning cluster is on 5.0.1.1 and the mounting cluster on 5.0.2.1. On the owning cluster (a 3-node, 3-site cluster) we shut down the daemon. But

[gpfsug-discuss] V5.0.2 and maxblocksize

2018-10-04 Thread Grunenberg, Renar
Hello all, I filed a requirement for this gap. The link is here: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=125603 Please vote. Regards, Renar

Re: [gpfsug-discuss] GPFS Independent Fileset Limit vs Quotas?

2018-08-14 Thread Grunenberg, Renar
+1, great answer, Stephan. We also don't understand why functions exist when, every time we want to use one, the first step is to file a requirement. Sent from my iPhone

[gpfsug-discuss] mmdf vs. df

2018-07-31 Thread Grunenberg, Renar
Hello all, a question about what is happening here: we are on GPFS 5.0.1.1 and host a TSM server cluster. A colleague of mine wanted to add new NSDs to grow his TSM storage pool (FILE device class volumes). The tsmpool filesystem had 45 TB of space before and 128 TB afterwards. We created new 50 GB TSM volumes with
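The growth step being described would look roughly like this (a sketch of the usual workflow, not quoted from the thread; the stanza file name and mount point are placeholders):

    # add the freshly created NSDs to the filesystem, then compare the views
    mmadddisk tsmpool -F new_nsds.stanza
    mmdf tsmpool         # GPFS view of used/free space
    df -h /gpfs/tsmpool  # OS view, which may report differently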

[gpfsug-discuss] Question about mmsdrrestore

2018-07-31 Thread Grunenberg, Renar
Hello all, are there any experiences with taking existing nodes of a GPFS 4.2.3.x cluster (on RHEL 6.7) through a fresh OS install of RHEL 7.5, installing the new GPFS 5.0.1.1 code, and then running mmsdrrestore on these nodes from a 4.2.3 node? Is this possible, or must
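The restore step in question is a one-liner on the reinstalled node (a sketch; goodnode01 is a placeholder for a healthy cluster node):

    # pull the cluster configuration from a node that has a current mmsdrfs
    mmsdrrestore -p goodnode01 -R /usr/bin/scp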

Re: [gpfsug-discuss] Analyse steps if disk are down after reboot

2018-07-12 Thread Grunenberg, Renar

[gpfsug-discuss] Analyse steps if disk are down after reboot

2018-07-12 Thread Grunenberg, Renar
Hello all, after a reboot of two NSD servers we see some disks down in different filesystems, and we don't see why. The logs (messages, dmesg, kern, ...) say nothing. We are on RHEL 7.4 and Scale 5.0.1.1. The question now: are there any logs or structures in the GPFS daemon that record these
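A first triage for this situation might be (a sketch; fs1 is a placeholder):

    # which disks are down, and are their OS paths intact?
    mmlsdisk fs1 -e
    mmlsnsd -m
    # if the paths are fine, bring all down disks back and let GPFS recover
    mmchdisk fs1 start -a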

Re: [gpfsug-discuss] Filesystem Operation error

2018-07-05 Thread Grunenberg, Renar

[gpfsug-discuss] Filesystem Operation error

2018-07-03 Thread Grunenberg, Renar
Hello all, here is a short story from yesterday, on version 5.0.1.1. We had a 3-node cluster (two nodes for I/O and the third for a quorum-buster function). An admin made a mistake and deleted the third node (a VM). We restored it from a VM snapshot, no problem. The only point here: we lost

Re: [gpfsug-discuss] PM_MONITOR refresh task failed

2018-06-27 Thread Grunenberg, Renar
Hello Richard, do you have a private admin-interface LAN in your cluster? If yes, then the logic that queries the collector node, and the corresponding CCR value, are wrong. Can you run 'mmperfmon query cpu'? If not, then you have hit a problem that I had yesterday.
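The test mentioned in the message, together with a look at the active monitoring configuration (the config subcommand is added here as a natural companion check, not quoted from the thread):

    # should return CPU metrics; a timeout or empty result points at the
    # collector/CCR mismatch described above
    mmperfmon query cpu
    # show the active performance-monitoring configuration
    mmperfmon config show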

Re: [gpfsug-discuss] mmbackup issue

2018-06-25 Thread Grunenberg, Renar
from Jonathan Buzzard; Sent: Thursday, 21 June 2018 09:33; To: gpfsug-discuss@spectrumscale.org; Subject: Re: [gpfsug-discuss] mmbackup issue. On 20/06/18 17:00, Grunenberg, Renar wrote: > Hello Valdis, first, thanks for the explanation; we understand that, > but this problem generates

Re: [gpfsug-discuss] mmbackup issue

2018-06-20 Thread Grunenberg, Renar
000, "Grunenberg, Renar" said: > There are after each test (change of the content) the file became every time > a new inode number. This behavior is the reason why the shadowfile think(or > the > policyengine) the old file is never existent That's because as far a

[gpfsug-discuss] mmbackup issue

2018-06-20 Thread Grunenberg, Renar
Hello all, we have been working for two weeks (or more) on a PMR about mmbackup having problems with the management class in TSM. We have defined a 'versions exist' value of 5. But with each run the policy engine generates an expire list (in which the mentioned files are already selected), and at the end we
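For context, 'versions exist' is a Spectrum Protect copy-group limit on the server side; setting it to 5 looks roughly like this (a sketch with placeholder domain, policy-set, management-class, and admin credentials; not quoted from the thread):

    # on the TSM server, raise 'versions exist' to 5 and activate the policy set
    dsmadmc -id=admin -password=secret "define copygroup mydomain mypset mymc type=backup verexists=5"
    dsmadmc -id=admin -password=secret "activate policyset mydomain mypset"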

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-18 Thread Grunenberg, Renar
newer pyOpenSSL in 5.0.1.1. Thanks, Smita

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-17 Thread Grunenberg, Renar
ibs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64. So I assume you installed the GUI or scale mgmt; let us know. Thanks

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-16 Thread Grunenberg, Renar
| grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64. So I assume you installed the GUI or scale mgmt; let us know. Thanks

Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies

2018-05-15 Thread Grunenberg, Renar
Hello all, here are some experiences with the update to 5.0.1.0 (from 5.0.0.2) on RHEL 7.4. After the complete yum update to this version, the yum command no longer worked. The reason is the following package: pyOpenSSL-0.14-1.ibm.el7.noarch. This package breaks the yum commands. The error is: Loaded
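A way to spot the conflict described above (a sketch):

    # which pyOpenSSL build is installed? the 0.14-1.ibm variant is the one
    # reported here to break yum
    rpm -qa | grep -i pyopenssl
    # yum is python-based, so test the import path yum will use
    python -c "import OpenSSL; print(OpenSSL.__version__)"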

Re: [gpfsug-discuss] problems with collector 5 and grafana bridge 3

2018-04-25 Thread Grunenberg, Renar
Hello Ivano, we changed the bridge port to query2port; this is the multithreaded query port. The version-3 bridge selects this port automatically if the pmcollector config (/opt/IBM/zimon/ZIMonCollector.cfg) is updated. # The query port number defaults to 9084. queryport = "9084" query2port =
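The corresponding stanza in /opt/IBM/zimon/ZIMonCollector.cfg, reconstructed from the message (the 9094 value is the usual default for query2port, shown as an assumption since the original snippet is cut off):

    # The query port number defaults to 9084.
    queryport = "9084"
    # multithreaded query port, used by the version-3 bridge
    query2port = "9094"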

Re: [gpfsug-discuss] UK Meeting - tooling Spectrum Scale

2018-04-20 Thread Grunenberg, Renar
Hello Simon, is there a reason why the presentation from Yong ZY Zheng (Cognitive, ML, Hortonworks) is not linked?

Re: [gpfsug-discuss] GPFS autoload - wait for IB portstobecomeactive

2018-03-27 Thread Grunenberg, Renar
Hello Jeff, you can check this with the following command: mmfsadm dump nsdcksum. Your in-memory info is inconsistent with your descriptor structure on disk. As for the reason, I have no idea.
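The named command plus a companion view of the on-disk side (a sketch; mmfsadm is a low-level service aid, so use it with care on production clusters):

    # compare the daemon's in-memory NSD descriptors with those on disk
    mmfsadm dump nsdcksum
    # extended NSD information, including device paths
    mmlsnsd -X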

Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar
you can only create a filesystem with a blocksize of whatever the current maxblocksize is set to. Let me discuss with Felipe what/if we can share here to solve this. sven

Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar
:30 AM Grunenberg, Renar wrote: Felipe, all, first, thanks for the clarification, but what was the reason for this logic? If I upgrade to version 5 and want to create new filesystems, and maxblocksize is at 1M, we must

Re: [gpfsug-discuss] V5 Experience -- maxblocksize

2018-02-09 Thread Grunenberg, Renar

Re: [gpfsug-discuss] mm'add|del'node with ccr enabled

2017-12-09 Thread Grunenberg, Renar
Hello Eric, in our experience, adding and deleting new/old nodes works without problems in a CCR cluster as long as the node is not a quorum node; no mmshutdown steps are necessary. We are on 4.2.3.6, and I think this has been available since >4.2. If you want to add a new quorum node, then you
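The quorum case Renar goes on to describe would, in outline, be a two-step change (a sketch; newnode01 is a placeholder):

    # join the node as a non-quorum member first
    mmaddnode -N newnode01
    # then promote it; with CCR this is the step that alters the quorum set
    mmchnode --quorum -N newnode01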

Re: [gpfsug-discuss] Systemd will not allow the mount of a filesystem

2017-08-02 Thread Grunenberg, Renar
Hello John, you are on a back-level Spectrum Scale release and a back-level systemd package. Please see here: https://www.ibm.com/developerworks/community/forums/html/topic?id=00104bb5-acf5-4036-93ba-29ea7b1d43b7=25

Re: [gpfsug-discuss] Quota and hardlimit enforcement

2017-07-31 Thread Grunenberg, Renar
Hi Renar, I'm sure this is the case, but I don't see anywhere in this thread where it is explicitly stated … you're not doing your tests as root, are you? root, of course, is not bound by any quotas. Kevin