Hallo All,
we are on version 4.2.3.2 and see some misunderstanding in the enforcement of
hard-limit definitions on a fileset quota. We put some 200 GB files onto the
following quota definition: quota 150 GB, limit 250 GB, grace none.
After creating one 200 GB file we hit the soft-quota and hard-limit
enforcement.
Hi Renar:
What does 'mmlsquota -j fileset filesystem' report?
I did not think you would get a grace period of none unless the
hardlimit=softlimit.
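As an aside, a quick way to eyeball whether a fileset is past its soft limit is an awk one-liner over the mmlsquota output. The sample below is simulated with a here-doc (the column layout is an assumption; check your own 'mmlsquota -j fileset filesystem' output before trusting the field numbers):

```shell
# Simulated 'mmlsquota -j fileset filesystem' output (values in KB):
# 200 GB used, 150 GB soft quota, 250 GB hard limit.
cat <<'EOF' > /tmp/mmlsquota.sample
                         Block Limits
Filesystem Fileset    type    KB        quota     limit     in_doubt grace
gpfs01     myfileset  FILESET 209715200 157286400 262144000 0        none
EOF
# Compare used KB ($4) against the soft quota ($5) on the FILESET line.
awk '$3 == "FILESET" { if ($4+0 > $5+0) print "over soft limit";
                       else print "under soft limit" }' /tmp/mmlsquota.sample
```

With the sample numbers above this prints "over soft limit", matching the behaviour seen in the original post.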
On Mon, Jul 31, 2017 at 1:44 PM, Grunenberg, Renar wrote:
Hallo,
this is the case, but I don’t see anywhere in this thread where this
is explicitly stated … you’re not doing your tests as root, are you? root, of
course, is not bound by any quotas.
Kevin
On Jul 31, 2017, at 2:04 PM, Grunenberg, Renar wrote:
Hallo John,
you are on a backlevel Spectrum Scale release and a backlevel systemd package.
Please see here:
https://www.ibm.com/developerworks/community/forums/html/topic?id=00104bb5-acf5-4036-93ba-29ea7b1d43b7&ps=25
Renar Grunenberg
Abteilung Informatik – Betrieb
HUK-COBURG
Bahnhofsplatz
96444 Coburg
Hallo Eric,
in our experience, adding and deleting new/old nodes works without problems as
long as the node is not a quorum node in a CCR cluster. No mmshutdown steps
are necessary. We are on 4.2.3.6. I think this has been available since >4.2.
If you want to add a new quorum node, then you must
Hallo All,
we updated our test cluster from 4.2.3.6 to V5.0.0.1. So far so good, but
after the mmchconfig release=LATEST I see a new common parameter
'maxblocksize 1M' (our filesystems are on this blocksize).
OK, but if I want to change this parameter, the whole cluster is requested to:
ro
-
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung:
Thomas Wolter, Sven Schooß
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart,
HRB 17122
the default passed to the blocksize parameter when creating a new filesystem.
One thing we might consider doing is changing the command to use the current
active maxblocksize as input for mmcrfs if maxblocksize is below the current
default.
Sven
On Fri, Feb 9, 2018 at 6:30 AM Grunenberg, Renar wrote:
blocksize of whatever the current maxblocksize is set to. Let me discuss with
Felipe what/if we can share here to solve this.
sven
On Fri, Feb 9, 2018 at 6:59 AM Grunenberg, Renar wrote:
Hallo Sven,
does that mean a mmcrfs ‘newfs’ -B 4M is possible?
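A minimal sketch of the logic Sven describes above: cap the -B value passed to mmcrfs at the cluster's active maxblocksize. The 'mmlsconfig maxblocksize' lookup is replaced here by a hard-coded variable so the sketch runs standalone, and the final command is only echoed, never executed:

```shell
# Pretend 'mmlsconfig maxblocksize' reported 1M (value in KB).
maxblocksize_kb=1024
# The blocksize a new mmcrfs would otherwise default to (4M, in KB).
default_blocksize_kb=4096

# Use the smaller of the two as the -B input for mmcrfs.
if [ "$default_blocksize_kb" -gt "$maxblocksize_kb" ]; then
    blocksize_kb=$maxblocksize_kb
else
    blocksize_kb=$default_blocksize_kb
fi
echo "would run: mmcrfs newfs -B ${blocksize_kb}K ..."
```

With maxblocksize at 1M this picks 1024K instead of failing on the 4M default.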
Hallo Jeff,
you can check this with the following command:
mmfsadm dump nsdcksum
Your in-memory info is inconsistent with your descriptor structure on disk.
I have no idea what the reason for this is.
Hallo Simon,
is there any reason why the presentation from Yong ZY Zheng (Cognitive, ML,
Hortonworks) is not linked?
Hallo Ivano,
we changed the bridge port to query2port; this is the multithreaded query
port. The bridge in version 3 selects this port automatically if the
pmcollector config (/opt/IBM/zimon/ZIMonCollector.cfg) is updated.
# The query port number defaults to 9084.
queryport = "9084"
query2port = "90
-----Original Message-----
From: Ivano Talamo
Sent: Wednesday, 25 April 2018 13:37
To: gpfsug main discussion list; Grunenberg, Renar
Subject: Re: [gpf
Hallo All,
we are experiencing some difficulties using mmlsnsd -m on 4.2.3.8 and 5.0.0.2.
Are there any known bugs or changes here, so that this function no longer does
what it should? The outputs for the suboptions -m and -M are now the same!?
Regards Renar
Hallo All,
here are some experiences with the update to 5.0.1.0 (from 5.0.0.2) on
RHEL 7.4. After the complete yum update to this version, we had a
non-functional yum command. The reason for this is the following package:
pyOpenSSL-0.14-1.ibm.el7.noarch. This package breaks the yum commands.
The error is:
Loaded plu
[root@tlinc04 ~]# rpm -qa | grep -i openssl
openssl-1.0.2k-12.el7.x86_64
openssl-libs-1.0.2k-12.el7.x86_64
pyOpenSSL-0.13.1-3.el7.x86_64
openssl-devel-1.0.2k-12.el7.x86_64
xmlsec1-openssl-1.2.20-7.el7_4.x86_64
So I assume, you installed GUI, or scale mgmt .. let us know -
thx
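A quick check for the duplicate-package situation described above: if more than one pyOpenSSL build is installed, something (here the IBM-shipped noarch build) is conflicting. The 'rpm -qa' output is simulated with a here-doc so the check runs standalone; the package names are taken from this thread:

```shell
# Simulated 'rpm -qa | grep -i openssl' output, as reported in the thread,
# plus the IBM pyOpenSSL build that broke yum.
cat <<'EOF' > /tmp/rpmlist.sample
openssl-1.0.2k-12.el7.x86_64
openssl-libs-1.0.2k-12.el7.x86_64
pyOpenSSL-0.13.1-3.el7.x86_64
pyOpenSSL-0.14-1.ibm.el7.noarch
openssl-devel-1.0.2k-12.el7.x86_64
EOF
# More than one pyOpenSSL provider installed => likely conflict.
grep -c '^pyOpenSSL-' /tmp/rpmlist.sample
```

Here the count is 2, i.e. two pyOpenSSL builds side by side, which matches the breakage described.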
From: "Grunenberg, Renar"
To: 'gpfsug main discussion list'
Date: 05/17/2018 09:44 PM
Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python depe
Hallo All,
we have been working for two weeks (or more) on a PMR: mmbackup has problems
with the MC class in TSM. We have defined a "versions exist" of 5. But with
each run, the policy engine generates an expire list (where the mentioned
files are already selected) and at the end we s
4:08:09 -, "Grunenberg, Renar" said:
> After each test (change of the content) the file gets a new inode number
> every time. This behaviour is the reason why the shadow file (or the
> policy engine) thinks the old file never existed
That's because as f
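A small local demo of the behaviour described above, no GPFS needed: a file rewritten via create-and-rename (which is what many editors and tools do) gets a new inode, while an in-place append keeps its inode. Per this thread, mmbackup's shadow file keys on the inode, which is why the "old" file looks like it never existed:

```shell
# Create a file and record its inode number.
f=/tmp/inode-demo.txt
echo one > "$f"
ino1=$(ls -i "$f" | awk '{print $1}')

# Rewrite by creating a new file and renaming it over the old one.
# Both files exist simultaneously, so the inodes must differ.
echo two > /tmp/inode-demo.new
mv /tmp/inode-demo.new "$f"
ino2=$(ls -i "$f" | awk '{print $1}')

# An in-place append keeps the same inode.
echo three >> "$f"
ino3=$(ls -i "$f" | awk '{print $1}')

[ "$ino1" != "$ino2" ] && echo "rewrite-by-rename: new inode"
[ "$ino2" = "$ino3" ] && echo "in-place append: same inode"
```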
On behalf of Jonathan Buzzard
Sent: Thursday, 21 June 2018 09:33
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] mmbackup issue
On 20/06/18 17:00, Grunenberg, Renar wrote:
> Hallo Valdis, first thanks for the explanation, we understand that,
> but this problem generates only 2 versions at TSM
Hallo Richard,
do you have a private admin interface LAN in your cluster? If yes, the logic
of querying the collector node, and the corresponding CCR value, are wrong.
Can you run ‘mmperfmon query cpu’? If not, you hit a problem that I had
yesterday.
Hallo All,
here is a short story from yesterday on version 5.0.1.1. We had a 3-node
cluster (2 nodes for I/O and the third for a quorum-buster function).
An admin made a mistake and deleted the third node (a VM). We restored it
with a VM snapshot, no problem. The only point here: we completely lost
From: Grunenberg, Renar
Sent: Wednesday, 4 July 2018 07:47
To: 'gpfsug-discuss@spectrumscale.org'
Subject: Filesystem Operation error
Hallo All,
after a reboot of two NSD servers we see some disks in different filesystems
are down, and we don’t see why.
The logs (messages, dmesg, kern, ..) say nothing. We are on RHEL 7.4 and
SS 5.0.1.1.
The question now: are there any logs or structures in the GPFS daemon that
log these situ
From: "Grunenberg, Renar"
To: "'gpfsu
Hallo All,
does anyone have experience with upgrading existing nodes in a GPFS 4.2.3.x
cluster (OS RHEL 6.7) via a fresh OS install of RHEL 7.5, then installing the
new GPFS 5.0.1.1 code and doing a mmsdrrestore on these nodes from a 4.2.3
node? Is that possible, or must we
Hallo All,
a question about what is happening here:
We are on GPFS 5.0.1.1 and host a TSM server cluster. A colleague of mine
wants to add new NSDs to grow his TSM storage pool (FILE device class
volumes). The tsmpool filesystem had 45 TB of space before, 128 TB after.
We created new 50 GB TSM volumes with defin
+1, great answer Stephan. We also don’t understand why functions exist, but
every time we want to use one, the first step is to file a requirement.
Sent from my iPhone
Hallo Spectrum Scale team,
we installed the new version 5.0.2 and had hoped that the maxblocksize
parameter would be changeable online. But it is not. Is there a timeframe
for when this 24/7 gap will be fixed? The problem here: we cannot shut down
the complete cluster.
Regards Renar
Sent from my iPhone
Hallo All,
I filed a requirement for this gap. The link is here:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=125603
Please Vote.
Regards Renar
Hallo All,
is there any news about this alert, and in which version will it be fixed?
We had this problem yesterday, but with a reciprocal scenario.
The owning cluster is on 5.0.1.1 and the mounting cluster is on 5.0.2.1. On
the owning cluster (3-node, 3-site cluster) we did a shutdown of the daemon.
But th
-5185 (FAX: 520-799-4237)
Email: jtol...@us.ibm.com
"Do or do not. There is no try." - Yoda
Olson's Razor:
Any situation that we, as humans, can encounter in life
can be modeled by either an episode of The Simpsons
or Seinfeld.
+972 52 2554625
From: "Grunenberg, Renar"
To: 'gpfsug main discussion list'
Date: 29/11/2018 09:29
Subject: Re: [gpfsug-discuss] Status for Alert:
Hallo All,
Today we updated our owning cluster to 5.0.2.1. After that we tested our case,
and our problem seems to be fixed. Thanks to all for the hints.
Regards Renar
Hallo All,
I have a question about the domain-user root account on Windows. We have
requirements to restrict this level of authorization and found no info about
what can be changed here.
Two questions:
1. Is it possible to define another domain account, other than root, for this?
2. If no
Hallo Simon,
Welcome to the club. This behaviour is a bug in tsctl when DNS names change.
We already had this 4 weeks ago. The fix was to update to 5.0.2.1.
Regards Renar
Sent from my iPhone
Hello All,
We are testing Spectrum Scale on a Windows-only client cluster
(remote-mounted to a Linux cluster), but the execution of mm commands in
Cygwin is very slow.
We have tried the following adjustments to increase execution speed:
* We installed Cygwin Server as a service (cygserver-
From: "Grunenberg, Renar"
From: Grunenberg, Renar
Sent: Tuesday, 22 January 2019 18:10
To: 'gpfsug main discussion list'
Subject: RE: [gpfsug-discuss] Spectrum Scale Cygwin cmd delays
Hallo Roger,
first, thanks for the tip. But we decided to separate the Linux I/O cluster
Hallo Aaron,
the granularity for handling storage capacity in Scale is the disk, set when
creating the filesystem. These disks are created as NSDs that represent your
physical LUNs. Per filesystem there is a distinct set of NSDs == disks.
What you want is possible, no problem.
Regards Renar
Hallo Felipe,
here is the change list:
RHBA-2019:1337 kernel bug fix update
Summary:
Updated kernel packages that fix various bugs are now available for Red Hat
Enterprise Linux 7.
The kernel packages contain the Linux kernel, the core of any Linux operating
system.
This update fixes the
0.0-957.19.1 are impacted.
Felipe
Felipe Knop, k...@us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314
Hallo Son,
you can check access to the NSDs with mmlsdisk -m. This gives you a column
‘IO performed on node’. On an NSD server you should see localhost; on an NSD
client you see the hosting NSD server per device.
Regards Renar
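A sketch of reading that column from a script; the 'mmlsdisk fs -m' output below is simulated with a here-doc, and the column layout is an assumption, so adjust the awk field numbers to your real output:

```shell
# Simulated 'mmlsdisk fs -m' output: disk name, IO performed on node,
# device, availability (column layout is an assumption).
cat <<'EOF' > /tmp/mmlsdisk.sample
nsd1 localhost      /dev/dm-3 up
nsd2 nsdsrv01.site  -         up
EOF
# localhost => this node serves the NSD directly; otherwise we go
# through the named NSD server.
awk '{ if ($2 == "localhost") print $1 ": local (served by this node)";
       else print $1 ": via " $2 }' /tmp/mmlsdisk.sample
```

For the sample above this classifies nsd1 as locally served and nsd2 as reached via nsdsrv01.site.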
> Message: 2
> Date: Tue, 25 Jun 2019 12:10:53 +
> From: "Grunenberg, Renar"
> To: "gpfsug-discuss@spectrumscale.org"
> Subject: Re: [gpfsug-discuss] rescan-scsi-bus.sh and "Local access to
> NSD failed with EIO, s
Hallo All,
can anyone clarify the affected levels: in which PTF is the problem present,
and in which is it not?
The abstract says v5.0.3.0 to 5.0.3.2, but the content says 5.0.3.0 to
5.0.3.3?
https://www-01.ibm.com/support/docview.wss?uid=ibm10960396
Regards Renar
Hallo Juanma,
it is safe; changes only happen if you change the filesystem version with
mmchfs device -V full.
As a tip, you should update to 5.0.3.3; it is a very stable level for us.
Regards Renar
Hallo Farida,
can you check your links? It seems they don’t work for people outside the IBM
network.
Hallo Iban,
first you should check the path to the disk (mmlsnsd -m). It seems to be
broken from the OS view; this should be fixed first. If you see no dev entry,
you have a HW problem. Once this is fixed you can start each disk
individually to see whether anything starts here. On which Scale ver
Hallo All,
the mentioned problem with Protect was this:
https://www.ibm.com/support/pages/node/6415985?myns=s033&mynp=OCSTXKQY&mync=E&cm_sp=s033-_-OCSTXKQY-_-E
Regards Renar
Hallo Iban,
this is already fixed in 5.1.1.1.
Hallo Walter,
we now have a lot of experience converting the storage systems in our backup
environment to RDMA-IB with HDR and EDR connections. Coming from a 16 Gbit FC
infrastructure, we increased our throughput from 7 GB/s to 30 GB/s. The main
reason is the elimination of the driver lay
Hallo Uwe,
is numactl already installed on the affected node? If it is missing, the NUMA
Scale stuff does not work.