Felipe, all,
first, thanks for the clarification, but what was the reason for this logic? If I 
upgrade to version 5 and want to create new file systems while maxblocksize is 
still at 1M, we have to shut down the whole cluster just to change it back to the 
default before we can use the new default block size. I don't understand that 
decision. Our cluster runs 24x7 today; we have no real maintenance window here! 
Any workaround is welcome.

Regards Renar


Renar Grunenberg
Abteilung Informatik – Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon:        09561 96-44110
Telefax:        09561 96-44104
E-Mail: renar.grunenb...@huk-coburg.de
Internet:       www.huk.de
________________________________
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands 
a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav 
Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas (stv.).
________________________________

This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in 
error) please notify the
sender immediately and destroy this information.
Any unauthorized copying, disclosure or distribution of the material in this 
information is strictly forbidden.
________________________________
From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On behalf of Felipe Knop
Sent: Friday, February 9, 2018 14:59
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] V5 Experience -- maxblocksize


All,

Correct. There is no need to change the value of 'maxblocksize' for existing 
clusters that are upgraded to the 5.0.0 level. If a new file system needs to 
be created with a block size that exceeds the current value of maxblocksize, 
then mmchconfig must be issued to increase maxblocksize, which requires the 
entire cluster to be stopped.

For clusters newly created with 5.0.0, the value of maxblocksize is set to 4MB. 
See the references to maxblocksize in the mmchconfig and mmcrfs man pages for 
5.0.0.
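
For illustration, the sequence on an existing cluster might look like the 
following; the 4M target, the file system name (newfs) and the stanza file path 
are only examples, not taken from this thread:

mmlsconfig maxblocksize                  # show the current cluster-wide limit
mmshutdown -a                            # GPFS must be down on every node
mmchconfig maxblocksize=4M               # raise the limit
mmstartup -a                             # bring the cluster back up
mmcrfs newfs -F /tmp/nsd.stanza -B 4M    # a new file system can now use the larger block size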

Felipe

----
Felipe Knop k...@us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314




From: "Uwe Falke" <uwefa...@de.ibm.com<mailto:uwefa...@de.ibm.com>>
To: gpfsug main discussion list 
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date: 02/09/2018 06:54 AM
Subject: Re: [gpfsug-discuss] V5 Experience
Sent by: 
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>

________________________________



I suppose the new maxBlockSize default is different from 1 MB, so your config
parameter was properly carried over from the old release. I'd see no need to
change anything.
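
If in doubt, the value the upgrade recorded can be checked directly; the output
line below is only illustrative:

mmlsconfig maxblocksize
maxblocksize 1M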



Mit freundlichen Grüßen / Kind regards


Dr. Uwe Falke

IT Specialist
High Performance Computing Services / Integrated Technology Services /
Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefa...@de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung:
Thomas Wolter, Sven Schooß
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart,
HRB 17122




From:   "Grunenberg, Renar" 
<renar.grunenb...@huk-coburg.de<mailto:renar.grunenb...@huk-coburg.de>>
To:     "'gpfsug-discuss@spectrumscale.org'"
<gpfsug-discuss@spectrumscale.org<mailto:gpfsug-discuss@spectrumscale.org>>
Date:   02/09/2018 10:16 AM
Subject:        [gpfsug-discuss] V5 Experience
Sent by:        
gpfsug-discuss-boun...@spectrumscale.org<mailto:gpfsug-discuss-boun...@spectrumscale.org>



Hello all,
we updated our test cluster from 4.2.3.6 to V5.0.0.1. So far so good, but
after the mmchconfig release=LATEST I see a new common parameter
'maxblocksize 1M'
(our file systems use this block size).
OK, but if I want to change this parameter, the whole cluster is required to
be down:
 root @sbdl7003(rhel7.4)> mmchconfig maxblocksize=DEFAULT
Verifying GPFS is stopped on all nodes ...
mmchconfig: GPFS is still active on SAPL7012x1.t7.lan.tuhuk.de
mmchconfig: GPFS is still active on SBDL7001x1.t7.lan.tuhuk.de
mmchconfig: GPFS is still active on SAPL7013x1.t7.lan.tuhuk.de
mmchconfig: GPFS is still active on SAPL7009x1.t7.lan.tuhuk.de
mmchconfig: GPFS is still active on SAPL7008x1.t7.lan.tuhuk.de
mmchconfig: GPFS is still active on SBDL7003x1.t7.lan.tuhuk.de
mmchconfig: GPFS is still active on SBDL7004x1.t7.lan.tuhuk.de
mmchconfig: GPFS is still active on SAPL7001x1.t7.lan.tuhuk.de
mmchconfig: Command failed. Examine previous error messages to determine
cause.
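
For reference, the per-node daemon state that mmchconfig is verifying here can
be listed with:

mmgetstate -a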
Can someone explain the behavior here, and also clarify, for an upgrade plan,
what we can do to get to the defaults without a cluster-wide shutdown?
Is this a bug or a feature? ;-)

Regards Renar
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
