Hi, Damir, 

I have not done that myself, but a rolling upgrade from 3.5.x to 4.1.x (maybe 
even to 4.2) is supported. 
So, as long as you do not need all 500 nodes of your compute cluster 
permanently active, you could upgrade them in batches without full-blown 
downtime. Nicely orchestrated by some scripts, it could be done quite 
smoothly (depending on the percentage of compute nodes that can go down 
at once and on the wall-clock times of your jobs, this will take 
anywhere between a few hours and many days ...).
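
A minimal sketch of one such batch could look like the following (node 
names, package path and the 4.1 target are placeholders; assumes RPM-based 
nodes and that mmbuildgpl is available to rebuild the portability layer):

  NODES=compute001,compute002,compute003      # one batch, already drained in the scheduler
  mmshutdown -N $NODES                        # stop GPFS on these nodes
  mmdsh -N $NODES "rpm -Uvh /repo/gpfs-4.1/*.rpm"   # update the GPFS packages
  mmdsh -N $NODES "/usr/lpp/mmfs/bin/mmbuildgpl"    # rebuild the portability layer
  mmstartup -N $NODES                         # bring GPFS back up on the batch
  mmgetstate -N $NODES                        # verify the nodes are active before returning them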


 
Mit freundlichen Grüßen / Kind regards

 
Dr. Uwe Falke
 
IT Specialist
High Performance Computing Services / Integrated Technology Services / 
Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefa...@de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: 
Frank Hammer, Thorsten Moehring
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, 
HRB 17122 




From:   Damir Krstic <damir.krs...@gmail.com>
To:     gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   03/16/2016 07:08 PM
Subject:        Re: [gpfsug-discuss] cross-cluster mounting different versions of gpfs
Sent by:        gpfsug-discuss-boun...@spectrumscale.org



Sven,

For us, at least, at this point in time, we have to create the new filesystem 
with the --version flag. The reason is that we can't take downtime to upgrade 
all of our 500+ compute nodes that will cross-cluster mount this new storage. 
We can take downtime in June and get all of the nodes up to GPFS 4.2, 
but we have users today who need to start using the filesystem. 

So at this point in time, we either build ESS with version 4.1 and 
cross-mount its filesystem (also built with the --version flag, I assume) to 
our 3.5 compute cluster, or we proceed with 4.2 on ESS and build the 
filesystems with the --version flag; then in June, when we get all of our 
clients upgraded, we run the release=LATEST command and then mmchfs -V to 
bring the filesystem back up to 4.2 features. 
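
For reference, the June step would then roughly be (gpfs01 is a placeholder 
for our filesystem name):

  mmchconfig release=LATEST      # after all nodes run 4.2 code, raise the cluster release level
  mmchfs gpfs01 -V full          # then enable the 4.2 filesystem features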

It's unfortunate that we are in this bind with the downtime of the compute 
cluster. If we were allowed to upgrade our compute nodes before June, we 
could proceed with the 4.2 build without having to worry about filesystem 
versions. 

Thanks for your reply.

Damir

On Wed, Mar 16, 2016 at 12:18 PM Sven Oehme <oeh...@gmail.com> wrote:
While this is all correct, people should think twice about doing this.
If you create a filesystem with an older version, it might prevent you from 
using features like data-in-inode, encryption, or adding 4K disks to an 
existing filesystem, even if you eventually upgrade to the latest 
code. 

For some customers it's also a good point in time to migrate to larger 
block sizes than they run right now and to migrate the data. I 
have seen customer systems gain factors of performance improvement 
even on existing hardware by creating new filesystems with a larger block 
size and the latest filesystem layout (which they couldn't do before due to 
small-file waste, now partly solved by data-in-inode). While this is heavily 
dependent on workload and environment, it is at least worth thinking about. 
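
As a rough illustration (device name, stanza file, 4 MiB block size and 
4 KiB inode size are just example values):

  mmcrfs gpfs_new -F nsd_stanzas.txt -B 4M -i 4096   # larger block size plus 4K inodes for data-in-inode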

sven 



On Wed, Mar 16, 2016 at 4:20 PM, Marc A Kaplan <makap...@us.ibm.com> 
wrote:
The key point is that you must create the file system so that it "looks" 
like a 3.5 file system.  See mmcrfs ... --version.  Tip: create or find a 
test filesystem back on the 3.5 cluster and look at its version string with 
mmlsfs xxx -V.  Then go to the 4.x system and try to create a file system 
with the same version string....
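
In concrete terms, something like the following (filesystem and stanza file 
names are placeholders; take the exact version string from your own mmlsfs 
output rather than from this example, 13.23 is just what a 3.5 filesystem 
typically reports):

  mmlsfs test35 -V                                  # on the 3.5 cluster: note the file system version
  mmcrfs newfs -F nsd_stanzas.txt --version 13.23   # on the 4.x cluster: create at the 3.5 format level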











_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
