Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Yugendra Guvvala
Hi, I am trying to understand the technical challenges involved in migrating from GPFS 4.3 to GPFS 5.0. We currently run GPFS 4.3 and I was all excited to see the 5.0 release and hear about some promising features, but I am not sure about the complexity involved in migrating. Thanks, Yugi

Re: [gpfsug-discuss] 5.0 features?

2017-11-29 Thread Marc A Kaplan
Which features of 5.0 require a not-in-place upgrade of a file system? Where has this information been published?

Re: [gpfsug-discuss] 5.0 features?

2017-11-29 Thread Simon Thompson (IT Research Support)
You can upgrade in place. I think what people are referring to is likely things like the new sub-block sizing for **new** filesystems. Simon
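
A quick way to see which format a given file system is at; this is a hedged sketch (the device name gpfs0 is illustrative), using mmlsfs options as documented:

   # Show the current file system format version (an in-place upgraded
   # file system keeps its original subblock geometry even after the
   # format level is raised)
   mmlsfs gpfs0 -V

   # Show the minimum fragment (subblock) size; on a pre-5.0-format
   # file system this is blocksize/32, and only a newly created 5.0
   # file system gets the finer-grained subblocks
   mmlsfs gpfs0 -f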

Re: [gpfsug-discuss] 5.0 features?

2017-11-29 Thread Fosburgh,Jonathan
I haven’t even heard that it’s been released or announced. I’ve requested a roadmap discussion.

Re: [gpfsug-discuss] SOBAR restore with new blocksize and/or inodesize

2017-11-29 Thread Marc A Kaplan
This redbook http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/3af3af29ce1f19cf86256c7100727a9f/335d8a48048ea78d85258059006dad33/$FILE/SOBAR_Migration_SpectrumScale_v1.0.pdf has these and other hints: -B blocksize, should match the file system block size of the source system, but can also be
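
For illustration, a hedged sketch of creating the SOBAR restore target with a matching block size and inode size per the redbook hint above (the device name, stanza file, sizes and mount point are made up):

   # Create the target file system with the same block size (-B) and
   # inode size (-i) as the source file system
   mmcrfs restorefs -F nsd_stanza.txt -B 1M -i 4096 -T /gpfs/restorefs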

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Jonathan Buzzard
On Wed, 2017-11-29 at 11:38 -0500, scott wrote: > Question: Who at IBM is going to reach out to ESPN - a 24/7 online user - with >15 petabytes of content? > Asking customers to copy, reformat, copy back will just cause IBM to have to support the older version for a longer period of time

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Jonathan Buzzard
On Wed, 2017-11-29 at 16:51, Buterbaugh, Kevin L wrote: [SNIP] > And now GPFS (and it will always be GPFS … it will never be Spectrum Scale) Splitter, it's Tiger Shark forever ;-) JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator,

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Jonathan Buzzard
On Wed, 2017-11-29 at 11:00 -0500, Yugendra Guvvala wrote: > Hi, I am trying to understand the technical challenges to migrate to GPFS 5.0 from GPFS 4.3. We currently run GPFS 4.3 and I was all excited to see the 5.0 release and hear about some promising features available. But not sure

Re: [gpfsug-discuss] 5.0 features?

2017-11-29 Thread Buterbaugh, Kevin L
Simon is correct … I’d love to be able to support a larger block size for my users who have sane workflows while still not wasting a ton of space for the biomedical folks…. ;-) A question … will the new, much improved, much faster mmrestripefs that was touted at SC17 require a filesystem that
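
To put rough numbers on the space concern, simple arithmetic rather than measurements, using the commonly cited 5.0 subblock size for a 4 MiB block:

   pre-5.0 format:        4 MiB block / 32 subblocks = 128 KiB minimum allocation
                          one 4 KiB file -> 128 KiB on disk
   5.0 format (new FS):   8 KiB subblocks at a 4 MiB block size
                          one 4 KiB file -> 8 KiB on disk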

Re: [gpfsug-discuss] 5.0 features? -- mmrestripefs -b

2017-11-29 Thread Felipe Knop
Kevin, The improved rebalance function (mmrestripefs -b) only depends on the cluster level being (at least) 5.0.0, and will work with older file system formats as well. This particular improvement did not require a change in the format/structure of the file system. Felipe Felipe Knop
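
A hedged sketch of checking that prerequisite and kicking off a rebalance (the file system name is illustrative, and restricting the work with -N is optional):

   # Confirm the cluster-wide release level has been raised to 5.0.0
   mmlsconfig minReleaseLevel

   # Rebalance data across all disks, restricting the work to the NSD
   # server node class so client nodes are not impacted
   mmrestripefs gpfs0 -b -N nsdNodes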

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Nikhil Khandelwal
Hi, I would like to clarify the migration path to 5.0.0 from 4.X.X clusters. For all Spectrum Scale clusters that are currently at 4.X.X, it is possible to migrate to 5.0.0 with no offline data migration and no need to move data. Once these clusters are at 5.0.0, they will benefit from the
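
For what it's worth, a hedged sketch of what that in-place path typically looks like once all nodes are running the 5.0.0 code (the device name is illustrative; these are the standard release- and format-raising steps):

   # Raise the cluster configuration level to the newly installed release
   mmchconfig release=LATEST

   # Raise the file system format so new 5.0 features can be enabled;
   # existing data stays in place, but note this step is not reversible
   # and nodes on older releases can no longer mount the file system
   mmchfs gpfs0 -V full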

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Stephen Ulmer
Thank you. -- Stephen

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Sobey, Richard A
Could we utilise free capacity in the existing filesystem and empty NSDs, create a new FS and AFM migrate data in stages? Terribly long winded and fraught with danger and peril... do not pass go... ah, answered my own question. Richard

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread scott
Question: Who at IBM is going to reach out to ESPN - a 24/7 online user - with >15 petabytes of content? Asking customers to copy, reformat, copy back will just cause IBM to have to support the older version for a longer period of time. Just my $.03 (adjusted for inflation)

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Buterbaugh, Kevin L
Hi All, Well, actually a year ago we started the process of doing pretty much what Richard describes below … the exception being that we rsync’d data over to the new filesystem group by group. It was no fun but it worked. And now GPFS (and it will always be GPFS … it will never be Spectrum
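
For anyone considering the same route, a hedged example of the kind of per-group copy involved (paths are made up; the flags are plain rsync, preserving hard links, numeric ownership, extended attributes and, where applicable, POSIX ACLs):

   rsync -aHAX --numeric-ids /gpfs/oldfs/groupA/ /gpfs/newfs/groupA/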

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Aaron Knister
Thanks, Nikhil. Most of that was consistent with my understanding; however, I was under the impression that the >32 subblocks code is required to achieve the touted 50k file creates/second that Sven has talked about a bunch of times:

Re: [gpfsug-discuss] 5.0 features?

2017-11-29 Thread Andrew Beattie
I think the other feature that you will need to be clear about is the change to the buffering architecture; from memory, this requires more than just a file copy to move the data to the new file system. It would need input from Sven or Development, I think, to clarify what the requirements are to take

Re: [gpfsug-discuss] Online data migration tool

2017-11-29 Thread Nikhil Khandelwal
Hi Aaron, By large block size we are primarily talking about block sizes of 4 MB and greater. You are correct; in my previous message I neglected to mention the file create performance for small files on these larger block sizes due to the subblock change. In addition to the added space
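
A hedged sketch of how one could confirm the geometry on a newly created 5.0-format file system (the device name, stanza file and mount point are illustrative):

   # Create a new file system with a 4 MiB block size
   mmcrfs gpfs5 -F nsd_stanza.txt -B 4M -T /gpfs/gpfs5

   # On a 5.0-format file system this should report an 8 KiB fragment
   # (subblock) size rather than the 128 KiB a 32-subblock format gives
   mmlsfs gpfs5 -f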