Thank you.

-- 
Stephen



> On Nov 29, 2017, at 2:08 PM, Nikhil Khandelwal <[email protected]> wrote:
> 
> Hi,
> 
> I would like to clarify the migration path from 4.X.X clusters to 5.0.0. All 
> Spectrum Scale clusters that are currently at 4.X.X can migrate to 5.0.0 with 
> no offline data migration and no need to move data. Once these clusters are 
> at 5.0.0, they will benefit from the performance improvements, new features 
> (such as file audit logging), and various other enhancements included in 
> 5.0.0.
> 
> That said, there is one enhancement that will not apply to these clusters: 
> the increased number of sub-blocks per block for small-file allocation. This 
> means that for file systems with a large block size and a lot of small files, 
> the overall space utilization will be the same as it currently is in 4.X.X. 
> Since file systems created at 4.X.X and earlier used a block size chosen with 
> this allocation in mind, there should be very little impact on existing file 
> systems.
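> 
> To make that concrete, here is a minimal back-of-the-envelope sketch in 
> Python. The 4 MiB block size is an illustrative assumption, and the 
> sub-block counts are the commonly cited values rather than numbers read from 
> any particular cluster:
> 
>     # Smallest on-disk allocation = block size / sub-blocks per block.
>     block_size = 4 * 1024 * 1024                # assumed 4 MiB block size
> 
>     # File systems formatted at 4.X.X use a fixed 32 sub-blocks per block.
>     subblocks_4x = 32
>     min_alloc_4x = block_size // subblocks_4x   # 131072 bytes = 128 KiB
> 
>     # A file system newly created at 5.0.0 with the same block size can use
>     # many more sub-blocks per block (for example, 512).
>     subblocks_5x = 512
>     min_alloc_5x = block_size // subblocks_5x   # 8192 bytes = 8 KiB
> 
>     # So a 1 KiB file would occupy 128 KiB on the 4.X.X-format file system
>     # but only 8 KiB on the newly created 5.0.0 file system.
> 
> The point is only the ratio: the smaller minimum allocation comes from the 
> on-disk format, so it applies to file systems created at 5.0.0, not to file 
> systems that are upgraded in place.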
> 
> Outside of that one particular function, the rest of the performance 
> improvements, metadata improvements, updated compatibility, new 
> functionality, and other enhancements will be available immediately once you 
> complete the upgrade to 5.0.0, with no need to reformat, move data, or take 
> your data offline.
> 
> I hope that clarifies things a little and makes the upgrade path clearer.
> 
> Please let me know if there are any other questions or concerns.
> 
> Thank you,
> Nikhil Khandelwal
> Spectrum Scale Development
> Client Adoption

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
