Bryan,

I stole your wording and created an RFE for this:

http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=115012

-Aaron

On 1/8/18 6:48 PM, Bryan Banister wrote:
> Hey Aaron... I have been talking about the same idea here and would say it 
> would be a massive feature and management improvement.
> 
> I would like to have many GPFS storage pools in my file system, each with 
> tuned block size and subblock sizes to suit the application, using 
> independent filesets and the data placement policy to store the data in the 
> right GPFS storage pool.  Migrating the data with the policy engine between 
> these pools as you described would be a lot faster and a lot safer than 
> trying to migrate files individually (like with rsync).
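> 
> For example (just a sketch; the pool and fileset names are placeholders),
> the placement side of that is the usual SET POOL rules installed with
> mmchpolicy:
> 
>    /* route new files in the 'fast_apps' fileset to its tuned pool */
>    RULE 'place_fast' SET POOL 'pool_4m' FOR FILESET ('fast_apps')
>    /* everything else lands in the default data pool */
>    RULE 'default' SET POOL 'pool_1m'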
> 
> NSDs can only belong to one storage pool, so I don't see why the block 
> allocation map would be difficult to manage in this case.
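> 
> (Which pool an NSD belongs to is fixed in the stanza used when the disk is
> added to the filesystem; the NSD, device and server names below are made up:
> 
>    %nsd: nsd=data01 device=/dev/dm-1 servers=nsd01 usage=dataOnly pool=pool_1m
>    %nsd: nsd=data02 device=/dev/dm-2 servers=nsd02 usage=dataOnly pool=pool_4m
> 
> so each pool's allocation map only has to track its own disks.)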
> 
> Cheers,
> -Bryan
> 
> -----Original Message-----
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of Aaron Knister
> Sent: Monday, January 08, 2018 4:57 PM
> To: gpfsug main discussion list <[email protected]>
> Subject: [gpfsug-discuss] Multiple Block Sizes in a Filesystem (Was: Online 
> data migration tool)
> 
> I was thinking some more about the >32 subblock feature in Scale 5.0. As
> mentioned by IBM, the biggest advantage of that feature is on filesystems
> with large blocks (e.g. multiple MB). The majority of our filesystems
> have a block size of 1MB, which got me thinking... wouldn't it be nice if
> they had a larger block size (there seem to be compelling performance
> reasons for large-file I/O to use one)?
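> 
> (For reference, the current geometry is easy to check with mmlsfs, e.g.
> 
>    mmlsfs <fsname> -B     # block size
>    mmlsfs <fsname> -f     # minimum fragment (subblock) size
> 
> where <fsname> is a placeholder for the filesystem device; a pre-5.0
> filesystem with a 1MB block size reports a 32K subblock, i.e. the old
> 1/32 ratio.)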
> 
> I'm wondering what the feasibility is of supporting filesystem pools
> of varying block sizes within a single filesystem. I thought allocation
> maps exist on a per-pool basis, which gives me some hope it's not too hard.
> 
> If one could do this, then yes, you'd still need new hardware to migrate
> to a larger block size (and >32 subblocks), but it could be done as part
> of a refresh cycle *and* (this is the part most important to me) it
> could be driven entirely by the policy engine, which means storage admins
> are largely hands-off and the migration is by and large transparent to
> the end user.
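> 
> Concretely, assuming pools of different block sizes were allowed, I'd
> picture it as an ordinary pool-to-pool migration rule run by
> mmapplypolicy (the pool names, policy file and node class below are
> placeholders):
> 
>    RULE 'refresh' MIGRATE FROM POOL 'pool_1m' TO POOL 'pool_16m'
> 
>    mmapplypolicy <fsname> -P refresh.pol -I yes -N <nodeclass>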
> 
> This doesn't really solve the need for a tool to handle a filesystem
> migration to 4K metadata blocks (although I wonder if something could be
> done to create a system_4k pool containing 4K-aligned metadata NSDs,
> where key data structures get rewritten in a 4K-aligned manner during a
> restripe, but that's really grasping at straws for me).
> 
> -Aaron
> 
> --
> Aaron Knister
> NASA Center for Climate Simulation (Code 606.2)
> Goddard Space Flight Center
> (301) 286-2776

-- 
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
