Just a note on Marc's "mmcrfs -i" suggestion in (y) below.
 
File systems created at GPFS 4.1 or later automatically use a 4K inode size unless a smaller size is explicitly specified with -i.
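 
For example (the device and stanza file names here are only placeholders):
 
    mmcrfs gpfs1 -F nsd.stanza             # 4.1+: defaults to 4K inodes
    mmcrfs gpfs1 -F nsd.stanza -i 1024     # explicitly request 1K inodes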
 
Jamie
 
Jamie Davis
GPFS Functional Verification Test (FVT)
[email protected]
 
 
----- Original message -----
From: Marc A Kaplan/Watson/IBM@IBMUS
Sent by: [email protected]
To: Luke Raimbach <[email protected]>
Cc: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Placement Policy Installation and RDM Considerations
Date: Thu, Jun 18, 2015 11:37 AM
 
(1) There is no secret flag.  I assume that the existing policy is okay but the new one is better. So start using the better one ASAP, but why stop the system if you don't have to?
The non-secret way to quiesce/resume a file system without unmounting it is  mmfsctl <fsname> {suspend | suspend-write | resume}
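For example, if you did want to quiesce writes around a policy change (the file system and policy file names are placeholders):
 
    mmfsctl gpfs1 suspend-write         # hold new writes; reads continue
    mmchpolicy gpfs1 newpolicy.rules    # install the new rules
    mmfsctl gpfs1 resume                # back to normal operation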

(2) The policy rules text is passed as a string through a GPFS RPC protocol (not a standard RPC), and the designer/coder chose 1MB as a safety limit. I think it could be increased, but suppose you did have 4,000 rules at 200 bytes each - you'd be at 800KB, still short of the 1MB limit.

(x) Personally, I wouldn't worry much about setting, say, 10 extended attribute values in each rule.  I'd worry more about the impact of having hundreds of rules.  (A sketch of such a rule follows below.)
(y) When designing/deploying a new GPFS file system, consider explicitly setting the inode size so that all anticipated extended attributes are stored in the inode rather than spilling into other disk blocks; see mmcrfs ... -i InodeSize.  You can build a test file system with just one NSD/LUN and try out your anticipated usage, as sketched below.  Use tsdbfs ... xattr ... to see how the EAs are stored.  Caution: the tsdbfs display commands are harmless, BUT there are some patch and patch-like subcommands that could foul up your file system.
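 
A sketch of (x) - the fileset, pool, and attribute names are invented, and this assumes your GPFS level supports the SetXattr() policy function in placement rules:
 
    RULE 'tagExpA' SET POOL 'data'
      WHERE FILESET_NAME LIKE 'expA%'
        AND SetXattr('user.experiment','A')
        AND SetXattr('user.ingest.site','siteB')
 
SetXattr() evaluates TRUE, so several can be chained with AND in the WHERE clause.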
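 
And a sketch of the (y) test workflow - all names are placeholders, and the exact tsdbfs invocation is deliberately elided, as above:
 
    mmcrfs testfs -F one_nsd.stanza -i 4096    # one NSD, 4K inodes
    mmmount testfs                             # mount on this node
    mmchpolicy testfs placement.rules          # install the placement rules
    touch /gpfs/testfs/probe                   # create a file to be tagged
    tsdbfs testfs                              # display subcommands (e.g. xattr)
                                               # only; avoid patch subcommands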


From:        Luke Raimbach <[email protected]>
 
Hi Marc,
 
Thanks for the pointer to the updated syntax. That indeed looks nicer.
 
(1)    Asynchronous policy propagation sounds good in our scenario. We don’t want to potentially interrupt other running experiments by having to quiesce the filesystem for a new one coming online. It is useful to know that you could quiesce if desired. Presumably this is a secret flag one might pass to mmchpolicy?
 
(2)    I was concerned about the evaluation time if I tried to set all extended attributes at creation time. That’s why I thought about adding a few ‘system’ defined tags which could later be used to link the files to an asynchronously applied policy on the home cluster. I think I calculated around 4,000 rules (dependent on the size of the attribute names and values), which might limit the number of experiments supported on a single ingest file system. However, I can’t envisage we will ever have 4,000 experiments running at once! I was really interested in why the limitation existed from a file-system architecture point of view.
 
Thanks for the responses.
Luke.
 
 
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
