Hello!

A follow-up to my previous question about automatically sending files to Amazon 
S3 as they arrive in GPFS.

I have created an interface script to manage Amazon S3 storage as an external 
pool, written a migration policy that pre-migrates all files to that external 
pool, and set it as the default policy for the file system.

All good so far, but the problem I'm now facing is: Only some of the cluster 
nodes have access to Amazon due to network constraints.  I read the statement 
"The mmapplypolicy command invokes the external pool script on all nodes in the 
cluster that have installed the script in its designated location."[1] and 
thought, 'Great! I'll only install the script on nodes that have access to 
Amazon', but that appears not to work for a placement/default policy: the 
script runs on precisely no nodes.

I assumed this happened because running the script on a node without Amazon 
access resulted in a horrible error (e.g. file not found), so I edited my 
script to return a non-zero response when run on a node that isn't in my 
cloudNode class, then installed the script everywhere. But this appears to 
have had no effect whatsoever.
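
To show what I mean, here is a sketch of the guard I added at the top of the 
interface script (node names, the hard-coded member list, and the class name 
"cloudNode" are my own examples; in the real script the member list is 
queried from the cluster rather than hard-coded):

```shell
#!/bin/sh
# Guard for the external-pool interface script: decline with a non-zero
# exit status unless the local node is in the "cloudNode" node class.

# True if the first argument appears in the comma-separated list given
# as the second argument.
node_in_class() {
    case ",$2," in
        *",$1,"*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Hard-coded example list; the real script would parse this out of the
# output of "mmlsnodeclass cloudNode".
members="cloud01,cloud02"
me="cloud01"   # stand-in for: me=$(hostname -s)

if node_in_class "$me" "$members"; then
    echo "node $me is Amazon-facing: handling request"
    # ... real S3 transfer logic goes here ...
else
    echo "node $me is not in cloudNode: declining" >&2
    exit 1   # the non-zero response mentioned above
fi
```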

The only thing I can think of now is to control where a migration policy runs 
based on node class. But I don't know how to do that, whether it's possible, 
or where the documentation might be, as I can't find any. Any assistance would 
once again be greatly appreciated.
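
In case it helps frame the question, what I'm imagining is something along 
these lines (file system, rules file and class names are mine, and I haven't 
been able to confirm that -N accepts a node class in this context):

```shell
mmapplypolicy gpfs1 -P premigrate.rules -N cloudNode
```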


[1]=https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1adv_impstorepool.htm
 

Regards,

Peter Chase
GPCS Team
Met Office  FitzRoy Road  Exeter  Devon  EX1 3PB  United Kingdom
Email: [email protected] Website: www.metoffice.gov.uk 

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
