Hi All,

We have two clusters and are using AFM between them to compartmentalise 
performance. We have the opportunity to run AFM over the native GPFS protocol 
(over IB verbs), which I imagine gives much greater performance than pushing 
it over NFS over Ethernet.

We will have a whole raft of instrument ingest filesets in one storage cluster 
which are single-writer caches of the final destination in the analytics 
cluster. My slight concern with running this relationship over native GPFS is 
that if the analytics cluster goes offline (e.g. for maintenance, etc.), there 
is an entry in the manual which says:

"In the case of caches based on native GPFS™ protocol, unavailability of the 
home file system on the cache cluster puts the caches into unmounted state. 
These caches never enter the disconnected state. For AFM filesets that use GPFS 
protocol to connect to the home cluster, if the remote mount becomes 
unresponsive due to issues at the home cluster not related to disconnection 
(such as a deadlock), operations that require remote mount access such as 
revalidation or reading un-cached contents also hang until remote mount becomes 
available again. One way to continue accessing all cached contents without 
disruption is to temporarily disable all the revalidation intervals until the 
home mount is accessible again."
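For reference, the revalidation intervals the manual is talking about are per-fileset AFM attributes, so temporarily disabling them would be something along these lines (filesystem and fileset names below are placeholders, and I'm assuming `disable` is accepted as a value here as the docs suggest):

```shell
# Temporarily disable AFM revalidation intervals on an ingest fileset.
# "fs0" and "instrument01" are placeholder names for illustration.
mmchfileset fs0 instrument01 -p afmFileLookupRefreshInterval=disable
mmchfileset fs0 instrument01 -p afmFileOpenRefreshInterval=disable
mmchfileset fs0 instrument01 -p afmDirLookupRefreshInterval=disable
mmchfileset fs0 instrument01 -p afmDirOpenRefreshInterval=disable

# Confirm the current AFM settings on the fileset:
mmlsfileset fs0 instrument01 --afm -L
```

These would presumably need to be set back to their previous values once the home mount is reachable again.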

What I'm unsure of is whether this applies to single-writer caches as they 
(presumably) never do revalidation. We don't want instrument data capture to be 
interrupted on our ingest storage cluster if the analytics cluster goes away.

Is anyone able to clear this up, please?

Cheers,
Luke.

Luke Raimbach
Senior HPC Data and Storage Systems Engineer,
The Francis Crick Institute,
Gibbs Building,
215 Euston Road,
London NW1 2BE.

E: [email protected]
W: www.crick.ac.uk

The Francis Crick Institute Limited is a registered charity in England and 
Wales no. 1140062 and a company registered in England and Wales no. 06885462, 
with its registered office at 215 Euston Road, London NW1 2BE.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
