Going way off topic... For reasons that are not entirely understood, the Spectrum Scale AFM developers based in India are unable to subscribe to the gpfsug-discuss mailing list; their mail servers and the gpfsug servers don't want to play nice together. So if you want to reach more AFM experts, I recommend going the developerWorks GPFS forum route:
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479&ps=25

yuri

From: Luke Raimbach <[email protected]>
To: gpfsug main discussion list <[email protected]>
Date: 03/02/2016 08:43 AM
Subject: Re: [gpfsug-discuss] AFM over NFS vs GPFS
Sent by: [email protected]

Anybody know the answer?

> Hi All,
>
> We have two clusters and are using AFM between them to compartmentalise
> performance. We have the opportunity to run AFM over the GPFS protocol (over
> IB verbs), which I would imagine gives much greater performance than trying
> to push it over NFS over Ethernet.
>
> We will have a whole raft of instrument ingest filesets in one storage
> cluster which are single-writer caches of the final destination in the
> analytics cluster. My slight concern with running this relationship over
> native GPFS is that if the analytics cluster goes offline (e.g. for
> maintenance, etc.), there is an entry in the manual which says:
>
> "In the case of caches based on native GPFS™ protocol, unavailability of the
> home file system on the cache cluster puts the caches into unmounted state.
> These caches never enter the disconnected state. For AFM filesets that use
> GPFS protocol to connect to the home cluster, if the remote mount becomes
> unresponsive due to issues at the home cluster not related to disconnection
> (such as a deadlock), operations that require remote mount access such as
> revalidation or reading un-cached contents also hang until the remote mount
> becomes available again. One way to continue accessing all cached contents
> without disruption is to temporarily disable all the revalidation intervals
> until the home mount is accessible again."
>
> What I'm unsure of is whether this applies to single-writer caches, as they
> (presumably) never do revalidation.
> We don't want instrument data capture to be interrupted on our ingest
> storage cluster if the analytics cluster goes away.
>
> Is anyone able to clear this up, please?
>
> Cheers,
> Luke.
>
> Luke Raimbach
> Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute,
> Gibbs Building,
> 215 Euston Road,
> London NW1 2BE.
>
> E: [email protected]
> W: www.crick.ac.uk
>
> The Francis Crick Institute Limited is a registered charity in England and
> Wales no. 1140062 and a company registered in England and Wales no. 06885462,
> with its registered office at 215 Euston Road, London NW1 2BE.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
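For reference, the "revalidation intervals" the quoted manual text refers to are the per-fileset AFM refresh intervals, which are set with `mmchfileset`. Below is a hedged sketch of temporarily disabling them during a home-cluster outage; the file system and fileset names are hypothetical, and you should verify the exact parameter spellings and accepted values against your Scale release's `mmchfileset` documentation before running anything:

```shell
# Hypothetical names: adjust to your environment.
FS=ingestfs
FILESET=instrument_cache

# Record the current AFM settings so they can be restored later.
mmlsfileset $FS $FILESET --afm -L

# Disable revalidation while the home (analytics) cluster is unreachable.
# These -p parameters are the AFM refresh intervals; confirm the names
# and the "disable" keyword against your release's man page.
mmchfileset $FS $FILESET -p afmFileLookupRefreshInterval=disable
mmchfileset $FS $FILESET -p afmFileOpenRefreshInterval=disable
mmchfileset $FS $FILESET -p afmDirLookupRefreshInterval=disable
mmchfileset $FS $FILESET -p afmDirOpenRefreshInterval=disable

# Once home is reachable again, restore the values recorded above, e.g.:
# mmchfileset $FS $FILESET -p afmFileLookupRefreshInterval=30
```

Note this addresses reads of already-cached data; it doesn't answer Luke's underlying question of whether a single-writer cache ever revalidates in the first place.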
