Ours are about 50 and 100 km from the home cluster, but it’s over 100 Gb/s fiber.

> On Nov 23, 2020, at 4:54 PM, Andrew Beattie <[email protected]> wrote:
> 
> Rob,
> 
> Talk to Jake Carroll from the University of Queensland; he has given a number 
> of presentations at Scale User Group meetings on UQ’s MeDiCI data fabric, 
> which is based on Spectrum Scale and makes very aggressive use of AFM.
> 
> Their use of AFM is not only on campus, but also to remote storage clusters 
> between 30 km and 1,500 km from their home cluster. They have also tested 
> AFM between Australia, Japan, and the USA.
> 
> Sent from my iPhone
> 
> > On 24 Nov 2020, at 01:20, Robert Horton <[email protected]> wrote:
> > 
> > Hi all,
> > 
> > We're thinking about deploying AFM and would be interested in hearing
> > from anyone who has used it in anger - particularly in independent-writer
> > (IW) mode.
> > 
> > Our scenario is that we have a relatively large but slow cluster (mainly
> > because it is stretched over two sites with a 10G link) for long- and
> > medium-term storage, and a smaller but faster cluster for scratch storage
> > in our HPC system. What we're thinking of doing is using some or all of
> > the scratch capacity as an IW cache of some or all of the main cluster,
> > the idea being to reduce the need for people to move data manually
> > between the two.
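> > 
> > For concreteness, here's a minimal sketch of the sort of setup we're
> > testing (the filesystem, fileset, and path names are made up, and the
> > GPFS-protocol target assumes the home filesystem is remote-mounted on
> > the cache cluster):
> > 
> >   # On the cache (scratch) cluster: create an IW-mode AFM fileset backed
> >   # by a fileset on the home cluster, then link it into the namespace.
> >   mmcrfileset scratchfs projA_cache --inode-space new \
> >       -p afmMode=iw -p afmTarget=gpfs:///gpfs/homefs/projA
> >   mmlinkfileset scratchfs projA_cache -J /gpfs/scratchfs/projA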
> > 
> > It seems to generally work as expected in a small test environment,
> > although we have a few concerns:
> > 
> > - Quota management on the home cluster - we need a way of ensuring
> > people don't write data to the cache which can't be accommodated on
> > home. Probably not insurmountable but needs a bit of thought (there's
> > a sketch of what we're trying just after this list)...
> > 
> > - It seems inodes on the cache only get freed when files are deleted on
> > the cache cluster - not when they are deleted from the home cluster or
> > when their blocks are evicted from the cache. Does this become an issue
> > over time? (Also sketched below.)
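> > 
> > On the quota point, the obvious workaround we're trialling is simply
> > mirroring each home fileset's quota onto the corresponding cache
> > fileset (the limits here are illustrative):
> > 
> >   # On the cache cluster: cap the cache fileset at the home fileset's
> >   # hard limit, so the cache can't accept more than home can hold.
> >   mmsetquota scratchfs:projA_cache --block 10T:10T --files 1M:1M
> > 
> > On the inode point, eviction appears to free data blocks but not
> > inodes, so we've been watching inode usage by hand:
> > 
> >   # Show inode usage for the cache fileset, then evict cached file
> >   # data (this frees blocks, not inodes, as far as we can tell).
> >   mmlsfileset scratchfs projA_cache -i
> >   mmafmctl scratchfs evict -j projA_cache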
> > 
> > If anyone has done anything similar I'd be interested to hear how you
> > got on. It would be interesting to know whether you created a cache
> > fileset for each home fileset or just one for the whole lot, as well as
> > any other pearls of wisdom you may have to offer.
> > 
> > Thanks!
> > Rob
> > 
> > -- 
> > Robert Horton | Research Data Storage Lead
> > The Institute of Cancer Research | 237 Fulham Road | London | SW3 6JB
> > T +44 (0)20 7153 5350 | E [email protected] | W www.icr.ac.uk |
> > Twitter @ICR_London
> > Facebook: www.facebook.com/theinstituteofcancerresearch
> > 

--
#BlackLivesMatter
____
|| \\UTGERS,     |---------------------------*O*---------------------------
||_// the State  |         Ryan Novosielski - [email protected]
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\    of NJ  | Office of Advanced Research Computing - MSB C630, Newark
     `'
