Re: [gpfsug-discuss] AFM experiences?

2020-11-24 Thread Luke Raimbach
Hi Rob,

Some things to think about from experiences a year or so ago...

If you intend to perform any HPC workload (writing / updating / deleting
files) inside a cache, then appropriately specified gateway nodes will be
your friend:

1. When creating, updating or deleting files in the cache, each operation
requires acknowledgement from the gateway handling that particular cache,
before returning ACK to the application. This will add a latency overhead
to the workload - if your storage is IB connected to the compute cluster
and using verbsRdmaSend for example, this will increase your happiness.
Connecting low-spec gateway nodes over 10GbE with the expectation that they
will "drain down" over time was a sore learning experience in the early
days of AFM for me.

2. AFM queues can quickly eat up memory. I think around 350 bytes of memory
is consumed for each operation in the AFM queue, so if you have huge file
churn inside a cache then the queue will grow very quickly. If you run out
of memory, the node dies and you enter cache recovery when it comes back up
(or another node takes over). This can end up cycling the node as it tries
to revalidate a cache and keep up with any other queues. Get more memory!
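
For a sense of scale, here is a rough back-of-envelope sketch of that memory
pressure; the per-operation figure is the ~350 bytes quoted above, and the
churn rate and backlog duration are purely illustrative assumptions:

# Rough estimate of AFM gateway memory consumed by queued operations.
# Assumes ~350 bytes per queued operation, as mentioned above -- the real
# figure varies by release and operation type, so treat this as a guide only.

BYTES_PER_QUEUED_OP = 350          # assumption taken from the text above
ops_per_second = 20_000            # hypothetical churn inside the cache
backlog_seconds = 2 * 3600         # hypothetical 2-hour slow-drain / outage

queued_ops = ops_per_second * backlog_seconds
queue_bytes = queued_ops * BYTES_PER_QUEUED_OP

print(f"Queued operations : {queued_ops:,}")
print(f"Approx. queue RAM : {queue_bytes / 2**30:.1f} GiB")
# ~46.9 GiB in this example -- easy to see how an undersized gateway node
# runs out of memory and drops into cache recovery.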

I've not used AFM for a while now, and I think the memory problem described
above now has some mitigation for create / delete cycles (i.e. the create
operation is expunged from the queue instead of both operations being played
back to the home). I expect the IBM experts will tell you more about those
improvements. Also, several smaller caches are better than one large one:
parallel execution of queues helps utilise the available bandwidth, and you
get a better failover spread if you have multiple gateways, for example.

Independent Writer mode comes with some small danger (user error or
impatience mainly) inasmuch as whoever updates a file last will win; e.g.
home user A writes a file, then cache user B updates the file after reading
it and tells user A the update is complete, when really the gateway queue
is long and the change is waiting to go back home. User A uses the file
expecting the changes are made, then updates it with some results.
Meanwhile the AFM queue drains down and user B's change arrives after user
A has completed their changes. The interim version of the file user B
modified will persist at home and user A's latest changes are lost. Some
careful thought about workflow (or good user training about eventual
consistency) will save some potential misery on this front.

Hope this helps,
Luke




On Mon, 23 Nov 2020 at 15:19, Robert Horton  wrote:

> Hi all,
>
> We're thinking about deploying AFM and would be interested in hearing
> from anyone who has used it in anger - particularly independent writer.
>
> Our scenario is we have a relatively large but slow (mainly because it
> is stretched over two sites with a 10G link) cluster for long/medium-
> term storage and a smaller but faster cluster for scratch storage in
> our HPC system. What we're thinking of doing is using some/all of the
> scratch capacity as an IW cache of some/all of the main cluster, the
> idea being to reduce the need for people to manually move data between
> the two.
>
> It seems to generally work as expected in a small test environment,
> although we have a few concerns:
>
> - Quota management on the home cluster - we need a way of ensuring
> people don't write data to the cache which can't be accommodated on
> home. Probably not insurmountable but needs a bit of thought...
>
> - It seems inodes on the cache only get freed when they are deleted on
> the cache cluster - not if they get deleted from the home cluster or
> when the blocks are evicted from the cache. Does this become an issue
> in time?
>
> If anyone has done anything similar I'd be interested to hear how you
> got on. It would be interesting to know if you created a cache fileset
> for each home fileset or just one for the whole lot, as well as any
> other pearls of wisdom you may have to offer.
>
> Thanks!
> Rob
>
> --
> Robert Horton | Research Data Storage Lead
> The Institute of Cancer Research | 237 Fulham Road | London | SW3 6JB
> T +44 (0)20 7153 5350 | E robert.hor...@icr.ac.uk | W www.icr.ac.uk |
> Twitter @ICR_London
> Facebook: www.facebook.com/theinstituteofcancerresearch
>
> The Institute of Cancer Research: Royal Cancer Hospital, a charitable
> Company Limited by Guarantee, Registered in England under Company No.
> 534147 with its Registered Office at 123 Old Brompton Road, London SW7 3RP.
>
> This e-mail message is confidential and for use by the addressee only.  If
> the message is received by anyone other than the addressee, please return
> the message to the sender by replying to it and then delete the message
> from your computer and network.
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
___

Re: [gpfsug-discuss] AFM experiences?

2020-11-23 Thread Venkateswara R Puvvada
Dean,

This is one of the corner cases associated with sparse files at the home
cluster. You could try the latest versions of Scale; AFM independent-writer
mode has had many performance and functional improvements in newer
releases.

~Venkat (vpuvv...@in.ibm.com)



From:   "Flanders, Dean" 
To: gpfsug main discussion list 
Date:   11/23/2020 11:44 PM
Subject:[EXTERNAL] Re: [gpfsug-discuss] AFM experiences?
Sent by:gpfsug-discuss-boun...@spectrumscale.org




Re: [gpfsug-discuss] AFM experiences?

2020-11-23 Thread Venkateswara R Puvvada
>- Quota management on the home cluster - we need a way of ensuring
>people don't write data to the cache which can't be accommodated on
>home. Probably not insurmountable but needs a bit of thought...

You could set the same quotas on both the cache and home clusters. AFM does
not support replication of filesystem metadata such as quotas, fileset
configuration, etc.
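
As a very rough illustration of keeping the two sides in lock-step (not an
official procedure - the fileset name, limits, admin hosts and the reliance
on mmsetquota's Device:Fileset form are assumptions for the sketch):

# Minimal sketch: apply identical fileset quotas on home and cache clusters.
# Assumes passwordless ssh to an admin node on each cluster; the fileset
# name, limits, host names and device names are purely illustrative.
import subprocess

FILESET = "projects_alpha"            # hypothetical fileset
BLOCK = "9T:10T"                      # soft:hard block limits
FILES = "9000000:10000000"            # soft:hard inode limits

CLUSTERS = {
    "home":  ("home-admin1",  "homefs"),     # (admin node, device name)
    "cache": ("cache-admin1", "scratchfs"),
}

for name, (node, device) in CLUSTERS.items():
    cmd = ["ssh", node, "mmsetquota", f"{device}:{FILESET}",
           "--block", BLOCK, "--files", FILES]
    print(f"[{name}] {' '.join(cmd)}")
    subprocess.run(cmd, check=True)   # keep both sides identical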

>- It seems inodes on the cache only get freed when they are deleted on
>the cache cluster - not if they get deleted from the home cluster or
>when the blocks are evicted from the cache. Does this become an issue
>in time?

AFM periodically revalidates with the home cluster. If the files/directories
were already deleted at the home cluster, AFM moves them to the .ptrash
directory at the cache cluster during revalidation. These files can be
removed manually by the user or by the auto-eviction process. If the .ptrash
directory is not cleaned up in time, it can result in quota issues at the
cache cluster.
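
A minimal sketch of the kind of periodic clean-up that suggests; the .ptrash
path shown and the seven-day age threshold are illustrative assumptions
rather than recommendations:

# Minimal sketch: purge old entries from an AFM cache fileset's .ptrash
# directory so revalidation-deleted files don't eat the cache quota.
# The path and the 7-day threshold are illustrative assumptions.
import os
import time

PTRASH = "/gpfs/scratchfs/projects_alpha/.ptrash"   # hypothetical location
MAX_AGE = 7 * 24 * 3600                              # seconds

cutoff = time.time() - MAX_AGE
for dirpath, dirnames, filenames in os.walk(PTRASH, topdown=False):
    for fname in filenames:
        path = os.path.join(dirpath, fname)
        try:
            if os.lstat(path).st_mtime < cutoff:
                os.unlink(path)
                print(f"removed {path}")
        except OSError as err:
            print(f"skipped {path}: {err}")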

~Venkat (vpuvv...@in.ibm.com)



From:   Robert Horton 
To: "gpfsug-discuss@spectrumscale.org" 

Date:   11/23/2020 08:51 PM
Subject:[EXTERNAL] [gpfsug-discuss] AFM experiences?
Sent by:gpfsug-discuss-boun...@spectrumscale.org





Re: [gpfsug-discuss] AFM experiences?

2020-11-23 Thread Ryan Novosielski
Ours are about 50 and 100 km from the home cluster, but it’s over 100Gb fiber.


--
#BlackLivesMatter

|| \\UTGERS, |---*O*---
||_// the State  | Ryan Novosielski - novos...@rutgers.edu
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\of NJ  | Office of Advanced Research Computing - MSB C630, Newark
 `'



Re: [gpfsug-discuss] AFM experiences?

2020-11-23 Thread Andrew Beattie

Rob,

Talk to Jake Carroll from the University of Queensland; he has done a
number of presentations at Scale User Groups on UQ’s MeDiCI data fabric,
which is based on Spectrum Scale and makes very aggressive use of AFM.

Their use of AFM is not only on campus, but also to remote storage clusters
between 30 km and 1500 km away from their home cluster. They have also
tested AFM between Australia, Japan, and the USA.

Sent from my iPhone



Re: [gpfsug-discuss] AFM experiences?

2020-11-23 Thread Flanders, Dean
Hello Rob,

We looked at AFM years ago for DR, but after reading the bug reports we
avoided it, and we have also seen a case where it had to be removed from one
customer, so we have kept things simple. Looking again a few years later,
there are still issues - for example "IBM Spectrum Scale Active File
Management (AFM) issues which may result in undetected data corruption"
<https://www.ibm.com/support/pages/ibm-spectrum-scale-active-file-management-afm-issues-which-may-result-undetected-data-corruption>
- and that was just my first Google hit. We have kept it simple and use a
parallel rsync process driven by the policy engine, which can hit wire speed
(GB/s) copying millions of small files while keeping the sites isolated. I
am not saying AFM is bad, just that it needs an appropriate risk/reward
ratio to implement, as it increases overall complexity.
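
Dean doesn't give implementation details, but a minimal sketch of that style
of approach could look like the following, assuming a file list has already
been generated (for example by an mmapplypolicy LIST rule) and that the
paths, destination host, list file name and worker count are illustrative
assumptions:

# Minimal sketch: fan a policy-engine-generated file list out to parallel
# rsync workers. Assumes 'filelist.txt' holds one path per line, relative
# to SRC; paths, destination and worker count are illustrative only.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SRC = "/gpfs/homefs/projects_alpha/"          # hypothetical source fileset
DST = "drsite:/gpfs/drfs/projects_alpha/"     # hypothetical destination
WORKERS = 8

with open("filelist.txt") as fh:
    paths = [line.strip() for line in fh if line.strip()]

# Round-robin the paths into one chunk per worker.
chunks = [paths[i::WORKERS] for i in range(WORKERS)]

def run_rsync(idx_and_chunk):
    idx, chunk = idx_and_chunk
    listfile = f"chunk{idx}.txt"
    with open(listfile, "w") as fh:
        fh.write("\n".join(chunk) + "\n")
    # --files-from copies exactly the listed paths, preserving layout under DST.
    return subprocess.run(
        ["rsync", "-a", "--files-from", listfile, SRC, DST]).returncode

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(run_rsync, enumerate(chunks)))

print("all workers finished OK" if not any(results)
      else f"non-zero rsync exit codes: {results}")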

Kind regards,

Dean


Re: [gpfsug-discuss] AFM experiences?

2020-11-23 Thread Ryan Novosielski
We use it similarly to how you describe it. We now run 5.0.4.1 on the client side
(I mean actual client nodes, not the home or cache clusters). Before that, we 
had reliability problems (failure to cache libraries of programs that were 
executing, etc.). The storage clusters in our case are 5.0.3-2.3.

We also got bit by the quotas thing. You have to set them the same on both 
sides, or you will have problems. It seems a little silly that they are not 
kept in sync by GPFS, but that’s how it is. If memory serves, the result looked 
like an AFM failure (queue not being cleared), but it turned out to be that the 
files just could not be written at the home cluster because the user was over 
quota there. I also think I’ve seen load average increase due to this sort of 
thing, but I may be mixing that up with another problem scenario.

We monitor via Nagios which I believe monitors using mmafmctl commands. Really 
can’t think of a single time, apart from the other day, where the queue backed 
up. The instance the other day only lasted a few minutes (if you suddenly 
create many small files, like installing new software, it may not catch up 
instantly).
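
For what it's worth, a minimal sketch of the kind of check being described,
assuming the queue length can be scraped from mmafmctl getstate output (the
output layout, device name and thresholds here are assumptions, not a
documented interface):

#!/usr/bin/env python3
# Minimal Nagios-style sketch: alert when AFM gateway queues back up.
# Assumes 'mmafmctl <device> getstate' prints one row per cache fileset with
# the queue length as the second-to-last column; the device name and the
# warning/critical thresholds are illustrative assumptions.
import subprocess
import sys

DEVICE = "scratchfs"          # hypothetical cache filesystem
WARN, CRIT = 10_000, 100_000  # queued operations

try:
    out = subprocess.run(["mmafmctl", DEVICE, "getstate"],
                         capture_output=True, text=True, check=True).stdout
except (OSError, subprocess.CalledProcessError) as err:
    print(f"UNKNOWN: mmafmctl failed: {err}")
    sys.exit(3)

worst = 0
for line in out.splitlines():
    fields = line.split()
    if len(fields) < 2 or not fields[-2].isdigit():
        continue                       # skip header, blank or inactive rows
    worst = max(worst, int(fields[-2]))

if worst >= CRIT:
    print(f"CRITICAL: AFM queue length {worst}")
    sys.exit(2)
if worst >= WARN:
    print(f"WARNING: AFM queue length {worst}")
    sys.exit(1)
print(f"OK: longest AFM queue {worst}")
sys.exit(0)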

--
#BlackLivesMatter

|| \\UTGERS,   |---*O*---
||_// the State | Ryan Novosielski - 
novos...@rutgers.edu
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\of NJ | Office of Advanced Research Computing - MSB C630, Newark
`'

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss