Do you know whether there's anything that prevents me from just setting the same 
quotas on the IW cache, if there's no way to inherit them? For the case of the 
home directories, it's simple, as they are all 100G with a few exceptions, so a 
default user quota takes care of almost all of it. Luckily, that's where our 
problem is right now, but we have the potential to hit this with other filesets 
later.
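In case it helps, here's a rough sketch of what I mean by mirroring the default user quota on the cache side. This assumes quota enforcement and per-fileset quotas are enabled on the cache filesystem; "cachefs", the "home" fileset name, and the user name are placeholders, not our actual names:

```shell
# Sketch: mirror the home-side default user quota on the IW cache fileset.
# Assumes per-fileset quotas are enabled (mmchfs --perfileset-quota);
# "cachefs" and "home" are placeholder names.

# Enable quota enforcement on the cache filesystem, if not already on
mmchfs cachefs -Q yes
mmquotaon cachefs

# Set a default user quota of 100G on the home-directory fileset,
# matching the default on the home cluster
mmsetquota cachefs:home --default user --block 100G:100G

# Handle the exceptions individually, e.g.:
mmsetquota cachefs:home --user someuser --block 200G:200G

# Verify the result
mmrepquota -u cachefs:home
```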

I’m also wondering if you can confirm that I should /not/ need to be looking at 
people who are writing at the home fileset, where the quotas are set, as a 
problem syncing TO the cache; i.e., they don’t add to the queue. I assume GPFS 
sees the over-quota condition and just denies the write, yes? I originally 
thought the problem was in that direction and was totally perplexed about how it 
could be so stupid. 😅
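For what it's worth, a quick way to sanity-check that home really is denying over-quota writes (rather than queueing them) might be something like the following; "homefs", "homedirs", and the user name are placeholders for our actual names:

```shell
# Check one user's usage against their limits on the home cluster;
# "homefs" and "homedirs" are placeholder names
mmlsquota -u someuser homefs:homedirs

# Report usage for all users in the fileset, to spot anyone
# sitting at their hard limit
mmrepquota -u homefs:homedirs
```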

--
____
|| \\UTGERS,       |---------------------------*O*---------------------------
||_// the State     |         Ryan Novosielski - 
[email protected]<mailto:[email protected]>
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\    of NJ     | Office of Advanced Research Computing - MSB C630, Newark
    `'

On Oct 11, 2019, at 15:56, Simon Thompson <[email protected]> wrote:


Yes.

When we ran AFM, we had exactly this issue. What would happen is that a 
user/fileset quota would be hit and a compute job would continue writing. This 
would eventually fill the AFM queue. If you were lucky, you could stop and 
restart the queue and it would process files from other users, but inevitably 
we'd get back to the same state. The solution was to increase the quota at home 
to clear the queue, kill the user's workload, and then reduce their quota again.
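As a rough sketch, that workaround looked something like the following; the filesystem, fileset, and user names and the sizes are illustrative, and resumeRequeued assumes the stuck messages had been requeued:

```shell
# Temporarily raise the offending user's quota at home so the
# backed-up AFM writes can complete ("homefs"/"homedirs" are placeholders)
mmsetquota homefs:homedirs --user baduser --block 150G:150G

# On the cache cluster, retry any requeued messages and watch the queue drain
mmafmctl cachefs resumeRequeued -j homedirs
mmafmctl cachefs getstate -j homedirs

# Once the queue is clear (and the user's jobs are killed),
# put the quota back to its original value
mmsetquota homefs:homedirs --user baduser --block 100G:100G
```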

At home we had a replication factor of two, so it wasn't straightforward to set 
the same quotas on cache. We could just about fudge it for user home directories, 
but not for most of our project storage, as we use dependent fileset quotas.

We also saw issues with data-in-inode at home, as this doesn't work at the AFM 
cache, so the data goes into a block there instead. I've forgotten the exact 
issues around that now.

So our experience was much like you describe.

Simon

________________________________
From: <[email protected]> on behalf of Ryan Novosielski 
<[email protected]>
Sent: Friday, 11 October 2019, 18:43
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Quotas and AFM

Does anyone have any good resources or experience with quotas and AFM caches? 
Our scenario is that we have an AFM home on one site, an AFM cache on another 
site, and then a client cluster on that remote site that mounts the cache. The 
AFM filesets are IW. One of them contains our home directories, which have a 
quota set on the home side. Quotas were disabled entirely on the cache side (I 
enabled them recently, but did not set them to anything). What I believe we’re 
running into is scary long AFM queues that are caused by people writing an 
amount that is over the home quota to the cache, but the cache is accepting it 
and then failing to sync back to the home because the user is at their hard 
limit. I believe we’re also seeing delays on unaffected users who are not over 
their quota, but that’s harder to tell. The AFM gateways are poorly tuned (or 
not tuned at all), so that is likely interacting. Is there any way to make the quotas 
apparent to the cache cluster too, beyond setting a quota there as well, or do 
I just fundamentally misunderstand this in some other way? We really just want 
the quotas on the home cluster to be enforced everywhere, more or less. Thanks! 
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
