So the way our throttling works is (intentionally) very simple.

(1) When someone mounts an NFS share, we tag the frame with a 32-bit hash of 
the export name they were authorized to mount (a minimal hashing sketch 
follows this list).
(2) io-stats keeps track of the "current rate" of FOPs we're seeing for that 
particular mount, using a sampling of FOPs and a moving average over a short 
period of time.
(3) If the share has violated its allowed rate (which is defined in a config 
file), we tag the FOP as "least-pri". Of course, this assumes that all NFS 
endpoints are receiving roughly the same number of FOPs, since the rate 
defined in the config file is a *per* NFS endpoint number. So if your cluster 
has 10 NFS endpoints, and you've pre-computed that it can do roughly 1000 FOPs 
per second in aggregate, the rate in the config file would be 100.
(4) io-threads then shoves the FOP into the least-pri queue rather than its 
default queue, and the priority is honored all the way down to the bricks 
(this rate check and queue selection are sketched in the second example below).
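
To make step (1) concrete, here is a rough illustration of deriving the 
32-bit tag from the export name. The hash function (FNV-1a) and the names 
used here are placeholders for illustration only, not the actual code:

#include <stdint.h>
#include <stdio.h>

/* Illustrative 32-bit FNV-1a hash of the export name; the real patch may
 * use a different hash, this is just a stand-in. */
static uint32_t
export_name_hash (const char *export_name)
{
        uint32_t hash = 2166136261u;

        for (; *export_name; export_name++) {
                hash ^= (uint32_t)(unsigned char)*export_name;
                hash *= 16777619u;
        }

        return hash;
}

int
main (void)
{
        /* In the real flow this tag would be stored on the frame at mount
         * time and carried with every FOP from that mount. */
        printf ("tag = 0x%08x\n", export_name_hash ("/exports/home"));
        return 0;
}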
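
And a rough sketch of steps (2)-(4): keep a moving average of the FOP rate 
for a mount, compare it against the configured per-endpoint rate, and route 
the FOP to the least-pri queue when the mount is over budget. All names here 
(fop_rate_state, fop_pick_queue, the smoothing factor) are hypothetical; the 
real logic lives in io-stats and io-threads:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

struct fop_rate_state {
        double   avg_fops_per_sec;   /* moving average of the FOP rate     */
        double   alpha;              /* EWMA smoothing factor, e.g. 0.2    */
        uint64_t fops_in_window;     /* FOPs seen in the current 1s window */
        time_t   window_start;
};

/* Step (2): count FOPs and fold each one-second window into the average. */
static void
fop_rate_sample (struct fop_rate_state *st, time_t now)
{
        st->fops_in_window++;

        if (now - st->window_start >= 1) {
                st->avg_fops_per_sec =
                        st->alpha * (double)st->fops_in_window +
                        (1.0 - st->alpha) * st->avg_fops_per_sec;
                st->fops_in_window = 0;
                st->window_start   = now;
        }
}

/* Step (3): the allowed rate is the per-endpoint number from the config
 * file, e.g. 1000 FOPs/s cluster-wide / 10 NFS endpoints = 100. */
static bool
fop_is_least_pri (const struct fop_rate_state *st, double allowed_rate)
{
        return st->avg_fops_per_sec > allowed_rate;
}

/* Step (4): io-threads picks the queue based on the decision above. */
enum fop_queue { QUEUE_DEFAULT, QUEUE_LEAST_PRI };

static enum fop_queue
fop_pick_queue (const struct fop_rate_state *st, double allowed_rate)
{
        return fop_is_least_pri (st, allowed_rate) ? QUEUE_LEAST_PRI
                                                   : QUEUE_DEFAULT;
}

int
main (void)
{
        struct fop_rate_state st = { .alpha = 0.2, .window_start = 0 };
        double allowed_rate = 100.0;

        /* Simulate a mount doing ~200 FOPs/s for a few seconds; the
         * average climbs past the 100 FOP/s budget and later FOPs from
         * that mount get demoted to the least-pri queue. */
        for (time_t sec = 0; sec < 5; sec++)
                for (int i = 0; i < 200; i++)
                        fop_rate_sample (&st, sec);
        fop_rate_sample (&st, 5);   /* closes the last window */

        printf ("avg = %.1f FOPs/s, queue = %s\n", st.avg_fops_per_sec,
                fop_pick_queue (&st, allowed_rate) == QUEUE_LEAST_PRI
                        ? "least-pri" : "default");
        return 0;
}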

The code is actually complete, and I'll put it up for review after we iron out 
a few minor issues.

> On Jan 27, 2016, at 9:48 PM, Ravishankar N <[email protected]> wrote:
> 
> On 01/26/2016 08:41 AM, Richard Wareing wrote:
>> In any event, it might be worth having Shreyas detail his throttling feature 
>> (that can throttle any directory hierarchy no less) to illustrate how a 
>> simpler design can achieve similar results to these more complicated (and it 
>> follows....bug prone) approaches.
>> 
>> Richard
> Hi Shreyas,
> 
> Wondering if you can share the details of the throttling feature you're 
> working on. Even if there's no code, a description of what it is trying to 
> achieve and how will be great.
> 
> Thanks,
> Ravi
