On 16/09/2016 11:45 p.m., Verónica Ovando wrote:
> Hi!
> 
> I am trying to set up delay pools for AD authenticated users.
> 
> I run Squid 3.4.8 in a Debian 8 server.
> 
> 
> I configured some delay pools, but they really have no effect. What I want 
> to do is to provide full bandwidth for some pages and create a delay for ALL 
> the others. This is because I can't restrict internet surfing and I need a 
> solution to control bandwidth usage.
> 
> 
> (Squid is working without problems with AD, so I will omit some directives 
> about authentication.)
> 

delay_access is a 'fast' category access control. It cannot do auth or
group lookups itself. In order to work with those types of ACL it
requires a previous access control (usually http_access) to have checked
them first and recorded the results as part of the transaction state.
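
For example, a sketch only (the http_access line below is a guess at
what your existing auth rules look like; keep your real ones):

  # http_access is a 'slow' check, so it can do the external group
  # lookup and record the result on the transaction.
  acl AD_Standard external Grupos_AD Standard
  http_access allow AD_Standard

  # delay_access is 'fast' and simply re-uses that recorded result.
  delay_access 1 allow AD_Standard
  delay_access 1 deny all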


> 
> For example:
> 
> 
> #****************************ACLs**************************#
> 
> acl AD_Standard external Grupos_AD Standard
> 
> acl redLocal src 90.0.0.0/22
> 
> 
> 
> #****************************Delay Pools**************************#
> 
> delay_pools 3
> 
> 
> delay_class 1 4
> 
> delay_access 1 allow AD_Standard socialNets
> delay_access 1 deny all
> delay_parameters 1 32000/32000 8000/8000 600/64000 1000/10000
> 
> delay_class 2 1
> delay_parameters 2 -1/-1

This is a useless pool. It wastes time calculating bandwidth caps and
delays, only to not do any limiting.

Instead of having an "unlimited" pool, simply deny these transactions
from having one of the other pools applied to them. By definition
anything which does not have a pool assigned is unlimited.


Especially since when a transaction meets the criteria for multiple
pools they will *all* have some effect on that transaction's bandwidth,
which can lead to some weird behaviours and (negative!) available
bandwidth values in the byte accounting.
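
For example (a sketch only, reusing your ACL names; the deny list is
abbreviated): drop pool #2, renumber the old pool #3 to #2, and exclude
the full-speed destinations from it before the general allow:

  delay_pools 2

  # pool 1 (socialNets) stays as you have it.

  delay_class 2 4
  delay_parameters 2 32000/32000 8000/8000 10000/64000 15000/50000
  # full-speed destinations never get assigned a pool at all
  delay_access 2 deny oficiales
  delay_access 2 deny diarios
  delay_access 2 deny bancos
  # ... one deny per whitelist ACL ...
  delay_access 2 allow AD_Standard all
  delay_access 2 deny all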


> delay_access 2 allow redLocal redLocal
> delay_access 2 allow redLocal oficiales
> delay_access 2 allow redLocal diarios
> delay_access 2 allow redLocal bancos
> delay_access 2 allow redLocal tarjCred
> delay_access 2 allow redLocal inmueble
> delay_access 2 allow redLocal mails
> delay_access 2 allow redLocal externos
> delay_access 2 allow redLocal varias
> delay_access 2 allow redLocal servicios
> delay_access 2 deny all
> 
> delay_class 3 4
> delay_parameters 3 32000/32000 8000/8000 10000/64000 15000/50000
> delay_access 3 allow AD_Standard all
> delay_access 3 deny all
> 
> #*******************************************************************************************#
> 
> 
> So, I am creating three delay_pools: the first one provides 10KB for
> each user (for the AD group Standard), no matter how many hosts are
> logged in; the second one provides full usage of the bandwidth for my
> local network to access those pages; the third one provides up to 50KB
> for ALL the websites, with the exception of those defined in delay
> 2. Is this correct?

No. The 'restore' value (the first number of each restore/maximum pair)
is the averaged bytes/sec refill amount. The smallest of those restore
parameters in each pool will be the limiting factor.


A transaction that gets assigned to pool #3 will be able to download at
most (ever) 8000 bytes in one second (due to the 8000/8000 bucket). The
other buckets are all larger, so they will refill faster than they are
allowed to drain.
 ** Also the 8000 bucket is the per-network bucket. So you have 8000
bytes/sec being shared by each /24 subnet of clients.
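
For reference, the four restore/max pairs of a class 4 pool are read in
this order (shown against your pool #3 line):

  # delay_parameters <pool> <aggregate> <network /24> <host> <user>
  delay_parameters 3 32000/32000 8000/8000 10000/64000 15000/50000
  #                  aggregate   per-/24   per-host    per-user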


A transaction that gets assigned to pool #1 will be able to download at
most (ever) 8000 bytes in one second (due to the 8000/8000 bucket).
However, the other buckets refill at slower rates so it gets more complex...

Assuming that pool #1 is completely full to begin with, and only 1
transaction happens:
 For the 1st second of transfer that maximum 8000 B/sec will happen.
 For the 2nd second of transfer the bandwidth will drop to 3000 B/sec
(there will now only be 3000 bytes in the per-user bucket).
 For the 3rd second of transfer the bandwidth will drop to 1000 B/sec
(the refill rate of the per-user bucket).
 Then about 135 seconds later, if the transfer is still going, the 64000
bucket will be drained and will start limiting the transfer to 600 B/sec
(the refill rate of the per-host bucket).
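
Roughly, for that single transaction, the per-host bucket (600/64000)
drains like this:

  after 1st second: 64000 - 8000 + 600 = 56600 bytes left
  after 2nd second: 56600 - 3000 + 600 = 54200 bytes left
  from then on:     drain 1000 B/sec, refill 600 B/sec = 400 B/sec net
  time to empty:    54200 / 400 ~= 135 seconds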

If anything else is going on these buckets may drain faster than
mentioned above, as they may (or may not) get shared by parallel
transactions and clients.
 If the proxy gets loaded you will increasingly not see the initial
'high' speeds, just the 1000 B/sec or 600 B/sec rates being applied most
of the time.

Amos
