[squid-users] Re: Delay Pools
Thanks for the help, but it did not solve the problem. It remains the same. -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Delay-Pools-tp4665836p4665865.html Sent from the Squid - Users mailing list archive at Nabble.com.
Re: [squid-users] Re: Delay Pools
On Wed, May 7, 2014 at 8:31 PM, Tomas Waldow to...@waldow.ws wrote:

Thanks for the help but did not solve the problem. Remains the same.

What I posted works as expected for me, so the question is: what did you do differently?

For starters, what you posted is not the correct format. Where did you get that from?

The actual limit should be in bits/sec, not 100KB. What exactly are you looking at?

Other possibilities include that the squid.conf you are looking at is not the one Squid is actually using.

What troubleshooting steps have you taken so far?
Re: [squid-users] Re: Delay Pools
Take a look here: http://bugs.squid-cache.org/show_bug.cgi?id=3536
[squid-users] Re: Delay Pools
Please, anyone? -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Delay-Pools-tp4665836p4665853.html
[squid-users] Re: delay pools and deny_info error messages
I have tried this several ways, but the user never sees the error page when the limits are reached; they just cannot browse anymore. Any help would be appreciated.

On Wed, Nov 14, 2012 at 9:58 PM, dor eiram dorei...@gmail.com wrote:

How does one get deny_info error messages to work with delay pools, so you can message the user that they have reached their bandwidth limit? I have tried the config below, which stops the user from browsing once the limit is reached, but it never triggers the error message.

acl testuser proxy_auth test
deny_info ERR_NO_BW testuser
delay_pools 1
delay_class 1 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 1 32000/128000
delay_access 1 allow testuser
delay_access 1 deny all
Re: [squid-users] Re: delay pools and deny_info error messages
On 18/11/2012 1:41 a.m., dor eiram wrote:

I have tried this several ways but the user never sees the error page when the limits are reached.. they just cannot browse anymore.. any help would be appreciated

deny_info operates when access is denied... thus its name. Delay pools do not deny anything in particular; they just *delay* traffic I/O to maintain a certain speed profile. You need something like "http_access deny testuser". For presenting it on some dynamic criterion, like bandwidth exceeded, we generally use an external_acl_type helper to calculate the timing and present an ERR response, with squid.conf containing something like this:

external_acl_type foo ...
acl testuser external foo
http_access deny !testuser
deny_info ... testuser

Amos

On Wed, Nov 14, 2012 at 9:58 PM, dor eiram dorei...@gmail.com wrote:

How does one get deny_info error messages to work with delay pools, so you can message the user that they have reached their bandwidth limit? I have tried the config below, which stops the user from browsing once the limit is reached, but it never triggers the error message.

acl testuser proxy_auth test
deny_info ERR_NO_BW testuser
delay_pools 1
delay_class 1 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 1 32000/128000
delay_access 1 allow testuser
delay_access 1 deny all
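Amos's outline can be fleshed out a little. The helper name and path below (bw_check.pl) and the ttl value are illustrative assumptions, not from the thread; the helper itself (which would track per-user byte counts and print OK or ERR) is not shown:

```
# Hypothetical quota helper: prints OK while the user is under quota,
# ERR once the bandwidth limit has been reached (helper not shown).
external_acl_type bwquota ttl=60 %LOGIN /usr/local/squid/bw_check.pl

acl underquota external bwquota

# Deny over-quota users and show them the custom error page.
http_access deny !underquota
deny_info ERR_NO_BW underquota
```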
[squid-users] Re: delay pools and deny_info error messages
How does one get deny_info error messages to work with delay pools, so you can message the user that they have reached their bandwidth limit? I have tried the config below, which stops the user from browsing once the limit is reached, but it never triggers the error message.

acl testuser proxy_auth test
deny_info ERR_NO_BW testuser
delay_pools 1
delay_class 1 1
# 256 Kbit/s fill rate, 1024 Kbit/s reserve
delay_parameters 1 32000/128000
delay_access 1 allow testuser
delay_access 1 deny all
[squid-users] Re: Delay pools bucket refill
On Tue, 23 Dec 2008 11:01:33 -0800 (PST), Chuck Kollars ckolla...@yahoo.com wrote:

... If I make a time-based acl with a delay-pool, does it refill in the time the acl is inactive, or is the amount stopped and continued when the acl starts again?

It hardly matters at all. The bucket will overflow and never grow beyond the second parameter no matter what. So at most you're just asking whether the bucket _starts_out_ full or empty when the ACL starts again. After a few tens of seconds the initial value won't make any difference; you're just talking about a transient condition that might last up to one minute.

You skipped the part where I give the parameters I'd like to use. Say I have a pool acl going from 9:00 till 20:00, with a size of 3 GB and a rate of 1200 B/s, and a client runs the bucket low at 20:00. What will he be able to download at 9:00 the next day? It will take one month to refill the bucket. That is what I want to do: offer a 3 GB download limit each month, and if the bucket is empty, the user will be able to download at 1200 B/s (or has to wait a while). So it does matter in my scenario.

On Wed, 24 Dec 2008 15:47:12 +1300, Amos Jeffries squ...@treenet.co.nz wrote:

Johannes Buchner wrote: Hi! I have a question about delay_pools: If I make a time-based acl with a delay-pool, does it refill in the time the acl is inactive, or is the amount stopped and continued when the acl starts again?

Pools refill at the constant rate unless they are full or reconfigured. Client usage is not taken into consideration on the filling, only on the emptying.

I'm not talking about client usage, just whether the acl is active or not (since it has time constraints).

... if I defined one bucket for 9:00 till 20:00 and another one for 20:00 till 9:00, of different sizes and rates, would they share their amount?

There's really only one bucket per node at a time, no matter what. (It may be possible, with some uses of ACLs, to make the first bucket go away and the second bucket [exactly like it] replace it. In that case I'd reframe your question as: does the existing content of the first bucket become the initial value of the second bucket?) Again, it doesn't much matter. Since every bucket spills over when its defined size is reached, you're again just asking what will happen in the first few seconds: whether the bucket will initially be empty, the same as before, or full. After a few tens of seconds any such initial value will be completely swamped by the ongoing action of the system.

Amos has a different answer on this one. On Wed, 24 Dec 2008 15:47:12 +1300, Amos Jeffries squ...@treenet.co.nz wrote: Correct. No. They are different pools.

Amos, ckollars, thank you for your answers. I'll just try it out and report back.

Regards, Johannes

-- Emails can be altered, forged, and read. Sign or encrypt your mail with GPG. http://web.student.tuwien.ac.at/~e0625457/pgp.html
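For reference, the single time-restricted pool Johannes describes (1200 B/s restore rate, 3 GB bucket, active 9:00 till 20:00) would be written roughly like this; whether the bucket keeps its level while the acl is inactive is precisely the open question of the thread:

```
acl worktime time 09:00-20:00

delay_pools 1
delay_class 1 1
delay_access 1 allow worktime
delay_access 1 deny all
# restore 1200 bytes/s into a bucket of at most 3 GB
delay_parameters 1 1200/3000000000
```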
[squid-users] Re: Delay Pools question
... what the number after the slash in delay_parameters represents. For example, what does this mean: delay_parameters 2 64000/64000 1/1000

You can think of the two numbers separated by a slash as the average rate and the peak burst size. In other words, the first number is how much bandwidth is available on average, and the second number is the largest amount that can sometimes be transferred faster than the average rate. So in the example, the burst size of 64000 being no different from the average rate of 64000 pretty much eliminates bursting at that level. The burst size of 1000 allows larger transfers to go quickly (subject, of course, to higher levels) if nothing else is going on, even though the average rate is the much lower 1.

The numbers are really the two parameters of the 'leaky bucket' algorithm: the replenishment rate and the bucket volume (bytes that exceed the bucket volume spill over the top of the 'leaky bucket' and are lost). But this real meaning tends not to be all that helpful in choosing your configuration values. So use the ideas of average rate and peak burst size to choose your initial configuration values. Then be prepared to 'adjust' them up or down quite a bit (typically by a factor of two or even more) based on your actual experience.

thanks! -Chuck Kollars
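Chuck's average-rate/burst-size description maps onto a small token-bucket simulation (a sketch of the idea only; Squid's actual implementation differs in detail):

```python
# Minimal token-bucket sketch: 'rate' is the restore rate (bytes/s),
# 'size' is the maximum bucket level (peak burst size in bytes).
class Bucket:
    def __init__(self, rate, size):
        self.rate = rate
        self.size = size
        self.level = size  # start with a full bucket

    def tick(self, seconds=1):
        # Refill; anything above 'size' spills over and is lost.
        self.level = min(self.size, self.level + self.rate * seconds)

    def take(self, nbytes):
        # Return how many bytes may actually be sent right now.
        granted = min(nbytes, self.level)
        self.level -= granted
        return granted

# delay_parameters ... 64000/64000: no headroom above the average rate.
b = Bucket(rate=64000, size=64000)
print(b.take(100000))  # only 64000 granted; the rest must wait
b.tick()               # one second later the bucket is full again
print(b.take(32000))   # 32000 granted immediately
```

With a burst size larger than the rate (e.g. 1/1000 in Chuck's example), `take` could grant up to 1000 bytes at once after an idle spell, even though the long-run average stays at the restore rate.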
[squid-users] Re: Delay-pools
Vaughan Roberts wrote:

I am thinking about implementing delay-pools in my Squid transparent proxy on my Linux box. The reason is that my ISP (cable modem) has a monthly limit on the number of bytes I can download. This didn't use to be a problem, but recently my two kids have got laptops from school, and all of a sudden I am hitting the size limit as they plug into my LAN and play games, mp3s, msn, etc.

Squid's delay pools aren't the right tool for this. Squid's delay pools will not count all bytes passed over the Internet connection (i.e. protocol overhead), whereas your cable provider almost certainly does. Furthermore, games and IM very likely aren't being tunneled over HTTP (unless you are somehow forcing them to be), so Squid won't even see that traffic.

I would suggest using a general IP-level accounting tool for tracking total usage (though I don't know of one to suggest off-hand).

Adam
[squid-users] Re: delay pools
azeem ahmad wrote:

I have configured delay pools and it's working well:

delay_parameters 1 -1/-1 1000/1000
delay_access 1 allow all

But it limits users to 1000 B/s even if there is only one user using the internet; all the remaining bandwidth is wasted. How can I make it give a user more bandwidth when other users are not consuming their shares?

The Delay Pools FAQ explains the different delay pool classes: http://www.squid-cache.org/Doc/FAQ/FAQ-19.html#ss19.8

Use a class 1 delay pool instead, which sets a total limit for the pool, which is then shared equally between all connections.

And another thing: I am not getting what delay_initial_bucket_level actually means.

According to the Delay Pools FAQ and the default squid.conf, it is a percentage of how full the delay pool's bucket (count of available bytes) is when Squid starts, is reconfigured, or receives its first connection. This allows users to start surfing immediately after Squid starts, instead of having to wait for the delay pool's bucket to fill. It appears to default to 50%.

Adam
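A class 1 pool along the lines Adam suggests might look like this; the 64000 figure is an illustrative aggregate limit (delay_parameters values are in bytes per second), not a number taken from the thread:

```
delay_pools 1
delay_class 1 1                  # class 1: a single aggregate bucket
delay_parameters 1 64000/64000   # ~64 KB/s total, shared by all users
delay_access 1 allow all
```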
RE: [squid-users] Re: delay pools
From: Adam Aube [EMAIL PROTECTED]
To: squid-users@squid-cache.org
Subject: [squid-users] Re: delay pools
Date: Wed, 20 Apr 2005 09:01:31 -0400

azeem ahmad wrote: I have configured delay pools and it's working well (delay_parameters 1 -1/-1 1000/1000; delay_access 1 allow all), but it limits users to 1000 B/s; even if there is only one user using the internet, all the remaining bandwidth is wasted. How can I make it give the user more bandwidth if other users are not consuming their shares?

The Delay Pools FAQ explains the different delay pool classes: http://www.squid-cache.org/Doc/FAQ/FAQ-19.html#ss19.8

I read this, and also the default squid.conf, but these concepts are really hard for a newbie to grasp, and there are really insufficient examples.

Use a class 1 delay pool instead, which sets a total limit for the pool, which is then shared equally between all connections.

If I configure a class 1 pool and two users start downloading while three others are browsing, will all five get equal bandwidth?

And another thing: I am not getting what delay_initial_bucket_level actually means.

According to the Delay Pools FAQ and the default squid.conf, it is a percentage of how full the delay pool's bucket (count of available bytes) is when Squid starts, is reconfigured, or receives its first connection. This allows users to start surfing immediately after Squid starts, instead of having to wait for the delay pool's bucket to fill. It appears to default to 50%.

Adam

Isn't it possible to create different buckets for all the users, using a class 2 pool, so that each uses a maximum of, say, 3 KB/s, but can use more when bandwidth is free?

Regards, Azeem
RE: [squid-users] Re: Delay Pools for Robots
Adam,

Thank you for replying. Here is my second delay pool attempt. Do you think it will serve the intended purpose: slowing down robots while allowing humans full-speed access? Does using buckets have any detrimental impact on the Squid machine's load? My overall goal is to minimize the robots' impact on machine load on BOTH the Squid server machine and the back-end webservers it is accelerating. Are any special build configuration parameters required to use browser?

# Common browsers
acl humans browser Explorer Netscape Mozilla Firefox Navigator Communicator Opera Safari Shiira Konqueror Amaya AOL Camino Chimera Mosaic OmniWeb wKiosk KidsBrowser Firebird

# Delay Pools
delay_pools 2    # 2 delay pools
delay_class 1 2  # pool 1 is a class 2 pool for humans
delay_class 2 2  # pool 2 is a class 2 pool for robots
delay_access 1 allow humans
delay_access 1 deny all
delay_parameters 1 -1/-1 64000/64000
delay_parameters 2 -1/-1 7000/8000   # Non-humans get this slow bucket

Thank you,
John Kent
Webmaster NRL Monterey
http://www.nrlmry.navy.mil/sat_products.html

-----Original Message-----
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Adam Aube
Sent: Tuesday, December 21, 2004 5:40 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] Re: Delay Pools for Robots

Kent, Mr. John (Contractor) wrote: I have an image-intensive website (satellite weather photos), using Squid as an accelerator. I want to slow down robots and spiders while basically not affecting the human users who access the web pages. Would the following delay_pool parameters be correct for this purpose, or would other values be better?

delay_pools 1    # 1 delay pool
delay_class 1 2  # pool 1 is a class 2 pool
delay_parameters 1 -1/-1 32000/64000

This makes no distinction between robots and normal visitors. For that you can use the browser acl (which matches on the User-Agent string the client sends), then use different delay pools for the common browsers and for robots.

Adam
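One detail worth noting about the attempt above: it defines pool 2 but contains no delay_access rules for it, so no traffic would ever enter the robots' bucket. Presumably (this is an assumption on my part, not stated in the thread) something like the following is also needed:

```
# Route everyone who is not a recognised browser into the slow pool 2.
delay_access 2 deny humans
delay_access 2 allow all
```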
[squid-users] Re: Delay Pools for Robots
Kent, Mr. John (Contractor) wrote:

I have an image-intensive website (satellite weather photos), using Squid as an accelerator. I want to slow down robots and spiders while basically not affecting the human users who access the web pages. Would the following delay_pool parameters be correct for this purpose, or would other values be better?

delay_pools 1    # 1 delay pool
delay_class 1 2  # pool 1 is a class 2 pool
delay_parameters 1 -1/-1 32000/64000

This makes no distinction between robots and normal visitors. For that you can use the browser acl (which matches on the User-Agent string the client sends), then use different delay pools for the common browsers and for robots.

Adam
[squid-users] Re: Delay Pools
Robert Trouchet wrote:

I wish to implement a speed limiter for users in the NT group SlowInternet. All other users have full-speed connections. I do not notice any difference in the speed of the Internet connection using my test:

acl SlowAccount external NT_global_group SlowInternet
delay_pools 2
delay_class 1 1
delay_class 2 1
delay_access 1 allow SlowAccount
delay_access 1 deny all

Insert "delay_access 2 deny SlowAccount" here.

delay_access 2 allow all
delay_parameters 1 200/300
delay_parameters 2 20/25

Squid tests each delay pool's ACLs independently of the results of other delay pools' ACLs. You're allowing all connections into the second delay pool, so this will include users matched by the SlowAccount acl. That is why you need the explicit deny for SlowAccount in delay pool 2.

Adam
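With Adam's correction applied, the whole fragment would read as follows (the only change from Robert's original is the inserted deny line):

```
acl SlowAccount external NT_global_group SlowInternet

delay_pools 2
delay_class 1 1
delay_class 2 1

delay_access 1 allow SlowAccount
delay_access 1 deny all

delay_access 2 deny SlowAccount   # the line Adam says to insert
delay_access 2 allow all

delay_parameters 1 200/300
delay_parameters 2 20/25
```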
Re: [squid-users] Re: delay pools starvation
----- Original Message -----
From: Bar [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 22, 2004 11:31 AM
Subject: Re: [squid-users] Re: delay pools starvation

Yes, but there is no problem using the same list of acls as you use in delay_access to limit the number of connections before accepting the request, and you can define multiple different maxconn acls. The conlimit check needs to go into http_access: http_access deny downloads conlimit

Yes, I've tried this scenario already, but if someone opens a page, his browser opens 1-5 connections. If there is a download link on the page and he clicks it, he is immediately denied the download, because conlimit is already matched by the browsing connections. He has to wait some time for the browser to close its connections, and most people think that something is broken.

Could anybody tell me if it is possible to implement the mentioned parameter (maxconn per delay_pool) in the next release of Squid?

Regards, Bar
Re: [squid-users] Re: delay pools starvation
On Sun, 25 Apr 2004, Bar wrote:

Could anybody tell me if it is possible to implement the mentioned parameter (maxconn per delay_pool) in the next release of Squid?

If I get a patch implementing this feature in Squid-3, I have no problem including it in Squid-3.1.

Regards, Henrik
Re: [squid-users] Re: delay pools starvation
Yes, but there is no problem using the same list of acls as you use in delay_access to limit the number of connections before accepting the request, and you can define multiple different maxconn acls. The conlimit check needs to go into http_access: http_access deny downloads conlimit

Yes, I've tried this scenario already, but if someone opens a page, his browser opens 1-5 connections. If there is a download link on the page and he clicks it, he is immediately denied the download, because conlimit is already matched by the browsing connections. He has to wait some time for the browser to close its connections, and most people think that something is broken.

Regards, Bar
[squid-users] Re: delay pools starvation
Bar wrote:

It would be a great feature if each delay pool had its own maxconn-per-IP-address parameter, solving the accelerator problem and delay pool starvation.

How is the existing maxconn acl insufficient? http://www.squid-cache.org/Doc/FAQ/FAQ-10.html#ss10.22

Adam
Re: [squid-users] Re: delay pools starvation
It would be a great feature if each delay pool had its own maxconn-per-IP-address parameter, solving the accelerator problem and delay pool starvation.

How is the existing maxconn acl insufficient?

The maxconn parameter is matched against all connections from an IP, irrespective of the delay pool number used, or of a request not going through a delay pool at all. I would like to limit only downloads, on a fair-share basis, with a delay pool where everyone has only one connection available or so, but I also want someone who is downloading to be able to browse without delay. If I limit connections with maxconn, it also limits browsing. I've experienced browsing problems when maxconn is too low, because browsers use multiple connections to speed up browsing. For example, I set something like this:

acl conlimit maxconn 1
acl downloads urlpath_regex \.zip$ \.mp3$ ..
delay_pools 2
delay_class 1 1
delay_class 2 1
delay_access 1 allow downloads conlimit
delay_access 1 deny all
delay_access 2 allow downloads
delay_parameters 1 35000/35000
delay_parameters 2 1000/1000

Delay pool 1 will never be matched, because the browser opens several connections; all downloads go through pool 2, and browsing goes through no pool at all. I chose a class 1 pool because I don't want to limit a user to a static limit if there is, say, 35 KB/s available for downloading (I have read a little about dynamic delay pool patches but can't get any). Without the maxconn scenario, all downloads go through pool 1 and there is no need for pool 2, but if someone is using a download accelerator, bandwidth is shared unequally. The worst thing is that if there are several such users, the delay pool is simply starved, and a normal user with only one thread can't download anything.

So a maxconn parameter per delay pool would, if properly configured, solve many problems with fair sharing and download accelerators.

Best regards, Bar
Re: [squid-users] Re: delay pools starvation
On Wed, 21 Apr 2004, Bar wrote:

Parameter maxconn is matched against all connections from an IP, irrespective of the delay pool number used, or of a request not going through a delay pool at all.

Yes, but there is no problem using the same list of acls as you use in delay_access to limit the number of connections before accepting the request, and you can define multiple different maxconn acls.

acl conlimit maxconn 1
acl downloads urlpath_regex \.zip$ \.mp3$ ..
delay_pools 2
delay_class 1 1
delay_class 2 1
delay_access 1 allow downloads conlimit
delay_access 1 deny all
delay_access 2 allow downloads
delay_parameters 1 35000/35000
delay_parameters 2 1000/1000

Delay pool 1 will never be matched, because the browser opens several connections; all downloads go through pool 2.

The conlimit check needs to go into http_access: http_access deny downloads conlimit

Regards, Henrik
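Putting Henrik's suggestion together with Bar's ACLs, the intent is something like the sketch below (dropping pool 2 for brevity is my own simplification; Bar's original kept it):

```
acl conlimit maxconn 1
acl downloads urlpath_regex \.zip$ \.mp3$

# Refuse a second simultaneous download per client ...
http_access deny downloads conlimit

# ... so the download pool only ever carries one download per client.
delay_pools 1
delay_class 1 1
delay_access 1 allow downloads
delay_access 1 deny all
delay_parameters 1 35000/35000
```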
Re: [squid-users] Re: delay pools help
On Fri, Apr 16, 2004 at 11:50:43PM -0400, Adam Aube wrote:

Payal Rathod wrote: One software company some distance from us has agreed to share their bandwidth with us for 2 months. They will give us 128KBps.

Who will enforce this 128kbps limit - the software company, or you?

Luckily, it is the software company which will enforce it, through their router.

I want to allow only a few IPs (192.168.1.1 and 192.168.1.11) full use of the bandwidth; the rest should use only 64 KBps.

If the software company enforces the limit, then it's easy. Just create a single class 1 delay pool with a 64 Kbps limit. Deny delay pool access to the IPs that you want to have full speed, and allow everything else.

Can you give an example here please? I am pretty confused by the usage of delay pools. Thanks a lot for the help.

With warm regards, -Payal
[squid-users] Re: delay pools help
Payal Rathod wrote:

One software company some distance from us has agreed to share their bandwidth with us for 2 months. They will give us 128KBps.

Who will enforce this 128kbps limit - the software company, or you?

I want to allow only a few IPs (192.168.1.1 and 192.168.1.11) full use of the bandwidth; the rest should use only 64 KBps.

If the software company enforces the limit, then it's easy. Just create a single class 1 delay pool with a 64 Kbps limit. Deny delay pool access to the IPs that you want to have full speed, and allow everything else.

If you have to enforce the limit yourself, then it takes a little more work. You will need to use your OS's traffic-shaping capabilities to throttle total bandwidth to 128 Kbps. For Linux, search for the Linux Advanced Routing HOWTO. The BSDs are also capable of doing this, though I don't know enough about them to tell you which tool you'll need.

Adam
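Adam's first suggestion (the provider enforces the 128 limit; Squid only throttles the non-privileged hosts) could be sketched like this. Note that delay_parameters takes bytes per second, so the right figure depends on whether "64 KBps" means kilobytes (64000 B/s) or kilobits (8000 B/s); 8000 is assumed below:

```
# Hosts allowed to use the full line speed
acl fastips src 192.168.1.1 192.168.1.11

delay_pools 1
delay_class 1 1
delay_access 1 deny fastips     # exempt the full-speed hosts
delay_access 1 allow all
delay_parameters 1 8000/8000    # 64 kbit/s = 8000 bytes/s (assumed unit)
```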