On 30/05/11 23:38, patric.gla...@arz.at wrote:
Hello!

We are using Squid to handle bandwidth regulation for our patch management.

What is our purpose:
- we have limited bandwidth to our central PM server, so each Squid gets a
connection bandwidth of at most 20 KB/s!
   Therefore we are using a class 1 delay pool:
  delay_pools 1
  delay_class 1 1
  delay_parameters 1 20000/20000 20000/20000
  delay_access 1 allow localnet
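
(Aside: delay_parameters values are restore/max in bytes per second, and a
class 1 pool is a single aggregate bucket taking one restore/max pair - the
second pair above is class 2 syntax, not class 1. A minimal sketch that
throttles only traffic headed for the PM server rather than the whole LAN,
with a placeholder hostname:

  # hypothetical ACL for the central PM server - adjust the hostname
  acl pmserver dstdomain pm-server.example.com
  delay_pools 1
  delay_class 1 1
  # one aggregate bucket: roughly 20 KB/s shared by all matched requests
  delay_parameters 1 20000/20000
  delay_access 1 allow pmserver
  delay_access 1 deny all
)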

- the clients in the internal LAN should get the download from Squid as
fast as possible;
   identical requests should be handled as one - first fill the cache, then
serve the download to the clients;
   the cache should be cleaned after 1 year!

hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      525600
refresh_pattern .               0       20%     4320
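
(Aside: refresh_pattern fields are min, percent, and max, with min and max
in minutes, so 525600 minutes is the stated one year. The catch-all rule
above caps freshness at 4320 minutes, i.e. 3 days. A sketch of a catch-all
closer to the 1-year goal - a guess at the intent, not from the original
post:

  # min 0, lm-factor 20%, max 525600 minutes (1 year)
  refresh_pattern . 0 20% 525600
)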

  range_offset_limit -1
  collapsed_forwarding on
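
(Aside: range_offset_limit -1 makes Squid fetch the whole object even when
clients request ranges. To actually keep large update packages, the
object-size ceiling usually needs raising as well; the values below are
illustrative assumptions, not from the original post:

  # keep fetching the whole object even if the client aborts
  quick_abort_min -1 KB
  # the default maximum_object_size is far smaller than typical update files
  maximum_object_size 512 MB
)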

Right now it looks like all clients are sharing the 20 KB/s, and therefore
no one is getting the update.

Yes. Assuming this is 2.7 (where collapsed forwarding works), one client will be downloading and the others waiting for the same URL will be sharing the trickle as it arrives.

- the cache is staying empty, I don't know why

If these are publicly accessible URLs you can plug one of them into redbot.org. It will tell you if there are any caching problems that will hinder Squid.

Otherwise you will have to locate and figure out the headers (both request and reply) manually.
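
One in-Squid way to capture both sets of headers - an option, not the only
way - is to turn on MIME header logging:

  # append the full request and reply headers to each access.log entry
  log_mime_hdrs on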

Not being able to cache these objects could be part of the problem below...

- a trace on the central PM server shows that the Squid servers are
downloading a huge amount of data

Check what that is. You can expect all clients to download many objects in parallel. But they should still be bandwidth-limited by Squid within the delay pool limits.
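
The cache manager can also show the live state of the pools; assuming the
standard manager ACL from the default config, "squidclient mgr:delay" dumps
the current bucket levels. A sketch of the access rules, in case they are
not already present:

  # allow cache manager queries from the Squid box itself
  http_access allow manager localhost
  http_access deny manager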

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1
