On 2/15/26 2:38 AM, Amos Jeffries wrote:

On 15/02/2026 08:45, Brad House wrote:
I've got a squid deployment where serving from cache can be slower than an uncached download.  I'm seeing speeds of around 50MB/s when serving from cache, which is much slower than anticipated.  In fact, when hitting fast upstream servers, serving a non-cached asset is faster (even though it's still hitting squid to fetch it).

I'm thinking there's got to be something wrong with my squid configuration.  I'm currently running on Rocky Linux 10 with Squid 6.10-6.


When your network I/O is faster than disk I/O, it is best not to store at all.

Like so:
  acl fast_servers dst ...
  store_miss deny fast_servers
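
For instance, a minimal sketch using a domain-based ACL instead (mirrors.edge.kernel.org is just an illustrative placeholder; substitute your actual fast upstreams):
  acl fast_servers dstdomain mirrors.edge.kernel.org
  store_miss deny fast_servers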


Our disk I/O is a few orders of magnitude faster than our internet speed, so caching should be able to serve much, much faster.   I provided benchmarks of our disk I/O measured with fio on the VM.  We also want to be nice to the upstream providers we are fetching from.


The VM I'm using currently has 4 cores, 16G RAM and 100G of usable space.

You have configured your Squid to use 318 GB of cache. That will not fit within 100 GB.


Sorry, I typo'd that number, it was supposed to say 400G:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda4       399G   14G  385G   4% /


We have a large on-site build system that spins up runners for GitHub Actions, and they're constantly fetching large assets from the internet for each build, hence our desire for a caching proxy.  We'd rather not switch to Apache Traffic Server, as that doesn't have SSL bump capability (we haven't yet enabled that capability in squid, however).  Hopefully there's a simple configuration I'm missing.

In this case I think you want to prevent small objects from being stored in the disk cache. They can benefit from the fast network speed and should not inflate your bandwidth use much.

  cache_dir ... min-size=102400
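
For instance, a minimal sketch (the aufs type, path, and size here are placeholders for your actual cache_dir line; min-size is in bytes, so 102400 skips anything under 100 KB):
  # store only objects of at least 100 KB on disk; smaller objects are
  # fetched from the network, which is fast enough for them
  cache_dir aufs /var/spool/squid 300000 16 256 min-size=102400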


We'd like to cache even small objects due to rate limiting we can hit at the remote sites.  We're spawning thousands of GHA runs a day, a large number of which fetch the same files (and we can't just change the fetch location, as we are building OSS packages which fetch these files as part of their build system).



Just for testing, I was pulling a large image via http that is below my max object size: http://mirrors.edge.kernel.org/ubuntu-releases/20.04.6/ubuntu-20.04.6-live-server-amd64.iso

Configuration below:

acl public src 0.0.0.0/0

The above is the same as:
  acl public src ipv4


Thanks



acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

http_access allow public

This is bad. You have an Open Proxy.

Even though "public" does not include all the IPv6 range, it does include every possible IPv4 machine on the Internet.


This is on a private internal network segment that only certain systems can access (like GHA runners).  We use our firewalls to control access.



http_access deny to_localhost
http_access deny to_linklocal
http_access deny all

A series of deny followed by "deny all" is only useful if you are supplying custom error pages.
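
For example, a sketch pairing a deny rule with a custom page via deny_info (ERR_CUSTOM_LOCAL is a hypothetical template name you would create under your errors directory):
  # serve a custom error template when this deny rule matches
  deny_info ERR_CUSTOM_LOCAL to_localhost
  http_access deny to_localhost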


Ok, these entries just came from the stock config installed with the package from Rocky.


FYI, all these ...

refresh_pattern deb$      129600 100% 129600
refresh_pattern udeb$     129600 100% 129600
refresh_pattern tar.gz$   129600 100% 129600
refresh_pattern tar.xz$   129600 100% 129600
refresh_pattern tar.bz2$  129600 100% 129600
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern changelogs.ubuntu.com\/.*  0  1% 1


... are only useful when the repository service does not obey HTTP/1.1 properly. Otherwise they are detrimental.

A good example is those package tar/deb files. In a repository, packages contain their version details in the filename and URL, so once created they remain unchanged forever.

Whereas the above rules force Squid to stop using any cached object and replace it once these files reach 90 days old (129600 minutes = 90 days).
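
If your mirrors do send correct Cache-Control/Expires headers, the stock default pattern alone may be all you need (a sketch; verify against your actual upstreams first):
  # let origin-supplied headers drive freshness; this is the squid.conf
  # default catch-all pattern
  refresh_pattern . 0 20% 4320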


That would obviously assume the upstream server is sending proper cache-control headers; I haven't verified whether that's the case.  I took these rules from the squid-deb-proxy package, so from what you're saying it sounds like that package shouldn't exist anymore.

Are there any particular things I should try?  One user said to use rock ... are there particular settings?  Are there any known performance issues with the 6.10 release I'm using?
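
Would something along these lines be the right direction?  (A rough sketch on my part; the paths, sizes, and the 32 KB split point are guesses, not tested values.)
  # small objects in a rock store, large ones in aufs (single-worker setup)
  cache_dir rock /var/spool/squid/rock 50000 max-size=32768
  cache_dir aufs /var/spool/squid/aufs 250000 16 256 min-size=32769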

Thanks.

-Brad

_______________________________________________
squid-users mailing list
[email protected]
https://lists.squid-cache.org/listinfo/squid-users