Just as an FYI, I ran a test today of squid's efficacy with
the ssl-bumping feature. This is a preliminary result with
little to no review of the logs -- I'm just going by the access log.
I was interested because I've been running squid at home for over
10 years, trying to squeeze speed out of a home connection using
a largish cache (at least for 1-2 users) of around 80G used
on a dedicated 128G partition.
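For reference, a cache that size comes down to a single cache_dir line in squid.conf; the path and the 16/256 L1/L2 directory counts below are just the stock defaults, not necessarily my setup:

```
# ~80 GB disk cache (the size argument is in MB) on a dedicated partition;
# /var/spool/squid and 16/256 are squid's default path and directory counts
cache_dir ufs /var/spool/squid 80000 16 256
```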
Over the years, I've gotten a vague feeling for what to expect and
have generally gotten around a 15-30% cache hit ratio.
This dropped as google pushed https: I noticed the web slowing
as my local cache hit rate dropped and encryption overhead increased.
This was somewhat unscientific, but not entirely, in that it
does reflect a real part of my traffic.
I opened a bunch (30+) of news articles from news.google.com with
my new ssl-bumping enabled and decided I wanted to get an idea of
the cache-hit differences. So I changed the proxy to go through a
non-bumping port and used the browser's saved session
to quit the browser and restart all the tabs -- twice:
once for the https test, and a second time for a repeat test.
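For anyone who wants to try the same thing, the bumping side is roughly the standard peek-and-bump recipe from the squid documentation; the port number, CA paths, and helper location below are placeholders, not my exact config:

```
# listening port that bumps TLS; the CA cert path is a placeholder
http_port 3128 ssl-bump \
    cert=/etc/squid/bumpCA.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

# helper that mints per-host certificates on the fly
# (named security_file_certgen in squid 4+, ssl_crtd in 3.5)
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

# peek at the TLS client hello, then bump everything
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

A second http_port line without the ssl-bump options gives you the non-bumping port to flip the browser to for comparison runs.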
Initial opening of the sites w/ssl-bump got 730/3365 hits/requests.
The reload over plain https-CONNECT tunnels showed 40/1588 hits/requests.
And the 2nd reload of the same sites got 1268/2263 hits/requests.
cold-view w/SSL-bump: 22% hit
no SSL-bump: 3% hit
repeat w/SSL-bump: 56% hit
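Counts like those fall straight out of the access log; a quick sketch of the tally, assuming squid's default native access.log format, where the 4th field is the result code (e.g. TCP_HIT/200):

```python
# Count squid cache hits vs. total requests in a native-format access.log.
# Assumes the default native log format: field 4 is "RESULTCODE/HTTPSTATUS".

def hit_ratio(lines):
    hits = total = 0
    for line in lines:
        fields = line.split()
        if len(fields) < 4:
            continue
        code = fields[3].split("/")[0]
        total += 1
        # any *_HIT code (TCP_HIT, TCP_MEM_HIT, TCP_REFRESH_HIT, ...) counts
        if code.endswith("HIT"):
            hits += 1
    return hits, total

# two made-up sample lines in the native format
sample = [
    "158.0 12 10.0.0.2 TCP_HIT/200 4312 GET http://example.com/a - HIER_NONE/- text/html",
    "158.1 95 10.0.0.2 TCP_MISS/200 10240 GET http://example.com/b - HIER_DIRECT/93.184.216.34 text/css",
]
hits, total = hit_ratio(sample)
print(hits, total)  # 1 2
```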
Simple inter/intra-site redundancy alone resulted in a 22% cache-hit
ratio, while the "semi-real" case of bringing up the same content
a second time gave a 56% hit rate.
I'll have to see how hard it is to get byte counts out of my
logs for more detail, but since many of these requests are small,
a large part of the cost is per-request delay rather than bytes.
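Getting byte counts should just mean summing the size field of the same log, split by hit vs. miss; a sketch, assuming the default native format where field 4 is the result code and field 5 is the reply size in bytes:

```python
# Sum bytes delivered from cache hits vs. all requests, from native
# access.log lines. Assumes field 4 is "RESULTCODE/STATUS" and field 5
# is the reply size in bytes.

def hit_bytes(lines):
    hit_b = total_b = 0
    for line in lines:
        f = line.split()
        if len(f) < 5:
            continue
        size = int(f[4])
        total_b += size
        if f[3].split("/")[0].endswith("HIT"):
            hit_b += size
    return hit_b, total_b

# two made-up sample lines in the native format
sample = [
    "158.0 12 10.0.0.2 TCP_HIT/200 4312 GET http://example.com/a - HIER_NONE/- text/html",
    "158.1 95 10.0.0.2 TCP_MISS/200 10240 GET http://example.com/b - HIER_DIRECT/93.184.216.34 text/css",
]
print(hit_bytes(sample))  # (4312, 14552)
```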
FWIW, I was using a FF-clone (64-bit Palemoon) with no local disk cache
(it does have a memory cache, but that would have been cleared
between runs when I restarted the browser).
Initial results look good for using squid to subvert google's
campaign to keep your web-traffic content hidden.