Thanks - that did the trick. BTW, I replied a few days ago, but my post
didn't make it through (I used my email client instead of replying here).
Thanks again,
M
Vary in Squid is currently treated as an exact-match text key. So when asked
for a "gzip,deflate" variant, Squid does not have enough smarts to serve the
cached "deflate" variant. So it MISSes and fetches a fresh one, which may or
may not be gzipped, but is served gzipped to the client anyway.
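The exact-match behaviour described above can be sketched roughly like this (a minimal illustration, not Squid's actual code): the variant key is the raw Accept-Encoding header text, with no parsing or subset logic, so "gzip,deflate" and "deflate" are different keys even though a client sending the former could accept the latter.

```python
# Minimal sketch of an exact-match Vary key (not Squid's real code).
cache = {}

def variant_key(url, accept_encoding):
    # Exact-match text key: no header parsing, no q-values, no subset logic.
    return (url, accept_encoding)

def lookup(url, accept_encoding):
    return cache.get(variant_key(url, accept_encoding))

def store(url, accept_encoding, body):
    cache[variant_key(url, accept_encoding)] = body

store("http://example.com/", "deflate", b"<deflate variant>")

print(lookup("http://example.com/", "deflate"))       # HIT
print(lookup("http://example.com/", "gzip,deflate"))  # MISS -> None
```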
Right on,
Sat 2011-01-22 at 12:16 -0500, James P. Ashton wrote:
Does anyone have any thoughts on this? I am not fond of the idea that both
squid instances stopped responding to SSL requests at the same time.
Is your OpenSSL up to date?
Regards
Henrik
Thu 2011-01-20 at 02:50 -0500, Max Feil wrote:
Thanks. I am looking at the squid access.log and the delay is caused by
a GET which for some reason does not result in a response from the
server. Either there is no response or Squid is missing the response.
After a 120 second time-out the
Fri 2011-01-21 at 05:45 +1300, Amos Jeffries wrote:
Empty? No. If they have no Content-Length indicated, they have to be
assumed to be infinite-length transfers. The HTTP specs require this 411
reply message.
Not quite. Requests without an entity are always headers-only. The
infinite
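The distinction being made here can be sketched briefly: per RFC 2616 section 4.3, a request message carries a body only when a Content-Length or Transfer-Encoding header is present; otherwise it is headers-only. (Responses, by contrast, may be delimited by connection close, which is where the "infinite length" assumption applies.)

```python
# Minimal sketch of RFC 2616 s4.3: a request has a message body only if
# it signals one via Content-Length or Transfer-Encoding.
def request_has_body(headers):
    names = {name.lower() for name in headers}
    return "content-length" in names or "transfer-encoding" in names

print(request_has_body({"Host": "example.com"}))                          # False
print(request_has_body({"Host": "example.com", "Content-Length": "12"}))  # True
```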
Fri 2011-01-21 at 11:31 +0100, Ralf Hildebrandt wrote:
1294685115.286 0 10.43.120.109 NONE/501 4145 POST https://enis.eurotransplant.nl/donor-webservice/dpa?WDSL - HIER_NONE/- text/html
So, I enabled SSL using --enable-ssl and now I'm getting:
1295605546.943313
Sat 2011-01-22 at 23:04 +1300, Amos Jeffries wrote:
Squid caches only one of N variants, so the expected behaviour is that
each new variant is a MISS but becomes a HIT on repeated duplicate
requests until a new variant pushes it out of cache.
No, it caches all N variants seen, if the origin
Sun 2011-01-23 at 14:14 -0800, Jonathan Wolfe wrote:
I'm using the value "asdf" as a bogus Accept-Encoding value that
shouldn't trigger gzipping, and "gzip" for when I actually want to
invoke the module. To be clear, the web server isn't zipping at all.
Is the web server responding with
In my test, yes, the web server was responding with Vary:
Accept-Encoding. But that's only because of the behavior below, where
once a non-gzipped version is cached (i.e. a request comes in first
with no Accept-Encoding header at all) all subsequent requests get the
unzipped version, even if
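A test like the one described can be reproduced with curl. These commands are a sketch; the URL is a placeholder, and the header values mirror those used above (note that `-H 'Accept-Encoding:'` with an empty value tells curl to omit the header entirely):

```shell
# Hypothetical URL; replace with the object being tested through Squid.
URL=http://your.squid.host/some/page

# Bogus Accept-Encoding value that should not trigger gzipping:
curl -s -D - -o /dev/null -H 'Accept-Encoding: asdf' "$URL"

# Value intended to invoke the gzip module:
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' "$URL"

# No Accept-Encoding header at all (the case described above, where the
# unzipped variant gets cached first):
curl -s -D - -o /dev/null -H 'Accept-Encoding:' "$URL"
```

Comparing the Content-Encoding and X-Cache headers across the three responses, and varying the order of the requests, should show which variant was cached first and which requests HIT on it.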
Already did use Wireshark. Here is some more info:
If you look through the traces you'll notice that at some point Squid sends a
TCP [FIN, ACK] right in the middle of a connection for seemingly no reason.
(Attempting to close the connection) The server ignores this and sends the rest
of the
On 24/01/11 13:43, Henrik Nordström wrote:
Sat 2011-01-22 at 23:04 +1300, Amos Jeffries wrote:
Squid caches only one of N variants, so the expected behaviour is that
each new variant is a MISS but becomes a HIT on repeated duplicate
requests until a new variant pushes it out of cache.
No
On 24/01/2011 06:35, Max Feil wrote:
Already did use Wireshark. Here is some more info:
If you look through the traces you'll notice that at some point Squid
sends a TCP [FIN, ACK] right in the middle of a connection for seemingly
no reason. (Attempting to close the connection) The server
Hello Everyone,
My Squid configuration is pretty much default, except that I have added
some refresh_patterns, written myself and collected from the internet, in
order to get more hits. The server is Squid 3.1.10 (on port 3128,
intercept/transparent) running on Ubuntu 10.10, and has an
Intel(R)
Thanks for your prompt reply. Well, I am not a very experienced admin in Linux
or in terms of Squid, and therefore I haven't installed wireshark/tshark/tcpdump
on the Squid box yet, but I will install it now to dig in deeper.
My previous version of Squid was 2.7 STABLE, installed from aptitude, which was
Some results of TCPDUMP in -vv mode.
13:12:04.191180 IP (tos 0x0, ttl 127, id 2750, offset 0, flags [DF], proto TCP
(6), length 40)
172.16.80.2.1155 > 77.67.29.42.www: Flags [.], cksum 0x6de4 (correct), seq
1127903567, ack 4192021369, win 64700, length 0
13:12:04.192822 IP (tos 0x0, ttl 64,
Well, I have found the problem...
It's not your proxy...
Your proxy is doing fine, since it's identifying file MIME types and stuff
like that.
Have you ever heard of a ZIP BOMB?
Well, it's not that, but it's something like it.
The site itself is working fine and the page is getting to your computer in