On Mon, 10 Jan 2005, Robert Borkowski wrote:

> A wget in a loop retrieving the main page of our site will occasionally take just under 15 minutes to complete the retrieval. Normally it takes 0.02 seconds.
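For reference, the kind of loop in question might look like this (a minimal sketch; the URL and the one-second pause are placeholders):

  # Hypothetical: time each fetch of the site's main page in a loop.
  while true; do
    time wget -q -O /dev/null http://www.example.com/
    sleep 1
  done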

A related note: the default timeout waiting for data from the server is 15 minutes (read_timeout).


> When I look at the access.log for that retrieval and work back to the time the request was placed, I often find that some client out on the Internet had issued a request with a no-cache header, resulting in a TCP_CLIENT_REFRESH_MISS for the main page.
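Such a request can come from any client that forces a refresh; for example, wget's --no-cache option sends a Pragma: no-cache request header (the URL is again a placeholder):

  # Hypothetical: force the cache to refetch the page from the origin.
  wget --no-cache -q -O /dev/null http://www.example.com/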

Which will cause all clients to join this request to your server. If this request takes a long time to complete, then all clients will experience this delay.


> The Age + the time to retrieve the object = the read_timeout in squid.conf. I changed it to 9 minutes on one server and started seeing wget fail after 8+ minutes instead of 14+.
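For reference, that change is a single squid.conf directive (15 minutes is the shipped default):

  # squid.conf: abort a server-side read that stalls for more than 9 minutes
  read_timeout 9 minutes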

Ok, so your server is not finishing the response to Squid properly.

The object is transferred quickly, but the connection stays open until a timer in Squid (read_timeout) expires, and only then does Squid close the connection.

Most likely some bytes are missing at the end of the response.
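One quick way to check is to fetch the page directly from the origin server and compare the Content-Length header with the bytes actually received; a sketch, using the same placeholder hostname as the tcpdump command below:

  # Hypothetical check against the origin server (-S prints the response
  # headers, -t 1 disables retries on a short transfer).
  wget -S -t 1 -O page.html http://ip.of.your.web.server/
  # Compare the Content-Length printed above with the actual size:
  wc -c page.html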

You can try working around it by setting "server_persistent_connections off" in squid.conf, but I would recommend identifying exactly what is going wrong first.
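For reference, the workaround is also a one-line directive:

  # squid.conf: open a fresh server connection for every request
  server_persistent_connections off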

A good step along the way is to save a packet trace of the failing server request:

  # capture up to 1600 bytes per packet (-s 1600) on all interfaces (-i any),
  # writing raw packets to traffic.out, limited to traffic with the web server
  tcpdump -s 1600 -w traffic.out -i any host ip.of.your.web.server

then analyze this with ngrep, ethereal, etc. to try to figure out why the response never finishes properly.
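For example, to read the trace back (the port 80 filter is an assumption about your web server):

  # print HTTP payloads from the trace (the empty regexp matches everything)
  ngrep -q -I traffic.out '' 'tcp port 80'
  # or a packet-by-packet summary using ethereal's console version
  tethereal -r traffic.out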

Regards
Henrik
