(15.47.03) mnot: hno: my only remaining concern is the deep ctx's - unfortunately I'm having a real problem reproducing them (although they're unfortunately common)
Would help a bit if you could make it coredump with a binary having debug symbols... I suspect it's related to the LRU problem in the sense that they are both triggered by the same family of inter-object dependencies (collapsed forwarding, async refresh etc). The higher the load, the deeper such dependency chains become until things time out or resolve in some other manner. collapsed_forwarding is most likely the bigger culprit in creating these long chains (a small diagnostic fragment is sketched below).

(15.49.47) mnot: the other issue I see is TCP_MEM_HITs taking a few hundred milliseconds, even on a lightly loaded box, with responses smaller than the write buffer. (and no, hno, they're not collapsed ;)

If there is Vary+ETag involved then those MAY be partial cache misses. There is a slight grey zone there: an If-None-Match query for finding which object to respond with still results in TCP_(MEM_)HIT if the object indicated by the 304 is a hit (a rough trace is sketched below). The delays could also be due to ACL lookups or URL rewriters.

(15.54.48) mnot: hno: is running a proxy and accelerator on different ports in the same squid process no longer supported? I forget where that ended up
(15.54.58) mnot: yeah, that's definitely a limitation

It is. The reason is that we can't tell for sure whether a request is accelerated or proxied when following RFC 2616. We can guess based on whether we receive a URL-path or an absolute URL, but HTTP/1.1 requires servers to accept requests with an absolute URL (both request forms are illustrated below). In most setups this is no problem, but many are used to being able to use the same port for both proxying and WPAD/PAC servicing...

Regards
Henrik
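
On the dependency-chain theory above: one quick way to test it on the affected box is to switch off request collapsing and make sure an abort actually leaves a core file. A minimal squid.conf sketch, assuming a release that has the collapsed_forwarding directive and a binary built with debug symbols; the coredump path is only a placeholder:

    # Diagnostic only: don't collapse concurrent misses onto a single
    # origin fetch, so the long inter-object dependency chains can't form.
    collapsed_forwarding off

    # Where the process should leave a core file if it aborts; the
    # directory must be writable by Squid's effective user, and the
    # shell's core size limit must allow dumps (ulimit -c unlimited).
    coredump_dir /var/spool/squid

Running with that for a while should show whether the deep ctx aborts track collapsed forwarding or come from somewhere else.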
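
To make the Vary+ETag grey zone above more concrete, here is a rough sketch, as I read the description, of how such a "hit" can still involve an origin round trip; the URL, ETags and headers are made up for illustration:

    Client -> Squid:
        GET /page HTTP/1.1
        Host: www.example.com
        Accept-Encoding: gzip

    Squid -> origin, asking which stored variant applies:
        GET /page HTTP/1.1
        Host: www.example.com
        If-None-Match: "v1-gzip", "v1-plain"

    Origin -> Squid:
        HTTP/1.1 304 Not Modified
        ETag: "v1-gzip"

    Squid -> client: the stored "v1-gzip" variant, logged as TCP_MEM_HIT,
    but only after the extra round trip and lookups above.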
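
And to illustrate the accelerator/proxy ambiguity above, these are the two request forms in question (the hostname is a placeholder):

    Accelerator-style request (URL-path only):
        GET /index.html HTTP/1.1
        Host: www.example.com

    Proxy-style request (absolute URL):
        GET http://www.example.com/index.html HTTP/1.1

    HTTP/1.1 requires servers to accept the second form as well, so the
    request line alone cannot tell Squid for sure which mode the client meant.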
