I have caching turned off at the moment because of this (not a big deal -- the 
cache hit rate would be very low regardless).

It's a bit awkward to work around, and this is the only case I can think of 
where Varnish would cause a request that would otherwise succeed to fail.

I'm planning to have multiple caches (a small object cache plus a large object 
cache, for example), but routing responses by size would not be possible when 
the response uses chunked transfer encoding, since there is no Content-Length 
to route on. A rough sketch of what I mean is below.
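
Just to illustrate (untested sketch, and the names are made up: it assumes two 
storages defined on the varnishd command line with something like 
"-s small=malloc,256m -s large=file,/var/cache/large.bin,20g", plus a Varnish 
new enough to have beresp.storage and storage.<name> -- older versions would 
use beresp.storage_hint instead):

    vcl 4.0;
    import std;

    # placeholder backend, only here so the VCL is loadable
    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

    sub vcl_backend_response {
        # Chunked backend responses carry no Content-Length, so they can
        # never match the size test below and would all end up in the
        # same default storage.
        if (std.integer(beresp.http.Content-Length, 0) > 1048576) {
            set beresp.storage = storage.large;   # bodies over ~1 MB
        } else {
            set beresp.storage = storage.small;
        }
    }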

Setting nuke_limit very high would work with chunked transfers, but it also 
makes it possible for a single response to evict everything else in the cache.
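
For reference, the parameter can be changed at startup or at runtime; the 
value here is only an illustration, not a recommendation:

    # at startup (the other options are placeholders)
    varnishd -a :80 -f /etc/varnish/default.vcl -p nuke_limit=1000

    # or on a running instance
    varnishadm param.set nuke_limit 1000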

james

From: varnish-misc 
<varnish-misc-bounces+jmathiesen=tripadvisor....@varnish-cache.org> on behalf 
of Carlos Abalde <[email protected]>
Date: Monday, December 11, 2017 at 4:11 AM
To: Radu Moisa <[email protected]>
Cc: varnish-misc <[email protected]>
Subject: Re: Varnish sending incomplete responses when nuking objects

On 11 Dec 2017, at 07:51, Radu Moisa <[email protected]> wrote:

Hi!

Thanks a lot for the hint!

Just so that I understand it better, nuke_limit is the "Maximum number of 
objects we attempt to nuke in order to make space for a object body."
If I set it to something like 9999999, Varnish will throw out only the number 
of objects needed to make room for the new object, not nuke_limit objects, 
right?

Yes, that's right. While trying to store an object in the cache, if there is 
not enough free space, Varnish will nuke up to nuke_limit objects. This 
happens incrementally while the object is being fetched from the backend, 
stored in the cache, and possibly streamed to one or more clients at the same 
time. If nuke_limit is reached, the object won't be cached and the client 
connections will be closed, so those clients end up with a truncated response.
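
If you want to watch this happening, something like the following should show 
it (the counters are the ones varnishstat reports; the exact FetchError text 
can vary between versions):

    # objects nuked so far, and fetches that failed
    varnishstat -1 -f MAIN.n_lru_nuked -f MAIN.fetch_failed

    # transactions where the fetch gave up, typically with an error along
    # the lines of "Could not get storage"
    varnishlog -g request -q 'FetchError'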

Best,

--
Carlos Abalde
