Oooooooh, thanks Dridi for checking, I was wrong. -- Guillaume Quintard
On Thu, Apr 13, 2017 at 11:27 AM, Dridi Boukelmoune <[email protected]> wrote:

> On Thu, Apr 13, 2017 at 8:44 AM, Guillaume Quintard
> <[email protected]> wrote:
> > You are right, subsequent requests will just be passed to the backend,
> > so no gzip manipulation/processing will occur.
>
> I had no idea [1] so I wrote a test case [2] to clear up my doubts:
>
>     varnishtest "uncacheable gzip"
>
>     server s1 {
>         rxreq
>         txresp -bodylen 100
>     } -start
>
>     varnish v1 -vcl+backend {
>         sub vcl_backend_response {
>             set beresp.do_gzip = true;
>             set beresp.uncacheable = true;
>             return (deliver);
>         }
>     } -start
>
>     client c1 {
>         txreq
>         rxresp
>     } -run
>
>     varnish v1 -expect n_gzip == 1
>     varnish v1 -expect n_gunzip == 1
>
> Despite the fact that the response is not cached, it is actually
> gzipped, because in all cases backend responses are buffered through
> storage (in this case Transient). It means that for clients that don't
> advertise gzip support, like in this example, on passed transactions
> you will effectively waste cycles doing both an on-the-fly gzip and a
> gunzip for a single client transaction.
>
> That being said, it might be worth it if you have a high rate of
> non-cacheable but compressible contents: less transient storage
> consumption. I'd say it's a trade-off between CPU and memory; depending
> on which one you wish to preserve, you can decide how to go about it.
>
> You can even do on-the-fly gzip on passed transactions only if the
> client supports it and the backend doesn't, so that you save storage
> and bandwidth, at the expense of CPU time you'd have consumed on the
> client side if you wanted to save bandwidth anyway.
>
> The only caveat I see is the handling of the built-in VCL:
>
> > I am wondering if it is safe to do this even on responses that may
> > subsequently get set as uncacheable by later code?
>
> If you let your VCL flow through the built-in rules, then you have no
> way to cancel the do_gzip if the response is marked as uncacheable.
>
> Dridi
>
> [1] well I had an idea that turned out to be correct, but wasn't sure
> [2] tested only with 5.0, but I'm convinced it is stable behavior for 4.0+
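For reference, the "gzip on pass only when the client supports it" idea discussed above could be sketched roughly like this in VCL. This is an untested sketch, not something from the thread: the `X-Orig-AE` header name and the backend address are made up, and copying Accept-Encoding in vcl_recv is needed because Varnish sets its own Accept-Encoding on the backend fetch.

```vcl
vcl 4.0;

# Placeholder backend, adjust to your setup.
backend default { .host = "127.0.0.1"; .port = "8080"; }

sub vcl_recv {
    # Keep a copy of the client's Accept-Encoding; X-Orig-AE is an
    # arbitrary, made-up header name for this sketch.
    set req.http.X-Orig-AE = req.http.Accept-Encoding;
}

sub vcl_backend_response {
    # Gzip on the fly only when the response will not be cached, the
    # backend did not compress it, and the client can decompress it.
    if (beresp.uncacheable &&
        beresp.http.Content-Encoding !~ "gzip" &&
        bereq.http.X-Orig-AE ~ "gzip") {
        set beresp.do_gzip = true;
    }
}
```

Note that this only sees transactions that are already a pass when vcl_backend_response runs; it does not cover responses the built-in VCL marks uncacheable afterwards, which is exactly the caveat raised in the thread.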
_______________________________________________
varnish-misc mailing list
[email protected]
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
