Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
Thanks Dridi and Guillaume for the clarification!

On Thu, Jun 15, 2023, 18:30 Guillaume Quintard wrote:
> Adding to what Dridi said, and just to be clear: the "cleaning" of those
> well-known headers only occurs when the req object is copied into a bereq,
> so there's nothing preventing you from stashing the "cache-control" header
> into "x-cache-control" during vcl_recv, and then copying it back to
> "cache-control" during vcl_backend_fetch.

___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
Adding to what Dridi said, and just to be clear: the "cleaning" of those well-known headers only occurs when the req object is copied into a bereq, so there's nothing preventing you from stashing the "cache-control" header into "x-cache-control" during vcl_recv, and then copying it back to "cache-control" during vcl_backend_fetch.
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
On Thu, Jun 15, 2023 at 9:33 AM Uday Kumar wrote:
>> There is this in the code:
>>
>>     H("Cache-Control", H_Cache_Control, F)    // 2616 14.9
>>
>> We remove this header when we create a normal fetch task, hence
>> the F flag. There's a reference to RFC 2616 section 14.9, but this RFC
>> has been updated by newer documents.
>
> Where can I find details about the above code? Could not find it in RFC 2616
> section 14.9!

This is from include/tbl/http_headers.h in the Varnish code base. I'm not going to break it down in detail, but that's basically where we declare well-known headers and when to strip them when we perform a req->bereq or beresp->resp transition. In this case, we strip the cache-control header from the initial bereq when it is a cache miss.

Dridi
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
> There is this in the code:
>
>     H("Cache-Control", H_Cache_Control, F)    // 2616 14.9
>
> We remove this header when we create a normal fetch task, hence
> the F flag. There's a reference to RFC 2616 section 14.9, but this RFC
> has been updated by newer documents.

Where can I find details about the above code? Could not find it in RFC 2616 section 14.9!

Thanks & Regards,
Uday Kumar
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
On Wed, Jun 14, 2023 at 9:02 AM Uday Kumar wrote:
>
> Hi Guillaume,
>
> Thanks for the response.
>
>> Can you provide us with a log of the transaction please?
>
> I have sent a request to Varnish which contains the Cache-Control: no-cache
> header. We have made sure the request with the cache-control header is a
> MISS with a check in the vcl_recv subroutine, so it's a MISS as expected.
>
> The problem as mentioned before:
> The Cache-Control: no-cache header is not being passed to the backend even
> though it's a MISS.

There is this in the code:

    H("Cache-Control", H_Cache_Control, F)    // 2616 14.9

We remove this header when we create a normal fetch task, hence the F flag. There's a reference to RFC 2616 section 14.9, but this RFC has been updated by newer documents. Also, that section is fairly long and I don't have time to dissect it, but I suspect the RFC reference is only here to point to the Cache-Control definition, not the F flag.

I suspect the rationale for the F flag is that on cache misses we act as a generic client, not just on behalf of the client that triggered the cache miss.

If you want pass-like behavior on a cache miss, you need to implement it in VCL:

- store cache-control in a different header in vcl_recv
- restore cache-control in vcl_backend_fetch if applicable

Please note that you open yourself to malicious clients forcing no-cache on your origin server upon cache misses.

Come to think of it, we should probably give Pragma both P and F flags.

Dridi
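The two steps above can be sketched in VCL roughly as follows. The stash header name (X-Stashed-CC) is illustrative, not prescribed by the thread, and given the no-cache amplification concern mentioned you would likely want to gate the restore on some client-trust check appropriate to your setup:

```vcl
sub vcl_recv {
    # Stash the client's Cache-Control before the built-in F-flag
    # filtering strips it from the bereq on a normal fetch.
    if (req.http.Cache-Control) {
        set req.http.X-Stashed-CC = req.http.Cache-Control;
    }
}

sub vcl_backend_fetch {
    # Restore the stashed value so the backend (e.g. Tomcat) sees it,
    # then drop the temporary header.
    if (bereq.http.X-Stashed-CC) {
        set bereq.http.Cache-Control = bereq.http.X-Stashed-CC;
        unset bereq.http.X-Stashed-CC;
    }
}
```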
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
Hi Guillaume,

Thanks for the response.

> Can you provide us with a log of the transaction please?

I have sent a request to Varnish which contains the Cache-Control: no-cache header. We have made sure the request with the cache-control header is a MISS with a check in the vcl_recv subroutine, so it's a MISS as expected.

The problem as mentioned before:
The Cache-Control: no-cache header is not being passed to the backend even though it's a MISS.

Please find below the transaction log of Varnish.

*   << Request >> 2293779
-   Begin          req 2293778 rxreq
-   Timestamp      Start: 1686730406.463326 0.00 0.00
-   Timestamp      Req: 1686730406.463326 0.00 0.00
-   ReqStart       IPAddress 61101
-   ReqMethod      GET
-   ReqURL         someURL
-   ReqProtocol    HTTP/1.1
-   ReqHeader      Host: IP:Port
-   ReqHeader      Connection: keep-alive
-   ReqHeader      Pragma: no-cache
-   ReqHeader      Cache-Control: no-cache
-   ReqHeader      Upgrade-Insecure-Requests: 1
-   ReqHeader      User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
-   ReqHeader      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
-   ReqHeader      Accept-Encoding: gzip, deflate
-   ReqHeader      Accept-Language: en-US,en;q=0.9
-   ReqHeader      X-Forwarded-For: IPAddress
-   VCL_call       RECV
-   VCL_Log        URL:someURL
-   ReqURL         someURL
-   ReqHeader      X-contentencode: gzip, deflate
-   VCL_Log        HTTP_X_Compression:gzip, deflate
-   VCL_return     hash
-   ReqUnset       Accept-Encoding: gzip, deflate
-   ReqHeader      Accept-Encoding: gzip
-   VCL_call       HASH
-   ReqHeader      hash-url: someURL
-   ReqUnset       hash-url: someURL
-   ReqHeader      hash-url: someURL
-   VCL_Log        hash-url: someURL
-   ReqUnset       hash-url: someURL
-   VCL_return     lookup
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 2293780 fetch
-   Timestamp      Fetch: 1686730406.515526 0.052200 0.052200
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     add_in_varnish_logs: ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
-   RespHeader     Content-Type: text/html;charset=UTF-8
-   RespHeader     Content-Encoding: gzip
-   RespHeader     Vary: Accept-Encoding
-   RespHeader     Date: Wed, 14 Jun 2023 08:13:25 GMT
-   RespHeader     Server: Intermesh Caching Servers/2.0.1
-   RespHeader     X-Varnish: 2293779
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish (Varnish/5.2)
-   VCL_call       DELIVER
-   RespHeader     X-Edge: MISS
-   VCL_Log        addvg:ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
-   RespUnset      add_in_varnish_logs: ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
-   VCL_return     deliver
-   Timestamp      Process: 1686730406.515554 0.052228 0.29
-   RespHeader     Accept-Ranges: bytes
-   RespHeader     Transfer-Encoding: chunked
-   RespHeader     Connection: keep-alive
-   Timestamp      Resp: 1686730406.518064 0.054738 0.002510
-   ReqAcct        569 0 569 331 36932 37263
-   End

**  << BeReq >> 2293780
--  Begin          bereq 2293779 fetch
--  Timestamp      Start: 1686730406.463456 0.00 0.00
--  BereqMethod    GET
--  BereqURL       someURL
--  BereqProtocol  HTTP/1.1
--  BereqHeader    Host: IP:Port
--  BereqHeader    Pragma: no-cache
--  BereqHeader    Upgrade-Insecure-Requests: 1
--  BereqHeader    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
--  BereqHeader    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
--  BereqHeader    Accept-Language: en-US,en;q=0.9
--  BereqHeader    X-Forwarded-For: IPAddress
--  BereqHeader    X-contentencode: gzip, deflate
--  BereqHeader    Accept-Encoding: gzip
--  BereqHeader    X-Varnish: 2293780
--  VCL_call       BACKEND_FETCH
--  BereqUnset     Accept-Encoding: gzip
--  BereqHeader    Accept-Encoding: gzip, deflate
--  BereqUnset     X-contentencode: gzip, deflate
--  VCL_return     fetch
--  BackendOpen    27 reload_2023-06-07T091359.node66 127.0.0.1 8984 127.0.0.1 39154
--  BackendStart   127.0.0.1 8984
--  Timestamp      Bereq: 1686730406.463621 0.000165 0.000165
--  Timestamp      Beresp: 1686730406.515400 0.051944 0.051779
--  BerespProtocol HTTP/1.1
--  BerespStatus   200
--  BerespReason   OK
--  BerespHeader   Server: Apache-Coyote/1.1
--  BerespHeader   add_in_varnish_logs: ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
--  BerespHeader   Content-Type: text/html;charset=UTF-8
--  BerespHeader   Transfer-Encoding: chunked
--  BerespHeader   Content-Encoding: gzip
--  BerespHeader   Vary: Accept-Encoding
--  BerespHeader   Date: Wed, 14 Jun 2023
Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
Hello,

When a user refreshes (F5) or performs a hard refresh (Ctrl+F5) in their browser, the browser includes the Cache-Control: no-cache header in the request. However, in our production Varnish setup, we have implemented a check that treats requests with Cache-Control: no-cache as cache misses, meaning Varnish bypasses the cache and goes directly to the backend server (Tomcat) to fetch the content.

Example, in the vcl_recv subroutine of default.vcl:

    sub vcl_recv {
        # other code

        # Serve fresh data from the backend on F5 and Ctrl+F5 from the user
        if (req.http.Cache-Control ~ "(no-cache|max-age=0)") {
            set req.hash_always_miss = true;
        }

        # other code
    }

However, we've noticed that the Cache-Control: no-cache header is not being passed to Tomcat even when there is a cache miss. We're unsure why this is happening and would appreciate your assistance in understanding the cause.

Expected functionality:
If the request contains the Cache-Control: no-cache header, then it should be passed to Tomcat at the backend.

Thanks & Regards,
Uday Kumar