Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-05-30 Thread Uday Kumar
Hello everyone,

In our system, we're currently using Varnish Cache in front of our Tomcat
Server for caching content.

As part of a new requirement, we've started passing a unique parameter
with every URL. This causes a cache miss on every request, because Varnish
treats each request as distinct due to the differing parameter. Our intent
is to have Varnish ignore this specific parameter when building the cache
key, so that requests that differ only in this parameter are treated as
identical.

Expected Functionality of Varnish:

1. We need Varnish to ignore the unique parameter when determining whether a
request is in the cache and when storing a response.

2. We also need Varnish to retain this unique parameter in the request URL
when it's passed along to the Tomcat Server.

We're looking for a way to modify our Varnish configuration to meet the
above requirements; your assistance would be greatly appreciated.
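
For illustration, a sketch of the kind of vcl_hash change we have in mind (an
assumption, not tested; the parameter is called traceId in the follow-ups
further down this thread). Only the hash key is altered; req.url itself is
left untouched, so the backend still receives the full query string:

sub vcl_hash {
    # Hash the URL with the unique parameter stripped out of the key.
    hash_data(regsuball(req.url, "(\?|&)traceId=[^&]*", ""));
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}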


Thanks & Regards
Uday Kumar
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-05-30 Thread Uday Kumar
Hello Guillaume,

Thank you so much for your help, will try modifying vcl_hash as suggested!


> Last note: it would probably be better if the tomcat server didn't need
> that unique parameter, or at the very least, if Varnish could just add it
> itself rather than relying on client information as you're caching
> something public using something that was user-specific, so there's
> potential for snafus here.
>

Could you please also suggest how to configure Varnish so that it can
add the unique parameter by itself?
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-05-31 Thread Uday Kumar
Hello,

We would like to configure Varnish to create the unique parameter itself, with
a value of 20 URL-safe alphanumeric characters.

On Wed, May 31, 2023, 13:34 Guillaume Quintard 
wrote:

> >  Could you please also suggest how to configure Varnish so that Varnish
> can add Unique Parameter by itself??
>
> We'd need more context, is there any kind of check that tomcat does on
> this parameter, does it need to have a specific length, or match a regex?
> If we know that, we can have Varnish check the user request to make sure
> it's valid, and potentially generate its own parameter.
>
> But it all depends on what Tomcat expects from that parameter.
>
> --
> Guillaume Quintard
>
>
> On Tue, May 30, 2023 at 11:18 PM Uday Kumar 
> wrote:
>
>> Hello Guillaume,
>>
>> Thank you so much for your help, will try modifying vcl_hash as suggested!
>>
>>
>>> Last note: it would probably be better if the tomcat server didn't need
>>> that unique parameter, or at the very least, if Varnish could just add
>>> it itself rather than relying on client information as you're caching
>>> something public using something that was user-specific, so there's
>>> potential for snafus here.
>>>
>>
>>  Could you please also suggest how to configure Varnish so that Varnish
>> can add Unique Parameter by itself??
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-05-31 Thread Uday Kumar
> Does it need to be unique? can't we just get away with
> ""?
>

Our Requirements:
1. The new parameter should be *appended* to the already existing parameters in
the query string. (It should not replace the entire query string.)
2. The parameter value *must be unique for each request* (ideally unique
randomness is preferred).
3. Allowed characters are alphanumerics that are *URL safe* (letters may be
lowercase or uppercase).
4. Characters can be repeated within a parameter value, e.g. Gn4lT*Y*gBgpPaRi6hw6*Y*S
(here, Y is repeated), but as mentioned above the value must be unique as a
whole.

Ex: the parameter value for the 1st request can be "Gn4lT*Y*gBgpPaRi6hw6*Y*S",
for the 2nd request "G34lTYgBgpPaRi6hyaaF", and so on.


Thanks & Regards
Uday Kumar
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-06-01 Thread Uday Kumar
Thanks for the prompt response!

Thanks & Regards
Uday Kumar


On Thu, Jun 1, 2023 at 11:12 AM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Thanks, so, to make things clean you are going to need to use a couple of
> vmods, which means being able to compile them first:
> - https://github.com/otto-de/libvmod-uuid as Geoff offered
> - https://github.com/Dridi/libvmod-querystring that will allow easy
> manipulation of the querystring
>
> unfortunately, the install-vmod tool that is bundled into the Varnish
> docker image isn't able to cleanly compile/install them. I'll have a look
> this week-end if I can, or at least I'll open a ticket on
> https://github.com/varnish/docker-varnish
>
> But, if you are able to install those two, then your life is easy:
> - once you receive a request, you can start by creating a unique ID,
> which'll be the the vcl equivalent of `uuidgen | sed -E
> 's/(\w+)-(\w+)-(\w+)-(\w+).*/\1\2\3\4/'` (without having testing it,
> probably `regsub(uuid.uuid_v4(), "s/(\w+)-(\w+)-(\w+)-(\w+).*",
> "\1\2\3\4/"`)
> - then just add/replace the parameter in the querystring with
> vmod_querystring
>
> and...that's about it?
>
> Problem is getting the vmods to compile/install which I can help with this
> week-end. There's black magic that you can do using regex to manipulate
> querystring, but it's a terrible idea.
>
> --
> Guillaume Quintard
>
>
> On Wed, May 31, 2023 at 6:48 PM Uday Kumar 
> wrote:
>
>>
>> Does it need to be unique? can't we just get away with
>>> ""?
>>>
>>
>> Our Requirements:
>> 1. New Parameter should be *appended *to already existing parameters in
>> Query String. (should not replace entire query string)
>> 2. Parameter Value *Must be Unique for each request* (ideally unique
>> randomness is preferred)
>> 3. Allowed Characters are Alphanumeric which are *URL safe* [can be
>> lowercase, uppercase in case of alphabets]
>> 4. Characters can be repeated in parameter value EX: Gn4lT*Y*gBgpPaRi6hw6
>> *Y*S (here, Y is repeated) But as mentioned above the value must be
>> unique as a whole.
>>
>> Ex: Parameter value for 1st request can be "Gn4lT*Y*gBgpPaRi6hw6*Y*S",
>> 2nd request can be
>> "G34lTYgBgpPaRi6hyaaF" and so on
>>
>>
>> Thanks & Regards
>> Uday Kumar
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-06-05 Thread Uday Kumar
Hello Guillaume,

Thanks for the update!


(It's done by default if you don't have a vcl_hash section in your VCL)
 We can tweak it slightly so that we ignore the whole querystring:
 sub vcl_hash {
 hash_data(regsub(req.url, "\?.*",""));
 if (req.http.host) {
 hash_data(req.http.host);
 } else {
 hash_data(server.ip);
 }
 return (lookup);
 }

>>>
I would like to discuss the above suggestion.

*FYI:*
*In our current vcl_hash subroutine, we didn't have any return (lookup)
statement in production, and the code is as below:*
#Working
sub vcl_hash {
    hash_data(req.url);
    hash_data(req.http.Accept-Encoding);
}
The above code is *working without any issues in production even without a
return (lookup)* statement.

For our new requirement *to ignore the parameter in the URL while caching*, as
per your suggestion we have made changes to the vcl_hash subroutine; the new
code is as below.

#Not Working
sub vcl_hash {
    set req.http.hash-url = regsuball(req.url, "traceId=.*?(\&|$)", "");
    hash_data(req.http.hash-url);
    unset req.http.hash-url;
    hash_data(req.http.Accept-Encoding);
}

The above code is *not hashing the URL with traceId ignored (not as
expected)*, *but if I add return (lookup) at the end of the subroutine it
works as expected.*

#Working Code
sub vcl_hash {
    set req.http.hash-url = regsuball(req.url, "traceId=.*?(\&|$)", "");
    hash_data(req.http.hash-url);
    unset req.http.hash-url;
    hash_data(req.http.Accept-Encoding);
    *return (lookup);*
}


*I have a few doubts to clarify:*
1. May I know what difference the return (lookup) statement makes?
2. Will there be any side effects from the modified code if I use return
(lookup)? (The original code was not causing any issue in production even
without return (lookup).)
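
For context, when a custom vcl_hash falls through without an explicit return,
Varnish appends and runs the builtin vcl_hash, which (roughly, as shipped with
Varnish 5.x/6.x) looks like this:

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}

So without return (lookup), the builtin part still adds the full req.url
(traceId included) to the hash, which would explain why the modified code only
behaves as expected once return (lookup) is added; the original code merely
hashed req.url twice plus the host, which is harmless.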
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Issue with passing Cache-Control: no-cache header to Tomcat during cache misses

2023-06-12 Thread Uday Kumar
Hello,

When a user refreshes (F5) or performs a hard refresh (Ctrl+F5) in their
browser, the browser includes the *Cache-Control: no-cache* header in the
request.
However, in our *production Varnish setup*, we have implemented a check
that treats *requests with Cache-Control: no-cache as cache misses*,
meaning it bypasses the cache and goes directly to the backend server
(Tomcat) to fetch the content.

*Example:*
in vcl_recv subroutine of default.vcl:

sub vcl_recv {
    # other code
    # Serve fresh data from the backend on F5 and Ctrl+F5 from the user
    if (req.http.Cache-Control ~ "(no-cache|max-age=0)") {
        set req.hash_always_miss = true;
    }
    # other code
}


However, we've noticed that the *Cache-Control: no-cache header is not
being passed* to Tomcat even when there is a cache miss.
We're unsure why this is happening and would appreciate your assistance in
understanding the cause.

*Expected Functionality:*
If the request contains *Cache-Control: no-cache header then it should be
passed to Tomcat* at Backend.

Thanks & Regards
Uday Kumar
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses

2023-06-14 Thread Uday Kumar
r   Date: Wed, 14 Jun 2023 08:13:25 GMT
--  TTLRFC 120 10 0 1686730407 1686730407 1686730405 0 0
--  VCL_call   BACKEND_RESPONSE
--  BerespUnsetServer: Apache-Coyote/1.1
--  BerespHeader   Server: Caching Servers/2.0.1
--  TTLVCL 120 604800 0 1686730407
--  TTLVCL 86400 604800 0 1686730407
--  VCL_return deliver
--  Storagemalloc s0
--  ObjProtocolHTTP/1.1
--  ObjStatus  200
--  ObjReason  OK
--  ObjHeader  add_in_varnish_logs:
ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
--  ObjHeader  Content-Type: text/html;charset=UTF-8
--  ObjHeader  Content-Encoding: gzip
--  ObjHeader  Vary: Accept-Encoding
--  ObjHeader  Date: Wed, 14 Jun 2023 08:13:25 GMT
--  ObjHeader  Server: Caching Servers/2.0.1
--  Fetch_Body 2 chunked stream
--  Gzip   u F - 36932 291926 80 211394 295386
--  BackendReuse   27 reload_2023-06-07T091359.node66
--  Timestamp  BerespBody: 1686730406.518050 0.054594 0.002650
--  Length 36932
--  BereqAcct  574 0 574 276 36932 37208
--  End


Thanks & Regards
Uday Kumar


On Tue, Jun 13, 2023 at 2:13 AM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hi Uday,
>
> Can you provide us with a log of the transaction please? You can run this
> on the Varnish server:
>
> varnishlog -g request -q 'ReqHeader:Cache-Control'
>
> And you should see something as soon as you send a request with that
> header to Varnish. Note that we need the backend part of the transaction,
> so please don't truncate the block.
>
> Kind regards,
>
> --
> Guillaume Quintard
>
>
> On Mon, Jun 12, 2023 at 10:33 PM Uday Kumar 
> wrote:
>
>> Hello,
>>
>> When a user refreshes(F5) or performs a hard refresh(ctrl+F5) in their
>> browser, the browser includes the *Cache-Control: no-cache* header in
>> the request.
>> However, in our* production Varnish setup*, we have implemented a check
>> that treats* requests with Cache-Control: no-cache as cache misses*,
>> meaning it bypasses the cache and goes directly to the backend server
>> (Tomcat) to fetch the content.
>>
>> *Example:*
>> in vcl_recv subroutine of default.vcl:
>>
>> sub vcl_recv{
>>   #other Code
>>   # Serve fresh data from backend while F5 and CTRL+F5 from user
>> if (req.http.Cache-Control ~ "(no-cache|max-age=0)") {
>> set req.hash_always_miss = true;
>> }
>>#other Code
>> }
>>
>>
>> However, we've noticed that the *Cache-Control: no-cache header is not
>> being passed* to Tomcat even when there is a cache miss.
>> We're unsure why this is happening and would appreciate your assistance
>> in understanding the cause.
>>
>> *Expected Functionality:*
>> If the request contains *Cache-Control: no-cache header then it should
>> be passed to Tomcat* at Backend.
>>
>> Thanks & Regards
>> Uday Kumar
>> ___
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses

2023-06-15 Thread Uday Kumar
> There is this in the code:
>
> * > H("Cache-Control",  H_Cache_Control,  F  )  *//
> 2616 14.9
>
> We remove the this header when we create a normal fetch task, hence
> the F flag. There's a reference to RFC2616 section 14.9, but this RFC
> has been updated by newer documents.
>

Where can I find details about the above code? I could not find it in RFC
2616 section 14.9.


Thanks & Regards,
Uday Kumar
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses

2023-06-15 Thread Uday Kumar
Thanks Dridi and Guillaume for clarification!

On Thu, Jun 15, 2023, 18:30 Guillaume Quintard 
wrote:

> Adding to what Dridi said, and just to be clear: the "cleaning" of those
> well-known headers only occurs when the req object is copied into a bereq,
> so there's nothing preventing you from stashing the "cache-control" header
> into "x-cache-control" during vcl_recv, and then copying it back to
> "cache-control" during vcl_backend_response.
>
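
A minimal sketch of that stash-and-restore idea (an assumption, not tested;
note that to forward the header to Tomcat the copy back would go in
vcl_backend_fetch, where the backend request is prepared):

sub vcl_recv {
    if (req.http.Cache-Control) {
        # Stash the client value before Varnish drops it from the fetch task.
        set req.http.X-Orig-Cache-Control = req.http.Cache-Control;
    }
}

sub vcl_backend_fetch {
    if (bereq.http.X-Orig-Cache-Control) {
        # Restore it on the backend request so Tomcat sees the original header.
        set bereq.http.Cache-Control = bereq.http.X-Orig-Cache-Control;
        unset bereq.http.X-Orig-Cache-Control;
    }
}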
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Unexpected Cache-Control Header Transmission in Dual-Server API Setup

2023-06-28 Thread Uday Kumar
Hello All,

Our application operates on a dual-server setup, where each server is
dedicated to running a distinct API.

*Technical specifications:*
Framework: Spring-boot v2.4 (Java 1.8)
Runtime Environment: Tomcat
Version: Apache Tomcat/7.0.42
Server1 runs API-1 and Server2 runs API-2. Both servers are equipped with
an installed Varnish application. When either API is accessed, the request
is processed through the Varnish instance associated with the respective
server.

*Issue Description:*
In a typical scenario, a client (browser) sends a request to API-1, which
is handled by the Varnish instance on Server1. After initial processing,
API-1 makes a subsequent request to API-2 on Server2.

The Request Flow is as follows:
*Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on
Server2 --> Tomcat on Server2*

*Assuming, the request from Browser will be a miss at Server1 Varnish so
that the request reaches Tomcat(Backend) on server1.*

In cases where the browser *does not include any cache-control headers in
the request* (e.g., no-cache, max-age=0), the Server1 Varnish instance
correctly *does not receive any cache-control headers*.

*However, when API-1 calls API-2, we observe that Cache-Control: no-cache
and Pragma: no-cache headers are being transmitted to the Varnish
instance on Server2*, despite the following conditions:

1. We are not explicitly sending any cache-control header in our
application code during the call from API-1 to API-2.
2. Our application does not use the Spring-security dependency, which by
default might add such a header.
3. The cache-control header is not being set by the Varnish instance on
Server2.

This unexpected behavior of receiving a cache-control header at Server2's
Varnish instance when invoking API-2 from API-1 is the crux of our issue.

We kindly request your assistance in understanding the cause of this
unexpected behavior. Additionally, we would greatly appreciate any guidance
on how to effectively prevent this issue from occurring in the future.

Thanks & Regards
Uday Kumar
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Unexpected Cache-Control Header Transmission in Dual-Server API Setup

2023-06-28 Thread Uday Kumar
Hi Guillaume,

You are right!
varnish is not adding any cache-control headers.


*Observations when trying to replicate the issue locally:*
I was trying to replicate the issue using Local Machine by creating a
Spring Boot Application that acts as API-1 and tried hitting API-2 that's
on Server2.

*Request Flow:* Local Machine --> Server2 Varnish --> Server2 Tomcat

*Point-1:* When using the *embedded Tomcat (Tomcat 9) of Spring Boot*, the
issue was *not* reproducible [just ran the application in IntelliJ] (meaning
the cache-control header is *not* being transmitted to the Varnish of Server2).

*Point-2:* When *Tomcat 9 was explicitly installed on my local machine* and
the *corresponding WAR of API-1 was built and used to hit API-2* that's on
Server2, *the issue got replicated* (meaning *cache-control: no-cache,
pragma: no-cache is being transmitted to the Varnish of Server2*).


Any insights?

Thanks & Regards
Uday Kumar


On Wed, Jun 28, 2023 at 8:32 PM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hi Uday,
>
> That one should be quick: Varnish doesn't add cache-control headers on its
> own.
>
> So, from what I understand it can come from two places:
> - either the VCL in varnish1
> - something in tomcat1
>
> It should be very easy to check with varnishlog's. Essentially, run
> "varnishlog -H request -q 'ReqHeader:uday'" on both varnish nodes and send
> a curl request like "curl http://varnish1/some/request/not/in/cache.html
> -H "uday: true"
>
> You should see the request going through both varnish and should be able
> to pinpoint what created the header. Or at least identify whether it's a
> varnish thing or not.
>
> Kind regards
>
> For a reminder on varnishlog:
> https://docs.varnish-software.com/tutorials/vsl-query/
>
>
> On Wed, Jun 28, 2023, 06:28 Uday Kumar  wrote:
>
>> Hello All,
>>
>> Our application operates on a dual-server setup, where each server is
>> dedicated to running a distinct API.
>>
>> *Technical specifications:*
>> Framework: Spring-boot v2.4 (Java 1.8)
>> Runtime Environment: Tomcat
>> Version: Apache Tomcat/7.0.42
>> Server1 runs API-1 and Server2 runs API-2. Both servers are equipped with
>> an installed Varnish application. When either API is accessed, the request
>> is processed through the Varnish instance associated with the respective
>> server.
>>
>> *Issue Description:*
>> In a typical scenario, a client (browser) sends a request to API-1, which
>> is handled by the Varnish instance on Server1. After initial processing,
>> API-1 makes a subsequent request to API-2 on Server2.
>>
>> The Request Flow is as follows:
>> *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on
>> Server2 --> Tomcat on Server2*
>>
>> *Assuming, the request from Browser will be a miss at Server1 Varnish so
>> that the request reaches Tomcat(Backend) on server1.*
>>
>> In cases where the browser *does not include any cache-control
>> headers in the request* (e.g., no-cache, max-age=0), the Server1 Varnish
>> instance correctly *does not receive any cache-control headers*.
>>
>> *However, when API-1 calls API-2, we observe that a cache-control:
>> no-cache and p**ragma: no-cache headers are being transmitted to the
>> Varnish instance on Server2*, despite the following conditions:
>>
>> 1. We are not explicitly sending any cache-control header in our
>> application code during the call from API-1 to API-2.
>> 2. Our application does not use the Spring-security dependency, which by
>> default might add such a header.
>> 3. The cache-control header is not being set by the Varnish instance on
>> Server2.
>>
>> This unexpected behavior of receiving a cache-control header at Server2's
>> Varnish instance when invoking API-2 from API-1 is the crux of our issue.
>>
>> We kindly request your assistance in understanding the cause of this
>> unexpected behavior. Additionally, we would greatly appreciate any guidance
>> on how to effectively prevent this issue from occurring in the future.
>>
>> Thanks & Regards
>> Uday Kumar
>> ___
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Unexpected Cache-Control Header Transmission in Dual-Server API Setup

2023-06-28 Thread Uday Kumar
Okay thank you!

On Wed, Jun 28, 2023, 22:36 Guillaume Quintard 
wrote:

> Not really, I have no tomcat expertise, which is where the issue should be
> fixed. That being said, if you can't prevent tomcat from adding the header,
> then you can use the VCL on varnish2 to scrub the headers ("unset
> req.http.cache-control;").
>
> --
> Guillaume Quintard
>
>
> On Wed, Jun 28, 2023 at 10:03 AM Uday Kumar 
> wrote:
>
>> Hi Guillaume,
>>
>> You are right!
>> varnish is not adding any cache-control headers.
>>
>>
>> *Observations when trying to replicate the issue locally:*
>> I was trying to replicate the issue using Local Machine by creating a
>> Spring Boot Application that acts as API-1 and tried hitting API-2 that's
>> on Server2.
>>
>> *Request Flow:* Local Machine > Server2 varnish --> Server2 Tomcat
>>
>> Point-1: When using* integrated tomcat (Tomcat 9) the spring-boot* issue
>> was *not *replicable [*Just ran Application in intellij*] (meaning, the
>> cache-control header is *not *being transmitted to Varnish of Server2)
>>
>> *Point-2:* When *Tomcat 9 was explicitly installed in my local machine*
>> and built the* corresponding war of API-1 and used this to hit API-2*
>> that's on Server2, *Now issue got replicated* (meaning, *cache-control:
>> no-cache, pragma: no-cache is being transmitted to Varnish of Server2*)
>>
>>
>> Any insights?
>>
>> Thanks & Regards
>> Uday Kumar
>>
>>
>> On Wed, Jun 28, 2023 at 8:32 PM Guillaume Quintard <
>> guillaume.quint...@gmail.com> wrote:
>>
>>> Hi Uday,
>>>
>>> That one should be quick: Varnish doesn't add cache-control headers on
>>> its own.
>>>
>>> So, from what I understand it can come from two places:
>>> - either the VCL in varnish1
>>> - something in tomcat1
>>>
>>> It should be very easy to check with varnishlog's. Essentially, run
>>> "varnishlog -H request -q 'ReqHeader:uday'" on both varnish nodes and send
>>> a curl request like "curl http://varnish1/some/request/not/in/cache.html
>>> -H "uday: true"
>>>
>>> You should see the request going through both varnish and should be able
>>> to pinpoint what created the header. Or at least identify whether it's a
>>> varnish thing or not.
>>>
>>> Kind regards
>>>
>>> For a reminder on varnishlog:
>>> https://docs.varnish-software.com/tutorials/vsl-query/
>>>
>>>
>>> On Wed, Jun 28, 2023, 06:28 Uday Kumar  wrote:
>>>
>>>> Hello All,
>>>>
>>>> Our application operates on a dual-server setup, where each server is
>>>> dedicated to running a distinct API.
>>>>
>>>> *Technical specifications:*
>>>> Framework: Spring-boot v2.4 (Java 1.8)
>>>> Runtime Environment: Tomcat
>>>> Version: Apache Tomcat/7.0.42
>>>> Server1 runs API-1 and Server2 runs API-2. Both servers are equipped
>>>> with an installed Varnish application. When either API is accessed, the
>>>> request is processed through the Varnish instance associated with the
>>>> respective server.
>>>>
>>>> *Issue Description:*
>>>> In a typical scenario, a client (browser) sends a request to API-1,
>>>> which is handled by the Varnish instance on Server1. After initial
>>>> processing, API-1 makes a subsequent request to API-2 on Server2.
>>>>
>>>> The Request Flow is as follows:
>>>> *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on
>>>> Server2 --> Tomcat on Server2*
>>>>
>>>> *Assuming, the request from Browser will be a miss at Server1 Varnish
>>>> so that the request reaches Tomcat(Backend) on server1.*
>>>>
>>>> In cases where the browser *does not include any cache-control
>>>> headers in the request* (e.g., no-cache, max-age=0), the Server1
>>>> Varnish instance correctly *does not receive any cache-control headers*
>>>> .
>>>>
>>>> *However, when API-1 calls API-2, we observe that a cache-control:
>>>> no-cache and p**ragma: no-cache headers are being transmitted to the
>>>> Varnish instance on Server2*, despite the following conditions:
>>>>
>>>> 1. We are not explicitly sending any cache-control header in our
>>>> application code during

Caching Modified URLs by Varnish instead of the original requested URL

2023-08-22 Thread Uday Kumar
Hello All,


For our spring boot application, we are using Varnish Caching in a
production environment.




Requirement: [To utilize cache effectively]

Modify the URL (removal of unnecessary parameters) when caching the user
request, so that the modified URL can be cached by Varnish, which helps
improve cache HITs for similar URLs.


For Example:

Let's consider the below Request URL

URL at time t, 1. samplehost.com/search/ims?q=bags&source=android&options.start=0


Our Requirement:

To make Varnish consider URLs with options.start=0 and without the
options.start parameter as EQUIVALENT, such that a single cached
response (single key) can be utilized in both cases.


*1st URL after modification:*

samplehost.com/search/ims?q=bags&source=android


*Cached URL at Varnish:*

samplehost.com/search/ims?q=bags&source=android



Now, Url at time t+1, 2. samplehost.com/search/ims?q=bags&source=android


At present, Varnish considers the above URL as different from the 1st URL and
uses a different key while caching the 2nd URL [so, it will be a miss].


*So, URL after Modification:*

samplehost.com/search/ims?q=bags&source=android


Now, 2nd URL will be a HIT at varnish, effectively utilizing the cache.
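
A sketch of the kind of rewrite intended here (an assumption on the exact
regex; the subroutine could live in a separate, included VCL file, and the
hash is computed on a scratch header so the backend still receives the
original URL):

sub sanitize_url {
    # Build the key used for hashing only; req.url itself is left untouched.
    set req.http.hash-url = regsuball(req.url, "&options\.start=0(&|$)", "\1");
    set req.http.hash-url = regsub(req.http.hash-url, "\?options\.start=0(&|$)", "?");
}

sub vcl_recv {
    call sanitize_url;
}

sub vcl_hash {
    hash_data(req.http.hash-url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}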



NOTE:

We aim to perform this URL modification without implementing the logic directly
within the default.vcl file. Our intention is to maintain a clean and
manageable codebase in the VCL.



To address this requirement effectively, we have explored two potential
Approaches:


Approach-1:
[diagram not included in the plain-text archive]

Approach-2:
[diagram not included in the plain-text archive]




1. Please go through the approaches mentioned above and let me know the
effective solution.

2. Regarding Approach-2

At Step 2:

May I know if there is any way to access and execute a custom subroutine
from another VCL file, for modifying the request URL? If yes, please help with
details.

At Step 3:

Tomcat Backend should receive the Original Request URL instead of the
Modified URL.

3. Please let us know if there is any better approach that can be
implemented.



Thanks & Regards
Uday Kumar
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Caching Modified URLs by Varnish instead of the original requested URL

2023-08-22 Thread Uday Kumar
Hi Guillaume,

*use includes and function calls*
This is great, thank you so much for your help!

Thanks & Regards
Uday Kumar


On Wed, Aug 23, 2023 at 1:32 AM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hi Uday,
>
> I'm not exactly sure how to read those diagrams, so I apologize if I'm
> missing the mark or if I'm too broad here.
>
> There are a few points I'd like to attract your attention to. The first
> one is that varnish doesn't cache the request or the URL. The cache is
> essentially a big hashmap/dictionary/database, in which you store the
> response. The request/url is the key for it, so you need to have it in its
> "final" form before you do anything.
>
> From what I read, you are not against it, and you just want to sanitize
> the URL in vcl_recv, but you don't like the idea of making the main file
> too unwieldy. If I got that right, then I have a nice answer for you: use
> includes and function calls.
>
> As an example:
>
> # cat /etc/varnish/url.vcl
> sub sanitize_url {
>   # do whatever modifications you need here
> }
>
> # cat /etc/varnish/default.vcl
> include "./url.vcl";
>
> sub vcl_recv {
>   call sanitize_url;
> }
>
>
> That should get you going.
>
> Hopefully I didn't miss the mark too much here, let me know if I did.
>
> --
> Guillaume Quintard
>
>
> On Tue, Aug 22, 2023 at 3:45 AM Uday Kumar 
> wrote:
>
>> Hello All,
>>
>>
>> For our spring boot application, we are using Varnish Caching in a
>> production environment.
>>
>>
>>
>>
>> Requirement: [To utilize cache effectively]
>>
>> Modify the URL (Removal of unnecessary parameters) while caching the user
>> request, so that the modified URL can be cached by varnish which helps
>> improve cache HITS for similar URLs.
>>
>>
>> For Example:
>>
>> Let's consider the below Request URL
>>
>> Url at time t, 1. samplehost.com/search/ims?q=bags&source=android
>> &options.start=0
>>
>>
>> Our Requirement:
>>
>> To make varnish consider URLs with options.start=0 and without
>> options.start parameter as EQUIVALENT, such that a single cached
>> response(Single Key) can be utilized in both cases.
>>
>>
>> *1st URL after modification:*
>>
>> samplehost.com/search/ims?q=bags&source=android
>>
>>
>> *Cached URL at Varnish:*
>>
>> samplehost.com/search/ims?q=bags&source=android
>>
>>
>>
>> Now, Url at time t+1, 2. samplehost.com/search/ims?q=bags&source=android
>>
>>
>> At present, varnish considers the above URL as different from 1st URL
>> and uses a different key while caching the 2nd URL[So, it will be a miss]
>>
>>
>> *So, URL after Modification:*
>>
>> samplehost.com/search/ims?q=bags&source=android
>>
>>
>> Now, 2nd URL will be a HIT at varnish, effectively utilizing the cache.
>>
>>
>>
>> NOTE:
>>
>> We aim to execute this URL Modification without implementing the logic 
>> directly
>> within the default.VCL file. Our intention is to maintain a clean and
>> manageable codebase in the VCL.
>>
>>
>>
>> To address this requirement effectively, we have explored two potential
>> Approaches:
>>
>>
>> Approach-1:
>>
>>
>>
>> Approach-2:
>>
>>
>>
>>
>> 1. Please go through the approaches mentioned above and let me know the
>> effective solution.
>>
>> 2. Regarding Approach-2
>>
>> At Step 2:
>>
>> May I know if there is any way to access and execute a custom subroutine
>> from another VCL, for modifying the Request URL? if yes, pls help with
>> details.
>>
>> At Step 3:
>>
>> Tomcat Backend should receive the Original Request URL instead of the
>> Modified URL.
>>
>> 3. Please let us know if there is any better approach that can be
>> implemented.
>>
>>
>>
>> Thanks & Regards
>> Uday Kumar
>> ___
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Caching Modified URLs by Varnish instead of the original requested URL

2023-08-31 Thread Uday Kumar
Hi Guillaume,

In the process of modifying the query string in VCL code, we have a
requirement of *lowercasing the value of a specific parameter*, instead of the
*whole query string*.

*Example Request URL:*
/search/ims?q=*CRICKET bat*&country_code=IN

*Requirement:*
We have to modify the request URL by lowercasing the value of only the *q*
parameter,
i.e. /search/ims?q=*cricket bat*&country_code=IN

*For that, we have found below regex:*
set req.http.hash-url = regsuball(req.http.hash-url, "(q=)(.*?)(\&|$)",
"\1"+*std.tolower("\2")*+"\3");

*ISSUE:*
*std.tolower("\2")* in the above statement is *not lowercasing* the string
that's captured, but if I test it using *std.tolower("SAMPLE")*, it's
lowercasing as expected.

1. May I know why it's not lowercasing if *std.tolower("\2")* is used?
2. Also, please suggest possible optimal solutions for the same (using
regex).
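
For what it's worth, a guess at the cause plus one possible workaround (an
assumption, not tested here, and it needs import std;): the replacement string
is built before regsuball runs, so std.tolower("\2") only ever lowercases the
literal two characters \2; the backreference has to be extracted into its own
string first, for example:

if (req.http.hash-url ~ "[?&]q=") {
    # Pull the q= value out on its own, lowercase it, then put it back.
    set req.http.q-value = regsub(req.http.hash-url, ".*[?&]q=([^&]*).*", "\1");
    set req.http.hash-url = regsub(req.http.hash-url,
        "([?&]q=)[^&]*", "\1" + std.tolower(req.http.q-value));
    unset req.http.q-value;
}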

Thanks & Regards
Uday Kumar


On Wed, Aug 23, 2023 at 12:01 PM Uday Kumar  wrote:

> Hi Guillaume,
>
> *use includes and function calls*
> This is great, thank you so much for your help!
>
> Thanks & Regards
> Uday Kumar
>
>
> On Wed, Aug 23, 2023 at 1:32 AM Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> Hi Uday,
>>
>> I'm not exactly sure how to read those diagrams, so I apologize if I'm
>> missing the mark or if I'm too broad here.
>>
>> There are a few points I'd like to attract your attention to. The first
>> one is that varnish doesn't cache the request or the URL. The cache is
>> essentially a big hashmap/dictionary/database, in which you store the
>> response. The request/url is the key for it, so you need to have it in its
>> "final" form before you do anything.
>>
>> From what I read, you are not against it, and you just want to sanitize
>> the URL in vcl_recv, but you don't like the idea of making the main file
>> too unwieldy. If I got that right, then I have a nice answer for you: use
>> includes and function calls.
>>
>> As an example:
>>
>> # cat /etc/varnish/url.vcl
>> sub sanitize_url {
>>   # do whatever modifications you need here
>> }
>>
>> # cat /etc/varnish/default.vcl
>> include "./url.vcl";
>>
>> sub vcl_recv {
>>   call sanitize_url;
>> }
>>
>>
>> That should get you going.
>>
>> Hopefully I didn't miss the mark too much here, let me know if I did.
>>
>> --
>> Guillaume Quintard
>>
>>
>> On Tue, Aug 22, 2023 at 3:45 AM Uday Kumar 
>> wrote:
>>
>>> Hello All,
>>>
>>>
>>> For our spring boot application, we are using Varnish Caching in a
>>> production environment.
>>>
>>>
>>>
>>>
>>> Requirement: [To utilize cache effectively]
>>>
>>> Modify the URL (Removal of unnecessary parameters) while caching the
>>> user request, so that the modified URL can be cached by varnish which
>>> helps improve cache HITS for similar URLs.
>>>
>>>
>>> For Example:
>>>
>>> Let's consider the below Request URL
>>>
>>> Url at time t, 1. samplehost.com/search/ims?q=bags&source=android
>>> &options.start=0
>>>
>>>
>>> Our Requirement:
>>>
>>> To make varnish consider URLs with options.start=0 and without
>>> options.start parameter as EQUIVALENT, such that a single cached
>>> response(Single Key) can be utilized in both cases.
>>>
>>>
>>> *1st URL after modification:*
>>>
>>> samplehost.com/search/ims?q=bags&source=android
>>>
>>>
>>> *Cached URL at Varnish:*
>>>
>>> samplehost.com/search/ims?q=bags&source=android
>>>
>>>
>>>
>>> Now, Url at time t+1, 2. samplehost.com/search/ims?q=bags&source=android
>>>
>>>
>>> At present, varnish considers the above URL as different from 1st URL
>>> and uses a different key while caching the 2nd URL[So, it will be a miss
>>> ]
>>>
>>>
>>> *So, URL after Modification:*
>>>
>>> samplehost.com/search/ims?q=bags&source=android
>>>
>>>
>>> Now, 2nd URL will be a HIT at varnish, effectively utilizing the cache.
>>>
>>>
>>>
>>> NOTE:
>>>
>>> We aim to execute this URL Modification without implementing the logic 
>>> direct

Re: Caching Modified URLs by Varnish instead of the original requested URL

2023-09-03 Thread Uday Kumar
Thanks Guillaume, I'll look into it.

Thanks & Regards
Uday Kumar


On Fri, Sep 1, 2023 at 1:36 AM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> I'm pretty sure it's correctly lowercasing "\2" correctly. The problem is
> that you want to lowercase the *value* referenced by "\2" instead.
>
> On this, I don't think you have a choice, you need to make that captured
> group its own string, lowercase it, and only then concatenate it. Something
> like:
>
> set req.http.hash-url = regsuball(req.http.hash-url,
> ".*(q=)(.*?)(\&|$).*", "\1") + std.tolower(regsuball(req.http.hash-url,
> ".*(q=)(.*?)(\&|$).*", "\2")) + regsuball(req.http.hash-url,
> ".*(q=)(.*?)(\&|$).*", "\3");
>
> It's disgusting, but eh, we started with regex, so...
>
> Other options include vmod_querystring
> <https://github.com/Dridi/libvmod-querystring/blob/master/src/vmod_querystring.vcc.in>
> (Dridi might possibly be of assistance on this topic) and vmod_urlplus
> <https://docs.varnish-software.com/varnish-enterprise/vmods/urlplus/#query_get>
>  (Varnish
> Enterprise), and the last, and possibly most promising one, vmod_re2
> <https://gitlab.com/uplex/varnish/libvmod-re2/-/blob/master/README.md> which
> would allow you to do something like
>
> if (myset.match(".*(q=)(.*?)(\&|$).*", "\1")) {
>set req.http.hash-url = myset.matched(1) + std.lower(myset.matched(2))
> + myset.matched(3)
> }
>
> --
> Guillaume Quintard
>
>
> On Thu, Aug 31, 2023 at 1:03 AM Uday Kumar 
> wrote:
>
>> Hi Guillaume,
>>
>> In the process of modifying the query string in VCL code, we have a
>> requirement of *lowercasing value of specific parameter*, instead of the 
>> *whole
>> query string*
>>
>> *Example Request URL:*
>> /search/ims?q=*CRICKET bat*&country_code=IN
>>
>> *Requirement:*
>> We have to modify the request URL by lowercasing the value of only the *q
>> *parameter
>> i.e ./search/ims?q=*cricket bat*&country_code=IN
>>
>> *For that, we have found below regex:*
>> set req.http.hash-url = regsuball(req.http.hash-url, "(q=)(.*?)(\&|$)",
>> "\1"+*std.tolower("\2")*+"\3");
>>
>> *ISSUE:*
>> *std.tolower("\2")* in the above statement is *not lowercasing* the
>> string that's captured, but if I test it using *std.tolower("SAMPLE"),* its
>> lowercasing as expected.
>>
>> 1. May I know why it's not lowercasing if *std.tolower("\2") is used*?
>> 2. Also, please provide possible optimal solutions for the same. (using
>> regex)
>>
>> Thanks & Regards
>> Uday Kumar
>>
>>
>> On Wed, Aug 23, 2023 at 12:01 PM Uday Kumar 
>> wrote:
>>
>>> Hi Guillaume,
>>>
>>> *use includes and function calls*
>>> This is great, thank you so much for your help!
>>>
>>> Thanks & Regards
>>> Uday Kumar
>>>
>>>
>>> On Wed, Aug 23, 2023 at 1:32 AM Guillaume Quintard <
>>> guillaume.quint...@gmail.com> wrote:
>>>
>>>> Hi Uday,
>>>>
>>>> I'm not exactly sure how to read those diagrams, so I apologize if I'm
>>>> missing the mark or if I'm too broad here.
>>>>
>>>> There are a few points I'd like to attract your attention to. The first
>>>> one is that varnish doesn't cache the request or the URL. The cache is
>>>> essentially a big hashmap/dictionary/database, in which you store the
>>>> response. The request/url is the key for it, so you need to have it in its
>>>> "final" form before you do anything.
>>>>
>>>> From what I read, you are not against it, and you just want to sanitize
>>>> the URL in vcl_recv, but you don't like the idea of making the main file
>>>> too unwieldy. If I got that right, then I have a nice answer for you: use
>>>> includes and function calls.
>>>>
>>>> As an example:
>>>>
>>>> # cat /etc/varnish/url.vcl
>>>> sub sanitize_url {
>>>>   # do whatever modifications you need here
>>>> }
>>>>
>>>> # cat /etc/varnish/default.vcl
>>>> include "./url.vcl";
>>>>
>>>> sub vcl_recv {
>>>>   call sanitize_url;
>>>> }
>>>>
>>>>
>>>> Th

Block Unauthorized Requests at Varnish [Code Optimization]

2023-10-12 Thread Uday Kumar
Hello everyone,

We use varnish in our production environment for caching content.

Our Requirement:

We are trying to block unauthorized requests at varnish based on the source
parameter in the URL and the client IP in the request header.

For example:

Sample URL:

www.hostname:port/path?source=mobile&keyword= bags

Let's assume there are 3 IPs [which are allowed to access varnish]
associated with the above request of mobile source.

i.e *IP1, IP2, IP3*

So if any request comes with the source as *mobile *and client-ip as *IP4*,
it's treated as an unauthorized request and should be blocked at varnish.


What we have done for blocking?

*Sample URL:*
www.hostname:port/path?source=mobile&keyword= bags

Created a map using an ACL as below:

acl mobile_source {
  "IP1";
  "IP2";
  "IP3";
}

if (req.url ~ "source=mobile" && client.ip !~ mobile_source) {
  return (synth(403, "varnish access denied!"));
}


The problem we are facing:

The source parameter can have different values like mobile, desktop,
laptop, tablet, etc. and each value can have different IPs associated with
it.

ACL Rules will be as below:

acl mobile_source {
  "IP1";
  "IP2";
  "IP3";
}

acl desktop_source {
  "IP4";
  "IP5";
  "IP6";
}

and so on.


If we wanted to block unauthorized access for the different source vs IP
combinations, we would have to add that many conditions, as below:

if (
  (req.url ~ "source=mobile" && client.ip !~ mobile_source) ||
  (req.url ~ "source=desktop" && client.ip !~ desktop_source) ||
  (req.url ~ "source=laptop" && client.ip !~ laptop_source) ||
  (req.url ~ "source=tablet" && client.ip !~ tablet_source)
) {
  return (synth(403, "access denied!"));
}

This becomes worse if we have 10s or 20s of source values.

Our question:

We would like to know if there is any way to optimize the code by removing
redundant checks so that we can scale it even if we have many sources vs IP
combinations.


Thanks & Regards
Uday Kumar
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Block Unauthorized Requests at Varnish [Code Optimization]

2023-10-12 Thread Uday Kumar
Hi Guillaume,

I don't think those are redundant checks, from what you are showing, they
are all justified. Sure, there may be a bunch of them, but you have to go
through to them.

By redundant I meant, I have to write multiple checks for each source and
list of IPs associated with it. [which would be *worse *if the number of
sources are huge]

*Example:*

if (
  (req.url ~ "source=mobile" && client.ip !~ mobile_source) ||
  (req.url ~ "source=desktop" && client.ip !~ desktop_source) ||
  (req.url ~ "source=laptop" && client.ip !~ laptop_source) ||
  (req.url ~ "source=tablet" && client.ip !~ tablet_source)
) {
  return (synth(403, "access denied!"));
}


In the above example, if the request URL has source=tablet *[for which the
condition is present at the end]*, all the preceding conditions are still
evaluated.





One thing I would do though is to generate the VCL from a source file, like
a YAML one:

Didn't understand, can you please elaborate?

Thanks & Regards
Uday Kumar


On Thu, Oct 12, 2023 at 11:11 PM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hi Uday,
>
> I don't think those are redundant checks, from what you are showing, they
> are all justified. Sure, there may be a bunch of them, but you have to go
> through to them.
>
> One thing I would do though is to generate the VCL from a source file,
> like a YAML one:
>
> mobile:
>   - IP1
>   - IP2
>   - IP3
> desktop:
>   - IP4
>   - IP5
>   - IP6
>
>
> From that, you can build the VCL without having to manually write
> "client.ip" or "(req.url ~ "source=" every time.
>
> --
> Guillaume Quintard
>
>
> On Thu, Oct 12, 2023 at 10:17 AM Uday Kumar 
> wrote:
>
>> Hello everyone,
>>
>> We use varnish in our production environment for caching content.
>>
>> Our Requirement:
>>
>> We are trying to block unauthorized requests at varnish based on the
>> source parameter in the URL and the client IP in the request header.
>>
>> For example:
>>
>> Sample URL:
>>
>> www.hostname:port/path?source=mobile&keyword= bags
>>
>> Let's assume there are 3 IPs [which are allowed to access varnish]
>> associated with the above request of mobile source.
>>
>> i.e *IP1, IP2, IP3*
>>
>> So if any request comes with the source as *mobile *and client-ip as
>> *IP4*, it's treated as an unauthorized request and should be blocked at
>> varnish.
>>
>>
>> What we have done for blocking?
>>
>> *Sample URL:*
>> www.hostname:port/path?source=mobile&keyword= bags
>>
>> Created a map using ACL as below:
>>
>> acl mobile_source{
>>
>>   "IP1";
>>
>>   "IP2";
>>
>>   "IP3";
>>
>> }
>>
>> If(req.url ~ "source=mobile" && client.ip !~ mobile_source) {
>>
>>return(Synth(403, "varnish access denied!"))
>>
>> }
>>
>>
>> The problem we are facing:
>>
>> The source parameter can have different values like mobile, desktop,
>> laptop, tablet, etc. and each value can have different IPs associated with
>> it.
>>
>> ACL Rules will be as below:
>>
>> acl mobile_source{
>>
>>   "IP1";
>>
>>   "IP2";
>>
>>   "IP3";
>>
>> }
>>
>> acl desktop_source{
>>
>>   "IP4";
>>
>>   "IP5";
>>
>>   "IP6";
>>
>> }
>>
>> and so on,
>>
>>
>> If we wanted to block unauthorized access from different source vs IP
>> combinations, we would have to add that many conditions as below.
>>
>> If(
>>
>> (req.url ~ "source=mobile" && client.ip != mobile_source) ||
>>
>> (req.url ~ "source=desktop" && client.ip != desktop_source) ||
>>
>> (req.url ~ "source=laptop" && client.ip != laptop_source) ||
>>
>> (req.url ~ "source=tablet" && client.ip != tablet_source)
>>
>> ){
>>
>>return(Synth(403, "access denied!"))
>>
>> }
>>
>> This becomes worse, if we have 10's or 20's of source values.
>>
>> Our question:
>>
>> We would like to know if there is any way to optimize the code by
>> removing redundant checks so that we can scale it even if we have many
>> sources vs IP combinations.
>>
>>
>> Thanks & Regards
>> Uday Kumar
>> ___
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Block Unauthorized Requests at Varnish [Code Optimization]

2023-10-12 Thread Uday Kumar
> That's mainly how computers work, processing will be linear. You *could*
create a vmod that packs ACLs into a hashmap to simplify the apparent
logic, but you will pay that price developing the vmod, and for a very
modest performance gain. If you have less than 50 sources, or even less
than 100, I don't think it's worth agonizing over that kind of optimization
(unless you've actually measured and you did see a performance drop).

Okay, Thanks for your suggestion!

>  I assume that the VCL is currently committed in a repo somewhere and
gets edited every time you need to add a new IP or source. If so, it's not
great because editing such repetitive code is error-prone, and therefore
you should use templating to create the VCL from a simpler, more
maintainable source.

Sure, will definitely explore!
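
For illustration, the kind of VCL a template could emit from a simple
sources-to-IPs data file (a hypothetical sketch of generated output; the
checks themselves stay the same, they are just no longer written by hand):

# Generated from sources.yaml - do not edit by hand.
acl mobile_source { "IP1"; "IP2"; "IP3"; }
acl desktop_source { "IP4"; "IP5"; "IP6"; }

sub block_unauthorized_sources {
    if (req.url ~ "source=mobile" && client.ip !~ mobile_source) {
        return (synth(403, "access denied!"));
    }
    if (req.url ~ "source=desktop" && client.ip !~ desktop_source) {
        return (synth(403, "access denied!"));
    }
}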

Thanks & Regards
Uday Kumar


On Fri, Oct 13, 2023 at 12:35 AM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> > In the above example, if the request URL is source=tablet [for which
> condition is present at the end], still I have to check all the above
> conditions.
>
> That's mainly how computers work, processing will be linear. You *could*
> create a vmod that packs ACLs into a hashmap to simplify the apparent
> logic, but you will pay that price developing the vmod, and for a very
> modest performance gain. If you have less than 50 sources, or even less
> than a 100, I don't think it's worth agonizing over that kind of
> optimization (unless you've actually measured and you did see a
> performance drop).
>
> > One thing I would do though is to generate the VCL from a source file,
> like a YAML one:
>
> All I'm saying is that you should focus on increasing the maintainability
> of the project before worrying about performance. I assume that the VCL is
> currently committed in a repo somewhere and gets edited every time you need
> to add a new IP or source. If so, it's not great because editing such
> repetitive code is error-prone, and therefore you should use templating to
> create the VCL from a simpler, more maintainable source.
>
> Tools like go templates or jinja can provide that feature and save you
> from repeating yourself when writing configuration.
>
> --
> Guillaume Quintard
>
>
> On Thu, Oct 12, 2023 at 11:46 AM Uday Kumar 
> wrote:
>
>> Hi Guillaume,
>>
>> I don't think those are redundant checks, from what you are showing, they
>> are all justified. Sure, there may be a bunch of them, but you have to go
>> through to them.
>>
>> By redundant I meant, I have to write multiple checks for each source and
>> list of IPs associated with it. [which would be *worse *if the number of
>> sources are huge]
>>
>> *Example:*
>>
>> If(
>>
>> (req.url ~ "source=mobile" && client.ip != mobile_source) ||
>>
>> (req.url ~ "source=desktop" && client.ip != desktop_source) ||
>>
>> (req.url ~ "source=laptop" && client.ip != laptop_source) ||
>>
>> (req.url ~ "source=tablet" && client.ip != tablet_source)
>>
>> ){
>>
>>    return(Synth(403, "access denied!"))
>>
>> }
>>
>>
>> In the above example, if the request URL is source=tablet *[for which
>> condition is present at the end]*, still I have to check all the above
>> conditions.
>>
>>
>>
>>
>>
>> One thing I would do though is to generate the VCL from a source file,
>> like a YAML one:
>>
>> Didn't understand, can you please elaborate?
>>
>> Thanks & Regards
>> Uday Kumar
>>
>>
>> On Thu, Oct 12, 2023 at 11:11 PM Guillaume Quintard <
>> guillaume.quint...@gmail.com> wrote:
>>
>>> Hi Uday,
>>>
>>> I don't think those are redundant checks, from what you are showing,
>>> they are all justified. Sure, there may be a bunch of them, but you have to
>>> go through to them.
>>>
>>> One thing I would do though is to generate the VCL from a source file,
>>> like a YAML one:
>>>
>>> mobile:
>>>   - IP1
>>>   - IP2
>>>   - IP3
>>> desktop:
>>>   - IP4
>>>   - IP5
>>>   - IP6
>>>
>>>
>>> From that, you can build the VCL without having to manually write
>>> "client.ip" or "(req.url ~ "source=" every time.
>>>
>>> --
>>> Guillaume Quintard
>>>
>>>
>>> On Thu, Oct 12, 2023 at 10:17 AM Uday Kumar 
>>> wrote:
>>>
>

Re: Block Unauthorized Requests at Varnish [Code Optimization]

2023-10-15 Thread Uday Kumar
Hello Guillaume,

Thank you so much!

I'll check it out!

Thanks & Regards
Uday Kumar


On Sun, Oct 15, 2023 at 7:32 AM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hello Uday,
>
> Quick follow-up as I realize that templating can be a bit scary when
> confronted for the first time, and you are far from the first one to be
> curious about, so I've committed this:
> https://github.com/varnish/toolbox/tree/master/gotemplate-example
> It probably won't get you very far, but it should at least get you
> started, and help understand how templating can make things a tiny be
> simpler but splitting data from business logic, for example to add more
> IPs/ACLs or source without edit the VCL manually.
>
> Hope that helps.
>
> --
> Guillaume Quintard
>
>
> On Thu, Oct 12, 2023 at 12:36 PM Uday Kumar 
> wrote:
>
>> > That's mainly how computers work, processing will be linear. You
>> *could* create a vmod that packs ACLs into a hashmap to simplify the
>> apparent logic, but you will pay that price developing the vmod, and for a
>> very modest performance gain. If you have less than 50 sources, or even
>> less than 100, I don't think it's worth agonizing over that kind of
>> optimization (unless you've actually measured and you did see a
>> performance drop).
>>
>> Okay, Thanks for your suggestion!
>>
>> >  I assume that the VCL is currently committed in a repo somewhere and
>> gets edited every time you need to add a new IP or source. If so, it's not
>> great because editing such repetitive code is error-prone, and therefore
>> you should use templating to create the VCL from a simpler, more
>> maintainable source.
>>
>> Sure, will definitely explore!
>>
>> Thanks & Regards
>> Uday Kumar
>>
>>
>> On Fri, Oct 13, 2023 at 12:35 AM Guillaume Quintard <
>> guillaume.quint...@gmail.com> wrote:
>>
>>> > In the above example, if the request URL is source=tablet [for which
>>> condition is present at the end], still I have to check all the above
>>> conditions.
>>>
>>> That's mainly how computers work, processing will be linear. You *could*
>>> create a vmod that packs ACLs into a hashmap to simplify the apparent
>>> logic, but you will pay that price developing the vmod, and for a very
>>> modest performance gain. If you have less than 50 sources, or even less
>>> than a 100, I don't think it's worth agonizing over that kind of
>>> optimization (unless you've actually measured and you did see a
>>> performance drop).
>>>
>>> > One thing I would do though is to generate the VCL from a source file,
>>> like a YAML one:
>>>
>>> All I'm saying is that you should focus on increasing the
>>> maintainability of the project before worrying about performance. I assume
>>> that the VCL is currently committed in a repo somewhere and gets edited
>>> every time you need to add a new IP or source. If so, it's not great
>>> because editing such repetitive code is error-prone, and therefore you
>>> should use templating to create the VCL from a simpler, more maintainable
>>> source.
>>>
>>> Tools like go templates or jinja can provide that feature and save you
>>> from repeating yourself when writing configuration.
>>>
>>> --
>>> Guillaume Quintard
>>>
>>>
>>> On Thu, Oct 12, 2023 at 11:46 AM Uday Kumar 
>>> wrote:
>>>
>>>> Hi Guillaume,
>>>>
>>>> I don't think those are redundant checks, from what you are showing,
>>>> they are all justified. Sure, there may be a bunch of them, but you have to
>>>> go through to them.
>>>>
>>>> By redundant I meant, I have to write multiple checks for each source
>>>> and list of IPs associated with it. [which would be *worse *if the
>>>> number of sources are huge]
>>>>
>>>> *Example:*
>>>>
>>>> If(
>>>>
>>>> (req.url ~ "source=mobile" && client.ip != mobile_source) ||
>>>>
>>>> (req.url ~ "source=desktop" && client.ip != desktop_source) ||
>>>>
>>>> (req.url ~ "source=laptop" && client.ip != laptop_source) ||
>>>>
>>>> (req.url ~ "source=tablet" && client.ip != tablet_source)
>>>>
>>>> ){
>>>>
>>>>   

Append uniqueid to a http request at varnish

2024-04-24 Thread Uday Kumar
Hello all,

We follow below architecture in our production environment:
User request ---> Varnish > Tomcat Backend

We have a requirement to generate a unique id at Varnish that can be
appended to the request URL, so that it is propagated to the backend and
can also be used to track errors efficiently.

varnish version used: varnish-5.2.1

Example:
original request:
/search/test?q=bags&source=mobile

After appending unique id [This needs to be sent to backend and to be
stored in varnish logs]:
/search/test?q=bags&source=mobile&uniqueid=abc123

Please help us know if there is any way to do this at varnish
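
One possible sketch (an assumption: vmod_uuid from
https://github.com/otto-de/libvmod-uuid and vmod_std compiled for Varnish 5.2;
note that the unique id would then need to be stripped again in vcl_hash, as
in the earlier thread, or every request becomes a cache miss):

import std;
import uuid;

sub vcl_recv {
    # Generate an id once per request, log it, and append it to the URL
    # that is forwarded to Tomcat.
    set req.http.x-unique-id = regsuball(uuid.uuid_v4(), "-", "");
    std.log("uniqueid: " + req.http.x-unique-id);
    if (req.url ~ "\?") {
        set req.url = req.url + "&uniqueid=" + req.http.x-unique-id;
    } else {
        set req.url = req.url + "?uniqueid=" + req.http.x-unique-id;
    }
}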

*Thanks & Regards,*
*Uday Kumar*
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Append uniqueid to a http request at varnish

2024-04-24 Thread Uday Kumar
Hi Guillaume,

Thanks for this reminder, I will check this and get back to you!


*Thanks & Regards,*
*Uday Kumar*


On Thu, Apr 25, 2024 at 1:12 AM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hi Uday,
>
> I feel like we've explored this last year:
> https://varnish-cache.org/lists/pipermail/varnish-misc/2023-May/027238.html
>
> I don't think the answer has changed much: vmod-uuid is your best bet here.
>
> Please let me know if I'm missing some requirements.
>
> Kind regards,
>
> --
> Guillaume Quintard
>
>
> On Wed, Apr 24, 2024 at 4:26 AM Uday Kumar 
> wrote:
>
>> Hello all,
>>
>> We follow below architecture in our production environment:
>> User request ---> Varnish > Tomcat Backend
>>
>> We have a requirement of generating an unique id at varnish that can be
>> appended to a request url.
>> So that it can be propagated to the backend and also will be useful in
>> tracking errors efficiently
>>
>> varnish version used: varnish-5.2.1
>>
>> Example:
>> original request:
>> /search/test?q=bags&source=mobile
>>
>> After appending unique id [This needs to be sent to backend and to be
>> stored in varnish logs]:
>> /search/test?q=bags&source=mobile&uniqueid=abc123
>>
>> Please help us know if there is any way to do this at varnish
>>
>> *Thanks & Regards,*
>> *Uday Kumar*
>> ___
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
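
With vmod-uuid installed, a minimal sketch of the append step in vcl_recv could
look like the following (the parameter name uniqueid and the choice of a v4 UUID
are assumptions, not something the thread settles on; adapt to whatever the
backend expects):

import uuid;

sub vcl_recv {
    # Append a freshly generated v4 UUID unless the client already sent one.
    if (req.url !~ "[?&]uniqueid=") {
        if (req.url ~ "\?") {
            set req.url = req.url + "&uniqueid=" + uuid.uuid_v4();
        } else {
            set req.url = req.url + "?uniqueid=" + uuid.uuid_v4();
        }
    }
}

If the parameter must not influence the cache key, the same concatenation can
instead be applied to bereq.url in vcl_backend_fetch, at the cost of only
generating the id on cache misses.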


Re: Append uniqueid to a http request at varnish

2024-04-30 Thread Uday Kumar
Hello Guillaume,

I am trying to install vmod_uuid on my CentOS 7 machine.

Resource I used:
https://github.com/otto-de/libvmod-uuid/blob/5.x/INSTALL.rst

varnish version: 5.2.1

I am getting the below errors while running the *make* command:

make[1]: Entering directory `/usr/local/src/libvmod-uuid'
Making all in src
make[2]: Entering directory `/usr/local/src/libvmod-uuid/src'
  CC   vmod_uuid.lo
In file included from vmod_uuid.c:35:0:
vcc_if.h:11:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING vmod_uuid(VRT_CTX, struct vmod_priv *);
 ^
vcc_if.h:11:31: error: expected ‘)’ before ‘struct’
 VCL_STRING vmod_uuid(VRT_CTX, struct vmod_priv *);
   ^
vcc_if.h:12:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING vmod_uuid_v1(VRT_CTX, struct vmod_priv *);
 ^
vcc_if.h:12:34: error: expected ‘)’ before ‘struct’
 VCL_STRING vmod_uuid_v1(VRT_CTX, struct vmod_priv *);
  ^
vcc_if.h:13:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING vmod_uuid_v3(VRT_CTX, struct vmod_priv *, VCL_STRING,
 ^
vcc_if.h:13:34: error: expected ‘)’ before ‘struct’
 VCL_STRING vmod_uuid_v3(VRT_CTX, struct vmod_priv *, VCL_STRING,
  ^
vcc_if.h:15:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING vmod_uuid_v4(VRT_CTX, struct vmod_priv *);
 ^
vcc_if.h:15:34: error: expected ‘)’ before ‘struct’
 VCL_STRING vmod_uuid_v4(VRT_CTX, struct vmod_priv *);
  ^
vcc_if.h:16:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING vmod_uuid_v5(VRT_CTX, struct vmod_priv *, VCL_STRING,
 ^
vcc_if.h:16:34: error: expected ‘)’ before ‘struct’
 VCL_STRING vmod_uuid_v5(VRT_CTX, struct vmod_priv *, VCL_STRING,
  ^
vmod_uuid.c:48:17: error: expected ‘)’ before ‘int’
 mkuuid(VRT_CTX, int utype, uuid_t *uuid, const char *str, va_list ap)
 ^
vmod_uuid.c:76:1: error: unknown type name ‘VCL_STRING’
 _uuid(VRT_CTX, uuid_t *uuid, int utype, ...)
 ^
vmod_uuid.c:76:16: error: expected ‘)’ before ‘uuid_t’
 _uuid(VRT_CTX, uuid_t *uuid, int utype, ...)
^
vmod_uuid.c:104:21: error: expected ‘)’ before ‘void’
 free_uuids(VRT_CTX, void *priv)
 ^
vmod_uuid.c:116:39: error: array type has incomplete element type
 static const struct vmod_priv_methods uuid_priv_task_methods[1] = {{
   ^
vmod_uuid.c:117:3: error: field name not in record or union initializer
   .magic = VMOD_PRIV_METHODS_MAGIC,
   ^
vmod_uuid.c:117:3: error: (near initialization for ‘uuid_priv_task_methods’)
vmod_uuid.c:117:12: error: ‘VMOD_PRIV_METHODS_MAGIC’ undeclared here (not
in a function)
   .magic = VMOD_PRIV_METHODS_MAGIC,
^
vmod_uuid.c:118:3: error: field name not in record or union initializer
   .type = "vmod_uuid_priv_task",
   ^
vmod_uuid.c:118:3: error: (near initialization for ‘uuid_priv_task_methods’)
vmod_uuid.c:119:3: error: field name not in record or union initializer
   .fini = free_uuids
   ^
vmod_uuid.c:119:3: error: (near initialization for ‘uuid_priv_task_methods’)
vmod_uuid.c:119:11: error: ‘free_uuids’ undeclared here (not in a function)
   .fini = free_uuids
   ^
vmod_uuid.c:123:20: error: expected ‘)’ before ‘struct’
 get_uuids(VRT_CTX, struct vmod_priv *priv, uuid_t **uuid_ns)
^
vmod_uuid.c:163:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING
 ^
vmod_uuid.c:164:23: error: expected ‘)’ before ‘struct’
 vmod_uuid_v1(VRT_CTX, struct vmod_priv *priv)
   ^
vmod_uuid.c:172:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING
 ^
vmod_uuid.c:173:23: error: expected ‘)’ before ‘struct’
 vmod_uuid_v3(VRT_CTX, struct vmod_priv *priv, VCL_STRING ns, VCL_STRING
name)
   ^
vmod_uuid.c:182:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING
 ^
vmod_uuid.c:183:23: error: expected ‘)’ before ‘struct’
 vmod_uuid_v4(VRT_CTX, struct vmod_priv *priv)
   ^
vmod_uuid.c:191:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING
 ^
vmod_uuid.c:192:23: error: expected ‘)’ before ‘struct’
 vmod_uuid_v5(VRT_CTX, struct vmod_priv *priv, VCL_STRING ns, VCL_STRING
name)
   ^
vmod_uuid.c:201:1: error: unknown type name ‘VCL_STRING’
 VCL_STRING
 ^
vmod_uuid.c:202:20: error: expected ‘)’ before ‘struct’
 vmod_uuid(VRT_CTX, struct vmod_priv *priv)
^
vmod_uuid.c:116:39: error: ‘uuid_priv_task_methods’ defined but not used
[-Werror=unused-variable]
 static const struct vmod_priv_methods uuid_priv_task_methods[1] = {{
   ^
cc1: all warnings being treated as errors
make[2]: *** [vmod_uuid.lo] Error 1
make[2]: Leaving directory `/usr/local/src/libvmod-uuid/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/local/src/libvmod-uuid'
make: *** [all] Error 2

Please help

*Thanks & Regards,*
*Uday Kumar*


On Thu, Apr 25, 2024 at 6:18 AM Uday Kumar  wrot

Re: Append uniqueid to a http request at varnish

2024-05-01 Thread Uday Kumar
Hello,

Am I missing anything here?

On Tue, Apr 30, 2024, 13:37 Uday Kumar  wrote:

> hello Guillaume,
>
> I am trying to install vmod_uuid on my centOS 7 machine
>
> Resource i used:
> https://github.com/otto-de/libvmod-uuid/blob/5.x/INSTALL.rst
>
> varnish version: 5.2.1
>
> I am getting below errors while running *make *command
>
> make[1]: Entering directory `/usr/local/src/libvmod-uuid'
> Making all in src
> make[2]: Entering directory `/usr/local/src/libvmod-uuid/src'
>   CC   vmod_uuid.lo
> In file included from vmod_uuid.c:35:0:
> vcc_if.h:11:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid(VRT_CTX, struct vmod_priv *);
>  ^
> vcc_if.h:11:31: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid(VRT_CTX, struct vmod_priv *);
>^
> vcc_if.h:12:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid_v1(VRT_CTX, struct vmod_priv *);
>  ^
> vcc_if.h:12:34: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid_v1(VRT_CTX, struct vmod_priv *);
>   ^
> vcc_if.h:13:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid_v3(VRT_CTX, struct vmod_priv *, VCL_STRING,
>  ^
> vcc_if.h:13:34: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid_v3(VRT_CTX, struct vmod_priv *, VCL_STRING,
>   ^
> vcc_if.h:15:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid_v4(VRT_CTX, struct vmod_priv *);
>  ^
> vcc_if.h:15:34: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid_v4(VRT_CTX, struct vmod_priv *);
>   ^
> vcc_if.h:16:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid_v5(VRT_CTX, struct vmod_priv *, VCL_STRING,
>  ^
> vcc_if.h:16:34: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid_v5(VRT_CTX, struct vmod_priv *, VCL_STRING,
>   ^
> vmod_uuid.c:48:17: error: expected ‘)’ before ‘int’
>  mkuuid(VRT_CTX, int utype, uuid_t *uuid, const char *str, va_list ap)
>  ^
> vmod_uuid.c:76:1: error: unknown type name ‘VCL_STRING’
>  _uuid(VRT_CTX, uuid_t *uuid, int utype, ...)
>  ^
> vmod_uuid.c:76:16: error: expected ‘)’ before ‘uuid_t’
>  _uuid(VRT_CTX, uuid_t *uuid, int utype, ...)
> ^
> vmod_uuid.c:104:21: error: expected ‘)’ before ‘void’
>  free_uuids(VRT_CTX, void *priv)
>  ^
> vmod_uuid.c:116:39: error: array type has incomplete element type
>  static const struct vmod_priv_methods uuid_priv_task_methods[1] = {{
>^
> vmod_uuid.c:117:3: error: field name not in record or union initializer
>.magic = VMOD_PRIV_METHODS_MAGIC,
>^
> vmod_uuid.c:117:3: error: (near initialization for
> ‘uuid_priv_task_methods’)
> vmod_uuid.c:117:12: error: ‘VMOD_PRIV_METHODS_MAGIC’ undeclared here (not
> in a function)
>.magic = VMOD_PRIV_METHODS_MAGIC,
> ^
> vmod_uuid.c:118:3: error: field name not in record or union initializer
>.type = "vmod_uuid_priv_task",
>^
> vmod_uuid.c:118:3: error: (near initialization for
> ‘uuid_priv_task_methods’)
> vmod_uuid.c:119:3: error: field name not in record or union initializer
>.fini = free_uuids
>^
> vmod_uuid.c:119:3: error: (near initialization for
> ‘uuid_priv_task_methods’)
> vmod_uuid.c:119:11: error: ‘free_uuids’ undeclared here (not in a function)
>.fini = free_uuids
>^
> vmod_uuid.c:123:20: error: expected ‘)’ before ‘struct’
>  get_uuids(VRT_CTX, struct vmod_priv *priv, uuid_t **uuid_ns)
> ^
> vmod_uuid.c:163:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING
>  ^
> vmod_uuid.c:164:23: error: expected ‘)’ before ‘struct’
>  vmod_uuid_v1(VRT_CTX, struct vmod_priv *priv)
>^
> vmod_uuid.c:172:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING
>  ^
> vmod_uuid.c:173:23: error: expected ‘)’ before ‘struct’
>  vmod_uuid_v3(VRT_CTX, struct vmod_priv *priv, VCL_STRING ns, VCL_STRING
> name)
>^
> vmod_uuid.c:182:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING
>  ^
> vmod_uuid.c:183:23: error: expected ‘)’ before ‘struct’
>  vmod_uuid_v4(VRT_CTX, struct vmod_priv *priv)
>^
> vmod_uuid.c:191:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING
>  ^
> vmod_uuid.c:192:23: error: expected ‘)’ before ‘struct’
>  vmod_uuid_v5(VRT_CTX, struct vmod_priv *priv, VCL_STRING ns, VCL_STRING
> name)
>^
> vmod_uuid.c:201:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING
>  ^
> vmod_uuid.c:202:20: error: e

Preventing Caching in Varnish Based on Backend Response Header

2024-05-28 Thread Uday Kumar
Hello all,

We need to prevent caching in Varnish based on a specific header from the
backend.

Could you please suggest the best approach to achieve this?


*Thanks & Regards,*
*Uday Kumar*
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Preventing Caching in Varnish Based on Backend Response Header

2024-05-28 Thread Uday Kumar
Hello Guillaume,
Great to know about this, it should work for us!
will check this out

*Thanks & Regards,*
*Uday Kumar*


On Tue, May 28, 2024 at 5:53 PM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hi Uday,
>
> Sure, the classic practice will do nicely:
>
> sub vcl_backend_response {
> if (beresp.http.that-specific-header) {
> # TTL should match the time during which that header is unlikely
> to change
> # do NOT set it to 0s or less (
> https://info.varnish-software.com/blog/hit-for-miss-and-why-a-null-ttl-is-bad-for-you
> )
> set beresp.ttl = 2m;
> set beresp.uncacheable = true;
> return (deliver);
> }
> }
>
> The main trick here is beresp.uncacheable, you do not have to return
> immediately if you still have modifications/checks to do on that response.
>
> Would that work for you?
>
> --
> Guillaume Quintard
>
>
> On Tue, May 28, 2024 at 4:55 AM Uday Kumar 
> wrote:
>
>> Hello all,
>>
>> We need to prevent caching in Varnish based on a specific header from the
>> backend.
>>
>> Could you please suggest the best approach to achieve this?
>>
>>
>> *Thanks & Regards,*
>> *Uday Kumar*
>> ___
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
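
For copy-paste convenience, here is the quoted snippet with the wrapped comment
lines rejoined so it compiles as-is (that-specific-header remains a placeholder
for the real backend header):

sub vcl_backend_response {
    if (beresp.http.that-specific-header) {
        # TTL should match the time during which that header is unlikely to change.
        # Do NOT set it to 0s or less, see:
        # https://info.varnish-software.com/blog/hit-for-miss-and-why-a-null-ttl-is-bad-for-you
        set beresp.ttl = 2m;
        set beresp.uncacheable = true;
        return (deliver);
    }
}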


Re: Preventing Caching in Varnish Based on Backend Response Header

2024-06-11 Thread Uday Kumar
Hello Guillaume,
We have made the required changes at our end, but we have a doubt about the
suitable TTL for uncacheable objects.

if (beresp.http.Cache-Control ~ "no-cache") {
set beresp.ttl = *doubt*;
set beresp.uncacheable = true;
}

FYI:
we have a TTL of 24 hrs for normal objects, which are cacheable.

May I know if there is any way to find the best possible TTL?

*Thanks & Regards,*
*Uday Kumar*


On Tue, May 28, 2024 at 7:07 PM Uday Kumar  wrote:

> Hello Guillaume,
> Great to know about this, it should work for us!
> will check this out
>
> *Thanks & Regards,*
> *Uday Kumar*
>
>
> On Tue, May 28, 2024 at 5:53 PM Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> Hi Uday,
>>
>> Sure, the classic practice will do nicely:
>>
>> sub vcl_backend_response {
>> if (beresp.http.that-specific-header) {
>> # TTL should match the time during which that header is unlikely
>> to change
>> # do NOT set it to 0s or less (
>> https://info.varnish-software.com/blog/hit-for-miss-and-why-a-null-ttl-is-bad-for-you
>> )
>> set beresp.ttl = 2m;
>> set beresp.uncacheable = true;
>> return (deliver);
>> }
>> }
>>
>> The main trick here is beresp.uncacheable, you do not have to return
>> immediately if you still have modifications/checks to do on that response.
>>
>> Would that work for you?
>>
>> --
>> Guillaume Quintard
>>
>>
>> On Tue, May 28, 2024 at 4:55 AM Uday Kumar 
>> wrote:
>>
>>> Hello all,
>>>
>>> We need to prevent caching in Varnish based on a specific header from
>>> the backend.
>>>
>>> Could you please suggest the best approach to achieve this?
>>>
>>>
>>> *Thanks & Regards,*
>>> *Uday Kumar*
>>> ___
>>> varnish-misc mailing list
>>> varnish-misc@varnish-cache.org
>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>>
>>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Preventing Caching in Varnish Based on Backend Response Header

2024-06-11 Thread Uday Kumar
May I know if there is any way to find the best possible TTL?
I meant to ask about uncacheable objects.

*Thanks & Regards,*
*Uday Kumar*


On Tue, Jun 11, 2024 at 4:11 PM Uday Kumar  wrote:

> Hello Guillaume,
> We have made required changes at our end, but we have doubt on giving
> suitable TTL for uncacheable objects
>
> if (beresp.http.Cache-Control ~ "no-cache") {
> set beresp.ttl = *doubt*;
> set beresp.uncacheable = true;
> }
>
> FYI;
> we have ttl of 24hrs for normal objects which are cacheable
>
> May I know if there is any way to find the best possible TTL?
>
> *Thanks & Regards,*
> *Uday Kumar*
>
>
> On Tue, May 28, 2024 at 7:07 PM Uday Kumar 
> wrote:
>
>> Hello Guillaume,
>> Great to know about this, it should work for us!
>> will check this out
>>
>> *Thanks & Regards,*
>> *Uday Kumar*
>>
>>
>> On Tue, May 28, 2024 at 5:53 PM Guillaume Quintard <
>> guillaume.quint...@gmail.com> wrote:
>>
>>> Hi Uday,
>>>
>>> Sure, the classic practice will do nicely:
>>>
>>> sub vcl_backend_response {
>>> if (beresp.http.that-specific-header) {
>>> # TTL should match the time during which that header is unlikely
>>> to change
>>> # do NOT set it to 0s or less (
>>> https://info.varnish-software.com/blog/hit-for-miss-and-why-a-null-ttl-is-bad-for-you
>>> )
>>> set beresp.ttl = 2m;
>>> set beresp.uncacheable = true;
>>> return (deliver);
>>> }
>>> }
>>>
>>> The main trick here is beresp.uncacheable, you do not have to return
>>> immediately if you still have modifications/checks to do on that response.
>>>
>>> Would that work for you?
>>>
>>> --
>>> Guillaume Quintard
>>>
>>>
>>> On Tue, May 28, 2024 at 4:55 AM Uday Kumar 
>>> wrote:
>>>
>>>> Hello all,
>>>>
>>>> We need to prevent caching in Varnish based on a specific header from
>>>> the backend.
>>>>
>>>> Could you please suggest the best approach to achieve this?
>>>>
>>>>
>>>> *Thanks & Regards,*
>>>> *Uday Kumar*
>>>> ___
>>>> varnish-misc mailing list
>>>> varnish-misc@varnish-cache.org
>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>>>
>>>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Preventing Caching in Varnish Based on Backend Response Header

2024-06-12 Thread Uday Kumar
Noted.

Thanks for the suggestion.

*Thanks & Regards,*
*Uday Kumar*


On Tue, Jun 11, 2024 at 10:35 PM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hi,
>
> Don't worry too much about it. Uncacheable objects take a minimal amount
> of space in the cache, and if an object suddenly becomes cacheable, you can
> insert it in the cache, pushing the uncacheable version out.
>
> I'd say keep 24 hours and worry about big stuff :-)
>
> Cheers,
>
> --
> Guillaume Quintard
>
>
> On Tue, Jun 11, 2024 at 4:22 AM Uday Kumar 
> wrote:
>
>> May I know if there is any way to find the best possible TTL?
>> I meant to ask for uncacheable objects
>>
>> *Thanks & Regards,*
>> *Uday Kumar*
>>
>>
>> On Tue, Jun 11, 2024 at 4:11 PM Uday Kumar 
>> wrote:
>>
>>> Hello Guillaume,
>>> We have made required changes at our end, but we have doubt on giving
>>> suitable TTL for uncacheable objects
>>>
>>> if (beresp.http.Cache-Control ~ "no-cache") {
>>> set beresp.ttl = *doubt*;
>>> set beresp.uncacheable = true;
>>> }
>>>
>>> FYI;
>>> we have ttl of 24hrs for normal objects which are cacheable
>>>
>>> May I know if there is any way to find the best possible TTL?
>>>
>>> *Thanks & Regards,*
>>> *Uday Kumar*
>>>
>>>
>>> On Tue, May 28, 2024 at 7:07 PM Uday Kumar 
>>> wrote:
>>>
>>>> Hello Guillaume,
>>>> Great to know about this, it should work for us!
>>>> will check this out
>>>>
>>>> *Thanks & Regards,*
>>>> *Uday Kumar*
>>>>
>>>>
>>>> On Tue, May 28, 2024 at 5:53 PM Guillaume Quintard <
>>>> guillaume.quint...@gmail.com> wrote:
>>>>
>>>>> Hi Uday,
>>>>>
>>>>> Sure, the classic practice will do nicely:
>>>>>
>>>>> sub vcl_backend_response {
>>>>> if (beresp.http.that-specific-header) {
>>>>> # TTL should match the time during which that header is
>>>>> unlikely to change
>>>>> # do NOT set it to 0s or less (
>>>>> https://info.varnish-software.com/blog/hit-for-miss-and-why-a-null-ttl-is-bad-for-you
>>>>> )
>>>>> set beresp.ttl = 2m;
>>>>> set beresp.uncacheable = true;
>>>>> return (deliver);
>>>>> }
>>>>> }
>>>>>
>>>>> The main trick here is beresp.uncacheable, you do not have to return
>>>>> immediately if you still have modifications/checks to do on that response.
>>>>>
>>>>> Would that work for you?
>>>>>
>>>>> --
>>>>> Guillaume Quintard
>>>>>
>>>>>
>>>>> On Tue, May 28, 2024 at 4:55 AM Uday Kumar 
>>>>> wrote:
>>>>>
>>>>>> Hello all,
>>>>>>
>>>>>> We need to prevent caching in Varnish based on a specific header from
>>>>>> the backend.
>>>>>>
>>>>>> Could you please suggest the best approach to achieve this?
>>>>>>
>>>>>>
>>>>>> *Thanks & Regards,*
>>>>>> *Uday Kumar*
>>>>>> ___
>>>>>> varnish-misc mailing list
>>>>>> varnish-misc@varnish-cache.org
>>>>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>>>>>
>>>>>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
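
Following Guillaume's suggestion to keep the normal 24-hour TTL, the snippet
from earlier in the thread would end up looking roughly like this (a sketch,
not a tuned value):

sub vcl_backend_response {
    if (beresp.http.Cache-Control ~ "no-cache") {
        # Hit-for-miss markers take little space; a cacheable response
        # for the same object will replace this entry when it arrives.
        set beresp.ttl = 24h;
        set beresp.uncacheable = true;
    }
}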


Varnish unresponsive on our server

2024-06-20 Thread Uday Kumar
Hello all,

We are facing frequent issues of Varnish unresponsiveness for some time on
our production server.

During this time we have seen that pz_list increases to ~3000 and
recv_queue increases to ~130.
Also, Varnish responds with response code '0' for some time, which
means it is unresponsive.

This is causing multiple 5xx on front ends.

FYR:
User request count during this time is normal.

Note:
During this time, we have confirmed that our backend servers are healthy
without any issues.


May I know what could be the reason for this behaviour of Varnish?

Please give me some direction on how to debug this issue.
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish unresponsive on our server

2024-06-20 Thread Uday Kumar
   7  .   In use
MEMPOOL.sess1.pool   11  .   In Pool
MEMPOOL.sess1.sz_wanted 512  .   Size requested
MEMPOOL.sess1.sz_actual 480  .   Size allocated
MEMPOOL.sess1.allocs  11515527411.56 Allocations
MEMPOOL.sess1.frees   11515526711.56 Frees
MEMPOOL.sess1.recycle 11514484711.56 Recycled from
pool
MEMPOOL.sess1.timeout   6114536 0.61 Timed out from
pool
MEMPOOL.sess1.toosmall0 0.00 Too small to
recycle
MEMPOOL.sess1.surplus  2953 0.00 Too many for
pool
MEMPOOL.sess1.randry  10427 0.00 Pool ran dry
VBE.reload_2024-06-20T181903.node66.happy 18446744073709551615  .
Happy health probes
VBE.reload_2024-06-20T181903.node66.bereq_hdrbytes193871874
 19.46 Request header bytes
VBE.reload_2024-06-20T181903.node66.bereq_bodybytes0
0.00 Request body bytes
VBE.reload_2024-06-20T181903.node66.beresp_hdrbytes 65332553
6.56 Response header bytes
VBE.reload_2024-06-20T181903.node66.beresp_bodybytes  40260910590
 4042.04 Response body bytes
VBE.reload_2024-06-20T181903.node66.pipe_hdrbytes   0
0.00 Pipe request header bytes
VBE.reload_2024-06-20T181903.node66.pipe_out0
0.00 Piped bytes to backend
VBE.reload_2024-06-20T181903.node66.pipe_in 0
0.00 Piped bytes from backend
VBE.reload_2024-06-20T181903.node66.conn1
 .   Concurrent connections to backend
VBE.reload_2024-06-20T181903.node66.req247959
0.02 Backend requests sent
VBE.reload_2024-06-20T181903.node67.happy18446744073709551615
   .   Happy health probes
VBE.reload_2024-06-20T181903.node67.bereq_hdrbytes  193960668
 19.47 Request header bytes
VBE.reload_2024-06-20T181903.node67.bereq_bodybytes 0
0.00 Request body bytes
VBE.reload_2024-06-20T181903.node67.beresp_hdrbytes  65315238
6.56 Response header bytes
VBE.reload_2024-06-20T181903.node67.beresp_bodybytes  40142940116
 4030.19 Response body bytes
VBE.reload_2024-06-20T181903.node67.pipe_hdrbytes   0
0.00 Pipe request header bytes
VBE.reload_2024-06-20T181903.node67.pipe_out0
0.00 Piped bytes to backend
VBE.reload_2024-06-20T181903.node67.pipe_in 0
0.00 Piped bytes from backend
VBE.reload_2024-06-20T181903.node67.conn3
 .   Concurrent connections to backend
VBE.reload_2024-06-20T181903.node67.req247956
0.02 Backend requests sent

*Thanks & Regards,*
*Uday Kumar*


On Thu, Jun 20, 2024 at 11:36 PM Guillaume Quintard <
guillaume.quint...@gmail.com> wrote:

> Hi Uday,
>
> pz_list and recv_queue are not (to my knowledge) Varnish counters, where
> are you seeing them?
>
> I doubt Varnish is actually replying with 0, so that probably is your
> client faking a response code to have something to show. But that's a
> detail, as the unresponsiveness is real.
>
> Could you share a "varnishstat -1" of the impacted machine?
>
> --
> Guillaume Quintard
>
>
> On Thu, Jun 20, 2024 at 9:30 AM Uday Kumar 
> wrote:
>
>> Hello all,
>>
>> We are facing frequent issues of varnish unresponsiveness for sometime on
>> our production server.
>>
>> During this time we have seen that pz_list is being increased to ~3000
>> and recv_queue is increased to ~130
>> Also, varnish is responding with response code '0' for sometime, which
>> meant unresponsive.
>>
>> This is causing multiple 5xx on front ends.
>>
>> FYR:
>> User request count during this time is normal.
>>
>> Note:
>> During this time, we have confirmed that our backend servers are healthy
>> without any issues.
>>
>>
>> May I know what could be the reason for this behaviour at varnish?
>>
>> Please give me the direction on how to debug this issue.
>> ___
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Matching clients ipv6 addresses using ACLs

2024-06-23 Thread Uday Kumar
Hello everyone,

We currently use ACLs in our Varnish configuration to match clients' IPv4
addresses.

Could you please advise if we can directly replace these IPv4
addresses/subnets with IPv6 addresses/subnets in our ACLs when clients send
IPv6 addresses instead of IPv4 addresses?


Thanks and regards,

Uday Kumar
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
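
For illustration, assuming the ACL is meant to hold both families side by side
(which Varnish ACLs generally allow), a mixed ACL could look like this; the
ACL name and the addresses below are documentation-prefix placeholders, not
values from the thread:

acl trusted_clients {
    "192.0.2.0"/24;      # existing IPv4 subnet
    "2001:db8::"/32;     # IPv6 subnet for the same clients
}

sub vcl_recv {
    if (client.ip !~ trusted_clients) {
        return (synth(403, "Forbidden"));
    }
}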