Re: Matching clients ipv6 addresses using ACLs

2024-06-23 Thread Guillaume Quintard
Cc'ing the mailing list back

On Sun, Jun 23, 2024, 12:16 Guillaume Quintard 
wrote:

> Hi Uday,
>
> Yes, but only replace them if the IPv4 addresses won't be used at all. If
> they are, just add the IPv6 entries on top of the existing IPv4 ones.
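For illustration, a single ACL can mix IPv4 and IPv6 entries, so the existing IPv4 entries can stay while IPv6 ones are added alongside them. A minimal sketch (the ACL name and subnets below are hypothetical):

```vcl
# Hypothetical ACL mixing IPv4 and IPv6 entries side by side.
acl trusted_clients {
    "192.0.2.0"/24;     # existing IPv4 subnet
    "203.0.113.42";     # existing single IPv4 address
    "2001:db8::"/32;    # IPv6 subnet added on top
    "2001:db8::42";     # single IPv6 address
}

sub vcl_recv {
    # ACL matching works the same whether the client connected over v4 or v6.
    if (client.ip !~ trusted_clients) {
        return (synth(403, "Forbidden"));
    }
}
```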
>
> Hope that helps,
>
> On Sun, Jun 23, 2024, 12:15 Uday Kumar  wrote:
>
>> Hello everyone,
>>
>> We currently use ACLs in our Varnish configuration to match clients' IPv4
>> addresses.
>>
>> Could you please advise if we can directly replace these IPv4
>> addresses/subnets with IPv6 addresses/subnets in our ACLs when clients
>> send IPv6 addresses instead of IPv4 addresses?
>>
>>
>> Thanks and regards,
>>
>> Uday Kumar
>> ___
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>


Re: Varnish unresponsive on our server

2024-06-20 Thread Guillaume Quintard
Hi Uday,

pz_list and recv_queue are not (to my knowledge) Varnish counters; where
are you seeing them?

I doubt Varnish is actually replying with 0, so that probably is your
client faking a response code to have something to show. But that's a
detail, as the unresponsiveness is real.

Could you share a "varnishstat -1" of the impacted machine?

-- 
Guillaume Quintard


On Thu, Jun 20, 2024 at 9:30 AM Uday Kumar  wrote:

> Hello all,
>
> We are facing frequent issues of Varnish unresponsiveness for some time on
> our production server.
>
> During this time we have seen that pz_list increases to ~3000 and
> recv_queue increases to ~130.
> Also, Varnish responds with response code '0' for some time, which
> means it is unresponsive.
>
> This is causing multiple 5xx on front ends.
>
> FYR:
> User request count during this time is normal.
>
> Note:
> During this time, we have confirmed that our backend servers are healthy
> without any issues.
>
>
> May I know what could be the reason for this behaviour at varnish?
>
> Please give me the direction on how to debug this issue.


Re: Varnish suddenly started using much more memory

2024-06-13 Thread Guillaume Quintard
Sorry Batanun, this thread got lost in my inbox. Would you be able to
upgrade to 7.5 and see if you get the same results? I'm pretty sure it's a
jemalloc issue, but upgrading should make it clear.
You are on Ubuntu, right? Which version?
-- 
Guillaume Quintard


On Mon, May 20, 2024 at 1:50 AM Batanun B  wrote:

> > Sorry, I should have been clearer, I meant: where are the varnish
> packages coming from? Are they from the official repositories, from
> https://packagecloud.io/varnishcache/ or built from source maybe?
>
> Ah, I see. They come from varnishcache packagecloud. More specifically, we
> use:
>
>
> https://packagecloud.io/install/repositories/varnishcache/varnish60lts/script.deb.sh
>
>
> > you should really invest some time in something like prometheus, it
> would probably have made the issue obvious
>
> Yes, in hindsight we definitely should have done that. I will discuss this
> with my coworkers going forward.
>
>
> > Is there any chance you can run the old version on the server to explore
> the differences?
>
> Possibly, for a limited time. If so, what types of tests would I do? And
> how long would I need to run the old version?
>
> Note that with our setup, we wouldn't be able to run two different images
> at the same time, in the same environment, with both receiving traffic. So
> all traffic would be routed to this version (multiple servers, but all
> running the same image).
>
> An alternative approach that I'm considering is to switch to the old
> image, but manually update the VCL to the new version. If the problem
> remains, then the issue is almost certainly with the VCL. But if the
> problem disappears, then it's more likely something else.
>
>
> > what's the output of: varnishstat -1 -f '*g_bytes'
>
> SMA.default.g_bytes  10951750929  .   Bytes outstanding
> SMA.large.g_bytes 8587329728  .   Bytes outstanding
> SMA.Transient.g_bytes  3177920  .   Bytes outstanding
>
> So, the default storage usage has gone up by 2 GB since my first message
> here, while the others have remained the same. Meanwhile, the total memory
> usage of Varnish has gone up to 26 GB, an increase of 3 GB. So now the
> overhead has gone up by 1 GB to a total of 6 GB.
>
> Going forward, it will be interesting to see how the memory consumption
> changes after the default storage has reached its max (2 GB from where it
> is now). If we're lucky, it will stabilize, and then I'm not sure if it's
> worth it to troubleshoot any further. Otherwise, the free memory would get
> a bit too close to zero for our comfort, with no indication of stopping.
>
> Does Varnish keep track of total available OS memory, and start releasing
> memory by throwing out objects from the cache? Or will it continue to eat
> memory until something fails?
>
>
> > have you tweaked any workspaces/thread parameters?
>
> Nope. As I said, we haven't changed any OS or Varnish configuration.


Re: Preventing Caching in Varnish Based on Backend Response Header

2024-06-11 Thread Guillaume Quintard
Hi,

Don't worry too much about it. Uncacheable objects take a minimal amount of
space in the cache, and if an object suddenly becomes cacheable, you can
insert it in the cache, pushing the uncacheable version out.

I'd say keep 24 hours and worry about big stuff :-)

Cheers,

-- 
Guillaume Quintard


On Tue, Jun 11, 2024 at 4:22 AM Uday Kumar  wrote:

> May I know if there is any way to find the best possible TTL?
> I meant to ask for uncacheable objects
>
> *Thanks & Regards,*
> *Uday Kumar*
>
>
> On Tue, Jun 11, 2024 at 4:11 PM Uday Kumar 
> wrote:
>
>> Hello Guillaume,
>> We have made the required changes at our end, but we have a doubt about
>> the suitable TTL for uncacheable objects:
>>
>> if (beresp.http.Cache-Control ~ "no-cache") {
>> set beresp.ttl = *doubt*;
>> set beresp.uncacheable = true;
>> }
>>
>> FYI:
>> we have a TTL of 24 hours for normal objects which are cacheable.
>>
>> May I know if there is any way to find the best possible TTL?
>>
>> *Thanks & Regards,*
>> *Uday Kumar*
>>
>>
>> On Tue, May 28, 2024 at 7:07 PM Uday Kumar 
>> wrote:
>>
>>> Hello Guillaume,
>>> Great to know about this, it should work for us!
>>> will check this out
>>>
>>> *Thanks & Regards,*
>>> *Uday Kumar*
>>>
>>>
>>> On Tue, May 28, 2024 at 5:53 PM Guillaume Quintard <
>>> guillaume.quint...@gmail.com> wrote:
>>>
>>>> Hi Uday,
>>>>
>>>> Sure, the classic practice will do nicely:
>>>>
>>>> sub vcl_backend_response {
>>>>     if (beresp.http.that-specific-header) {
>>>>         # TTL should match the time during which that header is
>>>>         # unlikely to change
>>>>         # do NOT set it to 0s or less (
>>>>         # https://info.varnish-software.com/blog/hit-for-miss-and-why-a-null-ttl-is-bad-for-you
>>>>         # )
>>>>         set beresp.ttl = 2m;
>>>>         set beresp.uncacheable = true;
>>>>         return (deliver);
>>>>     }
>>>> }
>>>>
>>>> The main trick here is beresp.uncacheable, you do not have to return
>>>> immediately if you still have modifications/checks to do on that response.
>>>>
>>>> Would that work for you?
>>>>
>>>> --
>>>> Guillaume Quintard
>>>>
>>>>
>>>> On Tue, May 28, 2024 at 4:55 AM Uday Kumar 
>>>> wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> We need to prevent caching in Varnish based on a specific header from
>>>>> the backend.
>>>>>
>>>>> Could you please suggest the best approach to achieve this?
>>>>>
>>>>>
>>>>> *Thanks & Regards,*
>>>>> *Uday Kumar*


Re: Preventing Caching in Varnish Based on Backend Response Header

2024-05-28 Thread Guillaume Quintard
Hi Uday,

Sure, the classic practice will do nicely:

sub vcl_backend_response {
    if (beresp.http.that-specific-header) {
        # TTL should match the time during which that header is
        # unlikely to change
        # do NOT set it to 0s or less (
        # https://info.varnish-software.com/blog/hit-for-miss-and-why-a-null-ttl-is-bad-for-you
        # )
        set beresp.ttl = 2m;
        set beresp.uncacheable = true;
        return (deliver);
    }
}

The main trick here is beresp.uncacheable, you do not have to return
immediately if you still have modifications/checks to do on that response.

Would that work for you?

-- 
Guillaume Quintard


On Tue, May 28, 2024 at 4:55 AM Uday Kumar  wrote:

> Hello all,
>
> We need to prevent caching in Varnish based on a specific header from the
> backend.
>
> Could you please suggest the best approach to achieve this?
>
>
> *Thanks & Regards,*
> *Uday Kumar*


Re: Varnish suddenly started using much more memory

2024-05-18 Thread Guillaume Quintard
Sorry, I should have been clearer, I meant: where are the varnish packages
coming from? Are they from the official repositories, from
https://packagecloud.io/varnishcache/ or built from source maybe?

If you don't have old metrics (you should really invest some time in
something like prometheus, it would probably have made the issue obvious),
then we can't really compare anything. Is there any chance you can run the
old version on the server to explore the differences?

Two extra questions:
- what's the output of: varnishstat -1 -f '*g_bytes'
- have you tweaked any workspaces/thread parameters?

Cheers,

On Fri, May 17, 2024, 06:17 Batanun B  wrote:

> Hi,
>
> Naturally, I can't be certain that the "in my mind" trivial VCL changes
> aren't the culprit. But I just can't see the logic in those changes
> causing this massive change in memory usage. But I'll summarize the changes
> here, and maybe you can identify a suspect:
>
> * Modified the xkey header used by the xkey vmod, adding the id of the
> current website
> * Modified the TTL, from 1w to 1h, for a specific type of resource
> existing in maybe 20 versions (ie different urls), each being about 5 kB in
> size
> * Modified the backend probe url, from the startpage (ie full html) to a
> dedicated healthcheck endpoint (much smaller footprint, and much quicker)
>
> That's it. That's all the VCL changes we made in that deployment. And,
> like I said, we did no changes in the OS or Varnish config.
>
>
> > check the difference in passes, if they are about the same, look for
> hit-for-misses,
>
> We don't have those statistics from the old server, so I can't do a
> comparison. But here are the current statistics:
>
> MAIN.s_pass 180721 0.14 Total pass-ed requests seen
> MAIN.cache_hitpass 0 0.00 Cache hits for pass.
> MAIN.cache_hit 3718468 2.86 Cache hits
> MAIN.cache_hit_grace 53903 0.04 Cache grace hits
> MAIN.cache_hitmiss 1129 0.00 Cache hits for miss.
>
>
> > and lastly, look at how long Varnish is trying to cache the average
> object.
>
> I'm not sure how I do that. Is there a varnishstat counter I can look at?
>
> > which packages are you using?
>
> Instead of giving you the full list, I guess it makes more sense to just
> list the one that differ. Below is a diff output of "apt list --installed"
> of before and after the deploy.
>
> Regards
>
>
> 2c2
> < accountsservice/now 0.6.55-0ubuntu12~20.04.5 amd64 [installed,upgradable
> to: 0.6.55-0ubuntu12~20.04.7]
> ---
> > accountsservice/focal-updates,focal-security,now
> 0.6.55-0ubuntu12~20.04.7 amd64 [installed,automatic]
> 23,25c23,25
> < bind9-dnsutils/now 1:9.16.1-0ubuntu2.12 amd64 [installed,upgradable to:
> 1:9.16.48-0ubuntu0.20.04.1]
> < bind9-host/now 1:9.16.1-0ubuntu2.12 amd64 [installed,upgradable to:
> 1:9.16.48-0ubuntu0.20.04.1]
> < bind9-libs/now 1:9.16.1-0ubuntu2.12 amd64 [installed,upgradable to:
> 1:9.16.48-0ubuntu0.20.04.1]
> ---
> > bind9-dnsutils/focal-updates,focal-security,now
> 1:9.16.48-0ubuntu0.20.04.1 amd64 [installed,automatic]
> > bind9-host/focal-updates,focal-security,now 1:9.16.48-0ubuntu0.20.04.1
> amd64 [installed,automatic]
> > bind9-libs/focal-updates,focal-security,now 1:9.16.48-0ubuntu0.20.04.1
> amd64 [installed,automatic]
> 31c31
> < bsdutils/focal-security,now 1:2.34-0.1ubuntu9.3 amd64
> [installed,upgradable to: 1:2.34-0.1ubuntu9.4]
> ---
> > bsdutils/now 1:2.34-0.1ubuntu9.3 amd64 [installed,upgradable to:
> 1:2.34-0.1ubuntu9.6]
> 41c41
> < cloud-init/now 22.4.2-0ubuntu0~20.04.2 all [installed,upgradable to:
> 23.4.4-0ubuntu0~20.04.1]
> ---
> > cloud-init/now 22.4.2-0ubuntu0~20.04.2 all [installed,upgradable to:
> 24.1.3-0ubuntu1~20.04.1]
> 48c48
> < cpio/focal-updates,focal-security,now 2.13+dfsg-2ubuntu0.3 amd64
> [installed,automatic]
> ---
> > cpio/now 2.13+dfsg-2ubuntu0.3 amd64 [installed,upgradable to:
> 2.13+dfsg-2ubuntu0.4]
> 56c56
> < curl/focal-updates,focal-security,now 7.68.0-1ubuntu2.21 amd64
> [installed]
> ---
> > curl/focal-updates,focal-security,now 7.68.0-1ubuntu2.22 amd64
> [installed]
> 68c68
> < distro-info-data/now 0.43ubuntu1.11 all [installed,upgradable to:
> 0.43ubuntu1.15]
> ---
> > distro-info-data/now 0.43ubuntu1.11 all [installed,upgradable to:
> 0.43ubuntu1.16]
> 82c82
> < fdisk/focal-security,now 2.34-0.1ubuntu9.3 amd64 [installed,upgradable
> to: 2.34-0.1ubuntu9.4]
> ---
> > fdisk/now 2.34-0.1ubuntu9.3 amd64 [installed,upgradable to:
> 2.34-0.1ubuntu9.6]
> 119,120c119,120
> < grub-efi-amd64-bin/now 2.06-2ubuntu14.1 amd64 [installed,upgradable to:
> 2.06-2ubuntu14.4]
> < grub-efi-amd64-signed/now 1.187.3~20.04.1+2.06-2ubuntu14.1 amd64
> [installed,upgradable to: 1.187.6~20.04.1+2.06-2ubuntu14.4]
> ---
> > grub-efi-amd64-bin/focal-updates,focal-security,now 2.06-2ubuntu14.4
> amd64 [installed]
> > 

Re: Varnish suddenly started using much more memory

2024-05-16 Thread Guillaume Quintard
Hi,

I feel like the answer is there, somewhere. You said that the deploy
changed something, but that it can't possibly be the deploy.

I'm going to bet that it's the deploy. Most likely you changed something
that messed up the willingness to cache, or your TTL.
First, check the difference in passes; if they are about the same, look for
hit-for-misses, and lastly, look at how long Varnish is trying to cache the
average object. I'm pretty sure one of those changed.

That being said, the memory shouldn't explode like that, which packages are
you using?

-- 
Guillaume Quintard


On Thu, May 16, 2024 at 2:19 AM Batanun B  wrote:

> Hi,
>
> About two weeks ago we deployed some minor changes to our Varnish servers
> in production, and after that we have noticed a big change in the memory
> that Varnish consumes.
>
> Before the deploy, the amount of available memory on the servers were very
> stable, around 25 GB, for months on end. After the deploy, the amount of
> available memory dropped below 25 GB within 6 hours, and is dropping about
> 1 GB more each day, with no indication that it will level out before
> hitting rock bottom.
>
> There was no change in traffic patterns during the time of the deploy. And
> we didn't change any OS or Varnish configuration. The deploy consisted only
> of trivial VCL changes, like changing the backend probe url to a dedicated
> healthcheck endpoint, and tweaking the ttl for a minor resource. Nothing of
> which could explain this massive change in memory usage.
>
> We have configured varnish with "-s default=malloc,12G -s
> large=malloc,8G", where the combined 20GB is about 60% of the total server
> RAM of 32GB. This is below the recommended 75% maximum I've seen in many
> places.
>
> Currently Varnish uses about 73% of the server memory, or 23GB (the RES
> column in htop). The default storage uses about 10 GB
> (SMA.default.g_bytes), while the large storage uses 8 GB. And the transient
> storage is currently about 2 MB (SMA.Transient.g_bytes). In total this
> results in about 18 GB. So what is that additional 5 GB used for? How can I
> troubleshoot that?
>
> And, more importantly, what could possibly explain this sudden change?
>
> The Ubuntu version stayed the same (20.04.5 LTS), and the Varnish version
> too (6.0.11-1~focal), as well as varnish-modules (0.15.1). I notice some
> differences in some installed packages of the servers, but nothing that
> stands out to me (but I'm no linux expert).
>
> Regards


Re: Append uniqueid to a http request at varnish

2024-05-01 Thread Guillaume Quintard
Hi Uday,

I'm not sure what went wrong, but this Dockerfile works:
FROM centos:7

RUN \
    set -ex; \
    ulimit -n 10240; \
    yum install -y make automake pkg-config libtool python-docutils \
        autoconf-archive uuid-devel epel-release; \
    curl -s https://packagecloud.io/install/repositories/varnishcache/varnish5/script.rpm.sh | bash; \
    yum install -y varnish-devel; \
    curl -O https://raw.githubusercontent.com/varnish/toolbox/master/install-vmod/install-vmod; \
    chmod +x install-vmod; \
    ./install-vmod https://github.com/otto-de/libvmod-uuid/archive/refs/heads/5.x.tar.gz

Also, I know you have heard it before, but Varnish 5 is ancient, please
don't use it and upgrade to something more recent, either 7.5 or to the 6.0
LTS.

Let us know how it goes.

-- 
Guillaume Quintard


On Tue, Apr 30, 2024 at 1:07 AM Uday Kumar  wrote:

> hello Guillaume,
>
> I am trying to install vmod_uuid on my centOS 7 machine
>
> Resource i used:
> https://github.com/otto-de/libvmod-uuid/blob/5.x/INSTALL.rst
>
> varnish version: 5.2.1
>
> I am getting below errors while running *make *command
>
> make[1]: Entering directory `/usr/local/src/libvmod-uuid'
> Making all in src
> make[2]: Entering directory `/usr/local/src/libvmod-uuid/src'
>   CC   vmod_uuid.lo
> In file included from vmod_uuid.c:35:0:
> vcc_if.h:11:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid(VRT_CTX, struct vmod_priv *);
>  ^
> vcc_if.h:11:31: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid(VRT_CTX, struct vmod_priv *);
>^
> vcc_if.h:12:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid_v1(VRT_CTX, struct vmod_priv *);
>  ^
> vcc_if.h:12:34: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid_v1(VRT_CTX, struct vmod_priv *);
>   ^
> vcc_if.h:13:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid_v3(VRT_CTX, struct vmod_priv *, VCL_STRING,
>  ^
> vcc_if.h:13:34: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid_v3(VRT_CTX, struct vmod_priv *, VCL_STRING,
>   ^
> vcc_if.h:15:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid_v4(VRT_CTX, struct vmod_priv *);
>  ^
> vcc_if.h:15:34: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid_v4(VRT_CTX, struct vmod_priv *);
>   ^
> vcc_if.h:16:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING vmod_uuid_v5(VRT_CTX, struct vmod_priv *, VCL_STRING,
>  ^
> vcc_if.h:16:34: error: expected ‘)’ before ‘struct’
>  VCL_STRING vmod_uuid_v5(VRT_CTX, struct vmod_priv *, VCL_STRING,
>   ^
> vmod_uuid.c:48:17: error: expected ‘)’ before ‘int’
>  mkuuid(VRT_CTX, int utype, uuid_t *uuid, const char *str, va_list ap)
>  ^
> vmod_uuid.c:76:1: error: unknown type name ‘VCL_STRING’
>  _uuid(VRT_CTX, uuid_t *uuid, int utype, ...)
>  ^
> vmod_uuid.c:76:16: error: expected ‘)’ before ‘uuid_t’
>  _uuid(VRT_CTX, uuid_t *uuid, int utype, ...)
> ^
> vmod_uuid.c:104:21: error: expected ‘)’ before ‘void’
>  free_uuids(VRT_CTX, void *priv)
>  ^
> vmod_uuid.c:116:39: error: array type has incomplete element type
>  static const struct vmod_priv_methods uuid_priv_task_methods[1] = {{
>^
> vmod_uuid.c:117:3: error: field name not in record or union initializer
>.magic = VMOD_PRIV_METHODS_MAGIC,
>^
> vmod_uuid.c:117:3: error: (near initialization for
> ‘uuid_priv_task_methods’)
> vmod_uuid.c:117:12: error: ‘VMOD_PRIV_METHODS_MAGIC’ undeclared here (not
> in a function)
>.magic = VMOD_PRIV_METHODS_MAGIC,
> ^
> vmod_uuid.c:118:3: error: field name not in record or union initializer
>.type = "vmod_uuid_priv_task",
>^
> vmod_uuid.c:118:3: error: (near initialization for
> ‘uuid_priv_task_methods’)
> vmod_uuid.c:119:3: error: field name not in record or union initializer
>.fini = free_uuids
>^
> vmod_uuid.c:119:3: error: (near initialization for
> ‘uuid_priv_task_methods’)
> vmod_uuid.c:119:11: error: ‘free_uuids’ undeclared here (not in a function)
>.fini = free_uuids
>^
> vmod_uuid.c:123:20: error: expected ‘)’ before ‘struct’
>  get_uuids(VRT_CTX, struct vmod_priv *priv, uuid_t **uuid_ns)
> ^
> vmod_uuid.c:163:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING
>  ^
> vmod_uuid.c:164:23: error: expected ‘)’ before ‘struct’
>  vmod_uuid_v1(VRT_CTX, struct vmod_priv *priv)
>^
> vmod_uuid.c:172:1: error: unknown type name ‘VCL_STRING’
>  VCL_STRING
>  ^
> vmod_uuid.c:173:23: error: expected ‘)’ before ‘str

Re: Append uniqueid to a http request at varnish

2024-04-24 Thread Guillaume Quintard
Hi Uday,

I feel like we've explored this last year:
https://varnish-cache.org/lists/pipermail/varnish-misc/2023-May/027238.html

I don't think the answer has changed much: vmod-uuid is your best bet here.
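For context, a sketch of what that could look like with vmod-uuid (the `uniqueid` parameter name is an assumption; appending in vcl_backend_fetch rather than vcl_recv keeps the per-request id out of the cache key, so caching is unaffected):

```vcl
import uuid;

sub vcl_backend_fetch {
    # Append a v4 UUID to the backend request URL only; req.url,
    # and therefore the cache key, stays untouched.
    if (bereq.url ~ "\?") {
        set bereq.url = bereq.url + "&uniqueid=" + uuid.uuid_v4();
    } else {
        set bereq.url = bereq.url + "?uniqueid=" + uuid.uuid_v4();
    }
}
```

The appended URL also shows up in varnishlog's BereqURL records, which covers the "stored in Varnish logs" requirement.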

Please let me know if I'm missing some requirements.

Kind regards,

-- 
Guillaume Quintard


On Wed, Apr 24, 2024 at 4:26 AM Uday Kumar  wrote:

> Hello all,
>
> We follow the below architecture in our production environment:
> User request ---> Varnish ---> Tomcat Backend
>
> We have a requirement of generating a unique id at Varnish that can be
> appended to a request url,
> so that it can be propagated to the backend and will also be useful in
> tracking errors efficiently.
>
> varnish version used: varnish-5.2.1
>
> Example:
> Original request:
> /search/test?q=bags&source=mobile
>
> After appending the unique id [this needs to be sent to the backend and
> stored in the Varnish logs]:
> /search/test?q=bags&source=mobile&uniqueid=abc123
>
> Please help us know if there is any way to do this at varnish
>
> *Thanks & Regards,*
> *Uday Kumar*


A plea for a more useful and discoverable built-in VCL

2023-11-05 Thread Guillaume Quintard
Hi everybody!

A bunch of questions I regularly get regarding Varnish behavior revolve
around the built-in vcl, mainly, I get one of these three:
- why is Varnish not caching?
- how come something is happening in vcl_X even though I don't have it in
my vcl?
- what on earth is that built-in vcl you are talking about?

As usual, I have a half-baked solution with a bunch of problems, which will
hopefully inspire smarter people to fix the issue properly.

What I came up with is here:
https://github.com/varnish/toolbox/tree/verbose_builtin/vcls/verbose_builtin
Essentially, use std.log() to explain what the built-in code is doing.
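For a taste of the approach, a minimal illustrative sketch (not the actual verbose_builtin code) that mirrors one built-in vcl_recv rule and narrates it:

```vcl
import std;

sub vcl_recv {
    # Same check as the built-in VCL, but explained out loud: the message
    # appears in varnishlog as a VCL_Log record, so "why didn't it cache?"
    # becomes answerable from the logs.
    if (req.http.Authorization) {
        std.log("builtin: Authorization header present, not caching (pass)");
        return (pass);
    }
}
```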

At the moment, it's a purely opt-in solution, meaning that you need to know
about builtin.vcl to find it, which doesn't really help with
discoverability, but I intend on including that code in the docker image,
which should raise awareness a bit.
The absolute best in my mind would be to have something similar in core,
but I can see how importing std would be a hurdle. Maybe as part of
packaging, we could include that file in the provided default.vcl?

I dismissed the performance penalty of printing a few more lines as
negligible, but I could be wrong about that.

There's also the question of phrasing, so we can have a message that is
concise but also gives enough information to debug the behavior. But that's
very minor, and the least of our worries here.

Thoughts?

-- 
Guillaume Quintard


Re: Cache poisening

2023-10-29 Thread Guillaume Quintard
Hi Ruud,

Sorry for the delay, for some reason your email ended up in my spam folder,
I just saw it today.

Cache poisoning is a vast subject, and in absence of more context the
answer to your question is probably going to be "yes, but no but still,
intrinsically yes".

Yes, because you can mess up your configuration with something like:
sub vcl_hash {
    hash_data("foo");
    return (lookup);
}
and boom, all objects are basically going to be cached under the same cache
key, which is super bad, don't do that.
The freedom you get through configuration can turn against you. Here's my
favorite example to explain it:
sub vcl_hash {
    hash_data(req.url);
    hash_data(req.http.host);
    if (req.http.a) {
        hash_data(req.http.a);
    }
    if (req.http.b) {
        hash_data(req.http.b);
    }
    return (lookup);
}
Which isn't nearly as dumb as the original example, but which will hash
these two requests the same way:
curl example.com/foo -H "a: bar"
curl example.com/foo -H "b: bar"
And if somebody knows how you hash your objects and there's a similar
flaw in the hashing logic, you can get cache poisoning.

No, because Varnish is an extremely secure piece of software with an
excellent security track record, and I don't think it ever got a CVE that
poisoned the cache. Not to say it can't/won't happen, but sometimes past
performance is a good indicator of future results.

So, even though the software is safe and secure, you can still shoot
yourself in the foot if you want to (or are not careful). Thousands of
cases of cache poisoning happen yearly because somebody forgot to tell
their CDN that the querystring needs to be part of the cache key AND sorted.
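As an illustration of the sorted-querystring point, vmod-std ships a helper for exactly this; a sketch of using it in vcl_hash:

```vcl
import std;

sub vcl_hash {
    # Sort the query parameters so /foo?a=1&b=2 and /foo?b=2&a=1
    # hash to the same cache object instead of two separate ones.
    hash_data(std.querysort(req.url));
    hash_data(req.http.host);
    return (lookup);
}
```

The same idea also fixes the a/b example above: hashing a distinguishing label together with each header value (e.g. `hash_data("a=" + req.http.a);`) keeps the two requests from colliding.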

Hopefully this helps, let me know if you have more context to narrow the
scope of that very vast topic :-)

Ah, and while I'm here: please don't use massively antiquated Varnish
versions. 4.1 has been EOL a while ago, it's really not recommended to use.

Cheers,

-- 
Guillaume Quintard


On Fri, Oct 27, 2023 at 12:54 AM  wrote:

> Hi,
>
>
>
> Is there anything known about Varnish having problems with cache poisoning?
> And if yes, how can this be avoided in the config?
>
> We are running an old version of Varnish (varnish-4.1.8 revision d266ac5c6)
>
>
>
>
>
> Met vriendelijke groet / With kind regards,
>
>
>
>
>
> *Ruud Peters*
>
> *Technisch Beheerder TAM3*
>
> Integration SA DevOps 3
>
>
>
> Email: ruud.pet...@kpn.com
>
> Phone  : +31630736741
>
>
>
> Stationsplein 18 6221 BT, Maastricht
>
>
> (On Mondays and Thursdays I’m in the office until about 14:00)
>
>
>
> Handelsregister KvK Den Haag
>
> Nr. 27124701
>
>
>
>
>


Re: Block Unauthorized Requests at Varnish [Code Optimization]

2023-10-14 Thread Guillaume Quintard
Hello Uday,

Quick follow-up, as I realize that templating can be a bit scary when
confronted for the first time, and you are far from the first one to be
curious about it, so I've committed this:
https://github.com/varnish/toolbox/tree/master/gotemplate-example
It probably won't get you very far, but it should at least get you started,
and help you understand how templating can make things a tiny bit simpler by
splitting data from business logic, for example to add more IPs/ACLs or
sources without editing the VCL manually.
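To make the templating idea concrete, here is a minimal stdlib-only Python sketch (not the toolbox's Go-template approach; the source names and subnets are hypothetical) that generates the repetitive ACL blocks from a plain data structure:

```python
# Generate repetitive VCL ACL blocks from data instead of editing VCL by
# hand; hypothetical sources and IPs, for illustration only.
sources = {
    "mobile": ["192.0.2.0/24", "198.51.100.7"],
    "desktop": ["203.0.113.0/24"],
}

def render_acl(name, entries):
    """Render one VCL acl block for a source and its IPs/subnets."""
    lines = [f"acl {name}_source {{"]
    for entry in entries:
        if "/" in entry:  # subnet: VCL wants "ip"/mask
            ip, mask = entry.split("/")
            lines.append(f'    "{ip}"/{mask};')
        else:  # single address
            lines.append(f'    "{entry}";')
    lines.append("}")
    return "\n".join(lines)

vcl = "\n\n".join(render_acl(n, e) for n, e in sources.items())
print(vcl)
```

Adding a new source or IP then means touching only the data, and the generated VCL stays uniform by construction.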

Hope that helps.

-- 
Guillaume Quintard


On Thu, Oct 12, 2023 at 12:36 PM Uday Kumar  wrote:

> > That's mainly how computers work, processing will be linear. You *could*
> create a vmod that packs ACLs into a hashmap to simplify the apparent
> logic, but you will pay that price developing the vmod, and for a very
> modest performance gain. If you have less than 50 sources, or even less
> than 100, I don't think it's worth agonizing over that kind of optimization
> (unless you've actually measured and you did see a performance drop).
>
> Okay, Thanks for your suggestion!
>
> >  I assume that the VCL is currently committed in a repo somewhere and
> gets edited every time you need to add a new IP or source. If so, it's not
> great because editing such repetitive code is error-prone, and therefore
> you should use templating to create the VCL from a simpler, more
> maintainable source.
>
> Sure, will definitely explore!
>
> Thanks & Regards
> Uday Kumar
>
>
> On Fri, Oct 13, 2023 at 12:35 AM Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> > In the above example, if the request URL is source=tablet [for which
>> condition is present at the end], still I have to check all the above
>> conditions.
>>
>> That's mainly how computers work, processing will be linear. You *could*
>> create a vmod that packs ACLs into a hashmap to simplify the apparent
>> logic, but you will pay that price developing the vmod, and for a very
>> modest performance gain. If you have less than 50 sources, or even less
>> than a 100, I don't think it's worth agonizing over that kind of
>> optimization (unless you've actually measured and you did see a
>> performance drop).
>>
>> > One thing I would do though is to generate the VCL from a source file,
>> like a YAML one:
>>
>> All I'm saying is that you should focus on increasing the maintainability
>> of the project before worrying about performance. I assume that the VCL is
>> currently committed in a repo somewhere and gets edited every time you need
>> to add a new IP or source. If so, it's not great because editing such
>> repetitive code is error-prone, and therefore you should use templating to
>> create the VCL from a simpler, more maintainable source.
>>
>> Tools like go templates or jinja can provide that feature and save you
>> from repeating yourself when writing configuration.
>>
>> --
>> Guillaume Quintard
>>
>>
>> On Thu, Oct 12, 2023 at 11:46 AM Uday Kumar 
>> wrote:
>>
>>> Hi Guillaume,
>>>
>>> I don't think those are redundant checks, from what you are showing,
>>> they are all justified. Sure, there may be a bunch of them, but you have to
>>> go through them.
>>>
>>> By redundant I meant, I have to write multiple checks for each source
>>> and list of IPs associated with it. [which would be *worse *if the
>>> number of sources are huge]
>>>
>>> *Example:*
>>>
>>> if (
>>>
>>> (req.url ~ "source=mobile" && client.ip !~ mobile_source) ||
>>>
>>> (req.url ~ "source=desktop" && client.ip !~ desktop_source) ||
>>>
>>> (req.url ~ "source=laptop" && client.ip !~ laptop_source) ||
>>>
>>> (req.url ~ "source=tablet" && client.ip !~ tablet_source)
>>>
>>> ) {
>>>
>>>return (synth(403, "access denied!"));
>>>
>>> }
>>>
>>>
>>> In the above example, if the request URL is source=tablet *[for which
>>> condition is present at the end]*, still I have to check all the above
>>> conditions.
>>>
>>>
>>>
>>>
>>>
>>> One thing I would do though is to generate the VCL from a source file,
>>> like a YAML one:
>>>
>>> Didn't understand, can you please elaborate?
>>>
>>> Thanks & Regards
>>> Uday Kumar
>>>
>>>
>>> On Thu, Oct 12, 2023 at 11:11 PM Guillaume Quintard <

Re: Block Unauthorized Requests at Varnish [Code Optimization]

2023-10-12 Thread Guillaume Quintard
> In the above example, if the request URL is source=tablet [for which
condition is present at the end], still I have to check all the above
conditions.

That's mainly how computers work: processing will be linear. You *could*
create a vmod that packs ACLs into a hashmap to simplify the apparent
logic, but you will pay that price developing the vmod, and for a very
modest performance gain. If you have less than 50 sources, or even less
than a 100, I don't think it's worth agonizing over that kind of
optimization (unless you've actually measured and you did see a
performance drop).

> One thing I would do though is to generate the VCL from a source file,
like a YAML one:

All I'm saying is that you should focus on increasing the maintainability
of the project before worrying about performance. I assume that the VCL is
currently committed in a repo somewhere and gets edited every time you need
to add a new IP or source. If so, it's not great because editing such
repetitive code is error-prone, and therefore you should use templating to
create the VCL from a simpler, more maintainable source.

Tools like go templates or jinja can provide that feature and save you from
repeating yourself when writing configuration.

-- 
Guillaume Quintard


On Thu, Oct 12, 2023 at 11:46 AM Uday Kumar  wrote:

> Hi Guillaume,
>
> I don't think those are redundant checks, from what you are showing, they
> are all justified. Sure, there may be a bunch of them, but you have to go
> through them.
>
> By redundant I meant, I have to write multiple checks for each source and
> list of IPs associated with it. [which would be *worse *if the number of
> sources are huge]
>
> *Example:*
>
> if (
>
> (req.url ~ "source=mobile" && client.ip !~ mobile_source) ||
>
> (req.url ~ "source=desktop" && client.ip !~ desktop_source) ||
>
> (req.url ~ "source=laptop" && client.ip !~ laptop_source) ||
>
> (req.url ~ "source=tablet" && client.ip !~ tablet_source)
>
> ) {
>
>return (synth(403, "access denied!"));
>
> }
>
>
> In the above example, if the request URL is source=tablet *[for which
> condition is present at the end]*, still I have to check all the above
> conditions.
>
>
>
>
>
> One thing I would do though is to generate the VCL from a source file,
> like a YAML one:
>
> Didn't understand, can you please elaborate?
>
> Thanks & Regards
> Uday Kumar
>
>
> On Thu, Oct 12, 2023 at 11:11 PM Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> Hi Uday,
>>
>> I don't think those are redundant checks, from what you are showing, they
>> are all justified. Sure, there may be a bunch of them, but you have to go
> through them.
>>
>> One thing I would do though is to generate the VCL from a source file,
>> like a YAML one:
>>
>> mobile:
>>   - IP1
>>   - IP2
>>   - IP3
>> desktop:
>>   - IP4
>>   - IP5
>>   - IP6
>>
>>
>> From that, you can build the VCL without having to manually write
>> "client.ip" or "(req.url ~ "source=" every time.
>>
>> --
>> Guillaume Quintard
>>
>>
>> On Thu, Oct 12, 2023 at 10:17 AM Uday Kumar 
>> wrote:
>>
>>> Hello everyone,
>>>
>>> We use varnish in our production environment for caching content.
>>>
>>> Our Requirement:
>>>
>>> We are trying to block unauthorized requests at varnish based on the
>>> source parameter in the URL and the client IP in the request header.
>>>
>>> For example:
>>>
>>> Sample URL:
>>>
>>> www.hostname:port/path?source=mobile= bags
>>>
>>> Let's assume there are 3 IPs [which are allowed to access varnish]
>>> associated with the above request of mobile source.
>>>
>>> i.e *IP1, IP2, IP3*
>>>
>>> So if any request comes with the source as *mobile *and client-ip as
>>> *IP4*, it's treated as an unauthorized request and should be blocked at
>>> varnish.
>>>
>>>
>>> What we have done for blocking?
>>>
>>> *Sample URL:*
>>> www.hostname:port/path?source=mobile= bags
>>>
>>> Created a map using ACL as below:
>>>
>>> acl mobile_source{
>>>
>>>   "IP1";
>>>
>>>   "IP2";
>>>
>>>   "IP3";
>>>
>>> }
>>>
>>> If(req.url ~ "source=mobile" && client.ip !~ mobile_source) {
&

Re: Block Unauthorized Requests at Varnish [Code Optimization]

2023-10-12 Thread Guillaume Quintard
Hi Uday,

I don't think those are redundant checks, from what you are showing, they
are all justified. Sure, there may be a bunch of them, but you have to go
through them.

One thing I would do though is to generate the VCL from a source file, like
a YAML one:

mobile:
  - IP1
  - IP2
  - IP3
desktop:
  - IP4
  - IP5
  - IP6


From that, you can build the VCL without having to manually write
"client.ip" or "(req.url ~ "source=" every time.

-- 
Guillaume Quintard


On Thu, Oct 12, 2023 at 10:17 AM Uday Kumar  wrote:

> Hello everyone,
>
> We use varnish in our production environment for caching content.
>
> Our Requirement:
>
> We are trying to block unauthorized requests at varnish based on the
> source parameter in the URL and the client IP in the request header.
>
> For example:
>
> Sample URL:
>
> www.hostname:port/path?source=mobile= bags
>
> Let's assume there are 3 IPs [which are allowed to access varnish]
> associated with the above request of mobile source.
>
> i.e *IP1, IP2, IP3*
>
> So if any request comes with the source as *mobile *and client-ip as *IP4*,
> it's treated as an unauthorized request and should be blocked at varnish.
>
>
> What we have done for blocking?
>
> *Sample URL:*
> www.hostname:port/path?source=mobile= bags
>
> Created a map using ACL as below:
>
> acl mobile_source{
>
>   "IP1";
>
>   "IP2";
>
>   "IP3";
>
> }
>
> if (req.url ~ "source=mobile" && client.ip !~ mobile_source) {
>
>return (synth(403, "varnish access denied!"));
>
> }
>
>
> The problem we are facing:
>
> The source parameter can have different values like mobile, desktop,
> laptop, tablet, etc. and each value can have different IPs associated with
> it.
>
> ACL Rules will be as below:
>
> acl mobile_source{
>
>   "IP1";
>
>   "IP2";
>
>   "IP3";
>
> }
>
> acl desktop_source{
>
>   "IP4";
>
>   "IP5";
>
>   "IP6";
>
> }
>
> and so on,
>
>
> If we wanted to block unauthorized access from different source vs IP
> combinations, we would have to add that many conditions as below.
>
> if (
>
> (req.url ~ "source=mobile" && client.ip !~ mobile_source) ||
>
> (req.url ~ "source=desktop" && client.ip !~ desktop_source) ||
>
> (req.url ~ "source=laptop" && client.ip !~ laptop_source) ||
>
> (req.url ~ "source=tablet" && client.ip !~ tablet_source)
>
> ) {
>
>return (synth(403, "access denied!"));
>
> }
>
> This becomes worse, if we have 10's or 20's of source values.
>
> Our question:
>
> We would like to know if there is any way to optimize the code by
> removing redundant checks so that we can scale it even if we have many
> sources vs IP combinations.
>
>
> Thanks & Regards
> Uday Kumar
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Request restriction based on IP and url parameter

2023-10-05 Thread Guillaume Quintard
Hi!

It's completely possible, easy, and recommended to do that at the
varnish level (at least if you want to cache that content).

How many IPs are you actually allowing? Are they actual IPs or CIDR blocks?

Cheers,

On Thu, Oct 5, 2023, 09:55 Anjali Maurya 
wrote:

> Hi team,
> We are trying to restrict unauthorized requests at varnish based on a
> parameter and IP associated with the parameter. The parameter value is
> present in the URL and the IP is present in the header. So, we want to know
> if it is possible to implement this restriction based on parameter value
> and associated IP.
> We have different values and associated IPs.
>
> For example:
> URL: hostname:port/path?source=mobile= bags
> There are 3 IPs associated with mobile source.
> mobile: IP1, IP2, IP3
>
> So if any request comes with mobile and IP4, that is an unauthorized
> request and should be blocked at varnish.
>
> Can we do this at varnish?
>
> If yes, then which approach will be more appropriate handling this at the
> varnish level or handling it using Java code at the API level?
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Caching Modified URLs by Varnish instead of the original requested URL

2023-09-01 Thread Guillaume Quintard
Thank you so much Geoff for that very useful knowledge dump!

Good call out on the .*, I realized I carried them over too, when I
copy-pasted the regex from the pure vcl example (where it's needed) to the
vmod one.

And so, just to be clear about it:
- vmod-re is based on libpcre2
- vmod-re2 is based on libre2
Correct?

I see no way I'm going to misremember that, at all :-D

-- 
Guillaume Quintard


On Fri, Sep 1, 2023 at 7:47 AM Geoff Simmons  wrote:

> Sorry, I get nerdy about this subject and can't help following up.
>
> I said:
>
> > - pcre2 regex matching is generally faster than re2 matching. The point
> > of re2 regexen is that matches won't go into catastrophic backtracking
> > on pathological cases.
>
> Should have mentioned that pcre2 is even better at subexpression
> capture, which is what the OP's question is all about.
>
> > sub vcl_init {
> >  new query_pattern = re.regex(".*(q=)(.*?)(\&|$).*");
> > }
>
> OMG no. Like this please:
>
> new query_pattern = re.regex("\b(q=)(.*?)(?:\&|$)");
>
> I have sent an example of a pcre regex with .* (two of them!) to a
> public mailing list, for which I will burn in hell.
>
> To match a name-value pair in a cookie, use a regex with \b for 'word
> boundary' in front of the name. That way it will match either at the
> beginning of the Cookie value, or following an ampersand.
>
> And ?: tells pcre not to bother capturing the last expression in
> parentheses (they're just for grouping).
>
> Avoid .* in pcre regexen if you possibly can. You can, almost always.
>
> With .* at the beginning, the pcre matcher searches all the way to the
> end of the string, and then backtracks all the way back, looking for the
> first letter to match. In this case 'q', and it will stop and search and
> backtrack at any other 'q' that it may find while working backwards.
>
> pcre2 fortunately has an optimization that ignores a trailing .* if it
> has found a match up until there, so that it doesn't busily match the
> dot against every character left in the string. So this time .* does no
> harm, but it's superfluous, and violates the golden rule of pcre: avoid
> .* if at all possible.
>
> Incidentally, this is an area where re2 does have an advantage over
> pcre2. The efficiency of pcre2 matching depends crucially on how you
> write the regex, because details like \b instead of .* give it hints for
> pruning the search. While re2 matching usually isn't as fast as pcre2
> matching against well-written patterns, re2 doesn't depend so much on
> that sort of thing.
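To see concretely what the `\b` anchor and the non-capturing `(?:...)` group buy you, here is a quick demonstration using Python's `re` module, whose syntax for these constructs matches pcre; the sample query strings are invented for illustration:

```python
import re

# Geoff's pattern: \b requires a word boundary before "q=", so it matches
# at the start of the string or right after "&", and (?:\&|$) groups the
# alternation without capturing it.
pattern = re.compile(r"\b(q=)(.*?)(?:&|$)")

m = pattern.search("source=mobile&q=cricket bat&country=IN")
print(m.group(1), m.group(2))  # q= cricket bat

# \b keeps the pattern from matching inside another parameter name:
assert pattern.search("freq=5") is None
```

Note that without the `\b`, the pattern would happily capture the tail of `freq=5`, which is exactly the kind of false match word boundaries are for.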
>
>
> OK I can chill now,
> Geoff
> --
> ** * * UPLEX - Nils Goroll Systemoptimierung
>
> Scheffelstraße 32
> 22301 Hamburg
>
> Tel +49 40 2880 5731
> Mob +49 176 636 90917
> Fax +49 40 42949753
>
> http://uplex.de
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Caching Modified URLs by Varnish instead of the original requested URL

2023-08-31 Thread Guillaume Quintard
I'm pretty sure it's correctly lowercasing "\2". The problem is
that you want to lowercase the *value* referenced by "\2" instead.

On this, I don't think you have a choice, you need to make that captured
group its own string, lowercase it, and only then concatenate it. Something
like:

set req.http.hash-url =
    regsuball(req.http.hash-url, ".*(q=)(.*?)(\&|$).*", "\1") +
    std.tolower(regsuball(req.http.hash-url, ".*(q=)(.*?)(\&|$).*", "\2")) +
    regsuball(req.http.hash-url, ".*(q=)(.*?)(\&|$).*", "\3");

It's disgusting, but eh, we started with regex, so...

Other options include vmod_querystring
<https://github.com/Dridi/libvmod-querystring/blob/master/src/vmod_querystring.vcc.in>
(Dridi might possibly be of assistance on this topic) and vmod_urlplus
<https://docs.varnish-software.com/varnish-enterprise/vmods/urlplus/#query_get>
(Varnish
Enterprise), and the last, and possibly most promising one, vmod_re2
<https://gitlab.com/uplex/varnish/libvmod-re2/-/blob/master/README.md> which
would allow you to do something like

if (myset.match(".*(q=)(.*?)(\&|$).*")) {
    set req.http.hash-url = myset.matched(1) + std.tolower(myset.matched(2)) +
        myset.matched(3);
}
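The underlying pitfall is easy to reproduce outside Varnish: `std.tolower("\2")` lowercases the literal two-character string `"\2"`, because the backreference is only expanded inside `regsub`'s replacement string, not before the function call. The sketch below shows the working shape of the fix, lowercasing only the captured value; the sample URL and parameter names are illustrative:

```python
import re

# Lowercase only the value of the q parameter, leaving the rest of the
# query string untouched. A function replacement receives the match
# object, so the captured value can be transformed before substitution.
url = "/search/ims?q=CRICKET bat&country_code=IN"

fixed = re.sub(r"(q=)([^&]*)",
               lambda m: m.group(1) + m.group(2).lower(),
               url)
print(fixed)  # /search/ims?q=cricket bat&country_code=IN
```

In VCL there is no function-replacement form, which is why the capture has to be cut out, lowercased, and concatenated back as three separate expressions.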

-- 
Guillaume Quintard


On Thu, Aug 31, 2023 at 1:03 AM Uday Kumar  wrote:

> Hi Guillaume,
>
> In the process of modifying the query string in VCL code, we have a
> requirement of *lowercasing value of specific parameter*, instead of the 
> *whole
> query string*
>
> *Example Request URL:*
> /search/ims?q=*CRICKET bat*_code=IN
>
> *Requirement:*
> We have to modify the request URL by lowercasing the value of only the *q
> *parameter
> i.e ./search/ims?q=*cricket bat*_code=IN
>
> *For that, we have found below regex:*
> set req.http.hash-url = regsuball(req.http.hash-url, "(q=)(.*?)(\&|$)",
> "\1" + std.tolower("\2") + "\3");
>
> *ISSUE:*
> std.tolower("\2") in the above statement is *not lowercasing* the
> string that's captured, but if I test it using std.tolower("SAMPLE"), it's
> lowercasing as expected.
>
> 1. May I know why it's not lowercasing if *std.tolower("\2") is used*?
> 2. Also, please provide possible optimal solutions for the same. (using
> regex)
>
> Thanks & Regards
> Uday Kumar
>
>
> On Wed, Aug 23, 2023 at 12:01 PM Uday Kumar 
> wrote:
>
>> Hi Guillaume,
>>
>> *use includes and function calls*
>> This is great, thank you so much for your help!
>>
>> Thanks & Regards
>> Uday Kumar
>>
>>
>> On Wed, Aug 23, 2023 at 1:32 AM Guillaume Quintard <
>> guillaume.quint...@gmail.com> wrote:
>>
>>> Hi Uday,
>>>
>>> I'm not exactly sure how to read those diagrams, so I apologize if I'm
>>> missing the mark or if I'm too broad here.
>>>
>>> There are a few points I'd like to attract your attention to. The first
>>> one is that varnish doesn't cache the request or the URL. The cache is
>>> essentially a big hashmap/dictionary/database, in which you store the
>>> response. The request/url is the key for it, so you need to have it in its
>>> "final" form before you do anything.
>>>
>>> From what I read, you are not against it, and you just want to sanitize
>>> the URL in vcl_recv, but you don't like the idea of making the main file
>>> too unwieldy. If I got that right, then I have a nice answer for you: use
>>> includes and function calls.
>>>
>>> As an example:
>>>
>>> # cat /etc/varnish/url.vcl
>>> sub sanitize_url {
>>>   # do whatever modifications you need here
>>> }
>>>
>>> # cat /etc/varnish/default.vcl
>>> include "./url.vcl";
>>>
>>> sub vcl_recv {
>>>   call sanitize_url;
>>> }
>>>
>>>
>>> That should get you going.
>>>
>>> Hopefully I didn't miss the mark too much here, let me know if I did.
>>>
>>> --
>>> Guillaume Quintard
>>>
>>>
>>> On Tue, Aug 22, 2023 at 3:45 AM Uday Kumar 
>>> wrote:
>>>
>>>> Hello All,
>>>>
>>>>
>>>> For our spring boot application, we are using Varnish Caching in a
>>>> production environment.
>>>>
>>>>
>>>>
>>>>
>>>> Requirement: [To utilize cache effectively]
>>>>
>>>> Modify the URL (Removal of unnecessary parameters) while caching the
>>>> user requ

Re: Caching Modified URLs by Varnish instead of the original requested URL

2023-08-22 Thread Guillaume Quintard
Hi Uday,

I'm not exactly sure how to read those diagrams, so I apologize if I'm
missing the mark or if I'm too broad here.

There are a few points I'd like to attract your attention to. The first one
is that varnish doesn't cache the request or the URL. The cache is
essentially a big hashmap/dictionary/database, in which you store the
response. The request/url is the key for it, so you need to have it in its
"final" form before you do anything.

From what I read, you are not against it, and you just want to sanitize the
URL in vcl_recv, but you don't like the idea of making the main file too
unwieldy. If I got that right, then I have a nice answer for you: use
includes and function calls.

As an example:

# cat /etc/varnish/url.vcl
sub sanitize_url {
  # do whatever modifications you need here
}

# cat /etc/varnish/default.vcl
include "./url.vcl";

sub vcl_recv {
  call sanitize_url;
}


That should get you going.
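As a hedged illustration of what could live in `sanitize_url`, here is an untested sketch; the `options.start` parameter follows the example from the question, and the `X-Orig-Url` header is an assumption for the case where the backend must still receive the unmodified URL:

```vcl
# url.vcl -- sketch only, untested; adjust the regexes to your URL scheme
sub sanitize_url {
  # keep the original URL around in case the backend must still see it
  set req.http.X-Orig-Url = req.url;
  # treat "options.start=0" as the default, i.e. remove it before hashing
  set req.url = regsub(req.url, "(\?|&)options\.start=0(&|$)", "\1");
  # clean up a trailing "?" or "&" left behind by the removal
  set req.url = regsub(req.url, "(\?|&)$", "");
}

# in default.vcl: restore the untouched URL for the backend request,
# while the cache stays keyed on the sanitized req.url
sub vcl_backend_fetch {
  if (bereq.http.X-Orig-Url) {
    set bereq.url = bereq.http.X-Orig-Url;
  }
}
```

This way equivalent URLs collapse onto one cache object, but the origin still sees exactly what the client sent.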

Hopefully I didn't miss the mark too much here, let me know if I did.

-- 
Guillaume Quintard


On Tue, Aug 22, 2023 at 3:45 AM Uday Kumar  wrote:

> Hello All,
>
>
> For our spring boot application, we are using Varnish Caching in a
> production environment.
>
>
>
>
> Requirement: [To utilize cache effectively]
>
> Modify the URL (Removal of unnecessary parameters) while caching the user
> request, so that the modified URL can be cached by varnish which helps
> improve cache HITS for similar URLs.
>
>
> For Example:
>
> Let's consider the below Request URL
>
> Url at time t, 1. samplehost.com/search/ims?q=bags=android
> =0
>
>
> Our Requirement:
>
> To make varnish consider URLs with options.start=0 and without
> options.start parameter as EQUIVALENT, such that a single cached
> response(Single Key) can be utilized in both cases.
>
>
> *1st URL after modification:*
>
> samplehost.com/search/ims?q=bags=android
>
>
> *Cached URL at Varnish:*
>
> samplehost.com/search/ims?q=bags=android
>
>
>
> Now, Url at time t+1, 2. samplehost.com/search/ims?q=bags=android
>
>
> At present, varnish considers the above URL as different from 1st URL and
> uses a different key while caching the 2nd URL[So, it will be a miss]
>
>
> *So, URL after Modification:*
>
> samplehost.com/search/ims?q=bags=android
>
>
> Now, 2nd URL will be a HIT at varnish, effectively utilizing the cache.
>
>
>
> NOTE:
>
> We aim to execute this URL Modification without implementing the logic 
> directly
> within the default.VCL file. Our intention is to maintain a clean and
> manageable codebase in the VCL.
>
>
>
> To address this requirement effectively, we have explored two potential
> Approaches:
>
>
> Approach-1:
>
>
>
> Approach-2:
>
>
>
>
> 1. Please go through the approaches mentioned above and let me know the
> effective solution.
>
> 2. Regarding Approach-2
>
> At Step 2:
>
> May I know if there is any way to access and execute a custom subroutine
> from another VCL, for modifying the Request URL? If yes, please help with
> details.
>
> At Step 3:
>
> Tomcat Backend should receive the Original Request URL instead of the
> Modified URL.
>
> 3. Please let us know if there is any better approach that can be
> implemented.
>
>
>
> Thanks & Regards
> Uday Kumar
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


IRC #varnish room is now invite-only?

2023-07-16 Thread Guillaume Quintard
Hi team,

Somebody popped up this morning on the discord channel saying that the
#varnish room on IRC is invite-only; I just checked and it does seem to be
the case.

Is that on purpose?

Cheers,

-- 
Guillaume Quintard
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Making vmods more approachable.

2023-07-15 Thread Guillaume Quintard
TL;DR: here's a bunch of examples building vmods from source so you can get
better at it:
https://github.com/varnish/docker-varnish/tree/master/vmod-examples

Varnish is great, stable and very versatile, however part of that
versatility comes from vmods, and those tend to not be packaged by
distributions. Some are, of course, but they are few and far between,
mostly relying on the judgement (for selection) and good will (for actual
packaging) of a few packagers. And this obviously limits access to that
versatility, hurting Varnish's reach.

I do package a couple of vmods for a couple of distributions, but the
benefits are fairly limited because fragmentation sucks. So, truly, the
best way to distribute vmods is through source code. Sadly, it still seems
like people are afraid of compiling C code (can't blame them, it's a scary
new experience), and efforts like
https://github.com/varnish/toolbox/tree/master/install-vmod and
https://github.com/xcir/vmod-packager to help install/package do make
things a bit easier, but we are not there yet.

With that context in mind, I've created a new vmod-examples/ directory in
https://github.com/varnish/docker-varnish/ which shows how to build 14
vmods (for now) on a docker container. Hopefully, we can reach a few more
people, and reassure them, showing that compiling vmods is easy and viable.
We do this by:
- putting the examples in a central, fairly visible location
- using docker as a base, since the containers are easy to build/trash, I
hope it'll encourage people to experiment more
- providing the actual instructions to build for a specific distributions,
rather that pointing at libraries that your system may or may not package,
and possibly under a weird, badly discoverable name
- targeting the latest Varnish, guaranteeing users will at least have
something that compiles on the first try without losing time tracking the
right branch/commit/tarball.

Hopefully, we can grow that list and make more vmods accessible to more
users. If you want more vmods in there, please let me know, either here or
in the discord channel, I'll be happy to push more of them as long as they
are maintained (or at least compile) and that there's no clear better
alternative.

Lastly, I really need to thank the maintainers of the featured vmods;
without them there wouldn't be anything to showcase:
- UPLEX (https://code.uplex.de/uplex-varnish and https://gitlab.com/uplex)
- Carlos Abalde (https://github.com/carlosabalde)
- Shohei Tanaka (https://github.com/xcir/)
- otto-de (https://github.com/otto-de)
- varnishcache-friends (https://github.com/varnishcache-friends)

That's it for me, as usual, comments, questions and PRs are more than
welcome!

-- 
Guillaume Quintard
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Conditional requests for cached 404 responses

2023-07-14 Thread Guillaume Quintard
Hi Mark,

You are correct:
https://github.com/varnishcache/varnish-cache/blob/varnish-7.3.0/bin/varnishd/cache/cache_fetch.c#L699-L703

We only set the OF_IMSCAND flag (which we use to say that we can do a
conditional fetch) if:
- the object is not a Hit-For-Miss (HFM)
- if the status is 200
- we either have a convincing Last-modified, or an Etag header

You can also test it with this VTC:
varnishtest "conditional requests"

server s1 {
rxreq
txresp -status 200 -hdr "ETag: 1234" -hdr "Last-Modified: Wed, 21
Oct 2015 07:28:00 GMT" -body "dad"

rxreq
expect req.http.if-none-match == "1234"
expect req.http.if-modified-since == "Wed, 21 Oct 2015 07:28:00 GMT"
txresp
} -start

varnish v1 -vcl+backend {
sub vcl_backend_response {
set beresp.ttl = 0.1s;
set beresp.grace = 0s;
set beresp.keep = 1y;
return (deliver);
}
} -start

client c1 {
txreq
rxresp

delay 0.2

txreq
rxresp
} -run

Change the 200 to a 404 and the test will now fail.

I quickly skimmed the HTTP spec and see no reason for us to actually check
the status, but I'm sure somebody closer to the code will pop up to shed
some light on the topic.

Cheers,

-- 
Guillaume Quintard


On Fri, Jul 14, 2023 at 7:30 AM Mark Slater  wrote:

> Hi,
>
> I'm running Varnish in front of a back end that has to do some work to
> determine whether a request should receive a 404 response.  However, it can
> cheaply determine whether a previous 404 is still valid.
>
> I see Varnish issuing conditional requests for cached 200 responses, but I
> haven't managed to achieve the same for cached 404 responses.
>
> Here's my sample VCL:
>
>
> vcl 4.1;
>
> backend default {
> .host = "localhost";
> .port = "8081";
> }
>
> sub vcl_backend_response {
>  set beresp.keep = 5m;
> }
>
>
> I'm testing with canned responses on port 8081. For the working 200 case,
> I return:
>
>
> HTTP/1.1 200 OK
> cache-control: max-age=5
> etag: "foo"
> content-length: 13
> connection: close
>
> Hello, World!
>
>
> When I make requests to Varnish, I see, as expected, a first request to
> the back end, followed by five seconds of nothing to the back end, because
> Varnish is responding with its cached copy, followed by a conditional
> request to the back end:
>
>
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.68.0
> Accept: */*
> X-Forwarded-For: 127.0.0.1
> Accept-Encoding: gzip
> X-Varnish: 3
>
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.68.0
> Accept: */*
> X-Forwarded-For: 127.0.0.1
> Accept-Encoding: gzip
> If-None-Match: "foo"
> X-Varnish: 32773
>
>
> For the failing 404 case, my canned back end responds:
>
>
> HTTP/1.1 404 Not Found
> cache-control: max-age=5
> etag: "foo"
> content-length: 13
> connection: close
>
> Hello, World!
>
>
> Now when I make requests to Varnish, I get a cached response for five
> seconds as before, but when the response goes stale, rather than issuing a
> conditional request to revalidate it, Varnish is issuing unconditional
> requests:
>
>
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.68.0
> Accept: */*
> X-Forwarded-For: 127.0.0.1
> Accept-Encoding: gzip
> X-Varnish: 3
>
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.68.0
> Accept: */*
> X-Forwarded-For: 127.0.0.1
> Accept-Encoding: gzip
> X-Varnish: 32771
>
>
> Is that something I can adjust with configuration?  If it's relevant, I'm
> running:
>
> Debug: Version: varnish-6.2.1 revision
> 9f8588e4ab785244e06c3446fe09bf9db5dd8753
> Debug: Platform:
> Linux,5.4.0-153-generic,x86_64,-jnone,-sdefault,-sdefault,-hcritbit
>
> Incidentally, 200 responses with content-length 0 also seem to exhibit
> this behaviour.
>
> Thanks in advance,
>
> Mark
>
>
>
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish for multiple magento sites

2023-07-12 Thread Guillaume Quintard
Hi Riccardo,

You are right, I assumed that the tags were uuids of some sorts and welp,
they're not.

The best way would be to ask magento to fix it by including a hash of the
host or something in each tag, but that's probably not going to happen any
time soon.

The next best way is to teach Varnish to be a bit more selective when
banning. This is done in three steps.

First, we are going to mark the backend response with the host it comes
from (maybe Magento2 already does it in some form, in which case you can
use that header instead):

# add this to the beginning of vcl_backend_response
sub vcl_backend_response {
set beresp.http.x-host = bereq.http.host;
...

Then, we change the ban() calls to only apply to the responses with the
right x-host headers:

if (req.http.X-Magento-Tags-Pattern) {
  ban("obj.http.x-host == " + req.http.host +
      " && obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
}
if (req.http.X-Pool) {
  ban("obj.http.x-host == " + req.http.host +
      " && obj.http.X-Pool ~ " + req.http.X-Pool);
}

Lastly, we don't need the x-host header to be exposed to the client, so we
strip it at the beginning of vcl_deliver:

sub vcl_deliver {
unset resp.http.x-host;
...

disclaimer: This code is completely untested, and it's early in the
morning, so you probably shouldn't trust me too much and you should test
this before throwing it in prod.

Technically, you *could* use req.http.host directly, but you don't want to
because of the ban-lurker and its performance implications.

Hope this helps.

-- 
Guillaume Quintard


On Wed, Jul 12, 2023 at 4:15 AM Riccardo Brunetti 
wrote:

> Hello Guillaume.
> Thanks for your answer.
> The VCL is actually almost identical to that you mentioned in the link
> (I'm attaching it without references to names and IP anyway)
>
> What somehow worries me is that, if I understand, the ban is performed
> according to some "X-Magento-Tags".
> Now, if I look at the output of varnishlog and search for "*Tags*", what I
> get is:
>
> 1) while navigating the site:
>
> 
> -   RespHeader X-Magento-Tags: NAVIGATIONPRO_MENU_2
> -   RespUnset  X-Magento-Tags: NAVIGATIONPRO_MENU_2
> -   RespHeader X-Magento-Tags:
> store,cms_b,gdpr_c,theme_editor_backend_css_block,cms_b_header_cms_links,cms_b_argento_scroll_up,cms_b_footer_cms_content,cms_b_footer_payments,cms_b_header_block_custom_links,cms_b_main_bottom_newsletter,cms_b_main_bottom_strenghts,cms_b
> -   RespUnset  X-Magento-Tags:
> store,cms_b,gdpr_c,theme_editor_backend_css_block,cms_b_header_cms_links,cms_b_argento_scroll_up,cms_b_footer_cms_content,cms_b_footer_payments,cms_b_header_block_custom_links,cms_b_main_bottom_newsletter,cms_b_main_bottom_strenghts,cms_b
> -   BerespHeader   X-Magento-Tags:
> cat_c_595,cat_c_p_595,store,cms_b,gdpr_c,theme_editor_backend_css_block,cms_b_header_cms_links,cms_b_argento_scroll_up,cms_b_footer_cms_content,cms_b_footer_payments,cms_b_header_block_custom_links,cms_b_main_bottom_newsletter,cms_b_main_
> -   BerespHeader   X-Magento-Tags: NAVIGATIONPRO_MENU_2
> -   RespHeader X-Magento-Tags: NAVIGATIONPRO_MENU_2
> -   RespUnset  X-Magento-Tags: NAVIGATIONPRO_MENU_2
> -   RespHeader X-Magento-Tags:
> cat_c_595,cat_c_p_595,store,cms_b,gdpr_c,theme_editor_backend_css_block,cms_b_header_cms_links,cms_b_argento_scroll_up,cms_b_footer_cms_content,cms_b_footer_payments,cms_b_header_block_custom_links,cms_b_main_bottom_newsletter,cms_b_main_
> -   RespUnset  X-Magento-Tags:
> cat_c_595,cat_c_p_595,store,cms_b,gdpr_c,theme_editor_backend_css_block,cms_b_header_cms_links,cms_b_argento_scroll_up,cms_b_footer_cms_content,cms_b_footer_payments,cms_b_header_block_custom_links,cms_b_main_bottom_newsletter,cms_b_main_
> .
>
> 2) when performing a purge (php bin/magento c:f):
>
> ...
> -   ReqHeader  X-Magento-Tags-Pattern: .*
> ...
>
> In both cases I can't see any specific reference to that particular site.
>
> Thanks again.
> Riccardo
>
> 11/07/2023, 17:09 Guillaume Quintard ha scritto:
>
> Hi Ricardo,
>
> Having your VCL (even anonymized) would help here, otherwise debugging is
> pretty hard. For the moment, I'm going to assume you are using a variation
> of
> https://github.com/magento/magento2/blob/13e54e1b28a5d590ab885bd4df9f58877b549052/app/code/Magento/PageCache/etc/varnish6.vcl
> and deal in generalities.
>
> The way that vcl invalidates content is through bans:
> https://github.com/magento/magento2/blob/13e54e1b28a5d590ab885bd4df9f58877b549052/app/code/Magento/PageCache/etc/varnish6.vcl#L30-L47
> which doesn't need the host header; it just uses unique tags pushed by
> the backend in response headers.
> If it was using the actual purge mechanism, then mod

Re: Varnish for multiple magento sites

2023-07-11 Thread Guillaume Quintard
Hi Ricardo,

Having your VCL (even anonymized) would help here, otherwise debugging is
pretty hard. For the moment, I'm going to assume you are using a variation
of
https://github.com/magento/magento2/blob/13e54e1b28a5d590ab885bd4df9f58877b549052/app/code/Magento/PageCache/etc/varnish6.vcl
and deal in generalities.

The way that vcl invalidates content is through bans:
https://github.com/magento/magento2/blob/13e54e1b28a5d590ab885bd4df9f58877b549052/app/code/Magento/PageCache/etc/varnish6.vcl#L30-L47
which doesn't need the host header; it just uses unique tags pushed by
the backend in response headers.
If it was using the actual purge mechanism, then modifying the host should
be sufficient because purge acts on the object found in the cache (and if
you can get a hit, you can get purged).

Here's a good primer on invalidation:
https://docs.varnish-software.com/tutorials/cache-invalidation/
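
For reference, the ban logic in that Magento VCL boils down to something
like this (a simplified sketch of the linked code, not a drop-in — the real
file also checks an ACL and has a host-based fallback):

```vcl
sub vcl_recv {
    if (req.method == "PURGE") {
        if (req.http.X-Magento-Tags-Pattern) {
            # ban every cached object whose backend-set tag header
            # matches the pattern pushed by Magento
            ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
        }
        return (synth(200, "Purged"));
    }
}
```

Since the ban expression only looks at X-Magento-Tags, both sites share one
tag namespace unless the backends emit site-specific tags.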

Kind regards,



-- 
Guillaume Quintard


On Tue, Jul 11, 2023 at 4:14 AM Riccardo Brunetti 
wrote:

> Hello.
> I'm new to varnish and I have a question concerning how to manage multiple
> sites using the same varnish cache frontend.
>
> More specifically, I need to setup a single varnish cache server for two
> different Magento2 sites.
>
> Looking around I found that it is possible to manage different backends
> using something like:
>
> if (req.http.host == "somesite") {
> set req.backend_hint = somebackend;
> }
>
> Now, I have two different Magento2 sites and, using the above expression,
> I can handle the two different backends.
> The problem is that I can't understand how to handle the PURGE/BAN of the
> two independently.
>
> As far as I understand from the .vcl file that Magento2 itself produces
> there is nothing inside the "purge" section that specifies which resources
> must be purged.
> It seems to me that is site A performs a purge, than also the cache of
> site B resources will be cleaned.
>
> Can you help me with this or point me to some example or tutorials?
>
> Thanks a lot
> Riccardo
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Unexpected Cache-Control Header Transmission in Dual-Server API Setup

2023-06-28 Thread Guillaume Quintard
Not really, I have no tomcat expertise, which is where the issue should be
fixed. That being said, if you can't prevent tomcat from adding the header,
then you can use the VCL on varnish2 to scrub the headers ("unset
req.http.cache-control;").

-- 
Guillaume Quintard


On Wed, Jun 28, 2023 at 10:03 AM Uday Kumar  wrote:

> Hi Guillaume,
>
> You are right!
> varnish is not adding any cache-control headers.
>
>
> *Observations when trying to replicate the issue locally:*
> I was trying to replicate the issue using Local Machine by creating a
> Spring Boot Application that acts as API-1 and tried hitting API-2 that's
> on Server2.
>
> *Request Flow:* Local Machine > Server2 varnish --> Server2 Tomcat
>
> Point-1: When using* integrated tomcat (Tomcat 9) the spring-boot* issue
> was *not *replicable [*Just ran Application in intellij*] (meaning, the
> cache-control header is *not *being transmitted to Varnish of Server2)
>
> *Point-2:* When *Tomcat 9 was explicitly installed in my local machine*
> and built the* corresponding war of API-1 and used this to hit API-2*
> that's on Server2, *Now issue got replicated* (meaning, *cache-control:
> no-cache, pragma: no-cache is being transmitted to Varnish of Server2*)
>
>
> Any insights?
>
> Thanks & Regards
> Uday Kumar
>
>
> On Wed, Jun 28, 2023 at 8:32 PM Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> Hi Uday,
>>
>> That one should be quick: Varnish doesn't add cache-control headers on
>> its own.
>>
>> So, from what I understand it can come from two places:
>> - either the VCL in varnish1
>> - something in tomcat1
>>
>> It should be very easy to check with varnishlog. Essentially, run
>> "varnishlog -g request -q 'ReqHeader:uday'" on both varnish nodes and send
>> a curl request like: curl http://varnish1/some/request/not/in/cache.html
>> -H "uday: true"
>>
>> You should see the request going through both varnish and should be able
>> to pinpoint what created the header. Or at least identify whether it's a
>> varnish thing or not.
>>
>> Kind regards
>>
>> For a reminder on varnishlog:
>> https://docs.varnish-software.com/tutorials/vsl-query/
>>
>>
>> On Wed, Jun 28, 2023, 06:28 Uday Kumar  wrote:
>>
>>> Hello All,
>>>
>>> Our application operates on a dual-server setup, where each server is
>>> dedicated to running a distinct API.
>>>
>>> *Technical specifications:*
>>> Framework: Spring-boot v2.4 (Java 1.8)
>>> Runtime Environment: Tomcat
>>> Version: Apache Tomcat/7.0.42
>>> Server1 runs API-1 and Server2 runs API-2. Both servers are equipped
>>> with an installed Varnish application. When either API is accessed, the
>>> request is processed through the Varnish instance associated with the
>>> respective server.
>>>
>>> *Issue Description:*
>>> In a typical scenario, a client (browser) sends a request to API-1,
>>> which is handled by the Varnish instance on Server1. After initial
>>> processing, API-1 makes a subsequent request to API-2 on Server2.
>>>
>>> The Request Flow is as follows:
>>> *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on
>>> Server2 --> Tomcat on Server2*
>>>
>>> *Assuming, the request from Browser will be a miss at Server1 Varnish so
>>> that the request reaches Tomcat(Backend) on server1.*
>>>
>>> In cases where the browser *does not include any cache-control
>>> headers in the request* (e.g., no-cache, max-age=0), the Server1
>>> Varnish instance correctly *does not receive any cache-control headers*.
>>>
>>> *However, when API-1 calls API-2, we observe that a cache-control:
>>> no-cache and p**ragma: no-cache headers are being transmitted to the
>>> Varnish instance on Server2*, despite the following conditions:
>>>
>>> 1. We are not explicitly sending any cache-control header in our
>>> application code during the call from API-1 to API-2.
>>> 2. Our application does not use the Spring-security dependency, which by
>>> default might add such a header.
>>> 3. The cache-control header is not being set by the Varnish instance on
>>> Server2.
>>>
>>> This unexpected behavior of receiving a cache-control header at
>>> Server2's Varnish instance when invoking API-2 from API-1 is the crux of
>>> our issue.
>>>
>>> We kindly request your assistance in understanding the cause of this
>>> unexpected behavior. Additionally, we would greatly appreciate any guidance
>>> on how to effectively prevent this issue from occurring in the future.
>>>
>>> Thanks & Regards
>>> Uday Kumar


Re: Unexpected Cache-Control Header Transmission in Dual-Server API Setup

2023-06-28 Thread Guillaume Quintard
Hi Uday,

That one should be quick: Varnish doesn't add cache-control headers on its
own.

So, from what I understand it can come from two places:
- either the VCL in varnish1
- something in tomcat1

It should be very easy to check with varnishlog. Essentially, run
"varnishlog -g request -q 'ReqHeader:uday'" on both varnish nodes and send
a curl request like: curl http://varnish1/some/request/not/in/cache.html -H
"uday: true"

You should see the request going through both varnish and should be able to
pinpoint what created the header. Or at least identify whether it's a
varnish thing or not.

Kind regards

For a reminder on varnishlog:
https://docs.varnish-software.com/tutorials/vsl-query/


On Wed, Jun 28, 2023, 06:28 Uday Kumar  wrote:

> Hello All,
>
> Our application operates on a dual-server setup, where each server is
> dedicated to running a distinct API.
>
> *Technical specifications:*
> Framework: Spring-boot v2.4 (Java 1.8)
> Runtime Environment: Tomcat
> Version: Apache Tomcat/7.0.42
> Server1 runs API-1 and Server2 runs API-2. Both servers are equipped with
> an installed Varnish application. When either API is accessed, the request
> is processed through the Varnish instance associated with the respective
> server.
>
> *Issue Description:*
> In a typical scenario, a client (browser) sends a request to API-1, which
> is handled by the Varnish instance on Server1. After initial processing,
> API-1 makes a subsequent request to API-2 on Server2.
>
> The Request Flow is as follows:
> *Browser --> Varnish on Server1 --> Tomcat on Server1 --> Varnish on
> Server2 --> Tomcat on Server2*
>
> *Assuming, the request from Browser will be a miss at Server1 Varnish so
> that the request reaches Tomcat(Backend) on server1.*
>
> In cases where the browser *does not include any cache-control headers in
> the request* (e.g., no-cache, max-age=0), the Server1 Varnish instance
> correctly *does not receive any cache-control headers*.
>
> *However, when API-1 calls API-2, we observe that a cache-control:
> no-cache and p**ragma: no-cache headers are being transmitted to the
> Varnish instance on Server2*, despite the following conditions:
>
> 1. We are not explicitly sending any cache-control header in our
> application code during the call from API-1 to API-2.
> 2. Our application does not use the Spring-security dependency, which by
> default might add such a header.
> 3. The cache-control header is not being set by the Varnish instance on
> Server2.
>
> This unexpected behavior of receiving a cache-control header at Server2's
> Varnish instance when invoking API-2 from API-1 is the crux of our issue.
>
> We kindly request your assistance in understanding the cause of this
> unexpected behavior. Additionally, we would greatly appreciate any guidance
> on how to effectively prevent this issue from occurring in the future.
>
> Thanks & Regards
> Uday Kumar


Re: Purging cached std.fileread() contents

2023-06-15 Thread Guillaume Quintard
Piling on here, there's also one in rust!
https://github.com/gquintard/vmod_fileserver

On Thu, Jun 15, 2023, 19:44 Geoff Simmons  wrote:

> On 6/15/23 18:57, Justin Lloyd wrote:
> >
> > The documentation for std.fileread() says it is cached indefinitely, so
> > how do I get Varnish to re-read the file when it gets updated without
> > having to restart Varnish?
>
> "Cached indefinitely" means just what it says. The VMOD saves the file
> contents in memory on the first invocation of std.fileread(), and never
> reads the file again.
>
> We have a VMOD that reads file contents and then monitors the file for
> changes. The new contents are used after the change:
>
> https://code.uplex.de/uplex-varnish/libvmod-file
>
>
> Best,
> Geoff
> --
> ** * * UPLEX - Nils Goroll Systemoptimierung
>
> Scheffelstraße 32
> 22301 Hamburg
>
> Tel +49 40 2880 5731
> Mob +49 176 636 90917
> Fax +49 40 42949753
>
> http://uplex.de
>


Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses

2023-06-15 Thread Guillaume Quintard
Adding to what Dridi said, and just to be clear: the "cleaning" of those
well-known headers only occurs when the req object is copied into a bereq,
so there's nothing preventing you from stashing the "cache-control" header
into "x-cache-control" during vcl_recv, and then copying it back to
"cache-control" during vcl_backend_fetch.
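
A minimal sketch of that stash-and-restore (the x-cache-control name is
arbitrary; the restore is done in vcl_backend_fetch, after the req-to-bereq
copy, while the backend request can still be edited):

```vcl
sub vcl_recv {
    # keep a copy before the req-to-bereq copy scrubs the original
    set req.http.x-cache-control = req.http.Cache-Control;
}

sub vcl_backend_fetch {
    # put it back on the backend request
    if (bereq.http.x-cache-control) {
        set bereq.http.Cache-Control = bereq.http.x-cache-control;
        unset bereq.http.x-cache-control;
    }
}
```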


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-06-05 Thread Guillaume Quintard
Hi,

Relevant documentation:
- https://varnish-cache.org/docs/trunk/users-guide/vcl-hashing.html
- https://www.varnish-software.com/developers/tutorials/varnish-builtin-vcl/
- https://varnish-cache.org/docs/trunk/users-guide/vcl-built-in-code.html

Essentially: if you don't use a return statement, then the built-in vcl
code is executed, and so the logic will be different with and without that
statement.

You wrote that the code isn't working, but don't explain further, which
makes it hard to debug; my best guess is that you're hashing too much
because of the built-in code.
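Concretely, without a return statement the built-in code is appended and
still runs, so the quoted non-working vcl_hash effectively behaves like
this (a sketch combining the custom code with the built-in part):

```vcl
sub vcl_hash {
    # custom code runs first: hashes the URL with traceId stripped...
    hash_data(req.http.hash-url);
    hash_data(req.http.Accept-Encoding);
    # ...then, absent a return, the built-in vcl_hash also runs and
    # hashes the full URL (traceId included), defeating the whole point:
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (lookup);
}
```

Ending the custom sub with return (lookup) skips the built-in part, which
is why that version behaves as expected.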
One thing you can do is this:
```
sub vcl_deliver {
set resp.http.req-hash = req.hash;
...
}
```
That will allow you to see objects get the same hash, or a different one.
On that topic, I'm pretty certain that hashing the Accept-Encoding header
is useless and will fragment your cache needlessly, as Varnish already
takes that header into account implicitly.

Note that the vcl I shared in my last email doesn't have a vcl_hash
function because it relies entirely on modifying the url before it is
hashed by the built-in vcl.

Hope that helps.

-- 
Guillaume Quintard


On Mon, Jun 5, 2023 at 4:31 AM Uday Kumar  wrote:

> Hello Guillaume,
>
> Thanks for the update!
>
>
> (It's done by default if you don't have a vcl_hash section in your VCL)
>>>>> We can tweak it slightly so that we ignore the whole querystring:
>>>>> sub vcl_hash {
>>>>> hash_data(regsub(req.url, "\?.*",""));
>>>>> if (req.http.host) {
>>>>> hash_data(req.http.host);
>>>>> } else {
>>>>> hash_data(server.ip);
>>>>> }
>>>>> return (lookup);
>>>>> }
>>>>>
>>>>
> Would like to discuss about above suggestion.
>
> *FYI:*
> *In our current vcl_hash subroutine, we didnt had any return lookup
> statement in production , and the code is as below*
> #Working
> sub vcl_hash{
>hash_data(req.url);
>hash_data(req.http.Accept-Encoding);
> }
> The above code is *working without any issues on production even without
> return (lookup)* statement.
>
> For our new requirement * to ignore the parameter in URL while caching, * as
> per your suggestion we have made changes to the vcl_hash subroutine, new
> code is as below.
>
> #Not Working
> sub vcl_hash{
> set req.http.hash-url = regsuball(req.url, "traceId=.*?(\&|$)", "");
> hash_data(req.http.hash-url);
> unset req.http.hash-url;
> hash_data(req.http.Accept-Encoding);
> }
>
> The above code is *not hashing the URL with traceId ignored (not as
> expected)*, *but if I add return (lookup) at the end of the subroutine it
> works as expected.*
>
> #Working Code
> sub vcl_hash{
> set req.http.hash-url = regsuball(req.url, "traceId=.*?(\&|$)", "");
> hash_data(req.http.hash-url);
> unset req.http.hash-url;
> hash_data(req.http.Accept-Encoding);
> *return (lookup);*
> }
>
>
> *I have few doubts to be clarified:*
> 1. May I know what difference return (lookup) statement makes?
> 2. Will there be any side effects with modified code, if I use return
> (lookup)? (Because original code was not causing any issue even without
> return lookup in production)
>
>


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-06-05 Thread Guillaume Quintard
Hi all,

Turns out install-vmod works, just needed to grab the right tarballs and
have the right dependencies installed. Here's the Dockerfile I used:
FROM varnish:7.3

USER root
RUN set -e; \
EXTRA_DEPS="autoconf-archive libossp-uuid-dev"; \
apt-get update; \
apt-get -y install $VMOD_DEPS $EXTRA_DEPS libossp-uuid16 libuuid1
/pkgs/*.deb; \
# vmod_querystring
install-vmod
https://github.com/Dridi/libvmod-querystring/releases/download/v2.0.3/vmod-querystring-2.0.3.tar.gz;
\
# vmod_uuid
install-vmod
https://github.com/otto-de/libvmod-uuid/archive/refs/heads/master.tar.gz; \
apt-get -y purge --auto-remove $VMOD_DEPS $EXTRA_DEPS varnish-dev; \
rm -rf /var/lib/apt/lists/*
USER varnish

and here's the VCL:

vcl 4.1;

import querystring;
import uuid;

backend default {
.host = "localhost";
.port = "";
}

sub vcl_init {
new qf = querystring.filter(sort = true);
qf.add_string("myparam");
}

# clear the url from param as it goes in
sub vcl_recv {
# clear myparam from the incoming url
set req.url = qf.apply(mode = drop);
}

# add the querystring parameter back if we go to the backend
sub vcl_backend_fetch {
# create the unique string
set bereq.http.mynewparam = regsub(uuid.uuid_v4(), "^(.{20}).*", "\1");

# add our own myparam
if (bereq.url ~ "\?") {
set bereq.url = bereq.url + "&myparam=" + bereq.http.mynewparam;
} else {
set bereq.url = bereq.url + "?myparam=" + bereq.http.mynewparam;
}
}

It's a bit crude, but it fulfills your requirements. Make sure you test it
though.

-- 
Guillaume Quintard


On Thu, Jun 1, 2023 at 6:10 AM Uday Kumar  wrote:

> Thanks for the prompt response!
>
> Thanks & Regards
> Uday Kumar
>
>
> On Thu, Jun 1, 2023 at 11:12 AM Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> Thanks, so, to make things clean you are going to need to use a couple of
>> vmods, which means being able to compile them first:
>> - https://github.com/otto-de/libvmod-uuid as Geoff offered
>> - https://github.com/Dridi/libvmod-querystring that will allow easy
>> manipulation of the querystring
>>
>> unfortunately, the install-vmod tool that is bundled into the Varnish
>> docker image isn't able to cleanly compile/install them. I'll have a look
>> this week-end if I can, or at least I'll open a ticket on
>> https://github.com/varnish/docker-varnish
>>
>> But, if you are able to install those two, then your life is easy:
>> - once you receive a request, you can start by creating a unique ID,
>> which'll be the the vcl equivalent of `uuidgen | sed -E
>> 's/(\w+)-(\w+)-(\w+)-(\w+).*/\1\2\3\4/'` (without having testing it,
>> probably `regsub(uuid.uuid_v4(), "s/(\w+)-(\w+)-(\w+)-(\w+).*",
>> "\1\2\3\4/"`)
>> - then just add/replace the parameter in the querystring with
>> vmod_querystring
>>
>> and...that's about it?
>>
>> Problem is getting the vmods to compile/install which I can help with
>> this week-end. There's black magic that you can do using regex to
>> manipulate querystring, but it's a terrible idea.
>>
>> --
>> Guillaume Quintard
>>
>>
>> On Wed, May 31, 2023 at 6:48 PM Uday Kumar 
>> wrote:
>>
>>>
>>> Does it need to be unique? can't we just get away with
>>>> ""?
>>>>
>>>
>>> Our Requirements:
>>> 1. New Parameter should be *appended *to already existing parameters in
>>> Query String. (should not replace entire query string)
>>> 2. Parameter Value *Must be Unique for each request* (ideally unique
>>> randomness is preferred)
>>> 3. Allowed Characters are Alphanumeric which are *URL safe* [can be
>>> lowercase, uppercase in case of alphabets]
>>> 4. Characters can be repeated in parameter value EX: Gn4lT*Y*
>>> gBgpPaRi6hw6*Y*S (here, Y is repeated) But as mentioned above the value
>>> must be unique as a whole.
>>>
>>> Ex: Parameter value for 1st request can be "Gn4lT*Y*gBgpPaRi6hw6*Y*S",
>>> 2nd request can be
>>> "G34lTYgBgpPaRi6hyaaF" and so on
>>>
>>>
>>> Thanks & Regards
>>> Uday Kumar
>>>
>>


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-05-31 Thread Guillaume Quintard
Thanks, so, to make things clean you are going to need to use a couple of
vmods, which means being able to compile them first:
- https://github.com/otto-de/libvmod-uuid as Geoff offered
- https://github.com/Dridi/libvmod-querystring that will allow easy
manipulation of the querystring

unfortunately, the install-vmod tool that is bundled into the Varnish
docker image isn't able to cleanly compile/install them. I'll have a look
this week-end if I can, or at least I'll open a ticket on
https://github.com/varnish/docker-varnish

But, if you are able to install those two, then your life is easy:
- once you receive a request, you can start by creating a unique ID,
which'll be the vcl equivalent of `uuidgen | sed -E
's/(\w+)-(\w+)-(\w+)-(\w+).*/\1\2\3\4/'` (without having tested it,
probably `regsub(uuid.uuid_v4(), "(\w+)-(\w+)-(\w+)-(\w+).*",
"\1\2\3\4")`)
- then just add/replace the parameter in the querystring with
vmod_querystring

and...that's about it?

Problem is getting the vmods to compile/install which I can help with this
week-end. There's black magic that you can do using regex to manipulate
querystring, but it's a terrible idea.

-- 
Guillaume Quintard


On Wed, May 31, 2023 at 6:48 PM Uday Kumar  wrote:

>
> Does it need to be unique? can't we just get away with
>> ""?
>>
>
> Our Requirements:
> 1. New Parameter should be *appended *to already existing parameters in
> Query String. (should not replace entire query string)
> 2. Parameter Value *Must be Unique for each request* (ideally unique
> randomness is preferred)
> 3. Allowed Characters are Alphanumeric which are *URL safe* [can be
> lowercase, uppercase in case of alphabets]
> 4. Characters can be repeated in parameter value EX: Gn4lT*Y*gBgpPaRi6hw6
> *Y*S (here, Y is repeated) But as mentioned above the value must be
> unique as a whole.
>
> Ex: Parameter value for 1st request can be "Gn4lT*Y*gBgpPaRi6hw6*Y*S",
> 2nd request can be
> "G34lTYgBgpPaRi6hyaaF" and so on
>
>
> Thanks & Regards
> Uday Kumar
>


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-05-31 Thread Guillaume Quintard
Does it need to be unique? can't we just get away with
""?

the crude VCL code would look like:
set req.url = regsub(req.url, "\?.*","?yourparam=");

i.e. getting rid of the whole query string and just putting yours in place.

-- 
Guillaume Quintard


On Wed, May 31, 2023 at 11:07 AM Uday Kumar  wrote:

> Hello,
>
> We would like to configure varnish to create unique parameter such that
> its value should be of 20 characters (alphanumeric characters that are URL
> safe).
>
> On Wed, May 31, 2023, 13:34 Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> >  Could you please also suggest how to configure Varnish so that Varnish
>> can add Unique Parameter by itself??
>>
>> We'd need more context, is there any kind of check that tomcat does on
>> this parameter, does it need to have a specific length, or match a regex?
>> If we know that, we can have Varnish check the user request to make sure
>> it's valid, and potentially generate its own parameter.
>>
>> But it all depends on what Tomcat expects from that parameter.
>>
>> --
>> Guillaume Quintard
>>
>>
>> On Tue, May 30, 2023 at 11:18 PM Uday Kumar 
>> wrote:
>>
>>> Hello Guillaume,
>>>
>>> Thank you so much for your help, will try modifying vcl_hash as
>>> suggested!
>>>
>>>
>>>> Last note: it would probably be better if the tomcat server didn't need
>>>> that unique parameter, or at the very least, if Varnish could just add
>>>> it itself rather than relying on client information as you're caching
>>>> something public using something that was user-specific, so there's
>>>> potential for snafus here.
>>>>
>>>
>>>  Could you please also suggest how to configure Varnish so that Varnish
>>> can add Unique Parameter by itself??
>>>
>>


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-05-31 Thread Guillaume Quintard
>  Could you please also suggest how to configure Varnish so that Varnish
can add Unique Parameter by itself??

We'd need more context, is there any kind of check that tomcat does on this
parameter, does it need to have a specific length, or match a regex?
If we know that, we can have Varnish check the user request to make sure
it's valid, and potentially generate its own parameter.

But it all depends on what Tomcat expects from that parameter.

-- 
Guillaume Quintard


On Tue, May 30, 2023 at 11:18 PM Uday Kumar  wrote:

> Hello Guillaume,
>
> Thank you so much for your help, will try modifying vcl_hash as suggested!
>
>
>> Last note: it would probably be better if the tomcat server didn't need
>> that unique parameter, or at the very least, if Varnish could just add
>> it itself rather than relying on client information as you're caching
>> something public using something that was user-specific, so there's
>> potential for snafus here.
>>
>
>  Could you please also suggest how to configure Varnish so that Varnish
> can add Unique Parameter by itself??
>


Re: Need Assistance in Configuring Varnish to Retain and Ignore Unique Parameter in Request URL while caching

2023-05-30 Thread Guillaume Quintard
Hi Uday,

Ultimately, you'll probably want to learn and use this vmod:
https://github.com/Dridi/libvmod-querystring , but in the meantime, we can
use a quick hack.

Essentially, we don't need to modify the URL, but we can just alter the
cache key computation.

By default, the key logic looks like this:

sub vcl_hash {
hash_data(req.url);
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
return (lookup);
}

(It's done by default if you don't have a vcl_hash section in your VCL)
We can tweak it slightly so that we ignore the whole querystring:

sub vcl_hash {
hash_data(regsub(req.url, "\?.*",""));
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
return (lookup);
}

It's crude, but should do the job. To use it, just copy the code above in
your VCL, for example just after the vcl_recv definition (not inside it
though). Of course, if you already have a vlc_hash definition in your code,
you'll need to modify that one, instead of adding a new one.

Relevant documentation:
- https://varnish-cache.org/docs/trunk/users-guide/vcl-hashing.html
- https://www.varnish-software.com/developers/tutorials/varnish-builtin-vcl/

Last note: it would probably be better if the tomcat server didn't need
that unique parameter, or at the very least, if Varnish could just add it
itself rather than relying on client information as you're caching
something public using something that was user-specific, so there's
potential for snafus here.

Hope that helps,


On Tue, May 30, 2023, 03:45 Uday Kumar  wrote:

> Hello everyone,
>
> In our system, we're currently using Varnish Cache in front of our Tomcat
> Server for caching content.
>
> As part of our new requirement, we've started passing a unique parameter
> with every URL. The addition of this unique parameter in each request is
> causing a cache miss, as Varnish treats each request as distinct due to the
> difference in the parameter. Our intent is to have Varnish ignore this
> specific parameter for caching purposes, so that it can treat similar
> requests with different unique parameters as identical for caching purposes.
>
> Expected Functionality of Varnish:
>
> 1. We need Varnish to ignore the unique parameter when determining if a
> request is in the cache or while caching a request.
>
> 2. We also need Varnish to retain this unique parameter in the request
> URL when it's passed along to the Tomcat Server.
>
> We're looking for a way to modify our Varnish configuration to address the
> above issues, your assistance would be greatly appreciated.
>
>
> Thanks & Regards
> Uday Kumar


Re: Mysterious no content result, from an URL with pass action

2023-05-11 Thread Guillaume Quintard
On Tue, May 9, 2023, 22:45 Jakob Bohm  wrote:

> Expecting uncachable results that vary with time and are only sometimes
> 204,


Understood, but that one looks like a backend issue. Also, just to be
clear, the response is uncacheable because the VCL looked at the url and
deemed the request wouldn't lead to cacheable content, so we knew the
response would be uncacheable before even contacting the backend.

> and the response time is also somewhat unexpected, but is not
> clearly logged (only a Varnish expert like you can decrypt that it is 27
> seconds).


To be fair, varnishlog's goal is just to provide all the info it can, in an
unopinionated manner. The fact that the response took a long time may or
may not be normal, so there's no reason for varnishlog to fret about it,
and it doesn't necessarily know what duration you are interested in, so it
gives them all.
For anyone reading along and trying to make sense of the Timestamp lines:
https://varnish-cache.org/docs/6.0/reference/vsl.html#timestamps

Note that varnishncsa would have probably been more concise and maybe more
useful to check the timing.

> It is also unclear if Varnish is always receiving those
> responses from the backend.
>

As a rule of thumb, by default, varnish only generates 503s in case of an
error (your VCL can also generate other errors, but then you are expected
to know about that).

A quick way to tag the transport-level error that varnish will generate on
the backend side is to have this in your VCL:

``` vcl
sub vcl_backend_error {
    set beresp.http.is-a-varnish-error = "true";
}
```

>
> I also expected some other URLs in the log, but don't see them.
>

You could maybe log more, on disk, and filter for the urls you care about?
If that's not what you are already doing?

This page might help: https://docs.varnish-software.com/tutorials/vsl-query/
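
For example, something along these lines (a command sketch; adjust the
file path and the URL pattern to the requests you care about):

```
# record the full log to disk...
varnishlog -g request -w /var/log/varnish/vsl.bin

# ...then replay it later, filtering on a URL pattern
varnishlog -r /var/log/varnish/vsl.bin -q 'ReqURL ~ "^/teamcity"'
```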

Hope that helps!

>


Re: Mysterious no content result, from an URL with pass action

2023-05-09 Thread Guillaume Quintard
Hi Jakob,

(Sorry i didn't see that email sooner, it was in my spam folder)

Looking at the log, I'm not sure what varnish should be loud about :-)
204 is a success code, and more importantly it's generated by the backend,
so varnish is happily passing it along.

At the http level, everything looks about right, but I can guess from your
apparent irritation that something wrong one level up, let's try to debug
that.

What kind of response are you expecting, if not a 204? And maybe, what is
that endpoint supposed to do? Given that the method was GET, and that
there's no body, my only guess is that there's something happening with
the TeamCity-AgentSessionId header, maybe?
Is the 27 seconds processing time expected?

Cheers,

On Tue, May 9, 2023, 15:12 Jakob Bohm  wrote:

> Dear Varnish mailing list,
>
> When testing varnish as a reverse proxy for multiple services
> including a local JetBrains TeamCity instance, requests to that
> teamcity server get corrupted into "204 No Content" replies.
>
> Once again, Varnish fails to say why it is refusing to do its job.
> Any sane program should explicitly and loudly report any fatal error
> that stops it working.  Loudly means the sysadmin or other user
> invoking the program receives the exact error message by default
> instead of something highly indirect, hidden behind a debug option
> or otherwise highly non-obvious.
>
> Here's a relevant clip from the VCL:
>
> # Various top comments
> vcl 4.1;
>
> import std;
> import proxy;
>
> # Backend sending requests to the teamcity main server
> backend teamcity {
>  .host = "2a01:::::::";
>  .port = "8111";
> }
>
> # IP ranges allowed to access the build server and staging server
> acl buildtrust {
>  "127.0.0.0"/8;
>  "::"/128;
>  "various others"/??;
> }
>
> # IP ranges allowed to attempt login to things that use our common login
> #database
> acl logintrust {
>  "various others"/??;
> }
>
> sub vcl_recv {
>  # Happens before we check if we have this in cache already.
>  #
>  # Typically you clean up the request here, removing cookies you
> don't need,
>  # rewriting the request, etc.
>  if (proxy.is_ssl()) {
>  set req.http.Scheme = "https";
>  set req.http.ssl-version = proxy.ssl_version();
>  set req.http.X-Forwarded-Proto = "https";
>  set req.http.X-SSL-cipher = proxy.ssl_cipher();
>  std.log("TLS-SSL-VERSION: " + proxy.ssl_version());
>  } else {
>  set req.http.X-Forwarded-Proto = req.http.Scheme;
>  unset req.http.ssl-version;
>  unset req.http.X-SSL-cipher;
>  std.log("TLS-SSL-VERSION: none");
>  }
>  unset req.http.X-SSL-Subject;
>  unset req.http.X-SSL-Issuer;
>  unset req.http.X-SSL-notBefore;
>  unset req.http.X-SSL-notAfter;
>  unset req.http.X-SSL-serial;
>  unset req.http.X-SSL-certificate;
>
>  set req.http.X-Forwarded-For = client.ip;
>
>  call vcl_req_host;
>
>  if (req.url ~ "^/something") {
>  set req.backend_hint = be1;
>  } else if (req.url !~ "^/somethingelse" &&
> !(client.ip ~ logintrust) &&
> !(client.ip ~ buildtrust)) {
>  # Treat as unknown by redirecting to public website
>  if ((req.url ~ "^/yeatanother") ||
>  (req.url ~ "^/yetsomeother")) {
>  return (synth(752));
>  } else if (req.url ~ "^/yetsomethird") {
>  return (synth(753));
>  }
>  return (synth(751));
>  } else if (req.http.Scheme && req.http.Scheme != "https") {
>  # See example at
> https://www.varnish-software.com/developers/tutorials/redirect/
>  return (synth(750));
>  } else if (req.url ~ "^/somethingelse") {
>  set req.backend_hint = be1;
>  } else if (req.url ~ "^/somethingfourth") {
>  set req.backend_hint = be2;
>  } else if (req.url ~ "^/somethingfifth") {
>  set req.backend_hint = be2;
>  } else if (!(client.ip ~ buildtrust)) {
>  # Treat as unknown by redirecting to public website
>  if ((req.url ~ "^/yeatanother") ||
>  (req.url ~ "^/yetsomeother")) {
>  return (synth(752));
>  } else if (req.url ~ "^/yetsomethird") {
>  return (synth(753));
>  }
>  return (synth(751));
>  } else if (req.url ~ "^/teamcity") {
>  set req.backend_hint = teamcity;
>  return (pass);
> #} else if (req.http.host ~ "^somethingsixths") {
> #   set req.backend_hint= be4;
>  } else {
>  set req.backend_hint = be5;
>  }
>  call vcl_req_method;
>  call vcl_req_authorization;
>  call vcl_req_cookie;
>  return (hash);
> }
>
> sub vcl_backend_response {
>  # Happens after we have read the response headers from the backend.
>  #
>  # Here you clean the response headers, removing silly Set-Cookie
> headers
>  # and other mistakes your 

Re: Varnish won't start because backend host resolves to too many addresses, but they are all identical IPs

2023-04-19 Thread Guillaume Quintard
> The documentation seems a bit lacking (no full VCL example), but I guess
I could use their test cases as examples.

https://github.com/nigoroll/libvmod-dynamic/blob/master/src/vmod_dynamic.vcc#L538-L583
maybe?
I'm sure Nils will pipe up here if you need help, and if you want more
synchronous assistance, there's always the discord channel
<https://varnish-cache.org/support/>.

> The dynamic one seems like the only one that supports community edition
LTS 6.0.

Yes, of the three, that's the only one that will support that one (VS is
focused on the Enterprise version, and I lack the time to port vmods to 6.0
(but I'll welcome the help)).

Cheers,

-- 
Guillaume Quintard


On Wed, Apr 19, 2023 at 9:02 AM Batanun B  wrote:

> > Shouldn't your DNS entries be clean? ;-)
>
> Preferably, but I blame Microsoft here 
>
> The problem went away by itself when I tried starting again like half an
> hour later or so, so I guess it was a temporary glitch in the matrix.
>
> As far as I understand it, the IPs of these machines only change if they
> are deleted and created again. We do it occasionally in test/staging, and
> there we can live with Varnish needing to be restarted. In production we
> don't really delete them once they are properly setup, unless there is some
> major problem and then a restart of the load balanced varnish servers
> should not be a concern.
>
> Thanks for your vmod suggestions! I will check them out. The dynamic one
> seems like the only one that supports community edition LTS 6.0. The
> documentation seems a bit lacking (no full VCL example), but I guess I
> could use their test cases as examples.
>
> --
> *From:* Guillaume Quintard 
> *Sent:* Wednesday, April 19, 2023 4:42 PM
> *To:* Batanun B 
> *Cc:* varnish-misc@varnish-cache.org 
> *Subject:* Re: Varnish won't start because backend host resolves to too
> many addresses, but they are all identical IPs
>
> The fact the IPs are identical is weird, but I wouldn't be surprised if
> the dns entry actually contained 3 identical IPs.
>
> > Shouldn't Varnish be able to figure out that in that case it can just
> choose any one and it will work as expected?
>
> Shouldn't your DNS entries be clean? ;-)
>
> Honestly, if the IP(s) behind the service name is liable to change, you
> shouldn't use a dynamic backend because Varnish resolves the IP when the
> VCL is loaded, so if the IP changes behind your back, Varnish won't follow
> it, and you'll be screwed.
> Instead, you should use dynamic backends, of which there are a handful:
> - dynamic <https://github.com/nigoroll/libvmod-dynamic>, by UPLEX: it's
> been around for ages, it's battle-tested, and it's included in the official
> Varnish Docker image <https://hub.docker.com/_/varnish>
> - udo+activedns
> <https://docs.varnish-software.com/varnish-enterprise/vmods/udo/#subscribe>,
> by Varnish Software: the design is slightly different and allows you to
> specify pretty much any load-balancing policy you might need. You'll need a
> subscription but you'll get excellent support (disclaimer, I'm an ex
> employee)
> - reqwest
> <https://github.com/gquintard/vmod_reqwest#backend-https-following-up-to-5-redirect-hops-and-brotli-auto-decompression>,
> by yours truly: the interface focuses on providing a simple experience and
> a few bells and whistles (HTTPS, HTTP2, brotli, following redirects)
>
> As you can see, the static backend's reluctance to fully handle DNS has
> been a fertile ground for vmods :-)
>
> --
> Guillaume Quintard
>
>
> On Wed, Apr 19, 2023 at 1:49 AM Batanun B  wrote:
>
> All of a sudden Varnish fails to start in my development environment,
> and gives me the following error message:
>
> Message from VCC-compiler:
> Backend host "redacted-hostname": resolves to too many addresses.
> Only one IPv4 and one IPv6 are allowed.
> Please specify which exact address you want to use, we found all of these:
>  555.123.123.3:80
>  555.123.123.3:80
>  555.123.123.3:80
>
> I have changed the hostname and the IP above to not expose our server, but
> all three IP numbers are 100% identical. Shouldn't Varnish be able to
> figure out that in that case it can just choose any one and it will work as
> expected? It really should remove duplicates, and only if there are more
> than one non-duplicate IP then it should fail.
>
> The problem is that the backend host is a so called "app service" in
> Microsoft Azure, which is basically a platform as a service (PaaS), where
> Microsoft handles the networking including the domain name (no user access
> it directly). I have no idea why it suddenly resolves to multiple duplicate
> IPs.
> __

Re: Varnish won't start because backend host resolves to too many addresses, but they are all identical IPs

2023-04-19 Thread Guillaume Quintard
The fact the IPs are identical is weird, but I wouldn't be surprised if the
dns entry actually contained 3 identical IPs.

> Shouldn't Varnish be able to figure out that in that case it can just
choose any one and it will work as expected?

Shouldn't your DNS entries be clean? ;-)

Honestly, if the IP(s) behind the service name is liable to change, you
shouldn't use a dynamic backend because Varnish resolves the IP when the
VCL is loaded, so if the IP changes behind your back, Varnish won't follow
it, and you'll be screwed.
Instead, you should use dynamic backends, of which there are a handful:
- dynamic <https://github.com/nigoroll/libvmod-dynamic>, by UPLEX: it's
been around for ages, it's battle-tested, and it's included in the official
Varnish Docker image <https://hub.docker.com/_/varnish>
- udo+activedns
<https://docs.varnish-software.com/varnish-enterprise/vmods/udo/#subscribe>,
by Varnish Software: the design is slightly different and allows you to
specify pretty much any load-balancing policy you might need. You'll need a
subscription but you'll get excellent support (disclaimer, I'm an ex
employee)
- reqwest
<https://github.com/gquintard/vmod_reqwest#backend-https-following-up-to-5-redirect-hops-and-brotli-auto-decompression>,
by yours truly: the interface focuses on providing a simple experience and
a few bells and whistles (HTTPS, HTTP2, brotli, following redirects)

As you can see, the static backend's reluctance to fully handle DNS has
been a fertile ground for vmods :-)
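
For reference, here is a minimal sketch of the dynamic vmod's usage (the
hostname and TTL are illustrative, check the vmod's documentation for the
full API):

``` vcl
vcl 4.1;

import dynamic;

sub vcl_init {
    # re-resolve the name at most every 10 seconds
    new d = dynamic.director(port = "80", ttl = 10s);
}

sub vcl_recv {
    # the lookup happens at runtime, so DNS changes are picked up
    set req.backend_hint = d.backend("app.example.com");
}
```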

-- 
Guillaume Quintard


On Wed, Apr 19, 2023 at 1:49 AM Batanun B  wrote:

> All of a sudden Varnish fails to start in my development environment,
> and gives me the following error message:
>
> Message from VCC-compiler:
> Backend host "redacted-hostname": resolves to too many addresses.
> Only one IPv4 and one IPv6 are allowed.
> Please specify which exact address you want to use, we found all of these:
>  555.123.123.3:80
>  555.123.123.3:80
>  555.123.123.3:80
>
> I have changed the hostname and the IP above to not expose our server, but
> all three IP numbers are 100% identical. Shouldn't Varnish be able to
> figure out that in that case it can just choose any one and it will work as
> expected? It really should remove duplicates, and only if there are more
> than one non-duplicate IP then it should fail.
>
> The problem is that the backend host is a so called "app service" in
> Microsoft Azure, which is basically a platform as a service (PaaS), where
> Microsoft handles the networking including the domain name (no user access
> it directly). I have no idea why it suddenly resolves to multiple duplicate
> IPs.
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Possible to disable/inactivate a backend using VCL?

2023-04-19 Thread Guillaume Quintard
Thanks, I think I get it now. How about:

backend theBackend none;

Here's the relevant documentation:
https://varnish-cache.org/docs/trunk/users-guide/vcl-backends.html#the-none-backend
It was added in 6.4.
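
For reference, a minimal sketch (the URL prefix is illustrative): a "none"
backend needs no address and is permanently sick, so any request routed to it
yields a 503 until you swap in a real backend:

``` vcl
vcl 4.1;

backend theBackend none;

sub vcl_recv {
    if (req.url ~ "^/newfeature") {
        set req.backend_hint = theBackend;
    }
}
```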

Hope that helps.

-- 
Guillaume Quintard


On Wed, Apr 19, 2023 at 1:36 AM Batanun B  wrote:

> Hi Guillaume,
>
> > I'm curious, if it's completely deactivated what's the benefit of having
> it in the vcl?
>
> It is only intended to be deactivated in production (until we go live).
> Our test and staging environments have the backend active.
>
> > if (false) {
> > set req.backend_hint = you_deactivated_backend;
> > }
>
> Thanks, I will test this.
> My current prod-specific setup for this backend looks like this:
>
> backend theBackend {
> .host = "localhost";
> .port = "";
> .probe = {
> .interval = 1h;
> }
> }
>
> This seems to be working when testing it locally. It also solves the
> problem of having to assign some arbitrary ip or hostname (the actual
> backend host for this service hasn't been created in production yet, since
> we are several months away from go live), which actually was our main
> problem. What do you think about this approach instead? Preferably this
> would be a built in feature in Varnish, with a setting "disabled = true" or
> similar in the backend definition, and then it would not require any host
> or ip to be configured.
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Strange Broken Pipe error from Varnish health checks

2023-04-18 Thread Guillaume Quintard
Hi George,

That pcap only contains HTTP info, it would be super useful to have the TCP
packets, (SYN/ACK/FIN) to see who closes the connection on whom.
-- 
Guillaume Quintard


On Tue, Apr 18, 2023 at 9:43 AM George  wrote:

> Hi,
>
> Attached is the packet capture for the health check
>
> Please check and advise.
>
>
> În lun., 17 apr. 2023 la 19:15, Guillaume Quintard <
> guillaume.quint...@gmail.com> a scris:
>
>> That code hasn't moved in a while, so I'd be surprised to see a bug
>> there, but that's always possible.
>> Any chance you could get a tcpdump of a probe request (from connection to
>> disconnection) so we can see what's going on?
>> --
>> Guillaume Quintard
>>
>> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Strange Broken Pipe error from Varnish health checks

2023-04-17 Thread Guillaume Quintard
That code hasn't moved in a while, so I'd be surprised to see a bug there,
but that's always possible.
Any chance you could get a tcpdump of a probe request (from connection to
disconnection) so we can see what's going on?
-- 
Guillaume Quintard


On Mon, Apr 17, 2023 at 9:12 AM George  wrote:

> Hi,
>
> In our case the response body is few bytes (13), header+body is 170 bytes.
> Can this be a bug related to something else perhaps?
>
> Please let me know.
> Thanks
>
> În lun., 17 apr. 2023 la 18:47, Guillaume Quintard <
> guillaume.quint...@gmail.com> a scris:
>
>> Thanks, I looked at the code quickly and I'd venture that maybe the
>> /varnish_check is a bit too large and doesn't fit Varnish's probe buffer
>> (we only care about the status line of the response anyway), so Varnish
>> closes the connection while nginx isn't done yet.
>>
>> If it's that, it's not very polite, but it's harmless.
>> --
>> Guillaume Quintard
>>
>> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Strange Broken Pipe error from Varnish health checks

2023-04-17 Thread Guillaume Quintard
Thanks, I looked at the code quickly and I'd venture that maybe the
/varnish_check is a bit too large and doesn't fit Varnish's probe buffer
(we only care about the status line of the response anyway), so Varnish
closes the connection while nginx isn't done yet.

If it's that, it's not very polite, but it's harmless.
-- 
Guillaume Quintard


On Mon, Apr 17, 2023 at 7:49 AM George  wrote:

> Hi,
>
> Below is the probe "health", I forgot to send it the first time:
> probe health {
>.url = "/varnish_check";
>.timeout = 2s;
> .interval = 5s;
> .window = 3;
> .threshold = 2;
>   }
>
> Thanks
>
> În lun., 17 apr. 2023 la 17:32, Guillaume Quintard <
> guillaume.quint...@gmail.com> a scris:
>
>> Hi George,
>>
>> Just to be sure, how is the probe "health" defined in your VCL?
>>
>> Cheers,
>>
>> --
>> Guillaume Quintard
>>
>> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Strange Broken Pipe error from Varnish health checks

2023-04-17 Thread Guillaume Quintard
Hi George,

Just to be sure, how is the probe "health" defined in your VCL?

Cheers,

-- 
Guillaume Quintard


On Mon, Apr 17, 2023 at 1:23 AM George  wrote:

> Hi,
>
> I have a Varnish/nginx cluster running with varnish-7.1.2-1.el7.x86_64 on
> CentOS 7.
>
> The issue I am having comes from the varnish health checks. I am getting a
> "broken pipe" error in the nginx error log at random times like below:
> Apr 10 17:32:46 VARNISH-MASTER nginx_varnish_error: 2023/04/10 17:32:46
> [info] 17808#17808: *67626636 writev() failed (32: Broken pipe), client:
> unix:, server: _, request: "GET /varnish_check HTTP/1.1", host: "0.0.0.0"
>
> The strange thing is that this error appears only when Varnish performs
> the health checks. I have other scripts doing it(nagios, curl, wget, AWS
> ELB) but those do not show any errors. In addition to this Varnish and
> nginx where the health checks occur are on the same server and it makes no
> difference if I use a TCP connection or socket based one.
>
> Below are the varnish vcl and nginx locations for the health checks:
> backend nginx_varnish {
>.path = "/run/nginx/nginx.sock";
>.first_byte_timeout = 600s;
>.probe = health;
> }
>
> location = /varnish_check {
> keepalive_timeout 305;
> return 200 'Varnish Check';
> access_log /var/log/nginx/varnish_check.log main;
> error_log /var/log/nginx/varnish_check_errors.log debug;
> error_log
> syslog:server=unix:/run/nginx_log.in.sock,facility=local1,tag=nginx_varnish_error,nohostname
> info;
> }
>
> Are there any docs I can read about how exactly varnish performs the
> health checks and what internal processes are involved?
> Did anyone happen to have similar issues? This is not causing any
> operational problems for the cluster but it is just something that I want
> to determine why it is happening because it just should not be happening.
>
> Please help
> THanks in advance.
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Possible to disable/inactivate a backend using VCL?

2023-04-14 Thread Guillaume Quintard
Hi Batanun,

I'm curious: if it's completely deactivated, what's the benefit of having it
in the vcl?

To answer your question: cheat! Add this somewhere in your vcl:

sub vcl_recv {
    # the if statement will never run, but the backend is referenced,
    # so the compiler won't bother you
    if (false) {
        set req.backend_hint = you_deactivated_backend;
    }
}

You could also start varnish with `-p vcc_feature=-err_unref` but I don't
recommend it as it'll affect all the unreferenced symbols, even the ones
you may care about.

Hope that helps

-- 
Guillaume Quintard


On Fri, Apr 14, 2023 at 4:53 AM Batanun B  wrote:

> Hi,
>
> We are currently working on a new feature that won't go live for several
> months still. This new feature has it's own backend in Varnish. Most of our
> VCL code is identical for all environments, and this code refers to the new
> backend, so it needs to be defined otherwise Varnish won't start. But in
> production we don't want to show anything of this feature. And we would
> like to have this backend completely disabled or inactivated in Varnish.
> Can we do that using VCL? Like forcing the health to be sick, or something
> similar. We would prefer to keep this inside the backend declaration if
> possible, and we would also prefer somethink not too "hackish" (like
> pointing it to a dead IP).
>
> Does Varnish has any recommended approach for this? I know there is a cli
> command to set the health, but as I said we would really prefer doing this
> using VCL only.
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish cache issu

2023-03-27 Thread Guillaume Quintard
Thanks for the extra information.

The VCL doesn't seem too wild, let try getting some logs. Can you try
running this command:
varnishlog -q 'ReqMethod eq PURGE or ReqMethod eq BAN' -g request

And, as it is running, try purging something, the varnishlog log command
should start outputting some transactions.
If it doesn't, that means the purging requests are never reaching Varnish.
If it does, well, we'll have to look at them :-)
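
For context, a PURGE-enabled VCL usually looks something like this minimal
sketch (the ACL entries are illustrative):

``` vcl
acl purgers {
    "127.0.0.1";
    "192.168.0.0"/24;
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # looks the object up and invalidates all its variants
        return (purge);
    }
}
```

If Drupal's purger never produces transactions in varnishlog, the module is
most likely pointed at the wrong host or port.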

-- 
Guillaume Quintard


On Mon, Mar 27, 2023 at 7:26 AM Rafael Hakobian 
wrote:

> Hi Guillaume,
>
> Thanks a lot for a quick response.
>
> 1.How are you purging
>
> We used drupal varnish purger module for this and want just to invalidate
> via URL.
> 2. If by varnish plugin you mean varnish cache, yes we have installed
> varnish according to the documentation at
> https://www.varnish-software.com/emea/.
> 3. this is an example of a vcl.
>
> Please let me know in case I missed something.
>
> Kind regards,
> Rafael
>
> On Mon, Mar 27, 2023 at 6:12 PM Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> Hi Rafael,
>>
>> We are going to need some information here. Can you share your VCL? How
>> are you purging? Are you using a Varnish plugin?
>>
>> Cheers,
>>
>> On Mon, Mar 27, 2023, 05:48 Rafael Hakobian 
>> wrote:
>>
>>> Hello!
>>>
>>> In a website developed with Drupal 9, Varnish cache is used.
>>>
>>> When invalidating the caches via URL, it reports that the caches are
>>> cleared, but they are not actually being cleared.
>>>
>>> can you please help on this issue?
>>>
>>> thanks
>>> Kind regards,
>>> Rafael
>>> ___
>>> varnish-misc mailing list
>>> varnish-misc@varnish-cache.org
>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>>
>>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish cache issu

2023-03-27 Thread Guillaume Quintard
Sending again, keeping the list copied:

Hi Rafael,

We are going to need some information here. Can you share your VCL? How are
you purging? Are you using a Varnish plugin?

Cheers,

On Mon, Mar 27, 2023, 05:48 Rafael Hakobian 
wrote:

> Hello!
>
> in a website developed by drupa 9 varnish cache is used.
>
> When invalidating the caches via URL, it reports that the caches are
> cleared, but they are not actually being cleared.
>
> can you please help on this issue?
>
> thanks
> Kind regards,
> Rafael
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: varnish and mp4 files

2023-02-20 Thread Guillaume Quintard
Hello Karim,

You VCL would be useful to debug this (as well as the command line you are
running Varnish with), but it sounds like Varnish is using the Transient
storage (
https://varnish-cache.org/docs/trunk/users-guide/storage-backends.html#transient-storage)
to store the file, and as the storage isn't bounded, it explodes.
We can fix this in a couple of ways, from storing the file in the regular
cache storage, to using pipe, to waiting a few days for
https://github.com/varnishcache/varnish-cache/pull/3572#issuecomment-1305736643
to be released.
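
For the record, one way to bound Transient is to declare it explicitly on the
varnishd command line (sizes are illustrative):

```
# varnishd storage flags: a regular cache plus a bounded Transient store
-s malloc,1G -s Transient=malloc,512M
```

With a bounded Transient, an oversized short-lived object fails its fetch
instead of exhausting memory, so size it according to your traffic.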

Question is: should that file be cached?

Cheers,

-- 
Guillaume Quintard


On Mon, Feb 20, 2023 at 7:14 AM Karim Ayari 
wrote:

> Hi!
>
> I am currently experiencing a memory load problem with video playback.
>
> here is the infrastructure :
>
> client --> haproxy --> varnish --> moodle workers (x5)
>
> a teacher uploaded a 400MB video to Moodle. When we start playing the
> video with the browser player, Varnish consumes all the memory until it
> runs out and the OOM killer kills varnishd. I have no configuration for mp4
> files in my vcl file, so by default they are not cached (?). I can't find
> a solution :(
>
> I can give my vcl file if necessary.
>
> (I am a beginner on varnish :))
>
> thank you for your support.
>
> Karim
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: varnish:6.0.11 Docker image crashing on Apple M1 processor

2023-01-26 Thread Guillaume Quintard
Hi,

I'm not sure what is going on here, as we do have arm64v8 official
images:
https://github.com/docker-library/official-images/blob/master/library/varnish

Could it just be a permissions issue?

-- 
Guillaume Quintard


On Thu, Jan 26, 2023 at 9:32 AM Martynas Jusevičius 
wrote:

> Hi,
>
> We have a Docker image based on varnish:6.0.11.
>
> A user on Apple M1 processor is reporting a crash:
> https://github.com/AtomGraph/LinkedDataHub/issues/149
>
> Error:
> Message from VCC-compiler:
> Assert error in vju_subproc(), mgt/mgt_jail_unix.c line 212:
>   Condition((initgroups(vju_user, vju_gid)) == 0) not true.
>   errno = 1 (Operation not permitted)
> qemu: uncaught target signal 6 (Aborted) - core dumped
> Running VCC-compiler failed, signal 6
> VCL compilation failed
>
> Do you provide images with linux/arm64/v8 support as well? Or what is
> the course of action here?
>
> Thanks.
>
> Martynas
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Centralizing varnish logs

2023-01-10 Thread Guillaume Quintard
Hi Justin, happy new year!

Without getting too much in the details, it should look like a basic shell
command with a few pipes. Splunk for example has the universal forwarder
that is going to push logs to the server where you can then review and
search for the ingested logs.

The main issue is to push something meaningful to the log collector, and
this is where things are a bit lacking, mainly because it's better to push
structured info, and varnish isn't great at it yet.

For example, for logs, you have about three choices:
- varnishncsa, treat each line as a string and be done with it. It's not
amazing as you'll be forced to use regex to filter requests, since you just
logged a string
- varnishncsa -j, it's better, you can carefully craft a format line to
look like an LDJSON object, and now the log analyzer (I know splunk does
it, at least) will allow you to look for "resp.status == 200 && req.url ~
/foo/". The annoyance is that you need to explicitly decide which headers
you want to log, and the format line/file is going to be disgustingly
verbose and painful to maintain.
- enterprise varnishlog has support for LDJSON output, which is great and
is as comprehensive as you can get. It could be too verbose (i.e. storage
heavy), it's only in Varnish Enterprise, and it'll log everything,
including the headers that got deleted/modified.
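
To illustrate the second option, an LDJSON-ish format line might look like
this (a sketch; the field names are arbitrary, and -j makes the expanded
values JSON-safe):

```
{"ts": "%t", "method": "%m", "url": "%U%q", "status": %s, "bytes": %b, "host": "%{Host}i", "handling": "%{Varnish:handling}x"}
```

Passed via varnishncsa -j -F '...', this emits one JSON object per request,
which log collectors can index and filter without regexes.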

I believe that what we need is a JSON logger that just logs a synthetic view
of the transaction, something like this for example:

{
  "req": {
    "method": "GET",
    "url": "/foo.html",
    "headers": [ {"name": "host", "value": "example.com"}, {"name": "accept-encoding", "value": "gzip"} ],
    "start_time": 123456789,
    "end_time": 123456790,
    "bytes": { "headers": 67, "body": 500, "total": 567 },
    "processing": "miss"
  },
  "resp": {...},
  "bereq": {...}
}

we have all the information in varnishlog, it's just a matter of formatting
it correctly. With that, you have something that's easily filtered and is
more natural and comprehensive than what we currently have.

It turns out it's been on my mind for a while, and I intend to get on it,
but for now I'm having way too much fun with rust, vmods and backends to
promise any commitment.
HOWEVER, if somebody wants to code some C/rust to scratch that itch, I'll
be happy to lend a hand!

Does this make sense?

-- 
Guillaume Quintard


On Tue, Jan 10, 2023 at 2:22 PM Justin Lloyd  wrote:

> Hi all,
>
>
>
> I need to centralize logs of multiple Varnish servers in my web server
> environments, generally just 4 or 6 servers depending on the environment.
> I’d like to be able to do this either with Splunk or an Amazon OpenSearch
> cluster, i.e., a managed ELK stack. However, not having worked with either
> tool for such a purpose, I’m not clear on how I could then review, replay,
> etc. the centralized logs similar to the output from tools like
> *varnishlog* and *varnishtop*. Are there existing tools for handling
> Varnish logs in these kinds of centralized log management systems, or would
> I be somewhat constrained on what I could do with the stored logs? Aside
> from the benefit of unifying the logs across all of my web servers, I am
> trying to reduce how much I need to log in to the individual log servers to
> monitor ongoing issues, etc.
>
>
>
> FWIW, I haven’t checked how much log data our production web servers
> generate in a day, but when I checked several years ago (before moving into
> AWS and when the sites were much smaller), it was on the order of like 1 GB
> per day per server.
>
>
>
> Thanks,
>
> Justin
>
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


introducing vmod_fileserver

2022-12-26 Thread Guillaume Quintard
Happy holidays to you all!

I had a few days off, so of course I decided to finally get some hacking
time on Varnish, and decided to once again spend some time on rust and
backends. Here's what I got: https://github.com/gquintard/vmod_fileserver

As the name suggests, this vmod allows you to serve files directly from
disk, without relying on an external backend. That can be useful for a
bunch of things like returning a healthcheck file generated by another
process, providing an error page in case of backend failure, or just to
provide some test data quickly.

Essentially, this vmod:
- allows you to set a root directory and serve files from there
- supports HEAD/GET
- supports conditional requests with if-none-match and if-modified-since

The API is minimal, and the code hasn't seen any production traffic yet, so
be careful! But if you can test it and there are some use cases not covered
here, I'll happily hear your feedback.

Also, if you are a C developer curious about rust, you might want to look
at the code:
https://github.com/gquintard/vmod_fileserver/blob/main/src/lib.rs
I commented it as much as I could, and because varnish-rs is far from
complete, you'll see a lot of C idiosyncrasies that aren't "hidden" by
rust, and therefore the code might look more familiar.

Anyway, that's all for me, let me know what you think!

-- 
Guillaume Quintard
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Hitch fails to start.

2022-12-10 Thread Guillaume Quintard
Hi Jim,

Could you share your hitch.conf, as well as the output of "systemctl cat
hitch", please?
-- 
Guillaume Quintard


On Fri, Dec 9, 2022 at 10:10 AM Jim Olson  wrote:

> I have been trying unsuccessfully to get hitch to run on a Debian 10
> based VPS.
>
> ● hitch.service - Hitch TLS unwrapping daemon
> Loaded: loaded (/lib/systemd/system/hitch.service; enabled; vendor
> preset: enabled)
> Active: failed (Result: exit-code) since Fri 2022-12-09 12:51:44
> EST; 6s ago
>   Docs: https://github.com/varnish/hitch/tree/master/docs
> man:hitch(8)
>Process: 6850 ExecStart=/usr/sbin/hitch --user _hitch --group _hitch
> --config /etc/hitch/hitch.conf --quiet (code=exited, status=1/FAILURE)
>   Main PID: 6850 (code=exited, status=1/FAILURE)
>
> Dec 09 12:51:44 racknerd-395538 systemd[1]: hitch.service: Service
> RestartSec=100ms expired, scheduling restart.
> Dec 09 12:51:44 racknerd-395538 systemd[1]: hitch.service: Scheduled
> restart job, restart counter is at 5.
> Dec 09 12:51:44 racknerd-395538 systemd[1]: Stopped Hitch TLS unwrapping
> daemon.
> Dec 09 12:51:44 racknerd-395538 systemd[1]: hitch.service: Start request
> repeated too quickly.
> Dec 09 12:51:44 racknerd-395538 systemd[1]: hitch.service: Failed with
> result 'exit-code'.
> Dec 09 12:51:44 racknerd-395538 systemd[1]: Failed to start Hitch TLS
> unwrapping daemon.
>
> The packages are being pulled from the repository
> https://packagecloud.io/varnishcache/varnish60lts/ubuntu trusty main
>
> Is there anything that can be done to get hitch working?
>
>
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Varnish-cache as private CDN

2022-11-21 Thread Guillaume Quintard
Hi!

So, the big question is: do you own the content/domains that the users will
access?

If yes, there's absolutely no problem, route to Varnish, let it cache, and
you're done. There are certain vmods, like vmod_dynamic or vmod_reqwest
that will allow you to dynamically find a backend based on a hostname.
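
For the "yes" case, a minimal sketch following vmod_dynamic's documented usage (resolution parameters such as TTLs left at their defaults):

```vcl
vcl 4.1;
import dynamic;

# placeholder, the real backend is resolved at runtime
backend default none;

sub vcl_init {
        new d = dynamic.director(port = "80");
}

sub vcl_recv {
        # pick (and resolve, if needed) a backend from the Host header
        set req.backend_hint = d.backend(req.http.host);
}
```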

If you don't own the content, it isn't advisable to try and cache it, like,
at all.
Let's say for example you want to use varnish to cache content for
facebook.com and let's assume you can hijack DNS response to send your
users to Varnish instead of to the actual facebook servers.

If the request Varnish receives is HTTPS (encrypted), well, you're out of
luck because you won't have the certificates to pretend to be facebook.com;
your users will realize it and bail out. The only way around it is to try
something like what Kazakhstan did a few years back [1], but I don't think
that would fly in Canada.
If you're thinking "wait, can't I just cache the response without
decrypting it?", nope, because the whole connection is encrypted, and
either you see everything (you have the certificate/key), or nothing (you
don't have them).
In that latter case, the best you can do is blindly redirect the connection
to the facebook server, but then you are just an HTTPS proxy, and caching
isn't relevant.

If we are talking about plaintext HTTP, and ignoring that your browser and
any website worth its salt (including facebook.com) will fight you very
hard and try to go encrypted, you have another issue: you need to know
what's cacheable, and that's a doozy.
There's no universal rule to what's cacheable, and whatever set of rules
you come up with, I'll bet I can find a website that'll break them.
And the price of failure is super high too: imagine you start sending the
same cached bank statement to everybody, people will sue you into the
ground.

So, all in all, meh, I wouldn't worry about it. And it's not just Varnish,
it's any caching solution: you just can't "cache the internet".

Sorry if that reads like a very long-winded way of saying "NO", but as I've
had to answer this question many times over the years, I thought I'd hammer
that point home once and for all :-)


[1]: https://en.wikipedia.org/wiki/Kazakhstan_man-in-the-middle_attack

-- 
Guillaume Quintard


On Mon, Nov 21, 2022 at 7:13 PM InfoVerse Inc.  wrote:

> Hello list,
>
> I am working on a design to use Varnish-Cache as a private CDN. The
> solution is for a small regional ISP in a remote region who wants to
> provide fast cached content to its users and minimize access to the
> Internet.
>
> Since this is an ISP, the users accessing the Internet can be routed to
> varnish cache servers, however, in the event of a "miss" the content should
> be fetched from the Internet. This is a different requirement than the
> traditional backend server.
>
> How can this be achieved with Varnish? I have done a bit of research on
> backends, directors but they all require a server or group of servers whose
> content can be cached.
>
> Is it possible to configure multiple Varnish storage servers as backends?
> The storage servers will fetch data from the Internet in case of a miss. Is
> this a workable solution?
>
> Looking forward to a solution.
>
> Thanks
> InfoVerse
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Need an clarification to use Varnish cache to store docker images.

2022-11-21 Thread Guillaume Quintard
Hello Kavinnath,

The short answer is "yes, but".

Varnish can happily front a docker registry, and it's only HTTP, but there
are a certain number of caveats:
- remember that docker, by default, doesn't like plaintext connections, so
you'll probably want to use a TLS-terminator in front of Varnish (
https://hub.docker.com/_/hitch)
- docker images can be quite big, so make sure to size Varnish properly.
- if you want to front the main docker registry in particular, you'll have
to deal with the token dance (
https://docs.docker.com/registry/spec/auth/token/) it requires. It's not a
problem in itself, but it requires some VCL to get it working.
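
To give an idea of the kind of VCL involved, here's a sketch (not production-ready: it assumes the registry content is public, since it strips Authorization so cached blobs can be shared between clients):

```vcl
sub vcl_recv {
        # blobs are content-addressed (the digest is in the URL), so they
        # are immutable and safe to cache aggressively
        if (req.url ~ "^/v2/.+/blobs/sha256:") {
                unset req.http.Authorization;
                return (hash);
        }
}

sub vcl_backend_response {
        if (bereq.url ~ "^/v2/.+/blobs/sha256:") {
                set beresp.ttl = 1w;
        }
}
```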

If you have more questions, this mailing list will work very well for
asynchronous messaging, but know that there's also an IRC channel (
https://varnish-cache.org/support/#irc-channel) as well as a discord server
(https://discord.gg/EuwdvbZR6d) if you want something more synchronous.

Cheers,

-- 
Guillaume Quintard


On Mon, Nov 21, 2022 at 4:39 AM learner  wrote:

> Hi Team,
>
> I hope this is the right place to ask doubts otherwise please redirect me
> to the right place.
> I would like to understand whether we can use varnish as proxy cache as
> frontend for docker registry. Hope varnish is able to cache docker images.
> If yes, Could you please share with me the rightful resources to explore
> further.
>
> --
> Kind Regards,
> Kavinnath
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: TTL <=0 ?

2022-09-27 Thread Guillaume Quintard
One minor addition to Geoff's excellent answer: you may want to try this
VCL: https://docs.varnish-software.com/tutorials/hit-miss-logging/
Specially at the beginning it helps knowing what happened to the request.

-- 
Guillaume Quintard


On Tue, Sep 27, 2022 at 6:33 AM Geoff Simmons  wrote:

> On 9/27/22 15:07, Johan Hendriks wrote:
> > Hello all, varnish tells me that the TTL is smaller or equal to 0, but
> > looking at the response headers that is not the case as the BerespHeader
> > has  Expires: Wed, 27 Sep 2023 12:23:11 GMT which is in 2023!
> >
> > -   Begin  bereq 8203147 pass
>
> The request was set to pass on the client side; that sets
> bereq.uncacheable=true, which is passed along to the backend side as
> beresp.uncacheable=true.
>
> The Expires response header (and also Cache-Control in your example)
> might at least tell browser caches that they can cache the response. But
> Varnish won't cache it.
>
> [...]
> > -   BereqHeaderCookie: _sharedid=redacted; cto_bundle=redacted
> [...]
>
> > Am i right that the TTL is <=0 because it sends a cookie to the backend?
>
> If you haven't changed this part of builtin.vcl, then yes:
>
> sub vcl_req_cookie {
> if (req.http.Cookie) {
> # Risky to cache by default.
> return (pass);
> }
> }
>
> If a request/response has a property such as a Cookie header, and a
> number of other things that suggest that the response may be
> personalized, then it can't take the chance of caching it by default.
> That can be one of the worst mistakes you can make with a caching proxy.
>
> So if you need to be able to cache despite the presence of cookies, as
> do many sites these days, you need to write rules for that in VCL.
> Default VCL has to make the safest choice.
>
>
> Best,
> Geoff
> --
> ** * * UPLEX - Nils Goroll Systemoptimierung
>
> Scheffelstraße 32
> 22301 Hamburg
>
> Tel +49 40 2880 5731
> Mob +49 176 636 90917
> Fax +49 40 42949753
>
> http://uplex.de
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Using varnish and vouch-proxy together

2022-07-26 Thread Guillaume Quintard
Hi!

You are correct, the request body is, by default, not cached. To correct
this, you need to use the cache_req_body() function from the std vmod:
https://varnish-cache.org/docs/trunk/reference/vmod_std.html#std-cache-req-body
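
In the restart scenario above, that would look something like this (the 100KB limit is arbitrary, size it for your payloads):

```vcl
import std;

sub vcl_recv {
        if (req.restarts == 0 && req.method ~ "^(POST|PUT)$") {
                # buffer the request body in memory so it survives restarts;
                # bodies over the limit cause the call to fail
                if (!std.cache_req_body(100KB)) {
                        return (synth(413, "request body too large to buffer"));
                }
        }
}
```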


I haven't looked at the vcl yet (or the article), but since you mention
vmod_curl, maybe you can try with vmod_reqwest instead (
https://github.com/gquintard/vmod_reqwest). It's a bit different, but
hopefully more powerful, and I'd love to get some feedback on it.


Hope that helps!

On Tue, Jul 26, 2022, 00:03 Tom Anheyer | BerlinOnline <
tom.anhe...@berlinonline.de> wrote:

> Hello,
>
> I try to use vouch-proxy and varnish (v7) together to build a authorisation
> proxy. vouch-proxy is written to work with nginx
> ngx_http_auth_request_module
>
> https://github.com/vouch/vouch-proxy
> https://nginx.org/en/docs/http/ngx_http_auth_request_module.html
>
> Idea:
>
> inspired from
>
> https://web.archive.org/web/20121124064818/https://adayinthelifeof.nl/2012/07/06/using-varnish-to-offload-and-cache-your-oauth-requests/
>
> - use varnish request restart feature
> - intercept original client request and make a GET request to vouch-proxy
> validate endpoint
> - when validated restore the original request and do a restart
>
> in detail:
>
> # vcl_recv
> #   restarts == 0
> #   save req method, url, Content-Length, Content-Type in var
> #   method := GET
> #   url := /validate
> #   backend := vouch-proxy
> #   remove Content-Length, Content-Type
> #   restarts > 0
> #   check vouch-proxy headers (roles, groups)
> #
> # vcl_deliver
> #   resp == vouch-proxy,GET,/validate,200
> #   restore req method, url, Content-Length, Content-Type from var
> #   forward vouch-proxy response headers to req
> #   restart (original) req
>
> see attached common-vouch-proxy.vcl
>
> It works for client requests without request body (GET, HEAD, …) but not
> for
> POST, PUT, …. POST, PUT run in timeouts, so I think the request body is
> lost in
> the restarted request. Why is the body gone after restart?
>
> I think it should work with the curl vmod but this is not integrated yet.
>
> Thank you very much in advance
> tom
>
> --
> Tom Anheyer
> Senior Developer
>
> BerlinOnline Stadtportal GmbH & Co. KG
> Stefan-Heym-Platz 1
> 10365 Berlin
> Germany
>
> Tel.: +49 30 2327-5210
> Fax: +49 30 5771180-95
> E-Mail: tom.anhe...@berlinonline.de
>
> berlin.de | berlinonline.net
>
> Amtsgericht Berlin-Charlottenburg, HRA 31951
> Sitz der Gesellschaft: Berlin,
> Deutschland
> USt-IdNr.: DE219483549
>
> Persönlich haftender Gesellschafter:
> BerlinOnline Stadtportalbeteiligungsges. mbH
> Amtsgericht Berlin-Charlottenburg, HRB 79077
> Sitz der Gesellschaft: Berlin, Deutschland
>
> Geschäftsführung: Olf Dziadek, Andreas Mängel
> Amtierender Vorsitzender des Aufsichtsrates: Lothar Sattler
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Feedback needed: vmod_reqwest

2022-07-06 Thread Guillaume Quintard
On Wed, Jul 6, 2022 at 12:07 AM Dridi Boukelmoune  wrote:

> I didn't find how to scam people with NFTs in the manual, should I
> open a github issue?
>

No no no, it's just to weed out the weak investors, send me 5 bitcoins and
I'll show where it is.


>
> In general, I agree, the API looks rather well thought out, even
> though it does suffer bloated constructor syndrome.


Yes, I realized later that I could use the event method of the backend to
finalize the object, however, I'm not sure this:

new client = reqwest.client();
client.set_base_url("http://www.example.com");
client.set_follow(5);
client.set_brotli(true);
client.set_probe(p1);
client.set_connect_timeout(5s);


is more readable, or practical than:

new client = reqwest.client(
base_url = "http://www.example.com",
follow = 5,
auto_brotli = true,
probe = p1,
connect_timeout = 5s
);


(consider this a question to you all, if you have an opinion, voice it!)

Did you put only
> timeout and connect_timeout to lower the number of arguments or
> weren't you able to implement ftbo and bbto with reqwest? I suspect
> both :p
>

Definitely the latter; once you pass the 6-7 argument threshold, the sky's
the limit.


>
> Also it says this:
>
> > In practice, when contacting a backend, you will need to `unset
> bereq.http.accept-encoding;`, as Varnish sets it automatically.
>
> Probably a nice spot to mention
>
> https://varnish-cache.org/docs/7.0/reference/varnishd.html#http-gzip-support
> to explain why one would be set.
>
> On the other hand, if you disable gzip support you may also be
> forwarding the client's accept-encoding header if it survived all the
> way to the backend fetch.
>

Good points, I can update the docs. I'm wondering though if it's better to
special-case the AE header handling in the vmod and try to be smart, or
just let the user do it in VCL...

-- 
Guillaume Quintard


Feedback needed: vmod_reqwest

2022-07-03 Thread Guillaume Quintard
Hi all,

In January, I wrote here about *vmod_reqwest* and today I'm coming back
with a major update and a request for the community.

Little refresher for those who don't know/remember what vmod_reqwest is
about: https://github.com/gquintard/vmod_reqwest.
In short it does *dynamic backends* and HTTP requests from VCL (*à la
vmod_curl*).
Some random buzzwords to make you click on the link: *HTTPS, HTTP/2, gzip,
brotli, parallel requests, sync/async*, cryptocurrency.

The main benefit of this release is the *probe support.* vmod_reqwest is
now capable of handling probes the same way native backends do, but
combined with dynamic backends, it allows you one pretty neat trick: you
can probe one backend to set the health of another
<https://github.com/gquintard/vmod_reqwest#backend-using-a-probe-to-one-backend-to-determine-anothers-health>
.

The API is fairly complete and ergonomic I believe, but I would love to get
more hands and eyes on this to break it/make it better. If some of you have
opinions and/or want to take it for a spin, there are build explanations in
the README <https://github.com/gquintard/vmod_reqwest#build-and-test>, as
well as a Dockerfile
<https://github.com/gquintard/vmod_reqwest/blob/main/Dockerfile> [1] that
will build onto the official image without polluting it.

Let me know what you think of it!

[1]: thanks @thomersch for the help and push on the Docker front
-- 
Guillaume Quintard


vmods and docker containers got easier

2022-04-28 Thread Guillaume Quintard
Hi everyone,

Here's a triplet of announcements that should make life easier for a bunch
of vmod users.

# install-vmod

Earlier this month I pushed
https://github.com/varnish/toolbox/tree/master/install-vmod which, I must
admit, takes inspiration from xcir's vmod-packager (
https://github.com/xcir/vmod-packager), but with a way less ambitious scope.

Essentially, you can just point install-vmod at a tarball (local or
remote), with an optional checksum and it will download, build, test and
install the vmod for you:

install-vmod
https://github.com/varnish/varnish-modules/releases/download/0.20.0/varnish-modules-0.20.0.tar.gz
e63d6da8f63a5ce56bc7a5a1dd1a908e4ab0f6a36b5bdc5709dca2aa9c0b474bd8a06491ed3dee23636d335241ced4c7ef017b57413b05792ad382f6306a0b36


It's only a few lines of shell and doesn't handle dependencies
installation, but it's pretty convenient for the next point.

# install-vmod in official docker images

install-vmod is included in all official images, making it very easy to
supplement the images with your own vmod combinations. Here's an example
taken from the official docs (https://hub.docker.com/_/varnish):

FROM varnish:7.1-alpine


# install build dependencies
USER root
RUN set -e; \
apk add --no-cache $VMOD_DEPS; \
\
# install one, possibly multiple vmods
install-vmod
https://github.com/varnish/varnish-modules/releases/download/0.20.0/varnish-modules-0.20.0.tar.gz;
\
\
# clean up
apk del --no-network $VMOD_DEPS
USER varnish


Note the VMOD_DEPS that allows you to quickly add and remove the general
building dependencies.

# official docker images now include varnish-modules and vmod_dynamic

Now that images have an easy way to install vmods, it seemed like a waste
to not install a couple of those, namely:
- varnish-modules (https://github.com/varnish/varnish-modules) because it's
full of tools that are useful in most setups (headers, var, str, etc.)
- vmod_dynamic (https://github.com/nigoroll/libvmod-dynamic) since
containers usually live in dynamic environments with backend with a DNS
record but no fixed IP

And those two have the big benefit of not requiring any extra dependencies
compared to Varnish, meaning the image size only slightly increased.

And that's it for now! As usual, feedback is welcome, especially since the
features are so new.

Until next time!

-- 
Guillaume Quintard


Public alternative to VFP_Push

2022-04-23 Thread Guillaume Quintard
Hi!

With 6.6, VFP_Push was made private, but it looks like there's no
alternative for it. I've seen VCL_StackVFP, but it's equally private.

For context, I'm currently using VFP_Push in
https://github.com/gquintard/vmod_reqwest/blob/4aecc793643d5eb395c43cbbad463c7b0deef6ab/src/lib.rs#L658
, pretty much exactly like Varnish does it internally:
- have a backend with a gethdrs method
- get called by VDI_GetHdr
- once the headers are in, use VFP_Push to inject a processor at the start
of the pipeline

It works very well, but if there's a way to respect the API boundaries, I'd
be happy to abide.

Cheers,

-- 
Guillaume Quintard


Re: Question regarding lifetime of PRIV_TASK pointer

2022-03-24 Thread Guillaume Quintard
My pleasure!

Just to show off a bit, and because I know you have an eye on rust:
https://github.com/gquintard/varnish-rs/blob/main/examples/vmod_timestamp/src/lib.rs

VPriv (https://docs.rs/varnish/latest/varnish/vcl/vpriv/struct.VPriv.html)
is the rust equivalent of vmod_priv and will properly be garbage-collected
when dropped (yes, I do need to write some docs for it!)

-- 
Guillaume Quintard


On Thu, Mar 24, 2022 at 6:46 AM Lee Hambley  wrote:

> Hi,
>
> This is exaclty what we were looking for. Thank you sincerely.
>
> Lee Hambley
> http://lee.hambley.name/
> +49 (0) 170 298 5667
>
>
> On Wed, 23 Mar 2022 at 17:15, Guillaume Quintard <
> guillaume.quint...@gmail.com> wrote:
>
>> Hi Lee,
>>
>> Looks like you had the right page, but missed the interesting part :-) In
>> you
>> https://varnish-cache.org/docs/trunk/reference/vmod.html#ref-vmod-private-pointers
>> have this bit:
>>
>> > .fini will be called for a non-NULL .priv of the struct vmod_priv when
>> the scope ends with that .priv pointer as its second argument besides a
>> VRT_CTX.
>>
>> i.e. if your vmod_priv has a methods->fini pointer, it will be called
>> when the vmod_priv is deleted.
>>
>> Was this what you were after, or did I misunderstand your question?
>>
>> --
>> Guillaume Quintard
>>
>>
>> On Wed, Mar 23, 2022 at 7:28 AM Lee Hambley 
>> wrote:
>>
>>> Dear List,
>>>
>>> I inherited a project using PRIV_TASK [1] for which the documentation
>>> says:
>>>
>>> PRIV_TASK “per task” private pointers are useful for state that applies
>>> to calls for either a specific request or a backend request. For instance
>>> this can be the result of a parsed cookie specific to a client. Note that
>>> PRIV_TASK contexts are separate for the client side and the backend
>>> side, so use in vcl_backend_* will yield a different private pointer
>>> from the one used on the client side. These private pointers live only for
>>> the duration of their task.
>>>
>>> We do a form of reference counting in our internal data structures, and
>>> the PRIV_TASK pointer in parts is used to hold a (counted) reference to
>>> some data in the shared structure.
>>>
>>> We are struggling to find the latest possible safest place to hook where
>>> PRIV_TASK is about to be invalid (end of the request) so that we can
>>> safely, and finally decrement the reference count and clean-up.
>>>
>>> Writing this out now, I suspect that there's a safe exit from the state
>>> machine [2] where we could modify our VCL to include a call to a clean-up
>>> function in our vmod, however it's not clear to me if this would be "safe"
>>> (restarts, request coalescing, etc, etc)
>>>
>>> In short then, is there an obvious place into which we can hook which is
>>> the place where Varnish is already about to discard the "task" and it is
>>> unequivocally safe for us to decrement our reference-counted pointer to the
>>> PRIV_TASK referenced data?
>>>
>>> Thanks so much, very much enjoying being in a role hacking on Varnish,
>>> and Varnish adjacent stuff in my job currently.
>>>
>>> [1]:
>>> https://varnish-cache.org/docs/trunk/reference/vmod.html#ref-vmod-private-pointers
>>> [2]:
>>> https://www.varnish-software.com/developers/tutorials/varnish-configuration-language-vcl/#finite-state-machine
>>>
>>>
>>> Lee Hambley
>>> http://lee.hambley.name/
>>> +49 (0) 170 298 5667
>>> ___
>>> varnish-misc mailing list
>>> varnish-misc@varnish-cache.org
>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>>
>>


Re: Question regarding lifetime of PRIV_TASK pointer

2022-03-23 Thread Guillaume Quintard
Hi Lee,

Looks like you had the right page, but missed the interesting part :-) In
you
https://varnish-cache.org/docs/trunk/reference/vmod.html#ref-vmod-private-pointers
have this bit:

> .fini will be called for a non-NULL .priv of the struct vmod_priv when
the scope ends with that .priv pointer as its second argument besides a
VRT_CTX.

i.e. if your vmod_priv has a methods->fini pointer, it will be called when
the vmod_priv is deleted.

Was this what you were after, or did I misunderstand your question?

-- 
Guillaume Quintard


On Wed, Mar 23, 2022 at 7:28 AM Lee Hambley  wrote:

> Dear List,
>
> I inherited a project using PRIV_TASK [1] for which the documentation says:
>
> PRIV_TASK “per task” private pointers are useful for state that applies
> to calls for either a specific request or a backend request. For instance
> this can be the result of a parsed cookie specific to a client. Note that
> PRIV_TASK contexts are separate for the client side and the backend side,
> so use in vcl_backend_* will yield a different private pointer from the
> one used on the client side. These private pointers live only for the
> duration of their task.
>
> We do a form of reference counting in our internal data structures, and
> the PRIV_TASK pointer in parts is used to hold a (counted) reference to
> some data in the shared structure.
>
> We are struggling to find the latest possible safest place to hook where
> PRIV_TASK is about to be invalid (end of the request) so that we can
> safely, and finally decrement the reference count and clean-up.
>
> Writing this out now, I suspect that there's a safe exit from the state
> machine [2] where we could modify our VCL to include a call to a clean-up
> function in our vmod, however it's not clear to me if this would be "safe"
> (restarts, request coalescing, etc, etc)
>
> In short then, is there an obvious place into which we can hook which is
> the place where Varnish is already about to discard the "task" and it is
> unequivocally safe for us to decrement our reference-counted pointer to the
> PRIV_TASK referenced data?
>
> Thanks so much, very much enjoying being in a role hacking on Varnish, and
> Varnish adjacent stuff in my job currently.
>
> [1]:
> https://varnish-cache.org/docs/trunk/reference/vmod.html#ref-vmod-private-pointers
> [2]:
> https://www.varnish-software.com/developers/tutorials/varnish-configuration-language-vcl/#finite-state-machine
>
>
> Lee Hambley
> http://lee.hambley.name/
> +49 (0) 170 298 5667
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Confusion about LTS versions. Any comprehensive documentation regarding everything LTS?

2022-03-02 Thread Guillaume Quintard
Hi,

Would this help? https://lists.archive.carbon60.com/varnish/misc/49998

Cheers,

-- 
Guillaume Quintard


On Wed, Mar 2, 2022 at 8:51 AM Batanun B  wrote:

> Hi,
>
> Is there any documentation focused on the LTS versions of Varnish Cache?
> And with that I mean things like "What does the LTS version of Varnish
> mean?", "Why should or shouldn't I choose an LTS version?", "What is the
> latest LTS version?" and "How do I install an LTS version?".
>
> Currently I can't find any such documentation anywhere.
>
> We use Varnish Cache 6.0 LTS (6.0.6) now, on Ubuntu 18.04 LTS. I'm testing
> setting up a new Varnish server, on Ubuntu 20.04 LTS, and it automatically
> installs Varnish 6.2.1-2.
>
> Is that an LTS version? How can I verify that? And if not, how can I make
> sure that the version being selected is an LTS version?
>
> I followed the instructions at
> https://packagecloud.io/varnishcache/varnish60lts/install#manual-deb and
> in the file varnishcache_varnish60lts.list I made sure to change "trusty"
> to "focal" to match the Ubuntu version.
>
> Also, the "Releases & Downloads" page is quite confusing. First, it
> doesn't say _anything_ about LTS versions. Secondly, it specifically
> mentions version 7.0.2, 6.6.2 and 6.0.10 as supported, and says "All
> releases not mentioned above are End-Of-Life and unsupported". What does
> that mean?
>
> https://varnish-cache.org/releases/
>
> Also, is there a place where we can see the roadmap for future planned LTS
> versions? Now I have no idea if there will be a new LTS coming next week,
> next year, or 2030.
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Fwd: Writing vmods in Rust

2022-01-30 Thread Guillaume Quintard
Hi all,

So, almost 2 months after the initial announcement and quite a lot of
dogfooding, varnish-rs has seen a few releases and grown a bit. Even though
I still consider it in the alpha stage, I wanted to present a couple of
vmods I built on top of it.

The first vmod is the smallest one: vmod_rers (
https://github.com/gquintard/vmod_rers) that deals with Regular Expressions
in RuSt.
A cursory look at the VCC file (
https://github.com/gquintard/vmod_rers/blob/main/vmod.vcc) will quickly
reveal that it does way less than `
https://code.uplex.de/uplex-varnish/libvmod-re2/` however, there's still a
couple of nice things about it:
- regex are cached in a LRU store, so you don't have to recompile the same
dynamic expression over and over again
- named patterns!
- there is both a VDP and a VFP, in other words: you can modify content as
it enters the cache AND as it leaves it. Note however that to do this, we
do cache the whole body, use this feature responsibly.

The second one is beefier: vmod_reqwest (
https://github.com/gquintard/vmod_reqwest) is a vmod_curl on steroids.
The initial idea here is also to allow the VCL to send and receive HTTP
requests, but things got a bit out of hand, and in the end, it has a few
extra features:
- handle multiple requests at the same time, within the same task
- fire-and-forget, non-blocking mode: you can send 15 requests at the same
time and continue regular VCL processing as they are in flight
- proper VCL backend, instead of the regular static backends, you can use
dynamic ones backed by reqwest, offering:
  - redirect following, if the response sports a 30X status, the vmod can
automatically follow it
  - connection pooling, of course
  - HTTPS backends
  - proxies for both HTTP and HTTPS
  - automatic brotli (and gzip and deflate) decompression
There are a few things missing at the moment, notably probes, but they are
coming.

So, things look pretty exciting BUT I haven't tested that in production,
and I've mostly been operating in a vacuum so far, so I would love some
feedback on APIs, performance, bugs, etc.

-- 
Guillaume Quintard


-- Forwarded message -
From: Guillaume Quintard 
Date: Sun, Dec 5, 2021 at 5:32 PM
Subject: Writing vmods in Rust
To: varnish-misc@varnish-cache.org 


Hi everyone,

I've been working for a while on a little project that ended up taking
quite a chunk of my time, and while it's nowhere near done, I feel like
it's time to show the current progress. *In other words: that stuff ain't
ready, don't put it in prod.*

Here we go: we can *build vmods using pure rust*.
For example, these guys:
https://github.com/gquintard/varnish-rs/tree/main/examples
And the crates `varnish` and `varnish-sys`, that make this possible have
been published on crates.io, and we have documentation!
https://docs.rs/varnish/latest/varnish/

Without trying to start a language war, there are a few benefits to using
Rust over C:
- the language is designed to prevent a few classes of annoying bugs, and
the compiler is very good at explaining where you went wrong
- a lot of the C practices (assert everywhere, miniobj, etc.) are basically
baked into Rust
- the barrier between C and Rust is razor-thin, with almost zero overhead
cost (curse you, null-terminated strings!)
- the tooling is absolutely stellar
- string manipulation is sane (yeah, easy stab at C, but I had to do it)

What you get at the moment:
- few dependencies: you only need cargo, python3 and the libvarnish dev
files to get going
- write the same vmod.vcc as you would for C, the boilerplate is generated
for you
- building is literally just one command: `cargo build`
- automatic type translation so you can write pure rust, and never need to
see the `VCL` objects under their C form (
https://docs.rs/varnish/latest/varnish/vcl/convert/index.html)
- support for PRIVobjects (
https://github.com/gquintard/varnish-rs/blob/main/examples/vmod_object/vmod.vcc
and
https://github.com/gquintard/varnish-rs/blob/main/examples/vmod_object/src/lib.rs
)
- you can iterate through and modify the HTTP objects easily (
https://docs.rs/varnish/latest/varnish/vcl/ctx/struct.Ctx.html)

What you don't get for now:
- a comprehensive API, some bits are still missing, like
WS_ReserveAll/WS_Release, but because of the type conversion, you can
possibly live without it
- a stable API. I'm heavily working on it, and things will change twenty
times before the next Varnish release (but it doesn't matter as the crates
are version-locked)
- something based solely on the public library (libvarnishapi), I
reimplemented some stuff using private bits to avoid copies
- all the types. I only implemented a subset of VCL types, but fret not,
the others will come.
- a sensible boilerplate code generation, the current approach involves
embedding a python script that imports `vmodtool.py` and running it from
rust. But there's a plan to fix that

In case you want to try it, I've also created a minimal example repo that
you can play with:
git clone https://github.com/gquintard/vmod-rs-template.git
cd vmod-rs-template
cargo build
cargo test

Re: Varnish returning synthetic 500 error even though it has stale content it should serve. But only seems to happen during/after a burst of traffic

2021-12-17 Thread Guillaume Quintard
-- 
Guillaume Quintard


On Fri, Dec 17, 2021 at 5:18 AM Batanun B  wrote:

> Is there even an official word for this final "cache key"? "Hash" clearly
> isn't specific enough. I'm talking about a word that refers to the unique
> key that always corresponds to only a _single_ version of a cached object.
>

I don't think there is one at the moment

Sorry, I'm confused now... Don't touch _which_ guy? Our VCL doesn't contain
> anything regarding "Accept-Encoding". All I said was that the Vary header
> in the response from the backend is "Accept-Encoding". And the way I see
> it, this shouldn't be the cause of the strange problems we are seeing,
> since even when factoring in this, there should exist a matching cached
> object for me, and it should be served regardless of TTL or backend health
> as long as the grace hasn't expired (which it hasn't). Or is my reasoning
> flawed here, based on the VCL snippet in my original post? Can you think of
> a scenario where our VCL would return the synthetic 500 page even when
> there exists a cached objekt matching the hash and vary logic?
>
> As you mentioned that your VCL was simplified, I didn't want to assume
anything. So yes, I meant: do not worry about "accept-encoding", either as
a header, or as an entry in the vary header, Varnish will handle that
properly.

So, the vary hypothesis doesn't pan out. Could it be that your cache size
is too small instead and that the churn is pushing the object out?

Yeah, I think 80MB is a bit too small for us. Ideally we should be able to
> sit down on a Monday and troubleshoot problems that occured Friday evening,
> but that might require a way too big VSL space. But a few hundred MB should
> be fine.
>
> I would just log on disk, and rotate every few days. You mentioned that
the traffic is fairly low, so the disk usage shouldn't be bad, especially
if the backend is on another server. Varnish won't trip itself by choking
the disk; the worst-case scenario is that varnishlog will not write to the
file fast enough and will drop a few transactions.

>
> > The problem is that you have to be recording when the initial request
> goes through. But, if you have then, cache hits will show the VXID of that
> first request in their "x-varnish" header, and you can find it this way
> ("varnishlog -r log.bin -q 'vxid == THE_VXID'")
>
> Well, would it really be a cache hit? The main transaction I'm looking for
> is the first transaction for a specific path (in this case, "/") where
> Varnish served the synthetic 500 page. And then I would also like to see
> the closest preceding transaction for that same page, where the hash (and
> the Vary logic) matches the main transaction mentioned above.
>

In that case, you must log all the requests that could match, and once you
have found your offender, walk your way up to find the previous request. I
don't think there's another way here.
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish returning synthetic 500 error even though it has stale content it should serve. But only seems to happen during/after a burst of traffic

2021-12-16 Thread Guillaume Quintard
> Ah, I remember hearing about that earlier, and made a mental note to read
up on that. But I forgot all about it. Now I did just that, and boy was
that a cold shower for me! I definitely need to unset that header. But why,
for the love of all that is holy, does Varnish not include the vary-data in
the hash? Why isn't the hash the _single_ key used when storing and looking
up data in the cache? Why does Varnish hide this stuff from us?

Easy one: you build the key from the request, while the vary header is a
response property. Therefore, Varnish can't know to put the varied headers
into the hash, because it doesn't have that information yet.
Basically, the vary header is backend information that allows Varnish to
properly differentiate objects with the same hash, and the "hash collision"
happens because you, the VCL writer, were not specific enough in vcl_hash
(not necessarily your fault, though).
(technically, and to avoid the wrath of Varnish veterans: it's not a hash
collision, because the variants represent the same resource and
the varied headers are just secondary keys).
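To make that two-level lookup concrete, here is a toy model sketched in Python purely for illustration — this is not Varnish's actual implementation, and the `store`/`lookup` names are made up:

```python
# Toy model of the lookup described above: the hash (primary key) is built
# from the request alone; the Vary header stored with each object supplies
# the secondary keys that are only checked at lookup time.
cache = {}  # primary hash -> list of (secondary-key snapshot, body)

def store(req_hash, req_headers, vary_header, body):
    # snapshot the request's values for every header named in Vary
    varied = [h.strip().lower() for h in vary_header.split(",")]
    snapshot = {h: req_headers.get(h) for h in varied}
    cache.setdefault(req_hash, []).append((snapshot, body))

def lookup(req_hash, req_headers):
    for snapshot, body in cache.get(req_hash, []):
        # same primary hash AND matching secondary keys
        if all(req_headers.get(h) == v for h, v in snapshot.items()):
            return body
    return None
```

Note how `store` needs the response's Vary header to build the snapshot — which is exactly why the varied values cannot be folded into the request-side hash.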

> However, when checking the Vary header from the backend, it is set to
"Accept-Encoding". And since I haven't changed anything in my browser, it
should send the same "Accept-Encoding" request header whenever I surf the
website. And since I have visited the startpage multiple times the last 10
days, it should have a cached version of it matching my "Accept-Encoding".

Don't touch that guy! Varnish will ignore "accept-encoding" in "vary"
because it handles compression internally, and always forces
"accept-encoding: gzip" before entering vcl_backend_fetch. If your VCL
mentions accept-encoding, it's almost always wrong.

> Well, that gives me nothing that is relevant here, sadly. The last time
this happened was a few days ago, and the buffer doesn't seem to be big
enough to keep data that far back.
> But maybe you could describe what you would look for? I would love to
learn how to troubleshoot this.

The default VSL space is 80MB, which is "only" worth a few (tens of)
thousand requests, so yeah, it can be a short backlog. You can instead
start logging into a file:

> varnishlog -g request -q 'RespStatus eq 500' -w log.bin

once the file grows, you can start looking at it using "varnishlog -r
log.bin"

> Thanks, although most of that stuff I already knew. And it doesn't really
give any more advanced examples. Like the problem I mentioned earlier. I
really would like to know if it is possible to find the first request where
it served the 500 page for the "/" url, as well as the request just before
that, for the same url. Do you know how to construct a query that gives me
that?

The problem is that you have to be recording when the initial request goes
through. But, if you have then, cache hits will show the VXID of that first
request in their "x-varnish" header, and you can find it this way
("varnishlog -r log.bin -q 'vxid == THE_VXID'")

Hope that helps

-- 
Guillaume Quintard


On Thu, Dec 16, 2021 at 1:33 PM Batanun B  wrote:

> > Could be a vary issue
>
> Ah, I remember hearing about that earlier, and made a mental note to read
> up on that. But I forgot all about it. Now I did just that, and boy was
> that a cold shower for me! I definitely need to unset that header. But why,
> for the love of all that is holy, does Varnish not include the vary-data in
> the hash? Why isn't the hash the _single_ key used when storing and looking
> up data in the cache? Why does Varnish hide this stuff from us?
>
> However, when checking the Vary header from the backend, it is set to
> "Accept-Encoding". And since I haven't changed anything in my browser, it
> should send the same "Accept-Encoding" request header whenever I surf the
> website. And since I have visited the startpage multiple times the last 10
> days, it should have a cached version of it matching my "Accept-Encoding".
>
> > can you post the output of `varnishlog -d -g request -q 'RespStatus eq
> 500'?
>
> Well, that gives me nothing that is relevant here, sadly. The last time
> this happened was a few days ago, and the buffer doesn't seem to be big
> enough to keep data that far back.
>
> But maybe you could describe what you would look for? I would love to
> learn how to troubleshoot this.
>
> > In the meantime, here's a cheat sheet to get started with varnishlog:
> > https://docs.varnish-software.com/tutorials/vsl-query/
>
> Thanks, although most of that stuff I already knew. And it doesn't really
> give any more advanced examples. Like the problem I mentioned earlier. I
> really would like to know if it is possible to find the first request where
> it served the 500 page for the "/" url,

Re: Varnish returning synthetic 500 error even though it has stale content it should serve. But only seems to happen during/after a burst of traffic

2021-12-16 Thread Guillaume Quintard
Could be a vary issue, can you post the output of `varnishlog -d -g request
-q 'RespStatus eq 500'`? Please anonymize the bits you don't want us to see
by replacing them with "XX", but don't simply remove them.

In the meantime, here's a cheat sheet to get started with varnishlog:
https://docs.varnish-software.com/tutorials/vsl-query/

-- 
Guillaume Quintard


On Thu, Dec 16, 2021 at 12:15 PM Batanun B  wrote:

> Hi,
>
> One of our websites usually has quite a low but steady stream of visitors,
> but occationally we get a sudden surge of requests over a very short time
> period (about 1-2 minutes). Varnish seems to handle the load fine, but the
> backends struggle with this. But I have noticed that Varnish doesn't serve
> the stale cached data, but instead shows our synthetic 500 page. This is
> true even for the start page, that definitely existed in the cache. And we
> have a grace period of 10 days, so I it's quite annoying that we can't
> simply serve the stale cached data during this short period.
>
> I have tried picturing the entire flow, following the logic in the vcl,
> but I can't see what we do wrong. And annoyingly I can't reproduce the
> problem locally by simply shutting down the backends (or setting them to
> unhealthy), because whenever I do that I get the stale content served just
> as intended. Could the sheer volume itself cause this, making it impossible
> to reproduce by simply fetching the page a few times in the browser before
> and after disabling the backends? Or is there some edge case that I haven't
> thought of that is causing this?
>
> A simplified version of our vcl is included below, with only the
> relevant parts. But unless I have some blatent problem with the vcl, I
> think it would be good if I learned how to troubleshoot this using the
> Varnish tools, like varnishlog. So that next time this happens, I can use
> varnishlog etc to see what's happening.
>
> Is it possible using varnishlog to find the very first request for a
> specific path ("/" in our case) where it returned the synthetic 500 and put
> it in the cache? And is it also possible to find the request just before
> that one, for the same path. If I could extract those two requests
> (including all the metadata in those transactions) from the jungle of
> thousands of requests, then maybe I can find some explanation why the
> second request doesn't return the stale data.
>
> -
>
> sub vcl_hit {
> if (obj.ttl > 0s) {
> // Regular cache hit
> return (deliver);
> } else if (req.restarts == 0 && std.healthy(req.backend_hint)) {
> // Graced cache hit, first attempt.
> // Force cache miss to trigger fetch in foreground (ie synchronous
> fetch).
> set req.http.X-forced-miss = "true";
> return (miss);
> } else {
> // Graced cache hit, previous attempts failed (or backend
> unhealthy). Let the fetch happen in the background (ie asynchronous fetch),
> and return the cached value.
> return (deliver);
> }
> }
>
> sub vcl_recv {
> if (req.http.X-cache-pass == "true") {
> return(pass);
> }
>
> set req.grace = 240h; // 10 day grace
> }
>
> sub vcl_backend_response {
> if (bereq.http.X-cache-pass != "true") {
>   if (beresp.status < 400) {
>  set beresp.grace = 240h;
>  set beresp.ttl = 30m; // Cache invalidation in the form of xkey
> softpurge can put objects into grace before the TTL is past.
> } else if (beresp.status < 500) {
> set beresp.ttl = 1m;
> return (deliver);
> } else if (beresp.status >= 500) {
> // In some cases we want to abandon the backend request on
> 500-errors, since it otherwise would overwrite the cached object that still
> is useful for grace.
> // This will make it jump to vcl_synth with a 503 status.
> There it will restart the request.
> if (bereq.is_bgfetch) {
> // In grace period. Abandoning 5xx request, since it
> otherwise would overwrite the cached object that still is useful for grace
> return (abandon);
> } else if (bereq.http.X-forced-miss == "true") {
> return (abandon);
> }
> // Non background fetch, ie no grace period (and no stale
> content available). Cache the error page for a few seconds.
> set beresp.ttl = 15s;
> return (deliver);
> }
> }
> }
>
> sub vcl_synth {
> if (req.http.X-forced-miss == "true" && resp.status >= 500) {
> return (restart);
> }

Writing vmods in Rust

2021-12-05 Thread Guillaume Quintard
Hi everyone,

I've been working for a while on a little project that ended up taking
quite a chunk of my time, and while it's nowhere near done, I feel like
it's time to show the current progress. *In other words: that stuff ain't
ready, don't put it in prod.*

Here we go: we can *build vmods using pure rust*.
For example, these guys:
https://github.com/gquintard/varnish-rs/tree/main/examples
And the crates `varnish` and `varnish-sys` that make this possible have
been published on crates.io, and we have documentation!
https://docs.rs/varnish/latest/varnish/

Without trying to start a language war, there are a few benefits to using
Rust over C:
- the language is designed to prevent a few classes of annoying bugs, and
the compiler is very good at explaining where you went wrong
- a lot of the C practices (assert everywhere, miniobj, etc.) are basically
baked into Rust
- the barrier between C and Rust is razor-thin, with almost zero overhead
cost (curse you, null-terminated strings!)
- the tooling is absolutely stellar
- string manipulation is sane (yeah, easy stab at C, but I had to do it)

What you get at the moment:
- few dependencies: you only need cargo, python3 and the libvarnish dev
files to get going
- write the same vmod.vcc as you would for C, the boilerplate is generated
for you
- building is literally just one command: `cargo build`
- automatic type translation so you can write pure rust, and never need to
see the `VCL` objects under their C form (
https://docs.rs/varnish/latest/varnish/vcl/convert/index.html)
- support for PRIVobjects (
https://github.com/gquintard/varnish-rs/blob/main/examples/vmod_object/vmod.vcc
and
https://github.com/gquintard/varnish-rs/blob/main/examples/vmod_object/src/lib.rs
)
- you can iterate through and modify the HTTP objects easily (
https://docs.rs/varnish/latest/varnish/vcl/ctx/struct.Ctx.html)

What you don't get for now:
- a comprehensive API, some bits are still missing, like
WS_ReserveAll/WS_Release, but because of the type conversion, you can
possibly live without it
- a stable API. I'm heavily working on it, and things will change twenty
times before the next Varnish release (but it doesn't matter as the crates
are version-locked)
- something based solely on the public library (libvarnishapi), I
reimplemented some stuff using private bits to avoid copies
- all the types. I only implemented a subset of VCL types, but fret not,
the others will come.
- a sensible boilerplate code generation, the current approach involves
embedding a python script that imports `vmodtool.py` and running it from
rust. But there's a plan to fix that

In case you want to try it, I've also created a minimal example repo that
you can play with:
git clone https://github.com/gquintard/vmod-rs-template.git
cd vmod-rs-template
cargo build
cargo test

I will continue working on expanding the API, documentation and will start
filing issues to come up with a roadmap. If you are interested in helping,
have questions or feedback, please do ping me, I'd love to hear from you.

Cheers!

-- 
Guillaume Quintard


Re: A tool to package VMOD easily

2021-11-21 Thread Guillaume Quintard
Sweet! I've been meaning to build something like that for a long time,
that's great!

On Sun, Nov 21, 2021, 21:22 kokoniimasu  wrote:

> Hi, All.
>
> I've been struggling for a long time to deploy VMOD in my environment.
> So I created a tool to build a VMOD package easily.
> The Package is intended to be installed in its own environment, and
> supports deb and rpm.
>
>
> ```
> xcir@build01:~/git/vmod-packager$ ./vmod-packager.sh -d focal -v 7.0.0 -e
> 0.19 varnish-modules
> ...
> ##
> docker image: vmod-packager/focal:7.0.0-1
> Dist: focal
>  Varnish Version: 7.0.0
>  Varnish VRT: 140
>VMOD name: varnish-modules
> VMOD Version: 140.0.19
>   Status: SUCCESS
>
> xcir@build01:~/git/vmod-packager$ ls pkgs/debs/varnish-modules/
> varnish-modules_140.0.19~focal-1_amd64.build
>  varnish-modules_140.0.19~focal-1_amd64.changes
>  varnish-modules-dbgsym_140.0.19~focal-1_amd64.ddeb
> varnish-modules_140.0.19~focal-1_amd64.buildinfo
>  varnish-modules_140.0.19~focal-1_amd64.deb
> ```
>
> It's still under development, but I think it will work.
>
> https://github.com/xcir/vmod-packager
>
> Regards,
>
> --
> Shohei Tanaka(@xcir)
> https://blog.xcir.net/ (JP)


Re: Query for authorization username

2021-10-25 Thread Guillaume Quintard
I think it's close to optimal, given the current tools. I would probably
try to move away from regsub() and use vmod_str (
https://github.com/varnish/varnish-modules/blob/master/src/vmod_str.vcc#L42),
and maybe use multiple assignments rather than one big expression, but
that's a personal preference at this point.

It would look like something like this in my mind (highly untested, don't
sue me if your computer explodes):

import var;
import str;
import blob;

sub vcl_recv {
if (str.split(req.http.Authorization, 0) == "Basic") {
var.set("b64", str.split(req.http.Authorization, 1));
var.set("decoded", blob.transcode(encoding = BASE64URL, encoded =
var.get("b64")));
set req.http.X-Auth-User = str.split(var.get("decoded"), 0, ":");
}
}


everything in one expression:

set req.http.X-Auth-User = str.split(
        blob.transcode(
                encoding = BASE64URL,
                encoded = str.split(req.http.Authorization, 1)
        ),
        0,
        ":"
);


You should possibly use blob.transcode() anyway.
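For reference, the transformation both snippets implement boils down to the following — plain Python, for illustration only; the `auth_user` helper is made up, and Basic credentials are the base64 of `user:password`:

```python
import base64

def auth_user(authorization):
    """Return the username from a 'Basic <base64(user:password)>' header."""
    scheme, _, encoded = authorization.partition(" ")
    if scheme != "Basic" or not encoded:
        return None
    # decode the credentials, then keep everything before the first colon
    decoded = base64.b64decode(encoded).decode("utf-8")
    return decoded.split(":", 1)[0]
```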

--
Guillaume Quintard


On Mon, Oct 25, 2021 at 11:25 AM Justin Lloyd  wrote:

> In my dev environment, I have a few users configured to use Basic
> authorization (configured in the Nginx backend) and I’d like to be able to
> perform VSL queries based on the auth user. This is what I was able to come
> up with, but I’m wondering if there is a simpler way that I’m just not
> seeing.
>
>
>
> require blob;
>
> if (req.http.Authorization) {
>
> set req.http.X-Auth-User = regsub(blob.encode(IDENTITY,
>
>   blob=blob.decode(BASE64,
>
>
> encoded=regsub(req.http.Authorization, "^Basic (.*)", "\1"))),
>
>   ":.*$", "");
>
> }
>
>
>
> varnishtop -I ReqHeader:X-Auth-User
>
> varnishlog -i ReqURL -q 'ReqHeader:X-Auth-User ~ “someuser”'
>
>
>
> Thanks,
>
> Justin
>
>


Re: Varnish only resolve the ip on startup

2021-10-05 Thread Guillaume Quintard
I'm fine with adding vmod_dynamic to the varnish docker images so that
users have access to this basic functionality, but, for the record:
- I really feel this should be considered a basic feature and exist in core
- if the docker image starts adding downstream vmods, it opens the gates to
a flood of "can you add this vmod too?" questions, which I'm not looking
forward to.

-- 
Guillaume Quintard


On Tue, Oct 5, 2021 at 6:39 AM Nils Goroll  wrote:

> On 03.10.21 18:05, Léo wrote:
> > I have found some ways to solve the problem
>
> https://github.com/nigoroll/libvmod-dynamic exists to do just that
>


Re: Varnish only resolve the ip on startup

2021-10-04 Thread Guillaume Quintard
On Mon, Oct 4, 2021 at 11:55 AM Dridi Boukelmoune  wrote:

> That still doesn't seal the can of worms: once there are more than one
> address per family or addresses change, it's our connection and
> pooling models that need to be revisited, how many different addresses
> to try connecting to, how to retry, and how we account for all of that
> (stats for example). Again, it's a bit more complex that just saying
> "change the connect callback to one that combines resolve+connect".
>

I do understand that it's the core of the problem, and I'm probably being
pig-headed on this, but it feels to me that it's not really different from
a server with a floating IP, or a level-4 load-balancer fronting the
backend. The addresses may change, there may be more than one, but once the
connection is open, you can trust it and keep using it.
We trust connect() to get us to our goal, using getaddrinfo() on top of it
just means that we trust the DNS server to provide good info. So:
- if we prefer IPv6, we go through the list and pick the first IPv6 entry,
if we don't find one, we just grab the first IPv4 entry, no second chance
- if you try to reuse a connection and it died on you, you try to
resolve+connect a new one. Maybe you get the same IP, maybe you don't, but
we trust the DNS implementation to shuffle the entries around
- stats is an interesting issue, but again, only if you let it be. Backends
are already "IP address" or "UDS path", I'm fine with hiding all the DNS
entries behind "non-numerical host".
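The selection rule in the first bullet ("first IPv6 entry, else first IPv4, no second chance") is trivial to express on top of getaddrinfo; here is an illustrative Python sketch (the `pick_endpoint` helper is made up, not Varnish code):

```python
import socket

def pick_endpoint(host, port):
    """First IPv6 entry wins; otherwise fall back to the first IPv4 entry."""
    # getaddrinfo yields (family, type, proto, canonname, sockaddr) tuples
    entries = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    for family, _type, _proto, _canon, sockaddr in entries:
        if family == socket.AF_INET6:
            return sockaddr
    for family, _type, _proto, _canon, sockaddr in entries:
        if family == socket.AF_INET:
            return sockaddr
    return None
```

Calling it before every new connection, as suggested above, delegates TTL handling and entry shuffling entirely to the resolver.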

I agree, things get gnarly when we start fiddling with DNs, but there's
that portable interface that allows us to not have to. And if you want
fancy stuff like stats per IP, and fun load-balancing options that are
DNS-aware, there are vmods for that.

For the sake of transparency, I must admit that I do have an issue with my
proposal though: there's no timeout option for getaddrinfo, which sucks.

-- 
Guillaume Quintard


Re: Varnish only resolve the ip on startup

2021-10-04 Thread Guillaume Quintard
On Mon, Oct 4, 2021 at 9:49 AM Dridi Boukelmoune  wrote:

> One problem I have (and that you should be familiar with) is that
> portable interfaces we have that *respect* the system configuration
> (hosts file, nsswitch configuration etc) are not providing enough
> information. For example it becomes cumbersome to resolve SRV records
> or even get the TTL of individual records for a DNS resolution in a
> *portable* fashion.
>
> When you put it like this, it sounds simple enough (dare I say
> simplistic?) but what I see is a sizeable can of worms.
>

That sounds like a bit of a strawman to me. getaddrinfo and connect are
standard, and that's about all we should need. Applications are supposed
(in general) to just use whatever gai gives them. We can call them every
time we need a new connection so we don't worry about TTL, and we just
disregard SRV records.
The vast majority of users don't need SRV (yet?), and don't expect the
application to optimize DNS calls, but they do complain that giving a
hostname to VCL doesn't work.

Let's just provide basic, expected functionality out of the box, and leave
the fancier features to vmod_goto and vmod_dynamic.

--
Guillaume Quintard


Re: Varnish only resolve the ip on startup

2021-10-04 Thread Guillaume Quintard
I think it makes sense for Varnish to natively support backends changing
their IPs. I do get the performance argument but now that there is a
cloud/container market and that Varnish has proven to be useful in it, this
basic functionality should be brought in.

Would it be acceptable to add a "host_string" to vrt_endpoint and fill it
if the VCL backend isn't an IP, then, we can add another cp_methods to
cache_conn_pool.c to use it? This way IPs are still super fast, and
hostnames become actually useful and a bit less confusing?

-- 
Guillaume Quintard


On Sun, Oct 3, 2021 at 9:07 AM Léo  wrote:

> Hello,
> While setting up a docker-compose with a cache powered by varnish I came
> across an issue when recreating a container that varnish depends on. After
> the recreation varnish doesn't try to retrieve the new local ip. I've been
> told this is an upstream varnish issue (
> https://github.com/varnish/docker-varnish/issues/41). I have found some
> ways to solve the problem, but will an appropriate solution be considered ?
>
> Thanks in advance,
> Have a good day,
> Léo Lelievre


Re: varnishadm exit code 200

2021-10-02 Thread Guillaume Quintard
Oh, 4.x is ancient, glad you made the upgrade!

On Sat, Oct 2, 2021, 15:54 Martynas Jusevičius 
wrote:

> I thought that was a non-zero exit code but it's not :)
>
> We just noticed a change -- in the earlier version we used (not sure
> which now, I think 4.x) there was no output from varnishadm.
>
> We'll just send the 200 to /dev/null then.
>
> On Sun, Oct 3, 2021 at 12:39 AM Guillaume Quintard
>  wrote:
> >
> > 200 means the command passed to varnish, via varnishadm, succeeded. What
> makes you think it failed?
> >
> > On Sat, Oct 2, 2021, 15:24 Martynas Jusevičius 
> wrote:
> >>
> >> Actually it does not seem to be the exit code. I tried checking and it
> >> looks like the exit code is 0:
> >>
> >> root@dc17c642d39a:/etc/varnish# varnishadm "ban req.url ~ /"
> >> 200
> >>
> >> root@dc17c642d39a:/etc/varnish# test $? -eq 0 || echo "Error"
> >> root@dc17c642d39a:/etc/varnish#
> >>
> >> So where is that "200" coming from?
> >>
> >> On Sun, Oct 3, 2021 at 12:14 AM Martynas Jusevičius
> >>  wrote:
> >> >
> >> > Hi,
> >> >
> >> > We recently switched to the varnish:latest container and based an
> >> > unprivileged image on it (entrypoint runs as USER varnish):
> >> > https://github.com/AtomGraph/varnish/blob/official-image/Dockerfile
> >> >
> >> > We noticed that our varnishadm commands started failing. More
> specifically:
> >> >
> >> > root@dc17c642d39a:/etc/varnish# varnishadm "ban req.url ~ /"
> >> > 200
> >> >
> >> > As I understand it's a 200 exit code, which means varnishadm failed:
> >> >
> https://varnish-cache.org/docs/5.1/reference/varnishadm.html#exit-status
> >> >
> >> > What does 200 mean exactly? I couldn't find any code list.
> >> > My guess is that this has to do with the unprivileged varnish user,
> >> > but I'm not sure what it takes to fix it.
> >> >
> >> >
> >> > Martynas
>


Re: varnishadm exit code 200

2021-10-02 Thread Guillaume Quintard
200 means the command passed to varnish, via varnishadm, succeeded. What
makes you think it failed?

On Sat, Oct 2, 2021, 15:24 Martynas Jusevičius 
wrote:

> Actually it does not seem to be the exit code. I tried checking and it
> looks like the exit code is 0:
>
> root@dc17c642d39a:/etc/varnish# varnishadm "ban req.url ~ /"
> 200
>
> root@dc17c642d39a:/etc/varnish# test $? -eq 0 || echo "Error"
> root@dc17c642d39a:/etc/varnish#
>
> So where is that "200" coming from?
>
> On Sun, Oct 3, 2021 at 12:14 AM Martynas Jusevičius
>  wrote:
> >
> > Hi,
> >
> > We recently switched to the varnish:latest container and based an
> > unprivileged image on it (entrypoint runs as USER varnish):
> > https://github.com/AtomGraph/varnish/blob/official-image/Dockerfile
> >
> > We noticed that our varnishadm commands started failing. More
> specifically:
> >
> > root@dc17c642d39a:/etc/varnish# varnishadm "ban req.url ~ /"
> > 200
> >
> > As I understand it's a 200 exit code, which means varnishadm failed:
> > https://varnish-cache.org/docs/5.1/reference/varnishadm.html#exit-status
> >
> > What does 200 mean exactly? I couldn't find any code list.
> > My guess is that this has to do with the unprivileged varnish user,
> > but I'm not sure what it takes to fix it.
> >
> >
> > Martynas


Re: Serve stale content if backend is healthy but "not too healthy"

2021-09-21 Thread Guillaume Quintard
Hi,

As Dridi said, what you are looking for is exactly vmod_stale, but I wanted
to point out that part:

> We have a backend that actually proxies different services

In that case, it might be good to actually have a Varnish backend for each
type of backend behind the proxies. The backend definition would be exactly
the same, but the probe definitions would be different, with a specific
URL/host. this way, Varnish would be aware of who is actually unhealthy and
you don't have to deal with the stale thing.

If you need an open-source approach, I reckon the best you can do is
restart with a zero TTL if you detect a bad response. It does have a couple
of race conditions baked-in that vmod_stale sidesteps, but that's usually
good enough:

sub vcl_recv {
# be hopeful that the backend will send us something good, ignore grace
if (req.restarts == 0) {
set req.grace = 0s;
}
}

sub vcl_deliver {
# welp, that didn't go well, try again without limiting the grace
if (req.restarts == 0 && resp.status >= 500) {
set req.ttl = 10y;
return (restart);
}
}


Main issue is that you restart, so you are going to spend a lil bit more
time/resources processing the request, and the object in cache may have
expired by the time you realize you need it.
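To illustrate the flow of that VCL, here is a toy Python model of the graceless-first-pass-then-restart logic — purely illustrative, with made-up names and constants, not actual Varnish internals:

```python
import time

TTL = 30                            # example freshness window, in seconds
FULL_GRACE = 10 * 365 * 24 * 3600   # the "10y" from the VCL above

def serve(cache, key, fetch):
    """First pass ignores grace; on a 5xx we 'restart' and accept stale."""
    now = time.time()
    obj = cache.get(key)
    if obj and now < obj["expires"]:
        return obj["body"]                  # fresh hit, no backend involved
    status, body = fetch()                  # graceless pass: hit the backend
    if status < 500:
        cache[key] = {"body": body, "expires": now + TTL}
        return body
    if obj and now < obj["expires"] + FULL_GRACE:
        return obj["body"]                  # restart path: serve the stale copy
    return "synthetic 500"                  # nothing usable in cache either
```

The race the text mentions is visible here too: between the failed `fetch()` and the second cache check, the stale object may be evicted.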

Hope that helps,
-- 
Guillaume Quintard


PSA: varnish-modules packages for arch and alpine

2021-09-04 Thread Guillaume Quintard
Hi everyone,

Here's a short email to let you know that the varnish-modules[1] collection
is now available in the AUR[2] and on alpine testing[3]. Those two join the
existing varnish-module packages for Fedora[4] (if someone wants to package
for Debian, let me know!).

Those packages are fresh out of the oven, so if there is any issue, please
report it either on the AUR page, the Alpine gitlab or directly on the
varnish-module github page.

*# Alpine instructions*

Activate the testing repository in the configuration file, or just do it as
a one-shot in the command line:

apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/edge/testing
varnish-modules

*# Arch instructions*

If you already have an AUR-capable package manager[5] such as yay or
yaourt, you can install it directly:

   yay -Sy varnish-modules

Otherwise, build it yourself:

   sudo pacman -Sy git base-devel
   git clone https://aur.archlinux.org/varnish-modules.git
   cd varnish-modules/
   makepkg -frisc

Have a great week-end!

[1]: https://github.com/varnish/varnish-modules
[2]: https://aur.archlinux.org/packages/varnish-modules/
[3]:
https://pkgs.alpinelinux.org/package/edge/testing/x86_64/varnish-modules
[4]: https://src.fedoraproject.org/rpms/varnish-modules
[5]: https://wiki.archlinux.org/title/AUR_helpers#Comparison_tables

-- 
Guillaume Quintard


Re: Varnish and AWS ALBs

2021-08-19 Thread Guillaume Quintard
Hi,

If I read this correctly:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html
, you can always trust the next-to-last IP, because it was added by the ALB
(and using vmod_str makes it easy to retrieve:
https://github.com/varnish/varnish-modules/blob/master/src/vmod_str.vcc#L42)

Side question: would an NLB work? They support proxy-protocol, that would
also solve your problem.
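
If vmod_str isn't available, a plain-regex sketch (untested; the
x-real-client-ip header name is made up) to extract that next-to-last entry:

sub vcl_recv {
    # keep the element just before the last one, i.e. the client IP
    # that the ALB appended
    set req.http.x-real-client-ip =
        regsub(req.http.X-Forwarded-For, "^(.*,)? *([^, ]+) *, *[^,]+$", "\2");
}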

Cheers,

-- 
Guillaume Quintard


On Thu, Aug 19, 2021 at 1:52 PM Carlos Abalde 
wrote:

> Hi,
>
> No so sure about that. Let's assume the client address is 1.1.1.1. Two
> possible scenarios:
>
> - The client request reaches the ALB without XFF. The ALB will inject XFF
> with value 1.1.1.1. Then Varnish will modify XFF adding the ALB's address
> (i.e., 1.1.1.1,). Using the next-to-last IP you're using the right
> client address.
>
> - The client request reaches the ALB with a forged XFF (e.g. 127.0.0.1).
> The ALB will modify XFF (i.e. 127.0.0.1,1.1.1.1). Then Varnish will do
> the same (i.e. 127.0.0.1,1.1.1.1,<ALB address>). Using the next-to-last IP
> you're still using the right client address.
>
>
> I've not checked using an ALB, but that should be the expected behaviour
> for me.
>
> Best,
>
> --
> Carlos Abalde
>


Re: Docker images: Alpine and new architecture support

2021-08-15 Thread Guillaume Quintard
Replying to all this time

On Fri, Aug 13, 2021, 08:48 Guillaume Quintard 
wrote:

>
> On Fri, Aug 13, 2021 at 3:31 AM Geoff Simmons  wrote:
>
>> Can you (or anyone) share some info about how well Varnish performs with
>> musl libc?
>>
>
> Good question, and unfortunately, I don't have a good answer in return as
> I haven't benchmarked it, so feedback is more than welcome.
>
> What I can comment on is that musl is quite adamant about being standard
> and pure, so it will hopefully be more portable and will let compilers do
> more work. As you mentioned, we've had issues in the past compiling and
> testing with it, but it should be all behind us now:
> - there were some header issues that prevented us from compiling, but we
> fixed that a couple of years ago
> - libbacktrace isn't available on Alpine, which prompted the move to
> libunwind (can we make it the default now?)
> - it has help fix a few compiler warning issues lately
>
> Cheers,
>
> --
> Guillaume Quintard
>
>


Docker images: Alpine and new architecture support

2021-08-05 Thread Guillaume Quintard
Hi everyone,

I just wanted to let everyone know that we have now closed two important tickets:
- https://github.com/varnish/docker-varnish/issues/2 (alpine support)
- https://github.com/varnish/docker-varnish/issues/12 (arm support)

In short, all images are now supported on amd64, arm32v7, arm64v8, i386,
ppc64le, and s390x. And on top of this, the "fresh" tags are also
accessible with an "-alpine" suffix (fresh-alpine, 6.6.1-alpine, etc.).

What you don't get, for now, is an alpine variant of the stable image as we
still need a couple of backports before it's viable.

The way the images are built has changed quite a lot (they don't rely on
packagecloud anymore), and there could be a few quirks lying around, so
please give it a go and report anything odd on the usual bug tracker:
https://github.com/varnish/docker-varnish/issues

And I need to thank again @tianon and @yosifkit over at
https://github.com/docker-library/official-images for their help and
unending patience.

Cheers,

-- 
Guillaume Quintard


Re: Best practice for caching scenario with different backend servers but same content

2021-08-05 Thread Guillaume Quintard
Hi,

I'm pretty sure there's a confusion with the sequence of actions here.
Normalization happens *before* you look into the cache, so way before
you fetch anything from the backend. By the time you cache the data
(vcl_backend_response), the hash key has already been set (vcl_hash); it's
way too late to normalize the request.

As to normalization, it's usually done in vcl_recv, and it can range from
just setting the host header to a static string to using std.tolower() and
removing the host port.

for the sake of the example:

sub vcl_recv {
    set req.http.host = "myvideoservice.com";
}
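
If the host header can't be hard-coded to a single value, a more generic
sketch (untested) using vmod_std:

import std;

sub vcl_recv {
    # lowercase the host and strip any port before hashing
    set req.http.host = std.tolower(req.http.host);
    set req.http.host = regsub(req.http.host, ":[0-9]+$", "");
}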

For a shard example, look at the VTCs, for instance:
https://github.com/varnishcache/varnish-cache/blob/6.6/bin/varnishtest/tests/d00029.vtc#L66

import directors;

sub vcl_init {
    new shard_dir = directors.shard();
    shard_dir.add_backend(b1);
    shard_dir.add_backend(b2);
    shard_dir.add_backend(b3);

    new p = directors.shard_param();
    shard_dir.associate(p.use());

    shard_dir.reconfigure(replicas=25);
}

sub vcl_backend_fetch {
    p.set(by=KEY, key=shard_dir.key(bereq.url));
    set bereq.backend_hint = shard_dir.backend(resolve=LAZY);
}

For udo:

import crypto;
import udo;

sub vcl_init {
    new udo_dir = udo.director();
    udo_dir.set_type(random);
    udo_dir.add_backend(be1);
    udo_dir.add_backend(be2);
    udo_dir.add_backend(be3);
    udo_dir.set_type(hash);
}

sub vcl_backend_fetch {
    set bereq.backend_hint = udo_dir.backend();
    udo_dir.set_hash(crypto.hash(sha256, bereq.url));
}


These have been written without testing, so don't put them straight into
production.

-- 
Guillaume Quintard


On Thu, Aug 5, 2021 at 3:33 AM Hamidreza Hosseini 
wrote:

> Hi,
> 1.
>
> Is there any way to normalize host headers and other things to say to
> varnish not to cache the same content for different backend?
> I want to use round robin director but after fetching the content I want
> to normalize the header and cache the content,
> I would appreciate if you give me an example about this and how I can do
> it.
>
> 2.
> I couldn't find any good example for directors-shard and
> xshard-key-string, I would appreciate if you could give example about this
> too.
>
> Many Thanks
> --
> *From:* varnish-misc  hotmail@varnish-cache.org> on behalf of Hamidreza Hosseini <
> hrhosse...@hotmail.com>
> *Sent:* Sunday, August 1, 2021 4:17 AM
> *To:* varnish-misc@varnish-cache.org 
> *Subject:* Best practice for caching scenario with different backend
> servers but same content
>
> Hi,
> I want to use varnish in my scenario as cache service, I have about 10
> http servers that serve Hls fragments as the backend servers and about 5
> varnish servers for caching purpose, the problem comes in when I use
> round-robin director for backend servers in varnish,
> if a varnish for specific file requests to one backend server and for the
> same file but to another backend server it would cache that file again
> because of different Host headers ! so my solution is using fallback
> director instead of round-robin as follow:
>
> ```
> In varnish-1:
> new hls_cluster = directors.fallback();
> hls_cluster.add_backend(b1());
> hls_cluster.add_backend(b2());
> hls_cluster.add_backend(b3());
> hls_cluster.add_backend(b4());
> hls_cluster.add_backend(b5());
> hls_cluster.add_backend(b6());
> hls_cluster.add_backend(b7());
> hls_cluster.add_backend(b8());
> hls_cluster.add_backend(b9());
> hls_cluster.add_backend(b10());
>
>
>
> In varnish-2:
> new hls_cluster = directors.fallback();
> hls_cluster.add_backend(b10());
> hls_cluster.add_backend(b1());
> hls_cluster.add_backend(b2());
> hls_cluster.add_backend(b3());
> hls_cluster.add_backend(b4());
> hls_cluster.add_backend(b5());
> hls_cluster.add_backend(b6());
> hls_cluster.add_backend(b7());
> hls_cluster.add_backend(b8());
> hls_cluster.add_backend(b9());
>
>
> In varnish-3:
> new hls_cluster = directors.fallback();
> hls_cluster.add_backend(b9());
> hls_cluster.add_backend(b1());
> hls_cluster.add_backend(b2());
> hls_cluster.add_backend(b3());
> hls_cluster.add_backend(b4());
> hls_cluster.add_backend(b5());
> hls_cluster.add_backend(b6());
> hls_cluster.add_backend(b7());
> hls_cluster.add_backend(b8());
> hls_cluster.add_backend(b10());
>
> ```
> But I think this is not the best solution, because there is no load
> balancing despite, I used different backend for the first argument of
> fallback directive,
> What is varnish recommendation for this scenario?
>
>
>
> 

Re: Best practice for caching scenario with different backend servers but same content

2021-08-01 Thread Guillaume Quintard
Hi,

There are a lot of things to unpack here.

> if a varnish for specific file requests to one backend server and for the
same file but to another backend server it would cache that file again
because of different Host headers ! so my solution is using fallback
director instead of round-robin

The two aren't related: if you have a hashing problem causing you to cache
the same object twice, changing the directors isn't going to save you.
Ideally, the requests will get normalized (host header and path) in
vcl_recv{} so that they will be properly hashed in vcl_hash{}.

The backend resolution only happens after you have exited
vcl_backend_fetch{}, long after you have (not) found the object in the
cache, and the best solution for video is usually to use
consistent-hashing. In open-source this means vmod_shard (
https://varnish-cache.org/docs/trunk/reference/vmod_directors.html#directors-shard),
in Enterprise, it'll be udo (
https://docs.varnish-software.com/varnish-cache-plus/vmods/udo/#set-hash),
they are going to handle about the same except udo makes it easier to set
the hash, which may be important for live (more info below).

With consistent hashing, you can configure all Varnish servers the same,
and they will determine which backend to use based on the request.
Typically, the same request will always go to the same backend. This
provides pretty good load-balancing over time, and additionally it
leverages the internal caching that most video origins have.

If you are serving VOD, that is all you need, but if you are serving Live,
you need to care about one other thing: you want consistent hashing not per
request, but per stream. Because the origins may be slightly out of sync,
you may get a manifest on origin A which will advertise a chunk that isn't
available anywhere yet, and if you don't fetch the new chunk from origin A,
you'll get a 404 or a 412.
So, for live, you will need to use shard's key() (
https://varnish-cache.org/docs/trunk/reference/vmod_directors.html#int-xshard-key-string)
or udo's set_hash() (
https://docs.varnish-software.com/varnish-cache-plus/vmods/udo/#set-hash)
to create a hash based on the stream path.

For example, consider these paths:
- /live/australia/Channel5/480p/manifest.m3u8 and
/live/australia/Channel5/480p/chunk_43212123.ts: the stream path is
/live/australia/Channel5/480p/
- /video/live/52342645323/manifest.dash and
/video/live/52342645323/manifest.dash?time=4216432432=8000=523453:
the stream path is /video/live/52342645323/manifest.dash
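
Assuming a shard director named shard_dir and a shard_param named p created
in vcl_init, keying on the stream path could then look like this (untested
sketch; the regsub simply drops the last path segment):

sub vcl_backend_fetch {
    # hash on the stream path, i.e. everything up to the last "/"
    p.set(by=KEY, key=shard_dir.key(regsub(bereq.url, "/[^/]+$", "/")));
    set bereq.backend_hint = shard_dir.backend(resolve=LAZY);
}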

On top of all this, if you start having more than 5 Varnish servers, you
might want to consider adding an extra layer of caching between the
client-facing Varnish nodes and the origins (origin shields) to reduce
the load on the origins. In that case, the shields would be the ones
handling the consistent hashing.

Hope this helps

-- 
Guillaume Quintard


On Sun, Aug 1, 2021 at 4:18 AM Hamidreza Hosseini 
wrote:

> Hi,
> I want to use varnish in my scenario as cache service, I have about 10
> http servers that serve Hls fragments as the backend servers and about 5
> varnish servers for caching purpose, the problem comes in when I use
> round-robin director for backend servers in varnish,
> if a varnish for specific file requests to one backend server and for the
> same file but to another backend server it would cache that file again
> because of different Host headers ! so my solution is using fallback
> director instead of round-robin as follow:
>
> ```
> In varnish-1:
> new hls_cluster = directors.fallback();
> hls_cluster.add_backend(b1());
> hls_cluster.add_backend(b2());
> hls_cluster.add_backend(b3());
> hls_cluster.add_backend(b4());
> hls_cluster.add_backend(b5());
> hls_cluster.add_backend(b6());
> hls_cluster.add_backend(b7());
> hls_cluster.add_backend(b8());
> hls_cluster.add_backend(b9());
> hls_cluster.add_backend(b10());
>
>
>
> In varnish-2:
> new hls_cluster = directors.fallback();
> hls_cluster.add_backend(b10());
> hls_cluster.add_backend(b1());
> hls_cluster.add_backend(b2());
> hls_cluster.add_backend(b3());
> hls_cluster.add_backend(b4());
> hls_cluster.add_backend(b5());
> hls_cluster.add_backend(b6());
> hls_cluster.add_backend(b7());
> hls_cluster.add_backend(b8());
> hls_cluster.add_backend(b9());
>
>
> In varnish-3:
> new hls_cluster = directors.fallback();
> hls_cluster.add_backend(b9());
> hls_cluster.add_backend(b1());
> hls_cluster.add_backend(b2());
> hls_cluster.add_backend(b3());
> hls_cluster.add_backend(b4());
> hls_cluster.add_backend(b5());
> hls_cluster.add_backend(b6());
> hls_cluster.add_backend(b7());
> hls_cluster.add_backend(b8());
> hls_cluster.add_backend(b10());
>
> ```
> But I think

Re: Varnish Dynamic Page Caching & Cache Purging vs Nginx+Redis

2021-07-27 Thread Guillaume Quintard
Hi,

That's a very broad question, and so I'll keep the answer pretty high-level.

All in all, Varnish has a lot fewer internal rules than nginx and really
only cares about requests at an HTTP level. This means "dynamic" content
doesn't matter to Varnish, it's just requests/objects with specific
headers, querystrings, etc. As a result, I feel that Varnish is way better
equipped to functionally handle any kind of traffic.

Of course, because Varnish operates at a lower level, with fewer rules, it
needs an excellent configuration scheme, and that's probably what trips
people: the configuration language is actually a programming language that
allows you to dictate very precisely how each request is handled. Here an
article I wrote some weeks ago about this:
https://info.varnish-software.com/blog/finally-understanding-built-in-vcl

For purging, I won't mince my words: nginx is bad and you should stay away
from it, it's limited and impractical. Varnish on the other side is once
again very low-level and will force you to implement your own logic, but
the primitives are much more powerful. And, lucky you, here's a ready-made
VCL framework you can use:
https://github.com/varnish/toolbox/tree/master/vcls/invalidate
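
For reference, the core purging primitive itself is only a few lines of VCL
(minimal untested sketch; adapt the ACL to your own network):

acl purgers {
    "localhost";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "Not allowed"));
        }
        return (purge);
    }
}

The framework linked above builds on the same primitives.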

Hope this helps

Cheers,
-- 
Guillaume Quintard


On Tue, Jul 27, 2021 at 3:17 AM s s  wrote:

> Hello all,
>I am quite new to Varnish.  I have been reading about both Varnish and
> Nginx+Redis for page caching, and I am particularly interested in dynamic
> caching and cache purging.  I have read in a number of posts that Varnish
> is "more flexible" in this regard, but without many additional details on
> this.  Can you please explain what features Varnish provides for dynamic
> page caching and cache purging, especially which are not available (or are
> more limited) in Nginx+Redis?  Please forgive me if my question is very
> basic/ignorant.  As I said, I am new to Varnish.
>
> Thanks and Best Regards,
> Sal


Re: Varnish HA and MediaWiki HTTP PURGEs

2021-06-14 Thread Guillaume Quintard
Hello Justin!

VHA is a commercial product, so we should probably keep it short, or take it
private, as this is an open-source mailing list.

However, since I'm sure the answer will be useful for other people, let's
answer publicly :-)

VHA is a fire-and-forget tool, outside of the critical path so that
replication requests failing (or being rate-limited) don't cause harm.
Purging, on the other hand, needs to be very vocal about failed purge
requests, as your cache consistency is at stake, so while VHA can do it,
it's a bad idea.

However, VHA uses a tool named broadcaster which can be used on its own to
do exactly what you need: replicate a single request for the CMS backend to
the whole cluster, and report back so you can act on failures.

Cheers,

-- 
Guillaume Quintard


On Mon, Jun 14, 2021 at 7:39 AM Justin Lloyd  wrote:

> Hi all,
>
>
>
> I just saw the new Varnish HA video
> <https://www.youtube.com/watch?v=KhqVdKe2RAU> and was wondering if VHA’s
> node synchronization would obviate the need for all of the Varnish nodes to
> be listed in the MediaWiki Varnish caching configuration
> <https://www.mediawiki.org/wiki/Manual:Varnish_caching>. MediaWiki uses
> the list of cache nodes to send HTTP PURGE requests to invalidate cached
> pages when they are updated. So with VHA, could MediaWiki just be
> configured with a single hostname or floating IP address (e.g. keepalived)
> that points to the Varnish cluster so that the cluster could handle
> replicating the PURGE requests?
>
>
>
> Thanks,
>
> Justin
>
>


Re: Varnish wouldn't cache HLS fragments

2021-06-14 Thread Guillaume Quintard
please keep the mailing-list CC'd

your backend is telling Varnish not to cache:
--  BerespHeader   Cache-Control: no-cache

which is acted upon in the built-in.vcl:
https://github.com/varnishcache/varnish-cache/blob/6.0/bin/varnishd/builtin.vcl#L161
more info here;
https://varnish-cache.org/docs/trunk/users-guide/vcl-built-in-code.html#vcl-built-in-code
and maybe this can help too:
https://info.varnish-software.com/blog/finally-understanding-built-in-vcl
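
If fixing the header on the backend isn't an option, the built-in behavior
can be overridden in VCL, for example (untested sketch):

sub vcl_backend_response {
    # cache .ts segments briefly even though the backend says no-cache
    if (bereq.url ~ "\.ts$") {
        unset beresp.http.Cache-Control;
        set beresp.ttl = 30s;
        return (deliver);
    }
}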


-- 
Guillaume Quintard


On Sun, Jun 13, 2021 at 11:16 PM Hamidreza Hosseini 
wrote:

> This is one of hls fragments that I want to be cached:
>
> > wget http://stream.test.local/hls/mystream/1623650629260.ts
>
> ```
> *   << Request  >> 32770
> -   Begin  req 32769 rxreq
> -   Timestamp  Start: 1623650670.552461 0.00 0.00
> -   Timestamp  Req: 1623650670.552461 0.00 0.00
> -   VCL_use    boot
> -   ReqStart   192.168.200.10 58016 a0
> -   ReqMethod  GET
> -   ReqURL /hls/mystream/1623650629260.ts
> -   ReqProtocolHTTP/1.1
> -   ReqHeader  User-Agent: Wget/1.20.3 (linux-gnu)
> -   ReqHeader  Accept: */*
> -   ReqHeader  Accept-Encoding: identity
> -   ReqHeader  Host: stream.test.local
> -   ReqHeader  Connection: Keep-Alive
> -   ReqHeader  X-Forwarded-For: 192.168.200.10
> -   VCL_call   RECV
> -   VCL_return hash
> -   ReqUnset   Accept-Encoding: identity
> -   VCL_call   HASH
> -   VCL_return lookup
> -   VCL_call   MISS
> -   VCL_return fetch
> -   Link   bereq 32771 fetch
> -   Timestamp  Fetch: 1623650670.557642 0.005181 0.005181
> -   RespProtocol   HTTP/1.1
> -   RespStatus 200
> -   RespReason OK
> -   RespHeader Server: nginx/1.20.1
> -   RespHeader Date: Mon, 14 Jun 2021 06:04:30 GMT
> -   RespHeader Content-Type: video/mp2t
> -   RespHeader Content-Length: 161868
> -   RespHeader Last-Modified: Mon, 14 Jun 2021 06:03:51 GMT
> -   RespHeader ETag: "60c6f147-2784c"
> -   RespHeader Cache-Control: no-cache
> -   RespHeader Access-Control-Allow-Origin: *
> -   RespHeader Access-Control-Expose-Headers: Content-Length
> -   RespHeader Accept-Ranges: bytes
> -   RespHeader X-Varnish: 32770
> -   RespHeader Age: 0
> -   RespHeader Via: 1.1 varnish (Varnish/6.2)
> -   VCL_call   DELIVER
> -   VCL_return deliver
> -   Timestamp  Process: 1623650670.557660 0.005199 0.18
> -   Filters
> -   RespHeader Connection: keep-alive
> -   Timestamp  Resp: 1623650670.558417 0.005956 0.000757
> -   ReqAcct    179 0 179 406 161868 162274
> -   End
> **  << BeReq>> 32771
> --  Begin  bereq 32770 fetch
> --  VCL_use    boot
> --  Timestamp  Start: 1623650670.552655 0.00 0.00
> --  BereqMethodGET
> --  BereqURL   /hls/mystream/1623650629260.ts
> --  BereqProtocol  HTTP/1.1
> --  BereqHeaderUser-Agent: Wget/1.20.3 (linux-gnu)
> --  BereqHeaderAccept: */*
> --  BereqHeaderHost: stream.test.local
> --  BereqHeaderX-Forwarded-For: 192.168.200.10
> --  BereqHeaderAccept-Encoding: gzip
> --  BereqHeaderX-Varnish: 32771
> --  VCL_call   BACKEND_FETCH
> --  VCL_return fetch
> --  BackendOpen    25 b1 {Backend_ip} 80 {Varnish_ip} 49734
> --  BackendStart   {Backend_ip} 80
> --  Timestamp  Bereq: 1623650670.552739 0.84 0.84
> --  Timestamp  Beresp: 1623650670.557325 0.004669 0.004586
> --  BerespProtocol HTTP/1.1
> --  BerespStatus   200
> --  BerespReason   OK
> --  BerespHeader   Server: nginx/1.20.1
> --  BerespHeader   Date: Mon, 14 Jun 2021 06:04:30 GMT
> --  BerespHeader   Content-Type: video/mp2t
> --  BerespHeader   Content-Length: 161868
> --  BerespHeader   Last-Modified: Mon, 14 Jun 2021 06:03:51 GMT
> --  BerespHeader   Connection: keep-alive
> --  BerespHeader   ETag: "60c6f147-2784c"
> --  BerespHeader   Cache-Control: no-cache
> --  BerespHeader   Access-Control-Allow-Origin: *
> --  BerespHeader   Access-Control-Expose-Headers: Content-Length
> --  BerespHeader   Accept-Ranges: bytes
> --  TTL        RFC 120 10 0 1623650671 1623650671 1623650670 0 0
> cacheable
> --  VCL_call   BACKEND_RESPONSE
> --  TTL        VCL 300 10 0 1623650671 cacheable
> --  TTL        VCL 30 10 0 1623650671 cacheable
> --  TTL        VCL 120 10 0 1623650671 cacheable
> --  TTL        VCL 120 10 0 1623650671 uncacheable
> --  VCL_return deliver
> --  Filters
> --  Storagemalloc Transient
> --  Fetch_Body 3 length stream
> --  BackendReuse   25 b1
> --  Timestamp  BerespBody: 1623650670.558352 0.005697 0.001

Re: Varnish wouldn't cache HLS fragments

2021-06-13 Thread Guillaume Quintard
Hi,

Can you share the output of "varnishlog -g request" for one of those
requests that should be cached please?

Cheers,

-- 
Guillaume Quintard

On Sun, Jun 13, 2021, 00:17 Hamidreza Hosseini 
wrote:

> Hi,
> I put varnish in front of my http servers to serve Hls streaming, I want
> varnish cache the fragments but not .m3u8 manifest file,
> I configure it but it cache nothing!
> My configuration file:
>
> ```
> vcl 4.1;
>
> import directors;
>
>
> backend b1 {
> .host = "playback-02";
> .probe = {
> .url = "/";
> .timeout = 150 ms;
> .interval = 10s;
> .window = 6;
> .threshold = 5;
> }
> }
>
>
>
> sub vcl_init {
> # we use round robin director for our backend swift proxies
>
> new hls_cluster = directors.round_robin();
> hls_cluster.add_backend(b1);
>
> }
>
> acl purge {
> "localhost";
> }
>
>
> sub vcl_recv {
>
> set req.backend_hint = hls_cluster.backend();
> if (req.method == "PURGE") {
> if (!client.ip ~ purge) {
> return(synth(405,"Not allowed."));
> }
> return (purge);
> }
>
> if (req.url ~ "\.m3u8$") {
>   return (pass);
> }
> }
>
>
>
>
>
> sub vcl_backend_response {
> # cache for five minutes by default
> set beresp.ttl=5m;
> # Don't cache 404 responses
>
> if (bereq.url ~ "\.(aac|dash|m4s|mp4|ts)$") {
>   set beresp.ttl = 30s;
> }
>
> if ( beresp.status == 404 ) {
> set beresp.ttl = 120s;
> set beresp.uncacheable = true;
> return (deliver);
> }
> if (beresp.status == 500 || beresp.status == 502 || beresp.status ==
> 503 || beresp.status == 504)
> {
> set beresp.uncacheable = true;
> }
> }
>
> ```
>
> Varnish version:
> varnishd (varnish-6.0.7 revision 525d371e3ea0e0c38edd7baf0f80dc226560f26e)
> Copyright (c) 2006 Verdens Gang AS
> Copyright (c) 2006-2020 Varnish Software AS
>
> Distribution: Ubuntu 20.04 LTS
>


Re: Use varnish-cache in front of HLS servers for live streaming

2021-06-10 Thread Guillaume Quintard
Hi,

By default, Varnish only hashes the host and URL (including the query
string):
https://github.com/varnishcache/varnish-cache/blob/master/bin/varnishd/builtin.vcl#L124

So you possibly need to clean the query string.

Or, while unlikely, it could be that your backend is returning a Vary
header, in which case, you should remove the request headers corresponding
to this (ignore content-encoding though)
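
For example, a sketch (untested) that drops the query string from segment
requests, so identical .ts files hash identically regardless of any
per-client tokens:

sub vcl_recv {
    if (req.url ~ "\.ts(\?|$)") {
        set req.url = regsub(req.url, "\?.*$", "");
    }
}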

-- 
Guillaume Quintard


On Thu, Jun 10, 2021 at 3:54 AM Hamidreza Hosseini 
wrote:

> Hi,
> I want to use varnish as a cache server in front of my Http live streaming
> servers to serves .ts files to client and I want to ignore caching .m3u8
> files extension to be cached.
> When I read how varnish would cache the objects again, I encountered with
> some issues for example because each clients would request the .ts files
> from varnish directly or through load balancers  (load balancer would pass
> all headers to varnish) so for unique .ts file , varnish will cache the
> file for each client! so I should normalize header or delete some clients
> header or somehow I should tell varnish that this file is unique and dont
> cache it again based on different useless header...
> How can I tell this to varnish or which header should be deleted by
> varnish because I don't know which client would send which header !
> which header would affect on double caching?
> Is there any sample config to satisfy my needs?
>


Re: Understanding 503s

2021-04-23 Thread Guillaume Quintard
Thanks for taking the time to debrief, I'm sure that will be useful in the
future!

-- 
Guillaume Quintard

On Fri, Apr 23, 2021, 00:04 Maninder Singh  wrote:

> I finally figured out why this was happening.
>
> Hope this helps someone.
>
> We were running php-fpm and had the following configuration.
>
> pm = dynamic
> pm.max_children = 166
> pm.start_servers = 16
> pm.min_spare_servers = 8
> pm.max_spare_servers = 16
>
> This was working fine with the usual load.
> But, we found that whenever there was a spike it led to an increase in
> 503s.
>
> This was due to start_servers set to 16.
>
> php-fpm takes a sec to spawn more processes and during that time we see
> 503s.
>
> For a high traffic site ( like ours ), we had to set this to
> pm = static
> pm.max_children = 125
>
> The above values are kept keeping in mind our RAM.
> These would be different for others.
>
> Now, we don't see any 503s as the server is prepared to handle more
> connections.
>
> Thanks,
>
> On Thu, 15 Apr 2021 at 14:03, Maninder Singh  wrote:
>
>> Apache runs on port 8080 and is not open to the outside world.
>> All requests are routed through varnish but then not all requests are
>> cached.
>>
>> I guess in that case, varnish becomes the only client for apache.
>> So, I should increase the KeepAliveTimeout.
>>
>>
>>
>> On Thu, 15 Apr 2021 at 13:45, Dridi Boukelmoune  wrote:
>>
>>> On Thu, Apr 15, 2021 at 7:27 AM Maninder Singh  wrote:
>>> >
>>> > Thank you Dridi.
>>> > This is very helpful.
>>> >
>>> > FYI - My apache keepalive is
>>> > KeepAliveTimeout 3
>>> >
>>> > You would suggest increasing this to 5-10 ?
>>>
>>> If varnish is httpd's only client then increase it to 70s. Varnish
>>> will close unused connections after 60s by default, and if it's really
>>> really busy that gives a 10s window for the actual shutdown to happen.
>>>
>>> If there are other direct clients in front of your httpd server, then
>>> decrease backend_idle_timeout in varnishd to 2s, but then you will
>>> force varnish to establish connections more often. This is already the
>>> case of course, but at least that will reduce the risk of reusing a
>>> closed connection and failing backend fetches for this reason.
>>>
>>> > We had lowered the KeepAliveTimeout as the server is a very busy one
>>> and we want to handle many connections.
>>>
>>> I understand, and there's a good reason to have a low default when you
>>> can't trust the clients. It boils down to whether your httpd server is
>>> openly accessible to more than just varnish, including potentially
>>> malicious clients.
>>>
>>> Dridi
>>>


Introducing bob

2021-03-31 Thread Guillaume Quintard
Hi everyone,

Sometime ago I built and started to use a little tool that I'd like to
share with you, in the hope that it'll make your life easier too. This tool
is named "bob", and is available here:
https://github.com/varnish/toolbox/tree/master/bob

The short of it is that on compatible repositories, if you have bob and
docker installed, you can start building without worrying about
dependencies. For varnish, you can just go:

bob ./autogen.des
bob make
bob make check


And things should just work, the files are created by the container in your
work directory, with the right owner and permissions, but without you
having to install anything extra. And that allows you to code from your
host machine with the IDE of your choice, and to rely on bob only for the
compilation steps.

Internally, bob just:
- grabs a Dockerfile in the .circleci/ directory and builds the
corresponding image
- creates a container user with the same UID/GID as the user running on the host
- mounts your home as the container's home
- runs the command
- destroys the container

The goal here is to provide a frictionless start to new users so they can
get cracking faster, but also to allow testing on other platforms. For
example, you can set the BOB_DIST env variable to run other containers:

BOB_DIST=ubuntu bob ./autogen.des
BOB_DIST=ubuntu bob make
BOB_DIST=ubuntu bob make check


This can prove very useful to test different environments (cf. new compiler
version issues)

If you want to recreate the image (which includes a "docker pull"), just
run it without arguments:

bob

If you need to get into the container to run a couple of commands:

bob bash


I've just pushed the necessary Dockerfiles to enable it on:
- https://github.com/varnishcache/varnish-cache (centos by default, and
BOB_DIST support for ubuntu, archlinux and alpine)
- https://github.com/varnish/varnish-modules (centos only)
- https://github.com/varnish/hitch/ (centos by default, and BOB_DIST
support for ubuntu, archlinux and alpine)

So those are ready to go, but the tool is meant to be generic and should
work with pretty much any repo as long as you create a Dockerfile for it in
bob/, .bob/, .circleci/ or .github.

Of course, this is completely optional, and if you don't care about it,
this should be completely invisible to you. However, if you do find it
useful, the PRs and bug tracker are open, and I'm eager to get more feedback
about it.

Cheers!

-- 
Guillaume Quintard


Varnish 6.6 packages and images

2021-03-22 Thread Guillaume Quintard
Hi everyone,

As I'm sure most of you already know, Varnish 6.6 was released last week (
https://varnish-cache.org/releases/index.html), and this week we have
packages (https://packagecloud.io/varnishcache) and docker images (
https://hub.docker.com/_/varnish).

We are aware that the docker image was mistagged (6.5 in addition of 6.6)
and this is being worked on.

Let us know if you have questions!

-- 
Guillaume Quintard


Re: Accessing original object in varnish background Fetch

2021-03-03 Thread Guillaume Quintard
Hi,

This expectation is wrong:
 # Here beresp.http.myval (on RHS of assignment expression).
 # should be the original hdr value stored with the object

beresp is the new backend response, and VCL doesn't make the old one
available.

There are two ways to deal with this.

The first one is vmod_stale (
https://docs.varnish-software.com/varnish-cache-plus/vmods/stale/#get-header)
in Varnish Enterprise that will allow you to get direct access to that
header.

The second one, if you want to stick to open source, uses vcl_hit to stash
the old headers in req so that they are available in bereq. That would look
something like this (untested code, use with caution):

sub vcl_recv {
# avoid injections from the clients
unset req.http.myval;
}

sub vcl_hit {
# store the header
set req.http.myval = obj.http.myval;
}

sub vcl_backend_response {
# bereq is basically a req copy, so myval crossed over

# note that I do that unconditionally here, but you can indeed check
# bereq.is_bgfetch

set beresp.http.myval = bereq.http.myval + " extra string";
}


that should do the trick. However, be careful: the code above will add more
text to the header at every background fetch, so you are more than likely to
hit the header length limit eventually.

Does that make sense?

-- 
Guillaume Quintard


On Wed, Mar 3, 2021 at 8:23 PM Arunabha Saha 
wrote:

> Hello,
>   Something I am trying to do is update an existing value in a
> cached object after every background fetch.I can't seem to figure
> out how to access the existing object parameters during the background
> fetch.
>
> Example
>
> vcl_backend_response {
>   if (!bereq.is_bgfetch) {
> set beresp.http.myval = beresp.http.val_from_backend;
>   } else {
>  #
>  #Here beresp.http.myval (on RHS of assignment expression).
>  # should be the original hdr value stored with the object
> but i can't seem to access it this way
>  # or any other way.
>  #
>  set beresp.http.myval = beresp.http.myval +
> beresp.http.val_from_backend;
>   }
> }
>
> --
> regards,
> Arun
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Varnish Health check

2021-02-19 Thread Guillaume Quintard
Your backend is returning a 400, most probably because there's no Host
header in your probe.
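Since HTTP/1.1 requires a Host header, adding one to the probe request should
clear the 400. An untested sketch, where the host value is illustrative and
should match whatever the backend expects:

```
probe myprobe {
    .request =
        "HEAD / HTTP/1.1"
        "Host: backend.example.com"
        "Connection: close"
        "User-Agent: Varnish Health Probe";
    .timeout = 3s;
    .interval = 5s;
    .window = 5;
    .threshold = 3;
}
```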

-- 
Guillaume Quintard


On Fri, Feb 19, 2021 at 8:53 PM Hamidreza Hosseini 
wrote:

> I corrected the wrong name (it couldn't be resolved by the DNS server), but
> now I have this error:
>
> 0 Backend_health - boot.varnish_1 Still sick 4---X-R- 0 3 5 0.007800
> 0.00 HTTP/1.1 400 Bad Request
>
> My probe:
>
> probe myprobe {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 3s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
> Should I change the probe request "HEAD /" to something else?
> Is there any way to define a special port, and to consider the backend
> healthy whenever it is accessible, even with an authentication error (error 400)?
> --
> *From:* Hamidreza Hosseini
> *Sent:* Thursday, February 18, 2021 4:49 AM
> *To:* varnish-misc@varnish-cache.org 
> *Subject:* Varnish Health check
>
> Hi,
> I want to adjust health checks on my Varnish backends, but I don't know how
> I can tell whether they are healthy or not,
> because the nodes are up and running and even the service is up, but Varnish
> doesn't work for all requests (just a third of them are responded to until I
> restart it; it happens sometimes).
> How can I check this?
> ```
> backend server1 {
> .host = "server1.example.com";
> .probe = {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
> }
> ```
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: How to adjust priority for both backend healthy

2021-02-18 Thread Guillaume Quintard
I'm formally abandoning this thread and will only focus on the other one
-- 
Guillaume Quintard


On Thu, Feb 18, 2021 at 8:21 AM Hamidreza Hosseini 
wrote:

> I read your article and it was great, but I didn't find my answer. I
> said that I have a 2-layer Varnish setup: a disk layer and a RAM layer, and
> I want the RAM layer to check the health of layer 2. For example:
>
> Varnish Ram ===> Varnish Disk ===> Webserver
> I adjust this probe on varnish ram:
>
> probe myprobe {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
> and varnish says: ` 0 Backend_health - boot.varnish_1 Still sick 
> 0 3 5 0.00 0.00 Open error 111 (Connection refused)`
> And I think that is expected, because it checks HEAD / on the Varnish
> backends and there is nothing there!
> So I'm asking how I should configure the probe to check the health of
> another Varnish used as a backend.
>
> Best regards.
>
> --
> *From:* Hamidreza Hosseini
> *Sent:* Thursday, February 18, 2021 5:31 AM
> *To:* varnish-misc@varnish-cache.org 
> *Subject:* How to adjust priority for both backend healthy
>
> I have two backends that are both healthy and I use a fallback director for
> them. I want all requests to go to backend1, and if backend1 becomes
> unhealthy, all requests should go to backend2; backend1 has the higher
> priority, so when backend1 becomes healthy again, all requests should go
> back to backend1.
> How can I define this priority?
> my config:
>
> ```
> vcl 4.1;
>
> import directors;
>
> probe myprobe {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
>
> backend backend1 { .host = "backend1"; .port = "8080"; .probe = myprobe; }
>
> backend backend2 { .host = "backend2"; .port = "8080"; .probe = myprobe; }
> backend backend3 { .host = "backend3"; .port = "8080"; .probe = myprobe; }
>
>
> sub vcl_init {
>
> new backend2_cluster = directors.round_robin();
> backend2_cluster.add_backend(backend2);
> backend3_cluster.add_backend(backend3);
>
>
> new backend_cluster_fb = directors.fallback();
> backend1_fb.add_backend(backend1);
> backend2_cluster_fb.add_backend(backend2_cluster.backend());
> }
>
> sub vcl_recv {
> set req.backend_hint = backend_cluster_fb.backend();
>
> }
>
> ```
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Varnish Health check

2021-02-18 Thread Guillaume Quintard
please keep the mailing-list in CC for future communications.

> Open error 111 (Connection refused)

This is a TCP issue, the backend is just not accepting the connection, are
you sure the IP:PORT is right?
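Once the TCP issue is solved, if the lower-layer Varnish has nothing useful
to serve at /, one common option is to have it answer probes directly from
VCL. A minimal, untested sketch, keying off the probe's User-Agent:

```
sub vcl_recv {
    # answer health probes ourselves instead of forwarding them
    if (req.http.User-Agent == "Varnish Health Probe") {
        return (synth(200, "OK"));
    }
}
```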

-- 
Guillaume Quintard


On Thu, Feb 18, 2021 at 8:16 AM Hamidreza Hosseini 
wrote:

> I read your article and it was great, but I didn't find my answer. I
> said that I have a 2-layer Varnish setup: a disk layer and a RAM layer, and
> I want the RAM layer to check the health of layer 2. For example:
>
> Varnish Ram ===> Varnish Disk ===> Webserver
> I adjust this probe on varnish ram:
>
> probe myprobe {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
> and varnish says: ` 0 Backend_health - boot.varnish_1 Still sick 
> 0 3 5 0.00 0.00 Open error 111 (Connection refused)`
> And I think that is expected, because it checks HEAD / on the Varnish
> backends and there is nothing there!
> So I'm asking how I should configure the probe to check the health of
> another Varnish used as a backend.
>
> Best regards.
>
> --
> *From:* Guillaume Quintard 
> *Sent:* Thursday, February 18, 2021 7:14 AM
> *To:* Hamidreza Hosseini 
> *Subject:* Re: Varnish Health check
>
> Ah, I missed the multilayer setup. In that case, you can have a look at
> this one: https://info.varnish-software.com/blog/howto-respond-to-probes
> --
> Guillaume Quintard
>
>
> On Thu, Feb 18, 2021 at 7:08 AM Hamidreza Hosseini 
> wrote:
>
> How can I probe a backend Varnish? For example, I have a 2-layer Varnish
> setup (a disk layer and a RAM layer), and the RAM layer wants to check the
> health of layer 2. How can I do this? I've done something but I get this error:
>
> sudo varnishadm backend.list -p
> Backend name   Admin  ProbeLast updated
> boot.varnish_1 probe  Sick0/5
>   Current states  good:  0 threshold:  3 window:  5
>   Average response time of good probes: 0.00
>   Oldest == Newest
>    Happy
>
> sudo varnishlog -g raw -i Backend_health
> 0 Backend_health - boot.varnish_1 Still sick  0 3 5 0.00
> 0.00 Open error 111 (Connection refused)
>
> my config:
> probe myprobe {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
> --
> *From:* Guillaume Quintard 
> *Sent:* Thursday, February 18, 2021 7:00 AM
> *To:* Hamidreza Hosseini 
> *Cc:* varnish-misc@varnish-cache.org 
> *Subject:* Re: Varnish Health check
>
> Hi,
>
> The answer will be highly dependent on your setup; usually you want to
> find a probe request that will truly test the backend. One option, if you
> have control over the backend, is to write a page that tests the subsystems
> and makes sure everything is up.
>
> This link may prove useful:
> https://info.varnish-software.com/blog/backends-load-balancing
>
> --
> Guillaume Quintard
>
>
> On Thu, Feb 18, 2021 at 4:51 AM Hamidreza Hosseini 
> wrote:
>
> Hi,
> I want to adjust health checks on my Varnish backends, but I don't know how
> I can tell whether they are healthy or not,
> because the nodes are up and running and even the service is up, but Varnish
> doesn't work for all requests (just a third of them are responded to until I
> restart it; it happens sometimes).
> How can I check this?
> ```
> backend server1 {
> .host = "server1.example.com";
> .probe = {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
> }
> ```
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
>


Re: How to adjust priority for both backend healthy

2021-02-18 Thread Guillaume Quintard
Hi,

No offense, I think you would get better answers on this mailing list if
you started only one thread and focused on it. The current way of sending
similar but slightly different questions, and then duplicating messages,
makes it hard for people willing to help you to focus.

With this being said, I believe I replied to this question in the other
thread.
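For the record, the fallback director is exactly the priority mechanism being
asked about: it always picks the first healthy backend, in the order the
backends were added. A minimal, untested sketch (backend names are
illustrative; each backend also needs a probe, otherwise it is always
considered healthy and the director would never switch):

```
vcl 4.1;

import directors;

backend b1 { .host = "backend1"; .port = "8080"; }
backend b2 { .host = "backend2"; .port = "8080"; }

sub vcl_init {
    # b1 is tried first; b2 only serves while b1 is sick
    new fb = directors.fallback();
    fb.add_backend(b1);
    fb.add_backend(b2);
}

sub vcl_recv {
    set req.backend_hint = fb.backend();
}
```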

Kind regards,

-- 
Guillaume Quintard


On Thu, Feb 18, 2021 at 7:11 AM Hamidreza Hosseini 
wrote:

> How can I probe a backend Varnish? For example, I have a 2-layer Varnish
> setup (a disk layer and a RAM layer), and the RAM layer wants to check the
> health of layer 2. How can I do this? I've done something but I get this error:
>
> sudo varnishadm backend.list -p
> Backend name   Admin  ProbeLast updated
> boot.varnish_1 probe  Sick0/5
>   Current states  good:  0 threshold:  3 window:  5
>   Average response time of good probes: 0.00
>   Oldest == Newest
>    Happy
>
> sudo varnishlog -g raw -i Backend_health
> 0 Backend_health - boot.varnish_1 Still sick  0 3 5 0.00
> 0.00 Open error 111 (Connection refused)
>
> my config:
> probe myprobe {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
>
> --
> *From:* Hamidreza Hosseini
> *Sent:* Thursday, February 18, 2021 5:31 AM
> *To:* varnish-misc@varnish-cache.org 
> *Subject:* How to adjust priority for both backend healthy
>
> I have two backends that are both healthy and I use a fallback director for
> them. I want all requests to go to backend1, and if backend1 becomes
> unhealthy, all requests should go to backend2; backend1 has the higher
> priority, so when backend1 becomes healthy again, all requests should go
> back to backend1.
> How can I define this priority?
> my config:
>
> ```
> vcl 4.1;
>
> import directors;
>
> probe myprobe {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
>
> backend backend1 { .host = "backend1"; .port = "8080"; .probe = myprobe; }
>
> backend backend2 { .host = "backend2"; .port = "8080"; .probe = myprobe; }
> backend backend3 { .host = "backend3"; .port = "8080"; .probe = myprobe; }
>
>
> sub vcl_init {
>
> new backend2_cluster = directors.round_robin();
> backend2_cluster.add_backend(backend2);
> backend3_cluster.add_backend(backend3);
>
>
> new backend_cluster_fb = directors.fallback();
> backend1_fb.add_backend(backend1);
> backend2_cluster_fb.add_backend(backend2_cluster.backend());
> }
>
> sub vcl_recv {
> set req.backend_hint = backend_cluster_fb.backend();
>
> }
>
> ```
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Varnish Health check

2021-02-18 Thread Guillaume Quintard
Hi,

The answer will be highly dependent on your setup; usually you want to find
a probe request that will truly test the backend. One option, if you have
control over the backend, is to write a page that tests the subsystems and
makes sure everything is up.

This link may prove useful:
https://info.varnish-software.com/blog/backends-load-balancing

-- 
Guillaume Quintard


On Thu, Feb 18, 2021 at 4:51 AM Hamidreza Hosseini 
wrote:

> Hi,
> I want to adjust health checks on my Varnish backends, but I don't know how
> I can tell whether they are healthy or not,
> because the nodes are up and running and even the service is up, but Varnish
> doesn't work for all requests (just a third of them are responded to until I
> restart it; it happens sometimes).
> How can I check this?
> ```
> backend server1 {
> .host = "server1.example.com";
> .probe = {
> .request =
>   "HEAD / HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
> }
> ```
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Problem in varnish-6.0.7

2021-02-14 Thread Guillaume Quintard
re-adding the mailing-list so people know it's fixed

For the VCL version, try using 4.1; it's the latest one, and there's no major
difference from 4.0.

-- 
Guillaume Quintard


On Sun, Feb 14, 2021 at 9:16 AM Hamidreza Hosseini 
wrote:

> Thanks for your help,
> It was my problem: varnishd -C -f /etc/varnish/default.vcl showed that I
> hadn't set up my server names in DNS
>
> And now it is working:
>
> ```
>  Docs: https://www.varnish-cache.org/docs/4.1/
>man:varnishd
>  Main PID: 2765 (varnishd)
> Tasks: 217
>Memory: 83.9M
>   CPU: 494ms
>CGroup: /system.slice/varnish.service
>├─2765 /usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -T
> localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s
> malloc,1g -i ubuntu-test
>└─2780 /usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -T
> localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s
> malloc,1g -i ubuntu-test
>
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Warnings:
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: VCL compiled.
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Debug: Version: varnish-6.0.7
> revision 525d371e3ea0e0c38edd7baf0f80dc226560f26e
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Version: varnish-6.0.7
> revision 525d371e3ea0e0c38edd7baf0f80dc226560f26e
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Debug: Platform:
> Linux,4.4.0-186-generic,x86_64,-junix,-smalloc,-sdefault,-hcritbit
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Platform:
> Linux,4.4.0-186-generic,x86_64,-junix,-smalloc,-sdefault,-hcritbit
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Debug: Child (2780) Started
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Child (2780) Started
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Info: Child (2780) said Child
> starts
> Feb 14 20:43:34 ubuntu-test varnishd[2765]: Child (2780) said Child starts
> ```
>
> But at first I thought it was because of the VCL version.
> I read in the 6.0 docs that vcl 4.0 is used for this version, but I'm using
> vcl 4.1. Which one is preferred for version 6.0.7?
>
> --
> *From:* Guillaume Quintard 
> *Sent:* Sunday, February 14, 2021 9:05 AM
> *To:* Hamidreza Hosseini 
> *Cc:* varnish-misc@varnish-cache.org 
> *Subject:* Re: Problem in varnish-6.0.7
>
> Hi,
>
> That error seems truncated, can you try to run this in your terminal
> please:
>
> ```
> varnishd -C -f /etc/varnish/default.vcl
> ```
>
> This will perform only the VCL->C translation and should provide a more
> complete message.
>
> Cheers,
>
> --
> Guillaume Quintard
>
>
> On Sun, Feb 14, 2021 at 8:54 AM Hamidreza Hosseini 
> wrote:
>
> I have migrated from Varnish 6.0.5 to 6.0.7, but it could not run, with
> this error:
>
> ```
> ● varnish.service - Varnish HTTP accelerator
>Loaded: loaded (/etc/systemd/system/varnish.service; enabled; vendor
> preset: enabled)
>Active: failed (Result: exit-code) since Sun 2021-02-14 20:15:14 +0330;
> 4s ago
>  Docs: https://www.varnish-cache.org/docs/4.1/
>man:varnishd
>   Process: 29630 ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a
> :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret
> -s malloc,1g -i ubuntu-test (code=exited, status=2)
>  Main PID: 29630 (code=exited, status=2)
>
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: -
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: In backend specification
> starting at:
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: ('/etc/varnish/default.vcl'
> Line 5 Pos 1)
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: backend default {
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: ###--
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: Running VCC-compiler failed,
> exited with 2
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: VCL compilation failed
> Feb 14 20:15:14 ubuntu-test systemd[1]: varnish.service: Main process
> exited, code=exited, status=2/INVALIDARGUMENT
> ```
>
> Some part of my config:
>
> ```
> vcl 4.1;
>
> import directors;
>
> backend default {
> .host = "test-lb";
> .port = "8000";
> }
>
> backend test_1 { .host = "test-1"; .port = "8000"; }
> backend test_2 { .host = "test-2"; .port = "8000"; }
> backend test_3 { .host = "test-3"; .port = "8000"; }
> backend test_4 { .host = "test-4"; .port = "8000"; }
> backend test_5 { .host = "test-5"; .port = "8000"; }
> backend test_6 { .host = "test-6"; .port = "8000"; }
> ```
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
>


Re: Problem in varnish-6.0.7

2021-02-14 Thread Guillaume Quintard
Hi,

That error seems truncated, can you try to run this in your terminal please:

```
varnishd -C -f /etc/varnish/default.vcl
```

This will perform only the VCL->C translation and should provide a more
complete message.

Cheers,

-- 
Guillaume Quintard


On Sun, Feb 14, 2021 at 8:54 AM Hamidreza Hosseini 
wrote:

> I have migrated from Varnish 6.0.5 to 6.0.7, but it could not run, with
> this error:
>
> ```
> ● varnish.service - Varnish HTTP accelerator
>Loaded: loaded (/etc/systemd/system/varnish.service; enabled; vendor
> preset: enabled)
>Active: failed (Result: exit-code) since Sun 2021-02-14 20:15:14 +0330;
> 4s ago
>  Docs: https://www.varnish-cache.org/docs/4.1/
>man:varnishd
>   Process: 29630 ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a
> :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret
> -s malloc,1g -i ubuntu-test (code=exited, status=2)
>  Main PID: 29630 (code=exited, status=2)
>
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: -
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: In backend specification
> starting at:
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: ('/etc/varnish/default.vcl'
> Line 5 Pos 1)
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: backend default {
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: ###--
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: Running VCC-compiler failed,
> exited with 2
> Feb 14 20:15:14 ubuntu-test varnishd[29630]: VCL compilation failed
> Feb 14 20:15:14 ubuntu-test systemd[1]: varnish.service: Main process
> exited, code=exited, status=2/INVALIDARGUMENT
> ```
>
> Some part of my config:
>
> ```
> vcl 4.1;
>
> import directors;
>
> backend default {
> .host = "test-lb";
> .port = "8000";
> }
>
> backend test_1 { .host = "test-1"; .port = "8000"; }
> backend test_2 { .host = "test-2"; .port = "8000"; }
> backend test_3 { .host = "test-3"; .port = "8000"; }
> backend test_4 { .host = "test-4"; .port = "8000"; }
> backend test_5 { .host = "test-5"; .port = "8000"; }
> backend test_6 { .host = "test-6"; .port = "8000"; }
> ```
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: varnish crush every 30 minutes in high pressure

2021-02-13 Thread Guillaume Quintard
Hi,

Go with the LTS unless you need features from 6.5, it will be supported for
a longer time.

Cheers,

On Sat, Feb 13, 2021, 02:45 Hamidreza Hosseini 
wrote:

> Hi guys,
> I want to upgrade my Varnish, but I don't know what the difference is
> between Varnish 6.0.* and 6.5.*!
> Are the Varnish 6.0.* releases LTS versions and 6.5.* the latest?
> Which major version should I install for production?
> --
> *From:* Hamidreza Hosseini
> *Sent:* Saturday, September 26, 2020 3:39 AM
> *To:* varnish-misc@varnish-cache.org 
> *Subject:* varnish crush every 30 minutes in high pressure
>
>
> Hi,
> I'm using Varnish. Before today I hadn't had any problem with it, but after
> my DNS crashed (I fixed it now), under high pressure it now
> restarts the whole machine almost every 30 minutes, even Netdata that I'm
> using for monitoring. The service status shows:
>
> Sep 26 13:49:48 varnish-1 varnishd[945]: Child (1548) not responding to CLI, 
> killed it.
> Sep 26 13:49:48 varnish-1 varnishd[945]: Unexpected reply from ping: 400 CLI 
> communication error (hdr)
> Sep 26 13:49:48 varnish-1 varnishd[945]: Child (1548) died signal=9
>
>
> What should i do for this problem?
> My varnish version:
>
> varnishd (varnish-6.0.5 revision 3065ccaacc4bb537fb976a524bd808db42c5fe40)
> Copyright (c) 2006 Verdens Gang AS
> Copyright (c) 2006-2019 Varnish Software AS
>
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Re: Varnish restart because of ram filled issue

2020-11-18 Thread Guillaume Quintard
Hi,

The cache isn't the only one taking RAM, I would recommend having a look at
this article to understand what would cost you here:
https://info.varnish-software.com/blog/understanding-varnish-cache-memory-usage
Main culprit is possibly going to be Transient storage (
https://varnish-cache.org/docs/trunk/users-guide/storage-backends.html#transient-storage)
but there could be other reasons
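One concrete knob for the Transient case: declaring a storage named Transient
explicitly caps it, since it is otherwise unbounded. An illustrative
invocation (the sizes are examples, not a recommendation):

```
varnishd -a :6081 \
         -f /etc/varnish/default.vcl \
         -s malloc,40g \
         -s Transient=malloc,512m
```

Be aware that a Transient storage that is too small can cause failed fetches
and deliveries, so size it carefully.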

Kind regards,

-- 
Guillaume Quintard


On Wed, Nov 18, 2020 at 4:20 AM Hamidreza Hosseini 
wrote:

> Hi,
> I have some Varnish RAM instances in production that I have been using for
> a year. I have had this problem since I first installed them, but now,
> because we have so many requests, the Varnish RAM fills up and crashes, and
> it brings down my backend cluster.
> The whole ram is 54GB but I adjust 43 GB ram to it in systemd:
>
> ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -T
> localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s
> malloc,43g -i varnish
>
> What should I do about this problem?
> Why doesn't Varnish prevent the RAM from filling up?
> Is there any config in Varnish to tell it that it cannot exceed this
> amount, and that if it reaches that amount it MUST evict from the cache,
> oldest entries first? (This feature is really important because, since I
> have a 5-core CPU on the server, I think systemd can't enforce the limit
> that I'm setting; it should be balanced inside Varnish...)
>
> OS: ubuntu
> Varnish version:
> varnishd (varnish-6.0.5 revision 3065ccaacc4bb537fb976a524bd808db42c5fe40)
> Copyright (c) 2006 Verdens Gang AS
> Copyright (c) 2006-2019 Varnish Software AS
>
>
>
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>


Varnish Cache 6.0.7

2020-11-12 Thread Guillaume Quintard
Hi all,

Almost a week ago, we released Varnish Cache 6.0.7, the latest
entry in the LTS series. As expected, the changelog (
https://github.com/varnishcache/varnish-cache/blob/6.0/doc/changes.rst#varnish-cache-607-2020-11-06)
mainly contains bug fixes, making the release even more robust, but it also
contains a couple of treats like vmod_shard supporting weighted backends
and the new "pid" command for varnishadm.

You will be able to find packages for this release on the usual
packagecloud page: https://packagecloud.io/varnishcache/varnish60lts
However, you will see something unusual here: we have added support for
RHEL/CentOS 8, Debian Buster and Ubuntu Focal.
We usually don't add support for new distributions once a series is started,
but we are making an exception for LTS as we don't want to encourage people
running it on EOL'd distributions.

We have also pushed the Official Docker images here:
https://hub.docker.com/_/varnish
Note that the underlying Debian image switched from Stretch to Buster, as the
Stretch image won't be maintained anymore.

Kind regards,

-- 
Guillaume Quintard

