Re: A plea for a more useful and discoverable built-in VCL

2023-11-08 Thread Dridi Boukelmoune
On Mon, Nov 6, 2023 at 6:36 AM Guillaume Quintard
 wrote:
>
> Hi everybody!
>
> A bunch of questions I regularly get regarding Varnish behavior revolve 
> around the built-in vcl, mainly, I get one of these three:
> - why is Varnish not caching?
> - how come something is happening in vcl_X even though I don't have it in my 
> vcl?
> - what on earth is that built-in vcl you are talking about?
>
> As usual, I have a half-baked solution with a bunch of problems, which will 
> hopefully inspire smarter people to fix the issue properly.
>
> What I came up with is here: 
> https://github.com/varnish/toolbox/tree/verbose_builtin/vcls/verbose_builtin
> Essentially, use std.log() to explain what the built-in code is doing.
>
> At the moment, it's a purely opt-in solution, meaning that you need to know 
> about builtin.vcl to find it, which doesn't really help with discoverability, 
> but I intend on including that code in the docker image, which should raise 
> awareness a bit.
> The absolute best in my mind would be to have something similar in core, but 
> I can see how importing std would be a hurdle. Maybe as part of packaging, we 
> could include that file in the provided default.vcl?
>
> I dismissed the performance penalty of printing a few more lines as 
> negligible, but I could be wrong about that.
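
For context, the linked verbose_builtin VCL essentially copies the
built-in subroutines and sprinkles std.log() calls through them,
roughly like this (a minimal sketch, assuming vmod_std is imported;
the exact wording of the messages is illustrative):

    import std;

    sub vcl_recv {
        if (req.method == "PRI") {
            std.log("builtin: PRI method is invalid, returning 405");
            return (synth(405));
        }
        # ... same pattern for the other built-in checks ...
    }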

I think we should rather have new VSL tags for the call action in VCL
(calling and returning from a non-step subroutine), because that way,
if they really consume too many resources (vsl_buffer, vsl_space, etc.)
they could easily be masked.

For example:
- VCL_sub_call
- VCL_sub_return
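
With such tags, a transaction might log something like this (purely
hypothetical output, since neither tag exists today):

    -   VCL_call       RECV
    -   VCL_sub_call   vcl_req_host
    -   VCL_sub_return vcl_req_host
    -   VCL_return     hash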

I would probably not log subroutines called by VMODs and leave that as
an exercise to VMOD authors.

> There's also the question of phrasing, so we can have a message that is 
> concise but also gives enough information to debug the behavior. But that's 
> very minor, and the least of our worries here.
>
> Thoughts?

If you look at the VCL reference documentation between 6.0 and 7.4 you
may notice that we went from a monolithic manual to a bunch of them:

https://varnish-cache.org/docs/6.0/reference/
https://varnish-cache.org/docs/7.4/reference/

One thing I've wanted to do for a while is to add a vcl-builtin(7)
manual, and I think even a small synopsis and a dump of
bin/varnishd/builtin.vcl would improve discoverability. I would
however prefer something smarter than that, but I haven't given
enough thought to how to proceed.

For example, the vcl-step(7) and vcl-var(7) manuals are authoritative,
so updating them changes Varnish's behavior.

We could imagine having a vcl-builtin(7) manual where we document
individual subroutines with authoritative snippets, and generate
bin/varnishd/builtin.vcl from that.

Cheers
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: out of workspace (bo)

2023-09-11 Thread Dridi Boukelmoune
Hi Kevyn,

On Wed, Aug 30, 2023 at 11:37 AM Kevyn Fyleyssant
 wrote:
>
> Here is my VCL :
> https://pastebin.com/TpN8r0Um
>
> And my varnishd command :
> /usr/sbin/varnishd -a :8181 -p feature=+http2 -p http_resp_hdr_len=200k -p 
> http_resp_size=2M -p http_req_hdr_len=200k -p workspace_backend=256k -p 
> workspace_client=256k -p http_max_hdr=256 -f /etc/varnish/default.vcl -s 
> malloc,4G

You already have a fairly large workspace, but to solve this you will
need to further increase workspace_backend.

You configured Varnish to accept up to 200kB per header field, and up
to 2MB for a whole set of response headers (all fields combined). The
worst-case scenario wouldn't fit in 256kB. The file size should have no
significant effect on workspace consumption, so this recurring
overflow is probably caused by beresp headers alone.

You should first make sure you have a good understanding of the origin
server and why it may produce such large response headers.

If this is legitimate, there is no way around increasing
workspace_backend. Your VCL isn't doing workspace-intensive
operations, so a little over 2MB (for example 2MB+64kB, i.e. 2112kB)
should be enough.
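
For example, a sketch of the corresponding change on your varnishd
command line (keeping all your other options):

    varnishd ... -p workspace_backend=2112k ...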

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Conditional requests for cached 404 responses

2023-07-15 Thread Dridi Boukelmoune
On Sat, Jul 15, 2023 at 5:09 AM Guillaume Quintard
 wrote:
>
> Hi Mark,
>
> You are correct: 
> https://github.com/varnishcache/varnish-cache/blob/varnish-7.3.0/bin/varnishd/cache/cache_fetch.c#L699-L703
>
> We only set the OF_IMSCAND flag (the one we use to say that we can do a
> conditional fetch) if:
> - the object is not a Hit-For-Miss (HFM)
> - the status is 200
> - we either have a convincing Last-Modified, or an ETag header
>
> You can also test it with this VTC:
> varnishtest "conditional requests"
>
> server s1 {
>     rxreq
>     txresp -status 200 -hdr "ETag: 1234" -hdr "Last-Modified: Wed, 21 Oct 2015 07:28:00 GMT" -body "dad"
>
>     rxreq
>     expect req.http.if-none-match == "1234"
>     expect req.http.if-modified-since == "Wed, 21 Oct 2015 07:28:00 GMT"
>     txresp
> } -start
>
> varnish v1 -vcl+backend {
>     sub vcl_backend_response {
>         set beresp.ttl = 0.1s;
>         set beresp.grace = 0s;
>         set beresp.keep = 1y;
>         return (deliver);
>     }
> } -start
>
> client c1 {
>     txreq
>     rxresp
>
>     delay 0.2
>
>     txreq
>     rxresp
> } -run
>
> Change the 200 to a 404 and the test will now fail.
>
> I quickly skimmed the HTTP spec and see no reason for us to actually check
> the status, but I'm sure somebody closer to the code will pop up to shed
> some light on the topic.

I was writing a very similar test case, but I haven't spent time on
the RFC. There is also the concern of not breaking existing setups.

Similarly to how we handle request cookies with extra caution, we
could imagine something like this in the built-in VCL for
vcl_backend_fetch:

sub bereq_revalidate {
    if (bereq.uncacheable || obj_stale.status == 200) {
        return;
    }
    unset bereq.http.If-None-Match;
    unset bereq.http.If-Modified-Since;
}

Then enabling revalidation for 404 is only a matter of adding this:

sub bereq_revalidate {
    if (obj_stale.status == 404) {
        return;
    }
}

We briefly discussed accessing the stale object during the last VDD
and extensively discussed other aspects of (re)validation.

https://github.com/varnishcache/varnish-cache/wiki/VDD23Q1#compliance

Right now the workaround would be to store the actual beresp.status
in a header when you wish to enable revalidation, change the status to
200, and restore the original in vcl_deliver.
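
A minimal sketch of that workaround (the X-Real-Status header name is
just an example, and vmod_std is assumed to be imported):

    sub vcl_backend_response {
        if (beresp.status == 404) {
            # remember the real status and masquerade as 200 so the
            # object qualifies for conditional fetches
            set beresp.http.X-Real-Status = beresp.status;
            set beresp.status = 200;
        }
    }

    sub vcl_deliver {
        if (resp.http.X-Real-Status) {
            # restore the original status before delivery
            set resp.status = std.integer(resp.http.X-Real-Status, 200);
            unset resp.http.X-Real-Status;
        }
    }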

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: varnishlog record length

2023-07-13 Thread Dridi Boukelmoune
On Thu, Jul 13, 2023 at 9:46 AM Luca Gervasi  wrote:
>
> Hi,
> I'm helping to fix some issues with cookies (number, length...), thus I need 
> to dump all the cookies in every request (tied with the host header). I tried 
> with varnishlog but the actual value is limited (in our configuration) to ~ 
> 700 bytes by the vsl_reclen parameter, I suppose. Reading the documentation, 
> that value ranges between 16b and 4084b, which is lower than what I need to 
> dump the full monster cookie (~ 24kb).

This limit is actually dynamic and better reflected in the reference
manual for vsl_buffer and vsl_reclen:

https://varnish-cache.org/docs/7.3/reference/varnishd.html#vsl-buffer
https://varnish-cache.org/docs/7.3/reference/varnishd.html#vsl-reclen

If you want to increase reclen to 24kB, it first needs to fit inside a
VSL buffer.

In your case I would recommend bumping vsl_buffer to at least 64kB,
and adding 64kB to workspace_client and workspace_backend too (the VSL
buffer lives on these workspaces).
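
For example (a sketch; the workspace values assume the 64kB defaults
and simply add the extra 64kB on top):

    varnishadm param.set vsl_buffer 64k
    varnishadm param.set vsl_reclen 32k
    varnishadm param.set workspace_client 128k
    varnishadm param.set workspace_backend 128k

Parameters set this way do not persist across restarts, so the -p
equivalents should also go in your startup configuration.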

> At the moment I'm using tcpdump on the proxy port, but it is really
> impractical as I have to grep/awk and group the requests... and I'm not
> really proud of its maintainability.
>
> Is there a way to dump, for every request that contains the cookie header, 
> such header plus others? (in my case: host)?

https://varnish-cache.org/docs/7.3/reference/vsl-query.html
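
For example, a sketch along these lines (untested; the header name
matching may need tweaking for case variations):

    varnishlog -g request -q 'ReqHeader:Cookie' -I 'ReqHeader:^(Cookie|Host):'

This groups records by request, keeps only transactions that carry a
Cookie header, and prints only the Cookie and Host request headers.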

> I was thinking about tapping into varnish functions using eBPF. What do you 
> think? Do you have something already working for my case that I could tamper 
> with?

Try bumping vsl_buffer and vsl_reclen first; eBPF is probably way overkill.

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Question about changes to ESI processing, possible feature request

2023-06-21 Thread Dridi Boukelmoune
On Thu, Jun 22, 2023 at 12:28 AM Daniel Karp  wrote:
>
> Hi, this is, I think, my first post to a varnish mailing list--I hope
> this is the right place for this.

Welcome!

> Varnish 7.3 changes the way it handles errors for Edge-side includes.
> Previously, the body of an ESI response would be included in the
> parent regardless of the status code of the ESI response. 7.3 changed
> this behavior to more closely match the ESI specification
> (https://www.w3.org/TR/esi-lang/). Now, if there is an error, the
> parent request will fail as well. The "onerror" attribute for ESIs is
> also supported (with the appropriate feature flag), allowing the
> request to continue when it would otherwise fail because of a failed
> ESI.
>
> But it isn't clear to me from the docs what happens when an ESI fails
> and onerror is set to "continue".
> The changelog says "any and all ESI:include objects will be delivered,
> no matter what their status might be."
> The user guide says "However, it is possible to allow individual
> <esi:include> [...]". The Pull request for this feature says: "This changes
> the default behavior of includes, matching the ESI language specification."
> The ESI specification itself says: "If an ESI Processor can fetch
> neither the src nor the alt, it returns a HTTP status code greater
> than 400 with an error message, unless the onerror attribute is
> present. If it is, and onerror="continue" is specified, ESI Processors
> will delete the include element silently."
>
> What is the new behavior? Will the body of the response be included
> with onerror set to "continue" (which reproduces the previous
> behavior), or will the element be silently removed?

It is a good question and the short answer is that nothing is removed.

If the processing of an ESI fragment fails in VCL, and no
onerror="continue" kicks in, ESI delivery just stops where the parent
response was, in the middle of its body delivery.

Past VCL execution, we have the aforementioned body delivery, and if
your include fails for any reason, the _partial_ include body will
already have been delivered as part of the parent response, but again,
delivery is interrupted.

In either case, if onerror="continue" was in effect you would likely
get a 503 guru meditation in the middle of your overall response for
the former, and a gap where the fragment should be for the latter.
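
For reference, the markup under discussion looks like this (a sketch;
the onerror attribute is only honored with the appropriate feature
flag, esi_include_onerror):

    <esi:include src="/fragment" onerror="continue"/>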

> If it is the former, then that shouldn't be a problem for our use
> case, although it is not great for conforming to the ESI specs. But if
> the element is silently removed--or if that change is discussed to
> better conform to the standards, I have a feature request. :)

In other words, not removed, either replaced by a synthetic response
or truncated.

> We use ESI extensively for our JSON API, returning arrays of results.
> We might have something like:
> {foo: [
> <esi:include src="..."/>,
> <esi:include src="..."/>,
> etc...
> ] }
> Some of those requests might be 404 responses, and our 404's have
> "null" in the body (I know, a bit hacky, but it works) so that the
> JSON remains valid. If the ESI were silently removed, that would break
> the JSON in our responses.

I have seen and done much worse; this is actually an interesting
representation of the resource in the "not found" state.

Partial delivery could mean that you get "nu" instead of "null" and
roll with it because of onerror="continue".

> But there is a better solution in the ESI Specs: The "alt" attribute.
> If varnish were to silently remove the include, and also support the
> alt attribute, this would be the cleanest solution. In that case, we
> could set the alt attribute for our ESIs to an endpoint that returns a
> 200 response and "null", where appropriate.

I don't think we support the alt attribute, or my memory fails me
badly, since I introduced the initial onerror support. It has been
mentioned very recently, so maybe there could be a move in this
direction. An include with both src and alt attributes would likely
not be streamed, ruling out the "partial fragment" scenario.

> I hope this is clear!

I seriously doubt my answer was :)

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses

2023-06-15 Thread Dridi Boukelmoune
On Thu, Jun 15, 2023 at 9:33 AM Uday Kumar  wrote:
>
>
>> There is this in the code:
>>
>> > H("Cache-Control",  H_Cache_Control,  F  )  // 2616 
>> > 14.9
>>
>> We remove this header when we create a normal fetch task, hence
>> the F flag. There's a reference to RFC2616 section 14.9, but this RFC
>> has been updated by newer documents.
>
>
> Where can I find details about the above code? I could not find it in RFC
> 2616 14.9!

This is from include/tbl/http_headers.h in the Varnish code base.

I'm not going to break it down in detail, but that's basically where
we declare well-known headers and when to strip them when we perform a
req->bereq or beresp->resp transition.

In this case, we strip the cache-control header from the initial
bereq when it is a cache miss.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses

2023-06-14 Thread Dridi Boukelmoune
On Wed, Jun 14, 2023 at 9:02 AM Uday Kumar  wrote:
>
> Hi Guillaume,
>
> Thanks for the response.
>
> Can you provide us with a log of the transaction please?
>
> I have sent a request to Varnish which contains a Cache-Control: no-cache
> header. We have made sure the request with the cache-control header is a MISS
> with a check in the vcl_recv subroutine, so it's a MISS as expected.
>
> The problem as mentioned before:
> The Cache-Control: no-cache header is not being passed to the backend even
> though it's a MISS.

There is this in the code:

> H("Cache-Control",  H_Cache_Control,  F  )  // 2616 14.9

We remove this header when we create a normal fetch task, hence
the F flag. There's a reference to RFC2616 section 14.9, but this RFC
has been updated by newer documents. Also, that section is fairly long
and I don't have time to dissect it, but I suspect the RFC reference
is only here to point to the Cache-Control definition, not to justify
the F flag.

I suspect the rationale for the F flag is that on cache misses we act
as a generic client, not just on behalf of the client that triggered
the cache miss. If you want pass-like behavior on a cache miss, you
need to implement it in VCL, as sketched below:

- store cache-control in a different header in vcl_recv
- restore cache-control in vcl_backend_fetch if applicable
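
A minimal sketch (the X-Orig-Cache-Control header name is just an
example):

    sub vcl_recv {
        # preserve the client's cache-control across the fetch transition
        set req.http.X-Orig-Cache-Control = req.http.Cache-Control;
    }

    sub vcl_backend_fetch {
        if (bereq.http.X-Orig-Cache-Control) {
            set bereq.http.Cache-Control = bereq.http.X-Orig-Cache-Control;
            unset bereq.http.X-Orig-Cache-Control;
        }
    }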

Please note that you open yourself to malicious clients forcing
no-cache on your origin server upon cache misses.

Come to think of it, we should probably give Pragma both P and F flags.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Mysterious no content result, from an URL with pass action

2023-06-06 Thread Dridi Boukelmoune
On Tue, Jun 6, 2023 at 12:07 PM Jakob Bohm  wrote:
>
> Hi all,
>
> Just a quick update,
>
> After changing the hitch to varnish connection from AF_UNIX to TCP,
> rerunning the experiment with tcpdump active revealed that varnish
> 7.2.1 seemed to silently ignore HTTP/2 requests whenever my browser
> chose that over HTTP/1.x.  Turning off HTTP/2 in hitch seems to
> make things work.
>
> I'm still surprised that varnishd drops HTTP/2 over proxyv2 silently
> with no logging that a connection was dropped, and in such a way that
> web browsers interpret it as an empty page.  Feels very similar to
> my earlier issue that failure to bind to a specified listen address
> was not shown to the sysadmin starting varnishd.

Did you enable http2 support?

https://varnish-cache.org/docs/7.2/reference/varnishd.html#feature
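
For example, on the command line:

    varnishd ... -p feature=+http2

or at runtime:

    varnishadm param.set feature +http2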

I don't like the idea that we silently close sessions, could you
please open a github issue explaining what you observe and how to
reproduce it?

> Now it's time to upgrade to 7.3.0 and improve the configuration.

I don't think we improved anything in that area in the 7.3.0 release.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Is there any "try catch" functionality in VCL? If not, how to handle runtime errors in vcl_init?

2023-04-19 Thread Dridi Boukelmoune
On Wed, Apr 19, 2023 at 4:25 PM Batanun B  wrote:
>
> Hi,
>
> We use the vmod crypto to verify cryptographic signatures for some of our 
> traffic. When testing, the public key was hard coded in the VCL, but before 
> we start using this feature in production we will switch to reading the 
> public key from a file on disk. This file is generated on server startup, by 
> fetching it from an Azure keyvault.
>
> Now, the problem I'm picturing here is that this fetching of the public key 
> can fail, or the key can be corrupt or empty, maybe by user error. Or the key 
> could be valid, but the format of the key happens to be unsupported by the 
> vmod crypto. So, even if we do our best to validate the key, in theory it 
> could pass all our tests but still fail when we give it to the vmod crypto. 
> And if that happens, Varnish won't start because the vmod crypto is initiated 
> with the public key in vcl_init, like this:
>
> sub vcl_init {
>     new cryptoVerifier = crypto.verifier(sha256,
>         std.fileread("/path/to/public.key"));
> }
>
> What I would prefer to happen if the key is rejected, is that vcl_init goes 
> through without failure, and then the requests that use the cryptoVerifier 
> will fail, but all other traffic (like 99%) still works. Can we achieve this 
> somehow? Like some try-catch functionality? If not, is there some other way 
> to handle this that doesn't cause Varnish to die on startup?

You should ask the VMOD author for an option to ignore public key
errors.

This is a constructor, and even if we had a try-catch kind of
construct in the language, I don't think we would make this one
recoverable.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Possible to disable/inactivate a backend using VCL?

2023-04-19 Thread Dridi Boukelmoune
On Wed, Apr 19, 2023 at 3:45 PM Batanun B  wrote:
>
> > backend theBackend none;
> > Here's the relevant documentation: 
> > https://varnish-cache.org/docs/trunk/users-guide/vcl-backends.html#the-none-backend
> > It was added in 6.4.
>
> Looks like exactly what we need! Sadly we are "stuck" on 6.0 until the next
> LTS version comes. So I think that until then I will use our poor man's
> version of the "none" backend, i.e. pointing to localhost with a port number
> that won't give a response.

It was back-ported to 6.0, which is an LTS branch not limited to bug fixes ;)

https://varnish-cache.org/docs/6.0/users-guide/vcl-backends.html

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish won't start because backend host resolves to too many addresses, but they are all identical IPs

2023-04-19 Thread Dridi Boukelmoune
On Wed, Apr 19, 2023 at 2:44 PM Guillaume Quintard
 wrote:
>
> The fact that the IPs are identical is weird, but I wouldn't be surprised if
> the DNS entry actually contained 3 identical IPs.
>
> > Shouldn't Varnish be able to figure out that in that case it can just 
> > choose any one and it will work as expected?
>
> Shouldn't your DNS entries be clean? ;-)

It should, but Varnish could probably make an effort here.
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Improved github issue templates

2023-04-18 Thread Dridi Boukelmoune
On Tue, Apr 18, 2023 at 3:05 PM kokoniimasu  wrote:
>
> Hi,
>
> Varnish already has an issue template, but sometimes I see people ignore it
> and post questions, etc. (they are closed right away)
>
> I saw a post on Twitter about the great categorization of issues in 
> ImageMagick.
>
> https://twitter.com/yoya/status/1644160988600733696
>
> I thought this was very nice and made a sample to see if it could be done in 
> Varnish as well.
> I think that by putting in a category, we can guide them to the right place.
>
> https://github.com/xcir/ghsandbox/issues
> (Feel free to create issues for testing.)
>
> When a new issue is made, the category will be displayed.
> Bug reports require each piece of information (Expected Behavior, Current Behavior...)
>
> Of course, I understand that it is also used as a TODO, so I also allow a 
> free format description with "Don't see your issue here? Open a blank issue." 
> at the bottom of the issue creation screen

If we can help it, I'd rather not have the blank issue link.

> A warning is also included here to prevent unintended use.
>
> https://github.com/xcir/ghsandbox/issues/new
>
> I am thinking of submitting a PR if this might be useful, what do you think?

YES PLEASE!

I have had this deep in my backlog for a long time now, and never got
around to fishing it out.

I didn't even know that issue templates could include external links.

> config is here.
> https://github.com/xcir/ghsandbox/tree/main/.github/ISSUE_TEMPLATE
>
> I'd be happy to be helpful.

Feel free to refresh the current template. Maybe we don't need as many
markdown comments as we currently have today with the categorization
you bring upfront.

Many thanks!
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Strange Broken Pipe error from Varnish health checks

2023-04-18 Thread Dridi Boukelmoune
On Mon, Apr 17, 2023 at 4:17 PM Guillaume Quintard
 wrote:
>
> That code hasn't moved in a while, so I'd be surprised to see a bug there, 
> but that's always possible.
> Any chance you could get a tcpdump of a probe request (from connection to 
> disconnection) so we can see what's going on?

It has: https://github.com/varnishcache/varnish-cache/pull/3886

But it shouldn't have changed the default behavior.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: CLI result = 300, how to troubleshoot?

2023-02-15 Thread Dridi Boukelmoune
On Wed, Feb 15, 2023 at 10:24 AM Jakob Bohm  wrote:
>
> Dear fellow users,
>
> I am running varnish-cache 7.2.1 (compiled from source) in
> preproduction.
> After some seemingly minor settings changes, every time I
> try to start varnishd, I get the following on the terminal:
>
> # /etc/init.d/varnish start
> Starting Varnish HTTP(S) proxy: varnishWarnings:
>
> Change will take effect when VCL script is reloaded
> Child launched OK
> CLI result = 300
>  failed!
>
> Now the questions are what does this mean?, and how do I get
> a more detailed error message?  Obviously, the failure to
> start varnishd makes varnishlog useless, but maybe there is
> another log file for such startup errors.

There may be something in your syslog, otherwise it is hard to tell
without knowing what the init script is up to.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: varnish:6.0.11 Docker image crashing on Apple M1 processor

2023-01-26 Thread Dridi Boukelmoune
Hi Martynas,

On Thu, Jan 26, 2023 at 9:00 PM Martynas Jusevičius
 wrote:
>
> Hi Guillaume,
>
> I reproduced the same error as well, running Terminal in Rosetta on MacOS.
>
> Can it be a permissions issue if the same exact docker-compose setup
> runs fine on Windows?

The problem with containers is the uncertainty regarding the host
operating system running them, so something working in a container on
a specific platform does not guarantee the container will run fine
everywhere.

> Here's our Dockerfile:
> https://github.com/AtomGraph/varnish/blob/master/Dockerfile

Try this:

ENTRYPOINT ["/usr/local/bin/docker-varnish-entrypoint", "-jnone"]

Arguably, the container system already applies the principle of least
privilege, so we don't need Varnish's jail feature. Who knows? Not
me.

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Header names case sensitivity

2022-12-15 Thread Dridi Boukelmoune
On Wed, Dec 14, 2022 at 10:05 AM Jérémy Lecour  wrote:
>
> On Wed, Dec 14, 2022 at 10:56 AM Poul-Henning Kamp  
> wrote:
> > not everybody reads RFCs recreationally...
>
> What ?? I'm shocked ! :D
>
> I spent 2 hours last night reading parts of the RFC9110 about HTTP and I've 
> learnt a lot about HTTP headers.
>
> For example, the "X-" prefix for non-standard headers has been deprecated 
> since 2012.
> It was a good idea in theory but proven to be counter productive, based on 
> the long running experimentation in email and SIP (and HTTP).
>
> For Varnish I guess we're stuck with the X- prefix since it became a de-facto 
> standard.

We can always rename X-Varnish to something else but we'd need a good
reason to break existing setups.

On the other hand you have the ability to rename the header in VCL and
have access to the req.xid and bereq.xid variables to build your own
transaction tracking header.
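
A minimal sketch of that approach (the Request-ID header name is
purely an example):

    sub vcl_deliver {
        # replace the default X-Varnish header with a custom one
        set resp.http.Request-ID = req.xid;
        unset resp.http.X-Varnish;
    }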

Cheers
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish 6.0.11 panic help

2022-11-23 Thread Dridi Boukelmoune
On Tue, Nov 22, 2022 at 6:03 PM Luke Rotherfield
 wrote:
>
> Hi Guys
>
> I am really struggling to debug this panic and wondered if you could give any 
> hints where I might start looking for answers.  I have been trawling the 
> GitHub repo and have seen several people recommend upping the 
> thread_pool_stack value.  Here is the panic and a few other bits of data that 
> I think are useful based on the issues I have been reading:
>
>  varnishd[15482]: Child (15492) Panic at: Tue, 22 Nov 2022 16:10:26 GMT
>
> Wrong turn at cache/cache_main.c:284:
> Signal 11 (Segmentation fault) received at 0x887fe000 si_code 2
> version = varnish-6.0.11 revision a3bc025c2df28e4a76e10c2c41217c9864e9963b, 
> vrt api = 7.1
> ident = 
> Linux,4.14.290-217.505.amzn2.aarch64,aarch64,-junix,-smalloc,-sfile,-sdefault,-hcritbit,epoll
> now = 3120063.050288 (mono), 1669133426.954528 (real)
> Backtrace:
>   0x43f5b4: /usr/sbin/varnishd() [0x43f5b4]
>   0x4a968c: /usr/sbin/varnishd(VAS_Fail+0x54) [0x4a968c]
>   0x43a350: /usr/sbin/varnishd() [0x43a350]
>   0x90fd7668: linux-vdso.so.1(__kernel_rt_sigreturn+0) [0x90fd7668]
>   0x90c8f540: /lib64/libc.so.6(memset+0x100) [0x90c8f540]
>   0x4b88f4: /usr/sbin/varnishd(deflateReset+0x48) [0x4b88f4]
>   0x42e924: /usr/sbin/varnishd(VGZ_NewGzip+0x88) [0x42e924]
>   0x42ebc0: /usr/sbin/varnishd() [0x42ebc0]
>   0x42df28: /usr/sbin/varnishd(VFP_Open+0x98) [0x42df28]
>   0x42b950: /usr/sbin/varnishd() [0x42b950]
> thread = (cache-worker)
> thr.req = (nil) {
> },
> thr.busyobj = 0x3d040020 {
>   end = 0x3d05,
>   retries = 0,
>   sp = 0x3c241a20 {
> fd = 28, vxid = 32945,
> t_open = 1669133426.953970,
> t_idle = 1669133426.953970,
> ws = 0x3c241a60 {
>   id = \"ses\",
>   {s, f, r, e} = {0x3c241aa0, +96, (nil), +344},
> },
> transport = HTTP/1 {
>   state = HTTP1::Proc
> }
> client = 172.31.47.149 50812 :80,
>   },
>   worker = 0x832326c8 {
> ws = 0x83232770 {
>   id = \"wrk\",
>   {s, f, r, e} = {0x83231e00, +0, (nil), +2040},
> },
> VCL::method = BACKEND_RESPONSE,
> VCL::return = deliver,
> VCL::methods = {BACKEND_FETCH, BACKEND_RESPONSE},
>   },
>   vfc = 0x3d041f30 {
> failed = 0,
> req = 0x3d040640,
> resp = 0x3d040ab8,
> wrk = 0x832326c8,
> oc = 0x3b250640,
> filters = {
>   gzip = 0x3d04a740 {
> priv1 = (nil),
> priv2 = 0,
> closed = 0
>   },
>   V1F_STRAIGHT = 0x3d04a660 {
> priv1 = 0x3d042600,
> priv2 = 674132,
> closed = 0
>   },
> },
> obj_flags = 0x0,
>   },
>   ws = 0x3d040058 {
> id = \"bo\",
> {s, f, r, e} = {0x3d041f78, +34832, (nil), +57472},
>   },
>   ws_bo = 0x3d0425e8,
>   http[bereq] = 0x3d040640 {
> ws = 0x3d040058 {
>   [Already dumped, see above]
> },
> hdrs {
>   \"GET\",
>   \"/build/admin/css/oro.css?v=7c08a284\",
>   \"HTTP/1.1\",
>   \"X-Forwarded-Proto: https\",
>   \"X-Forwarded-Port: 443\",
>   \"Host: london.paperstage.doverstreetmarket.com\",
>   \"X-Amzn-Trace-Id: Root=1-637cf472-4faa5da7246eda4f0a477811\",
>   \"User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) 
> AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36\",
>   \"X-Amz-Cf-Id: 
> WVvRZ7m_mF9MaQ88b0rJd_vEKVJo5YqQr6Povp9ODasHVw2FQop36w==\",
>   \"Via: 3.0 c1685d59e35fdb859ab8a1f97feb5652.cloudfront.net 
> (CloudFront)\",
>   \"Cookie: BAPID=mdmh5umu6v6l1qejvo0lbhse54; 
> __utma=58316727.170235450.1647004079.1669126271.1669131591.29; 
> __utmb=58316727.22.10.1669131591; __utmc=58316727; __utmt=1; __utmt_b=1; 
> __utmz=58316727.1665659005.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); 
> _csrf=5Inqz08yS1ACLDnJDFG8xoavis88G4BgHx4WpD6RuQM; 
> _ga=GA1.2.170235450.1647004079; 
> _ga_QNDMHZ1SBD=GS1.1.1667387720.14.1.1667387720.60.0.0; 
> _gcl_au=1.1.255577917.1662482672; _landing_page=%2F; _orig_referrer=; 
> _shopify_y=1531218a-8c0c-4153-86e1-37e06cc3a103; 
> _y=1531218a-8c0c-4153-86e1-37e06cc3a103; 
> df_preview=%7B%22preview_id%22%3A%221755%22%2C%22preview_type%22%3A%22version%22%2C%22preview_code%22%3A%22d82194c2b3c4f22164ca71d225673ff6%22%7D\",
>   \"Accept-Language: en-US,en;q=0.9,fr;q=0.8\",
>   \"Accept: text/css,*/*;q=0.1\",
>   \"Referer: 
> https://london.paperstage.doverstreetmarket.com/doverstreetadmin/pages/settings/1755/en\;,
>   \"Accept-Encoding: gzip, deflate, br\",
>   \"pragma: no-cache\",
>   \"cache-control: no-cache\",
>   \"sec-gpc: 1\",
>   \"sec-fetch-site: same-origin\",
>   \"sec-fetch-mode: no-cors\",
>   \"sec-fetch-dest: style\",
>   \"X-Forwarded-For: 32.217.249.1, 15.158.35.113, 172.31.47.149\",
>   \"X-Varnish: 32947\",
> },
>   },
>   http[beresp] = 0x3d040ab8 {
> ws = 0x3d040058 {
>   [Already dumped, see above]
> },
> hdrs {
>   

Re: HTTP error code in case of workspace overflow

2022-08-25 Thread Dridi Boukelmoune
Hi,

On Wed, Aug 24, 2022 at 9:26 AM  wrote:
>
> Hello all,
>
> I was looking at this Varnish troubleshooting guide, section "Not enough 
> workspace memory".
> There is info that Varnish should return 503 HTTP code when there is overflow:
>
> When a task consumes more memory than allowed in one of the specific 
> workspace contexts, the transaction is aborted, and an HTTP 503 response is 
> returned. When a workspace_session overflow occurs, the connection will be 
> closed.
>
> Additionally I noticed this PR where it is mentioned that Varnish should
> return 500 in case of overflow, which kinda makes sense ...
>
> I tracked this change to current master and there is still 500 code:
> - 
> https://github.com/varnishcache/varnish-cache/blob/master/bin/varnishd/http/cache_http1_deliver.c#L130
> - 
> https://github.com/varnishcache/varnish-cache/blob/master/bin/varnishd/http1/cache_http1_deliver.c#L77
>
> The question obviously is if the DOC talks about something different or the 
> code should be adjusted.

Excellent detective work. The documentation should be adjusted.

I reported your finding to the developer portal team and we will
figure out a communication channel to report issues directly related
to the portal itself.

Thanks,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish attempting to serve disconnected clients

2022-07-28 Thread Dridi Boukelmoune
On Thu, Jul 28, 2022 at 12:07 PM Lee Hambley  wrote:
>
> Dear List,
>
> I'm debugging a problem with our Varnish, and our QA folks found an 
> interesting case.
>
> Ultimately this breadcrumb trail was discovered looking into our varnishes 
> having an enormous number of open connections in CLOSE_WAIT status, in a way 
> that doesn't appear to be ideal connection reuse, rather dead connections 
> that we need to close out; non the less we're focused on a slightly more 
> specific sub-issue of that issue right now:
>
> We have noticed that if a request is eligible to be coalesced, but the 
> impatient client disconnects before the request is served, Varnish continues 
> to try and serve that request by going to the backend even after the client 
> is disconnected.

Hi Lee,

Reading up to this point I'm convinced that you have a pretty good
understanding of the problem (but I will read the rest, don't worry).

This is something we fixed a while ago in Varnish Enterprise, but it
took several painful attempts to get it right. While this problem may
look straightforward, it comes with a flurry of details.

The basic idea is to implement a way to "walk away" from the waiting
list, because in this context "there is no rush". The problem is that
your client could be stuck in a variety of places, like for example 3
levels down a parallel ESI tree over an HTTP/2 connection. Another
problem is that you short-circuit the normal delivery path, so you
need to make sure that the client task and session are accurately torn
down.

This is something we wanted to study more before submitting, in
case we could come up with a less complicated solution, but either
way we also need time to work on porting this nontrivial patch.

> I suspect in our case, we can disable request coalescing, but I didn't want 
> to miss an opportunity to report a possible bug, or learn something about a 
> corner of Varnish I don't know well... here's our setup:

Thanks a lot, very much appreciated!

> - With Varnish 6.6.1
> - Given a toy python script backend which answers 200OK to the health check, 
> but answers after 10s to the other requests with an `HTTP 500` error; [source 
> linked below]
> - Given the following config [below]
>   - started with `/usr/local/sbin/varnishd -n /usr/local/var/varnish -F -f 
> $PWD/foo.vcl -a test=:21601,HTTP`
> - When running `for i in $(seq 10); do curl -m 1 localhost:21601/ &; done` 
> (ampersand for background is important)
> - Varnish makes 1 request to the backend, coalescing the others
> - Clients all disconnect thanks to `curl -m 1` (`--max-time`)  (or `0.1`, no 
> difference, naturally)
> - First request completed with `HTTP 500`, Varnish continues to retry 
> requests for disconnected clients. (logs without health checks below)
>
> In the real set-up Varnish actually only handles the next request, then the 
> next, then the next one each 10 seconds, I didn't take the time to reproduce 
> that in this set-up as I believe it's a bug that Varnish continues to do work 
> for disconnected clients. I guess in my toy we benefit from hit-for-pass, and 
> in our real world setup that's not true.

Yes, either hit-for-pass (return(pass)), hit-for-miss
(beresp.uncacheable) or a very small TTL.

> I can somehow imagine this as a feature (populate the cache even though the 
> client went away) but queueing hundreds or thousands of requests, and 
> nibbling away at them one-by-one even after clients are long-since hung up is 
> causing resource exhaustion for us; we can tune the configs significantly now 
> that we know the issue, but we'd love to get some opinionated feedback on 
> what would be an idiomatic approach to this.

It's the lack of a (hitpass, hitmiss or regular) object that causes
waiting list serialization; we don't need to implement a new feature
in that regard. A sketch of the hit-for-miss variant follows.
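
For example, a minimal sketch that turns 5xx responses into short-lived
hit-for-miss objects instead of leaving nothing behind (the thresholds
are illustrative):

    sub vcl_backend_response {
        if (beresp.status >= 500) {
            # create a hit-for-miss object so that subsequent requests
            # bypass the waiting list instead of serializing behind it
            set beresp.ttl = 1s;
            set beresp.uncacheable = true;
        }
    }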

> - Are we doing something wrong?

Probably a zero TTL, otherwise you are running into the waiting list
nepotism misbehavior, which is not your fault.

> - Should varnish still go to the backend to serve disconnected clients?

No, that would be the walkaway feature.

> - Is this a bug, should I report it somewhere more formally and attach the 
> repro case a bit more diligently?

We don't need a bug report, and I believe we have at least one
reproducer in the enterprise test suite.


> Warm regards everyone, thanks for Varnish, an active list, and active support 
> on StackOverflow and such.
>
> [config]
> vcl 4.0;
> backend foo {
>     .host = "127.0.0.1";
>     .port = "3100";
>     .probe = {
>         .url = "/healthz";
>         .interval = 5s;
>         .timeout = 1s;
>         .window = 5;
>         .threshold = 3;
>     }
> }
>
> sub vcl_recv {
>     return (hash);
> }
>
> [toy python server logs]
> python3 ./server.py
> socket binded to port 3100
> 2022-07-28 13:45:05.589997 socket is listening
> 2022-07-28 13:45:17.336371 from client: b'GET / HTTP/1.1\r\nHost: 
> localhost:21601\r\nUser-Agent: curl/7.68.0\r\nAccept: */*\r\nX-Forwarded-For: 
> 

Re: How to handle errors when using esi tags?

2022-07-13 Thread Dridi Boukelmoune
On Tue, Jul 12, 2022 at 9:50 PM Felipe Santiago
 wrote:
>
> Hi,
>
> I've been trying to use <esi:include src="/foo" alt="/bar"/> to execute a 
> subrequest to /bar in case /foo fails, however I didn't manage to make it 
> work. Do you support the alt attribute? If my backend returns a 4xx or 5xx, 
> is that considered an error?

Hi,

We don't support alternate URLs for ESI includes, and actually I'm
wondering how we could do it.

A 4xx or 5xx response is considered a response, so unless you abandon
the backend fetch, such a backend response will be included in the
client response.

Since Varnish 7.1, we abort ESI delivery on include error:

https://varnish-cache.org/docs/7.1/whats-new/changes-7.1.html#other-changes-in-varnishd

> I also found in the documentation some references on how to do that using the 
> esi:remove, but I didn't have success either.  Any suggestions?

The <esi:remove> tag serves a completely different purpose.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: health check varnish

2021-12-30 Thread Dridi Boukelmoune
On Thu, Dec 30, 2021 at 8:17 AM Johan Hendriks  wrote:
>
> Hello all, first of all, thanks for a year full of varnish happiness
> like the developers page and the parallel ESI for the varnish-cache edition.
>
> We use the following in the vcl for our varnish health check from our
> haproxy loadbalancers
>
> if (req.url == "/varnish_status") {
>     if (!std.healthy(req.backend_hint)) {
>         return (synth(500, "Varnish: unhealthy backend."));
>     } else {
>         return (synth(200, "Varnish: All good!"));
>     }
> }
>
> And this works fine if we run varnish on the webserver itself with only
> one backend.
> But if we use the director with multiple backends we get a 500 error if
> one of the backend servers is offline, even if the director itself has
> enough servers left and is still marked as healthy.
>
> Is there a way to check if the director itself is down?
>
> like
> Backend name      Admin   Probe   Health    Last change
> boot.web01        probe   5/5     healthy   Mon, 27 Dec 2021 08:42:58 GMT
> boot.web02        probe   5/5     healthy   Mon, 27 Dec 2021 08:42:58 GMT
> boot.web03        probe   5/5     healthy   Mon, 27 Dec 2021 08:42:58 GMT
> boot.webcluster   probe   2/2     healthy   Mon, 27 Dec 2021 08:42:58 GMT
>
> return a 200 as all is good
>
> boot.web01        probe   2/5     sick      Wed, 29 Dec 2021 18:33:41 GMT
> boot.web02        probe   4/5     healthy   Wed, 29 Dec 2021 18:33:41 GMT
> boot.web03        probe   4/5     healthy   Wed, 29 Dec 2021 18:33:41 GMT
> boot.webcluster   probe   1/2     healthy   Wed, 29 Dec 2021 18:33:41 GMT
>
> still return a 200 as we still have a healthy webcluster backend.
>
> Backend name      Admin   Probe   Health    Last change
> boot.web01        probe   2/5     sick      Wed, 29 Dec 2021 18:34:40 GMT
> boot.web02        probe   2/5     sick      Wed, 29 Dec 2021 18:34:40 GMT
> boot.web03        probe   2/5     sick      Wed, 29 Dec 2021 18:34:40 GMT
> boot.webcluster   probe   0/2     sick      Wed, 29 Dec 2021 18:34:40 GMT
>
> return the 500 error code as our webcluster backend is down.
>
> So can we just get the health status of webcluster in this case?

If you assign webcluster.backend() to req.backend_hint before
performing the health check, it should work fine.
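
A minimal sketch (assuming the director object is named webcluster and
vmod_std is imported):

    sub vcl_recv {
        if (req.url == "/varnish_status") {
            set req.backend_hint = webcluster.backend();
            if (!std.healthy(req.backend_hint)) {
                return (synth(500, "Varnish: unhealthy backend."));
            }
            return (synth(200, "Varnish: All good!"));
        }
    }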

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish returning synthetic 500 error even though it has stale content it should serve. But only seems to happen during/after a burst of traffic

2021-12-20 Thread Dridi Boukelmoune
On Fri, Dec 17, 2021 at 4:03 PM Marco Dickert - evolver group
 wrote:
>
> On 2021-12-17 15:25:31, Batanun B wrote:
> > Thanks. I have thought about that too. But I think we might want to include
> > non-error transactions as well. I mean, with the problems this post is about
> > we want to see when the cached version of the start page was generated and
> > when it was last served from cache successfully. But maybe we could have a
> > permanent logging just for the start page, regardless of http status. That
> > should hopefully reduce the logging intensity enough so that logging to disk
> > isn't effecting the Varnish performance.
>
> Well, it depends on the performance of your storage and the amount of
> req/sec on the front page, but these logs can get very huge very quickly.
> I'd suggest to
> determine the correct delivery of the front page via an external monitoring
> (e.g. icinga2 or a simple script). As far as I understand, you don't need to
> know the exact request, but more of a rough point in time of when the requests
> start failing. So a monitoring script which curls every minute should be
> sufficient and causes a lot less trouble.
>
> > One thing though... If you log all "status: 500+" transactions to disk,
> > isn't there a risk that your logging might exacerbate a situation where
> > your site is overwhelmed with traffic? Where a large load causes your
> > backends to start failing, and that triggers intense logging of those
> > erroneous transactions which might reduce the performance of Varnish,
> > causing more timeouts etc which cause more logging and so on...
>
> Indeed there is a risk of self-reinforcing effects, but it didn't happen yet.
> We also do not plan to log 500s forever, but only till our problem is solved,
> which is an error in varnish's memory handling. At the moment, our most
> concerning 500s are caused by varnish itself, stating "Could not get storage",
> when the configured memory limit is reached.

If you get a surge of 5XX responses from either Varnish or the
backend, you can also rate-limit logs to the disk:

https://varnish-cache.org/docs/6.0/reference/varnishlog.html

See the -R option.
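
For example (a sketch; adjust the rate and query to taste):

    varnishlog -g request -q 'RespStatus >= 500 or BerespStatus >= 500' \
        -R 100/10s -w /var/log/varnish/5xx.log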

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Best practice for caching scenario with different backend servers but same content

2021-10-06 Thread Dridi Boukelmoune
On Mon, Aug 16, 2021 at 1:34 PM Hamidreza Hosseini
 wrote:
>
> > In that case, hashing the URL only would prevent you from adding new
> > domains through your Varnish server. It won't hurt if you know you
> > will only ever have one domain to deal with, but hashing the host will
> > also not hurt as long as you normalize it to a unique value.
>
> Hi,
> Let me elaborate my architecture more:
> I have some backend servers to serve hls fragments for video live stream,e.g:
>
> ```
>
> hls_backend_01
> hls_backend_02
> hls_backend_03
> hls_backend_04
> hls_backend_05
> hls_backend_06
> hls_backend_07
> hls_backend_08
> hls_backend_09
> hls_backend_10
>
> ```
>
> There is same content on all hls backend servers, there are 5 varnish in 
> front of them for caching
> Now If I use round-robin director on Varnishes, because varnish would cache " 
> req.http.host + req.url ", so for the same content but from different 
> backends it would cache double! for example:
> if varnish for the first request and "test.ts" file goes to "hls_backend_01"  
> backend server, would cache it and
> for the next request from other clients because it is using round-robin 
> director
> it goes to "hls_backend_02" and would cache the same file again due to 
> different "req.http.host"
> So now I have a solution to use Shard director based on "key=req.url" instead 
> of round robin
> another way is to use round robin but adjusting the hash vcl to something 
> like below:
>
> ```
>
> sub vcl_hash {
>     hash_data(req.url);
>     return (lookup);
> }
>
> ```
>
> In this way varnish just hashes "req.url", not "req.http.host".
> So, Varnish would cache the content based on the content's uniqueness, not
> based on the difference between backends.
> 1. At first, I asked how I can normalize it, Is it possible at all according 
> to what I said!?
> Would you please explain it more with an example?

In this case I think you are confusing "req.http.host" (host header)
with the backend host name.

For example, if you reach one of your 5 Varnish servers via
www.example.com that's what clients will use and that's what
req.http.host will contain.

Your backends FQDNs could be something like this:

- hls01.internal.example.com
- hls02.internal.example.com
- hls03.internal.example.com
- ...
- hls10.internal.example.com

As the example suggests, these domains should not be directly reached
by clients if your goal is to proxy them with Varnish. Those internal
FQDNs should have no effect on the cache key populated with
hash_data(...).

> 2. You give an example about other domains, In this case I do not understand 
> what it has to do with the domain

Let's say your clients can reach either example.com or www.example.com
for the same service, or tomorrow you add more than your HLS service
behind Varnish; either way you may very well receive multiple host
headers, as sketched below.
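
A minimal sketch of host normalization in that scenario (the domain
names are examples):

    sub vcl_recv {
        # collapse aliases to a single canonical host for consistent hashing
        if (req.http.host == "example.com") {
            set req.http.host = "www.example.com";
        }
    }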

> 3. Maybe I'm thinking the wrong way, because if varnish hashes the data based
> on req.url ('hash_data(req.url)') it shouldn't cache the same content from
> different backends again!
> for example my request is:

In this case you are "hashing" the client request with hash_data(...)
and it has nothing to do with backend selection. The fallback director
will precisely not do any kind of traffic balancing since its purpose
is to always select the first healthy backend in the insertion order.
The shard director may rely on the request hash or other criteria as
we already covered.

> http://varnish-01:/hls/test.ts
> for first request it goes to "hls_backend_01" backend and cache it and for 
> next request it goes to "hls_backend_02" backend,
> so for each request it caches it again because backends are different?

All subsequent requests to http://varnish-01:/hls/test.ts should go to
the same hls_backend_01 backend with the shard director, as long as
there are no other criteria than the ones we already discussed. If you
want consistency across all your Varnish servers, you should configure
your shard directors identically, with the backends added in the same
order (unlike your initial VCL example using the fallback director).

Best,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Difference between adjusting set req backend in vcl_recv and vcl_backend_fetch

2021-10-05 Thread Dridi Boukelmoune
On Mon, Sep 6, 2021 at 2:22 PM Hamidreza Hosseini
 wrote:
>
> Hi,
> This is part of my varnish configuration file for sharding:
>
> ```
> cat /etc/varnish/default.vcl
>
> vcl 4.1;
>
> import directors;
>
>
> probe myprobe {
> .request =
>   "HEAD /healthcheck.php HTTP/1.1"
>   "Connection: close"
>   "User-Agent: Varnish Health Probe";
> .timeout = 1s;
> .interval = 5s;
> .window = 5;
> .threshold = 3;
> }
>
>
> backend b-01 { .host = "b-01"; .port = "80"; .probe = myprobe; }
> backend b-02 { .host = "b-02"; .port = "80"; .probe = myprobe; }
> backend b-03 { .host = "b-03"; .port = "80"; .probe = myprobe; }
>
>
> sub vcl_init {
>
>
>   new hls_cluster = directors.shard();
> hls_cluster.add_backend(b-01);
> hls_cluster.add_backend(b-02);
> hls_cluster.add_backend(b-03);
>
>
>   new p = directors.shard_param();
>
> hls_cluster.set_rampup(30s);
> #hls_cluster.set_warmup(0.5);
>
> hls_cluster.reconfigure();
> hls_cluster.associate(p.use());
>
> }
>
>
>
> acl purge {
> "localhost";
> }
>
>
> sub vcl_recv {
>
>set req.backend_hint = hls_cluster.backend();
>
> }
>
>
> sub vcl_backend_fetch {
>
>   p.set(by=KEY, key=hls_cluster.key(bereq.url));
>   set bereq.backend = hls_cluster.backend(resolve=LAZY, healthy=CHOSEN);
>
> }
>
> ```
> 1. there are two set backend in this config, one is on vcl_recv:
> "set req.backend_hint = hls_cluster.backend();"
> and one in vcl_backend_fetch:
> "set bereq.backend = hls_cluster.backend(resolve=LAZY, healthy=CHOSEN);"
> should I remove the set in vcl_recv? Because I think if I adjust it, all
> requests will go through this backend list and configs like healthy=CHOSEN in
> vcl_backend_fetch wouldn't be applied! Am I right?

bereq.backend is initialized from req.backend_hint when a backend
fetch is triggered. Setting bereq.backend will simply override
anything done in vcl_recv.

> 2.Actually what is difference between vcl_backend_fetch and vcl_recv?

In vcl_recv you are processing a client request that was just received.

In vcl_backend_fetch you are preparing a bereq (backend request)
derived from req (client request) before fetching the resource from
the backend.

https://varnish-cache.org/docs/6.0/users-guide/vcl-built-in-subs.html

> 3.should I remove "set req.backend_hint" from vcl_recv?

If the answer to "do I use the backend selection to make decisions in
client subroutines?" is no, then you can remove it from vcl_recv.

The shard director has task-specific settings, so some things you may
configure in vcl_recv would not apply to vcl_backend_fetch, so if the
answer to the question above was yes, you would probably need both.

Best,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish only resolve the ip on startup

2021-10-04 Thread Dridi Boukelmoune
On Mon, Oct 4, 2021 at 5:15 PM Guillaume Quintard
 wrote:
>
> On Mon, Oct 4, 2021 at 9:49 AM Dridi Boukelmoune  wrote:
>>
>> One problem I have (and that you should be familiar with) is that
>> portable interfaces we have that *respect* the system configuration
>> (hosts file, nsswitch configuration etc) are not providing enough
>> information. For example it becomes cumbersome to resolve SRV records
>> or even get the TTL of individual records for a DNS resolution in a
>> *portable* fashion.
>>
>> When you put it like this, it sounds simple enough (dare I say
>> simplistic?) but what I see is a sizeable can of worms.
>
>
> That sounds like a bit of a strawman to me. getaddrinfo and connect are 
> standard, and that's about all we should need. Applications are supposed (in 
> general) to just use whatever gai give them. We can call them every time we 
> need a new connection so we don't worry about TTL, and we just disregard SRV 
> records.
> The vast majority of users don't need SRV (yet?), and don't expect the 
> application to optimize DNS calls, but they do complain that giving a 
> hostname to VCL doesn't work.

The "portable" interface I was referring to includes getaddrinfo, and
I guess your suggestion would be to always resolve and leave A/AAAA
records caching to your stub and/or recursive resolver.

Fair enough, simplicity.

That still doesn't seal the can of worms: once there is more than one
address per family or addresses change, it's our connection and
pooling models that need to be revisited: how many different addresses
to try connecting to, how to retry, and how we account for all of that
(stats for example). Again, it's a bit more complex than just saying
"change the connect callback to one that combines resolve+connect".

I'm not against a holistic management of connections, there are
actually more aspects that I think should be at the core of our
connection management like certificates attached to a session (and in
particular their domains) once we have TLS support.

I just think it needs to be better defined and not give the impression
that messing about a new set of backend callbacks that will
systematically resolve endpoints is as simple as it gets.

Cheers
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish only resolve the ip on startup

2021-10-04 Thread Dridi Boukelmoune
On Mon, Oct 4, 2021 at 5:29 PM Justin Lloyd  wrote:
>
> I’m definitely watching this topic, considering I’m planning on moving to 
> Varnish Enterprise next year and putting a cluster in ECS, if not Fargate, so 
> being able to easily handle dynamic IPs would be extremely helpful.

Like I implied in my initial response you have this ability out of the
box with Varnish Enterprise, and with Varnish Cache there are
third-party VMODs.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish only resolve the ip on startup

2021-10-04 Thread Dridi Boukelmoune
On Mon, Oct 4, 2021 at 3:45 PM Guillaume Quintard
 wrote:
>
> I think it makes sense for Varnish to natively support backends changing 
> their IPs. I do get the performance argument but now that there is a 
> cloud/container market and that Varnish has proven to be useful in it, this 
> basic functionality should be brought in.

I would assume the primary argument was simplicity, not performance,
but I wasn't around. One could argue it has turned into simplism in
today's cloudy cloud world.

> Would it be acceptable to add a "host_string" to vrt_endpoint and fill it if 
> the VCL backend isn't an IP, then, we can add another cp_methods to 
> cache_conn_pool.c to use it? This way IPs are still super fast, and hostnames 
> become actually useful and a bit less confusing?

One problem I have (and that you should be familiar with) is that
portable interfaces we have that *respect* the system configuration
(hosts file, nsswitch configuration etc) are not providing enough
information. For example it becomes cumbersome to resolve SRV records
or even get the TTL of individual records for a DNS resolution in a
*portable* fashion.

When you put it like this, it sounds simple enough (dare I say
simplistic?) but what I see is a sizeable can of worms.

I do think we could do something about it, I don't know what would be
satisfying.

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: varnishadm exit code 200

2021-10-03 Thread Dridi Boukelmoune
On Sat, Oct 2, 2021 at 10:24 PM Martynas Jusevičius
 wrote:
>
> Actually it does not seem to be the exit code. I tried checking and it
> looks like the exit code is 0:
>
> root@dc17c642d39a:/etc/varnish# varnishadm "ban req.url ~ /"
> 200
>
> root@dc17c642d39a:/etc/varnish# test $? -eq 0 || echo "Error"
> root@dc17c642d39a:/etc/varnish#
>
> So where is that "200" coming from?

You shouldn't see a 200, except for this bug fixed in 7.0:

https://github.com/varnishcache/varnish-cache/issues/3687

What version of Varnish are you running?

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Serve stale content if backend is healthy but "not too healthy"

2021-09-21 Thread Dridi Boukelmoune
On Tue, Sep 21, 2021 at 1:33 PM Luca Gervasi  wrote:
>
> Hello everyone,
>
> We have a backend that actually proxies different services (mangling
> the original response). Sometimes one of those backends is not
> available and the general response goes from 200 to a 50x.
> Is there a way to serve a stale (valid) content (if present) for a
> request that comes from a backend in a healthy state?
>
> I was thinking about something like this:
> sub backend_fetch {
>   if (beresp.status >= 500) {
> return_a_stale;
>   }
> }
>
> From the state machine
> (https://varnish-cache.org/docs/6.0/reference/states.html) it seems
> that I'm not allowed to return(hash) nor switch to an unhealthy
> backend (that I keep configured) to reach what I want.
>
> Please forgive me if do exists a facility to reach my goal and feel
> free to direct me to the right document.
>
> Ah. Varnish 6.x.

Hi Luca,

Varnish Cache does not have this feature; you should be able to do
that with Varnish Enterprise instead. What you are looking for is
stale-if-error, and you may find some implementations using VCL, but I
can't vouch for any, not having experience with them.

https://docs.varnish-software.com/varnish-cache-plus/vmods/stale/#description
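For reference, the VCL pattern such implementations usually build on is
to abandon a failing background fetch so that the stale object keeps
being served for as long as it is in grace. A minimal sketch, assuming
beresp.grace was set generously when the object was first fetched:

sub vcl_backend_response {
    if (beresp.status >= 500 && bereq.is_bgfetch) {
        # give up silently; the stale object stays in the cache
        return (abandon);
    }
}

Again, I can't vouch for this in production, it is only the general
shape of the approach.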

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish Cache Runaway RAM Usage!

2021-09-07 Thread Dridi Boukelmoune
On Tue, Sep 7, 2021 at 3:14 PM John Kormanec  wrote:
>
> Hello,
>
> We have been using the open source (free) version of Varnish Cache for my 
> company's website for several months but are repeatedly running into an issue 
> where Varnish's memory usage increases until all of the server's available 
> memory is consumed, rendering the server unresponsive. This continues to 
> happen despite having Varnish's malloc and transient malloc storage settings 
> dialed way down (currently set at 50% combined of available RAM). Here's 
> an excerpt from our storage backend configuration showing these settings. 
> I've attached our full backend storage settings to this message for review.
>
> # Configure Varnish listening port, default "vcl" file location, memory 
> allocation, etc:
> ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,14G 
> -s Transient=malloc,512M -T 0.0.0.0:2000
> ExecReload=/usr/sbin/varnishreload
>
> We're running version 6.0.2 of Varnish Cache on a CentOS 8 virtual 
> server. The server has 30 GB of RAM, 29.3 GB of which are available for 
> use.

You should upgrade to 6.0.8 to get the latest bug fixes for the 6.0 branch.

> I've read a number of technical posts from other Varnish users complaining 
> about the same issue, including this 
> one: https://stackoverflow.com/questions/65450044/varnish-6-lts-w-centos-8-not-respecting-memory-limits.
>  Unfortunately, I've not seen any published resolution for this problem, but 
> several tuning articles I've read talk about adjusting the server's memory 
> allocation manager to more aggressively clean up fragmentation, expired 
> objects, etc. Here's one such post that talks about adjusting the 
> "jemalloc" memory allocation manager settings as a possible fix: 
> https://info.varnish-software.com/blog/understanding-varnish-cache-memory-usage.
>  I searched our CentOS 8 server to see what packages are installed but 
> jemalloc is not in the list.

I think you would have better results with jemalloc 3.6 that better
fits Varnish workloads, but EPEL8 ships 5.2.1 currently.

One thing to consider besides the jemalloc overhead is that you are
only limiting cache storage to 14.5GB and there will be a memory
footprint for everything else.

> I'm still relatively new to Varnish Cache and am not sure what the next steps 
> should be for troubleshooting & identifying the issue. FWIW, I reached out 
> to the folks at Varnish Cache asking if they could offer any suggestions, but 
> they said we'd have to upgrade to their Enterprise version, which uses 
> something called a "massive storage engine" that would eliminate this 
> problem. Not sure what the differences are between the paid / free 
> versions, but I'm hoping to find a solution to this problem here before 
> having to upgrade.
>
> Thanks in advance for any assistance the community can provide.

Full disclosure, I work for Varnish Software, but what you were told
was correct.

The Massive Storage Engine brings a feature called Memory Governor
that will allow your Varnish instance to pull all the levers available
to decrease its footprint back to 15GB whenever it crosses the
threshold. You could probably also expect less jemalloc overhead out
of the box and have a decent memory target for 15GB of storage, for
example 20GB.

With Varnish Cache alone it is more difficult to plan for memory usage.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Best practice for caching scenario with different backend servers but same content

2021-08-15 Thread Dridi Boukelmoune
On Sat, Aug 14, 2021 at 10:54 AM Hamidreza Hosseini
 wrote:
>
> Hi,
> Thanks to you and all the varnish team for the answers that helped me a lot,
> I read the default varnish cache configuration again:
> https://github.com/varnishcache/varnish-cache/blob/6.0/bin/varnishd/builtin.vcl
> and find out vcl_hash as follow:
>
> ```
> sub vcl_hash {
> hash_data(req.url);
> if (req.http.host) {
> hash_data(req.http.host);
> } else {
> hash_data(server.ip);
> }
> return (lookup);
> }
>
> ```
> So, if I change vcl_hash like following , would it be enough for my 
> purpose?(I mean caching the same object from different backends just once 
> with roundrobin directive !:)
>
> ```
>
> sub vcl_hash {
> hash_data(req.url);
> return (lookup);
> }
>
> ```
>
> By this config I told varnish to cache the content based only on 'req.url', 
> not 'req.http.host'; therefore, with the same content but different backends, 
> varnish would cache it just once (if I use the round robin directive instead 
> of the shard directive). Is this true? What bad consequences might this 
> configuration cause in the future?

In this case req.http.host usually refers to the domain end users
resolve to find your varnish server (or other hops in front of it). It
is usually the same for every client, let's take www.myapp.com as an
example. If your varnish server is in front of multiple services, you
should be handling the different host headers explicitly. For example
if you have exactly two domains you should normalize them to some
canonical form. Using the same example domain that could be
www.myapp.com and static.myapp.com for instance.

In that case hashing the URL only would prevent you from adding new
domains through your Varnish server. It won't hurt if you know you
will only ever have one domain to deal with, but hashing the host will
also not hurt as long as you normalize it to a unique value.
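A minimal normalization sketch, using the example domains above (the
exact rules depend on your setup):

import std;

sub vcl_recv {
    # lowercase and strip any port so each domain hashes to one value
    set req.http.host = std.tolower(regsub(req.http.host, ":[0-9]+$", ""));
    if (req.http.host == "myapp.com") {
        set req.http.host = "www.myapp.com";
    }
}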

You are correct that by default hashing the request appropriately will
help the shard director do the right thing out of the box. I remember
however that you only wanted to hash a subset of the URL for video
segments, so hashing the URL as-is won't provide the behavior you are
looking for.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Upgrading to 6.6 can't reload

2021-08-02 Thread Dridi Boukelmoune
>> Can you try removing the -S option from varnishd?
>>
>> Since you only listen to the CLI on localhost, there's likely no
>> remote access, so leaving the secret out will make varnishd generate a
>> random one. Basically, if you want to use varnishadm you need local
>> root privileges, same as your current setup.
>
>
> I tried this and it makes no difference, I think the fundamental issue is 
> that calling varnishadm without args seems (regardless the args I pass to 
> varnishd) to end in the message "No -T in shared memory"  if run from root.

This is very strange, I would probably need to somehow peek at the
working directory to figure out what's happening. The only way to see
no -T in shared memory is to explicitly ask for it with `-T none`,
which clearly you didn't do.

> If I run from another user, I do get the message "could not get hold of 
> varnishd, is it running?"

This on the other hand sounds like basic unix permissions coming
short, as one would expect.

> I guess I could update the reload script to pass the -T and -S args, but this 
> seems wrong; I'm just concerned there is a general issue on focal. Is anyone 
> else running 6.6 on focal?

The varnishreload script is meant to focus on the system service
integration use case: you have a local Varnish instance that can also
be operated locally. So we shouldn't need to add options to specify -T
or -S, we should find them on the running instance.

You could use -T none if you have an alternative mode of operations.
For example varnishd -d puts you in debug mode and stdin/stdout is
used for the CLI. The alternative would be the -M option that lets
varnishd "reverse" connect to its operator, but then you would leave
the varnishreload use case.

Not sure about focal users, but I will try to spin up a VM and see if
I can reproduce it.

> Looking at the source code in 6 and 6.6 I can't see anywhere that the -T 
> would default from and yet on 6 under bionic varnishadm as a root user just 
> works without any -T or -S flags.
>
> https://github.com/varnishcache/varnish-cache/blob/6.0/bin/varnishadm/varnishadm.c

Correct, the default behavior is to inspect the running Varnish
instance to find them.

>> A surefire way to see whether varnishlog connects to a running varnish
>> instance is to try:
>>
>> varnishlog -d -g raw
>
>
> I am running as root. If I execute this it connects but I get no output, I 
> know it is connected because when I restart the varnish process I get the  
> message, "Log abandoned (vsm)" which you always see when a new varbnishd 
> process starts. I am definitely hitting the varnish server, as I am executing 
> curl requests to localhost:80, but there is no output from varnishlog.

That seems to indicate that Varnish may have been started on a
different hostname and what you are "connecting" to is the remnant of
a dead instance.

What is the output of `ls /var/lib/varnish` ?

> I am about to spin up some more boxes, so will check to see wheter this is 
> just specific to this box or not, I did initially install 6.2 on this server 
> and varnishlog was working as expected with that.

I would recommend sticking to the 6.0 LTS series, unless you
absolutely need a feature released after 6.0 that hasn't been
back-ported to the stable branch.

https://packagecloud.io/varnishcache/varnish60lts/install

If you don't use an LTS series, I recommend always sticking to the
latest release. The 6.2 series is EOL and no longer maintained, which
may include security vulnerabilities such as VSV7.

https://varnish-cache.org/security/VSV7.html

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Creating ACLs in Varnish

2021-08-02 Thread Dridi Boukelmoune
> Is there a better approach to this in Varnish Cache? We’re also going to be 
> evaluating Varnish Enterprise, so if there’s something in VE, that would also 
> be good to know.

Hello,

There are better ways to do this, but not out of the box with Varnish
Cache. You would need something like
https://code.uplex.de/uplex-varnish/varnish-objvar/ to index ACLs by
host names.
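Out of the box you are stuck with one hard-coded ACL per host name,
along these lines (a sketch with made-up addresses and domains):

acl purgers_www { "192.0.2.0"/24; }
acl purgers_api { "198.51.100.7"; }

sub vcl_recv {
    if (req.method == "PURGE") {
        if ((req.http.host == "www.example.com" && client.ip ~ purgers_www) ||
            (req.http.host == "api.example.com" && client.ip ~ purgers_api)) {
            return (purge);
        }
        return (synth(405, "Not allowed"));
    }
}

This obviously doesn't scale past a handful of hosts, hence vmod_objvar
above or the Enterprise combination below.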

With Varnish Enterprise, you can manage this out of the box with the
combination of built-in modules vmod_aclplus and vmod_kvstore.

https://docs.varnish-software.com/varnish-cache-plus/vmods/aclplus/
https://docs.varnish-software.com/varnish-cache-plus/vmods/kvstore/

Feel free to reach out to me directly to discuss the Varnish
Enterprise solution with you.

Best Regards,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Upgrading to 6.6 can't reload

2021-06-21 Thread Dridi Boukelmoune
On Mon, Jun 21, 2021 at 6:45 AM Richard Chivers  wrote:
>
> Hi, thanks for coming back. No, the hostname didn't change. Here is the rest 
> of the file:
>
> [Unit]

> -S /etc/varnish/secret \

Can you try removing the -S option from varnishd?

Since you only listen to the CLI on localhost, there's likely no
remote access, so leaving the secret out will make varnishd generate a
random one. Basically, if you want to use varnishadm you need local
root privileges, same as your current setup.


>> > In bionic when we run a varnishadm, we don't need to pass the -T or -S 
>> > args, it just reads the secret file ( I am assuming) and connects.
>> >
>> > In focal this is not the case, I need to pass the args. e.g. varnishadm -T 
>> > localhost:6082 -S /etc/varnish/secret

The main entry point is the -n option, and then options (or lack
thereof) like -T and -S can be found from the working directory.

>> > Because of this calling /usr/sbin/varnishreload fails because it calls 
>> > varnishadm -n '' -- vcl.list and gets the response "No -T in shared memory"
>> >
>> > So my question is where does this default from, is there an ENV variable 
>> > to set, or am I just missing something?
>>
>> Did your system's hostname change between the moment when varnish was
>> started and when you attempted a reload?
>>
>> Can you share the rest of your service file? (maybe redact sensitive
>> parts if any)

I didn't have time to give more details, but the default value for -n
is the system's hostname, that's why I asked initially about the
hostname changing.

>> > Another strange issue is that varnishlog is not returning anything, it 
>> > simply hangs and doen't show anything or an error for that matter.

Are you running varnishlog with enough privileges? (most likely
belonging at least to the varnish group.)

If you omit the -d option, varnishlog will print transactions as they
complete, so if by any chance you are inspecting a test system with no
traffic that's not surprising.

A surefire way to see whether varnishlog connects to a running varnish
instance is to try:

varnishlog -d -g raw

>> > I Installed by adding the repo: deb 
>> > https://packagecloud.io/varnishcache/varnish66/ubuntu/ focal main
>> >
>> > Any ideas or help appreciated.
>> >
>> > I have gone back through change logs, but can't spot anything.
>> >
>> > Thanks
>> >
>> > Richard

Please keep the mailing list CC'd.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Understanding 503s

2021-04-15 Thread Dridi Boukelmoune
Hello,

For global timeouts:

https://varnish-cache.org/docs/trunk/reference/varnishd.html#run-time-parameters

They contain "timeout" in the name.

For VCL-defined timeouts:

https://varnish-cache.org/docs/trunk/reference/vcl-backend.html#timeout-attributes
https://varnish-cache.org/docs/trunk/reference/vcl-probe.html#attribute-timeout
https://varnish-cache.org/docs/trunk/reference/vcl-var.html
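For instance, the backend-level timeouts look like this in VCL (values
arbitrary, shown only to illustrate the attributes):

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 3s;
    .first_byte_timeout = 120s;
    .between_bytes_timeout = 60s;
}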

If the problem was a timeout, you would see this in the log:

FetchError first byte timeout

If you are using apache httpd there's a good chance that incoming
connections are closed after 5s of inactivity by default. But when
varnish pools backend connections, the default backend_idle_timeout is
60s so you may end up reusing a connection that was closed by your
httpd server. You should increase your keep-alive timeout to meet
varnish's expectations.

https://httpd.apache.org/docs/2.4/mod/core.html#keepalivetimeout

Please note that varnish has a similar timeout_idle that defaults to
5s too. So stacking varnish servers can lead to the same problem if
you rely on the defaults.

Dridi

On Thu, Apr 15, 2021 at 6:27 AM Maninder Singh  wrote:
>
> Also, backend is apache 2.4
> and running php-fpm.
>
> On Thu, 15 Apr 2021 at 11:52, Maninder Singh  wrote:
>>
>> I have that defined as 2 minutes.
>> backend default {
>> .host = "127.0.0.1";
>> .port = "8080";
>> .first_byte_timeout = 120s;
>> }
>>
>> That's why this error is puzzling.
>>
>> Any other timeouts ( connect ? ) etc to look at ?
>>
>> Also, in the above dump, how much time did it take ?
>>
>> To me it looks like it was closed within a second ?
>>
>>
>> -   BackendOpen32 reload_2021-04-13T130756.default 127.0.0.1 8080 
>> 127.0.0.1 56176
>> -   BackendStart   127.0.0.1 8080
>> -   Timestamp  Bereq: 1618461577.074387 0.281276 0.281276
>> -   FetchError http first read error: EOF
>> -   BackendClose   32 reload_2021-04-13T130756.default
>> -   Timestamp  Beresp: 1618461577.074430 0.281319 0.43
>> -   Timestamp  Error: 1618461577.074434 0.281323 0.04
>>
>> On Thu, 15 Apr 2021 at 11:42, Frands Bjerring Hansen 
>>  wrote:
>>>
>>> Look at the fetch error: http first read error: EOF
>>>
>>>
>>>
>>> Perhaps the backend closed the connection before sending any data or the 
>>> first_byte_timeout defined for the backend has been reached. The default is 
>>> 60 seconds.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> / Frands Bjerring Hansen
>>> Head of Technology, Linux
>>> Office: +45 70 40 00 00
>>> frands.han...@team.blue
>>>
>>> Operations - Linux
>>> team.blue Denmark A/S
>>> Højvangen 4
>>> 8660 Skanderborg
>>> Denmark
>>> CVR: 29412006
>>>
>>>
>>>
>>>
>>> On 15/04/2021, 08.08, "varnish-misc" 
>>>  wrote:
>>>
>>> Hi,
>>>
>>> I need some help understanding why the below 503 is happening.
>>>
>>>
>>>
>>> I am logging 503s to a separate file and then querying as below.
>>>
>>>
>>>
>>> varnishlog -q "BerespStatus eq 503" -w /whatever/file
>>>
>>> varnishlog -r /whatever/file
>>>
>>>
>>>
>>> What's going wrong here ?
>>>
>>> What should I be looking at ?
>>>
>>>
>>>
>>> Please let me know.
>>>
>>>
>>>
>>> *   << BeReq>> 45926757
>>>
>>> -   Begin  bereq 45926756 pass
>>>
>>> -   Timestamp  Start: 1618461576.793111 0.00 0.00
>>>
>>> -   BereqMethodPOST
>>>
>>> -   BereqURL   /index.php?=85=aa
>>>
>>> -   BereqProtocol  HTTP/1.1
>>>
>>> -   BereqHeaderX-Forwarded-Proto: https
>>>
>>> -   BereqHeaderX-Forwarded-Port: 443
>>>
>>> -   BereqHeaderHost: graph.com 
>>>
>>> -   BereqHeaderX-Amzn-Trace-Id: Root=1-xxx
>>>
>>> -   BereqHeaderContent-Length: 793
>>>
>>> -   BereqHeadersec-ch-ua: "Google Chrome";v="89", "Chromium";v="89", 
>>> ";Not A Brand";v="99"
>>>
>>> -   BereqHeaderaccept: application/json, text/javascript, */*; q=0.01
>>>
>>> -   BereqHeadersec-ch-ua-mobile: ?0
>>>
>>> -   BereqHeaderuser-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
>>> AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36
>>>
>>> -   BereqHeadercontent-type: application/x-www-form-urlencoded; 
>>> charset=UTF-8
>>>
>>> -   BereqHeaderorigin: https://mandy..com
>>>
>>> -   BereqHeadersec-fetch-site: same-site
>>>
>>> -   BereqHeadersec-fetch-mode: cors
>>>
>>> -   BereqHeadersec-fetch-dest: empty
>>>
>>> -   BereqHeaderreferer: https://mandy.com/
>>>
>>> -   BereqHeaderaccept-encoding: gzip, deflate, br
>>>
>>> -   BereqHeaderaccept-language: en-US,en;q=0.9
>>>
>>> -   BereqHeaderX-Forwarded-For: 103.67.157.20, 10.0.0.170
>>>
>>> -   BereqHeaderbrowser: other
>>>
>>> -   BereqHeaderserverIp: 10.0.3.237
>>>
>>> -   BereqHeaderserverId: abc01
>>>
>>> -   BereqHeaderX-Varnish: 45926757
>>>
>>> -   VCL_call   BACKEND_FETCH
>>>
>>> -   VCL_return fetch
>>>
>>> -   BackendOpen32 reload_2021-04-13T130756.default 127.0.0.1 8080 
>>> 127.0.0.1 56176
>>>
>>> -   BackendStart   127.0.0.1 8080
>>>
>>> -   

Re: Cookie VMOD keep/filter documentation issue

2021-04-06 Thread Dridi Boukelmoune
On Tue, Apr 6, 2021 at 5:41 PM Justin Lloyd  wrote:
>
> Forget I asked, that was a dumb question since it's a regex and | can be 
> used. /sigh

Yes, "|" is an option.


> Hi Dridi,
>
> Thanks for confirming that! However, is there then a way to get the effect of 
> being able to pass a regex to keep_re() and filter_re()? I just started 
> working with the cookie vmod this morning and this would be a useful feature. 
> As I understand it, Varnish 6.4 (we're on 6.5) replaced filter() and 
> filter_except() with keep() and keep_re() and that the previous functions 
> could take CSV strings.

I think there is a misunderstanding here.

We renamed filter_except() to keep() because it had better semantics,
much easier to understand. The filter_re() and keep_re() functions are
regular expression variants for basically the same operations: filtering
some cookies out, or keeping a subset of the cookies.

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Cookie VMOD keep/filter documentation issue

2021-04-06 Thread Dridi Boukelmoune
On Tue, Apr 6, 2021 at 4:11 PM Justin Lloyd  wrote:
>
> Hi all,
>
> I'm confused on whether cookie.keep_re and cookie.filter_re should work with 
> CSV strings. The documentation at 
> https://varnish-cache.org/docs/trunk/reference/vmod_cookie.html does not 
> explicitly say that those two functions can take a CSV string like the docs 
> for keep and filter do, but the example for keep_re uses such string, 
> indicating that it should be able to. However, testing with varnishtest 
> definitely shows that the _re functions do not work with CSV strings. Can 
> anyone clarify what the truth is supposed to be?

Hi,

Looking at the examples right now it is clear [1] that filter() and
keep() each take a CSV string. It is also clear [1] that filter_re()
takes a single regular expression [2]. However, the keep_re()
example is both misleading and wrong.

Thanks for bringing this to our attention!

https://github.com/varnishcache/varnish-cache/commit/606977bbfb624ead38e9c8648beac0b3906a4294

Dridi

[1] to me
[2] not to be confused with a singular expression
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish cache MISS

2021-03-10 Thread Dridi Boukelmoune
On Wed, Mar 10, 2021 at 9:09 PM Martynas Jusevičius
 wrote:
>
> Nevermind :) I realized the URLs in this log are truncated and the
> first one contains a unique ID in it...

You might also want to pay attention to backend response headers:

-   RespHeader Cache-Control: private

If this cache-control came from the backend then there's a fair chance
that it made the response uncacheable.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Download paths of varnish.tgz include random string

2021-02-15 Thread Dridi Boukelmoune
On Mon, Feb 15, 2021 at 9:12 AM Marco Lechner  wrote:
>
> Hallo,
>
> For a while now, varnish releases have not been available on a clear-URL basis like
> http://varnish-cache.org/_downloads/varnish-6.0.5.tgz
> with just the release version as the mutable part, but instead have a 
> generated random string as part of the link, like
> https://varnish-cache.org/_downloads/41841608341add28256b374dc367af04/varnish-6.0.7.tgz
> Am I just missing a redirect (well, 
> https://varnish-cache.org/_downloads/varnish-6.0.7.tgz does not work anymore)
> or do I have to look for the specific URL every time I update any build 
> jobs in my CI environment?
>
> It was very comfortable just having to add the release number to any 
> automatic build workflows

Hallo,

This is a known issue caused by a sphinx upgrade:

https://github.com/varnishcache/varnish-cache/issues/3455

It looks like it broke again, but we haven't committed to a long-term
solution yet. I'll bring it up today.

Thanks,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Traffic drop issue

2020-11-02 Thread Dridi Boukelmoune
On Mon, Nov 2, 2020 at 3:59 PM Guillaume Quintard
 wrote:
>
> What intrigues me is why the number of requests decreased. Hitting 
> thread_pool_max should make the number of requests plateau, not go down.
>
> If there's a loadbalancer that realizes requests are getting dropped, and so 
> takes traffic away, it makes sense, just wanted to make sure.
> If that's the case, the loadbalancer will probably give you some info there.

Likewise if too many backend fetches are triggered (low hit ratio?)
and pile up, they will get higher priority than client tasks.

> On top of threads_limited, sess_dropped and sess_queued are probably good 
> counters to check.

I think this doesn't apply to h2 requests.
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Traffic drop issue

2020-11-02 Thread Dridi Boukelmoune
> > Did you try to increase thread_pool_max?
> >
>
> Yes, we increased thread_pool_max and thread_pool_min after that issue.
> For the moment all work fine.

FWIW, it's in the documentation of the threads_limited counter, which
you (and Guillaume) didn't seem to notice (or remember) before I
brought this up. Correct me if I'm wrong of course, but more
importantly please let me know how we could improve the documentation
if that was not enough.

> We think about adjusting other params concerning threads like 
> thread_pool_reserve.
> Did you have advise about that?

If you already increased thread_pool_min, you mechanically increased
the size of the reserve (5% of thread_pool_min by default) so I'd
suggest you keep monitoring threads_limited and see whether you need
more workers for your workload.
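An easy way to keep an eye on it, for example:

varnishstat -1 -f MAIN.threads_limited

If that counter keeps climbing after the increase, raise
thread_pool_max further.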

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Traffic drop issue

2020-10-29 Thread Dridi Boukelmoune
On Thu, Oct 29, 2020 at 3:06 PM Sébastien EISSLER  wrote:
>
> Hello,
>
> Yes, we have a load-balancer in front; we already investigated that way and 
> didn't detect any error or configuration limitation.
> The fact that the threads_limited counter strongly increases and that a 
> restart of Varnish restores the traffic suggests an issue on the varnish side.

Did you try to increase thread_pool_max?

> In attachement a varnishstat result from 3 hours before the issue.
> We notice a high value for MAIN.sess_closed_err counter.
>
> Thanks for your help.
>
> Regards
>
> > Hi,
> >
> > Do you have a load-balancer in front of Varnish? The decrease looks like
> > the connections are being drained
> >
> > Cheers,
> >
>
> --
> Sébastien
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Modifying req.url and restart() produces ncsa to log wrong URL resource

2020-10-01 Thread Dridi Boukelmoune
On Tue, Sep 29, 2020 at 8:10 PM Marian Velez  wrote:
>
> Hiya all!
>
> I've been trying to implement an abtest engine on my varnish. The problem 
> I've come across is related to the manipulation of the original req.url 
> property.
>
> What I'm experiencing is that when setting a new "req.url" value and after 
> forcing a restart(), the URL that varnishncsa uses to log is just the one it 
> has set when it re-enters recv flow, but not the one it uses from the backend 
> or the last req.URL object either (which will even have much more sense 
> since).
>
> I've made a sample snippet of what I observe just for the sake of reproducing 
> it and to take away all the abtest logic that may be just confusing.
>
> vcl 4.1;
> backend default {
>  .host = "127.0.0.1";
>  .port = "8000";
> }
>
> sub vcl_recv {
>   if (req.restarts == 0) {
> set req.http.X-Original-URL = req.url;
> set req.url = "/this_url_does_not_exist";
> return(restart);
>   }
>   else {
> #restore original URL
> set req.url = req.http.X-Original-URL;
>   }
> }
>
> That vcl just sets an alternative req.url and triggers a flow restart.
> varnishncsa logs this entry
>
> ::1 - - [29/Sep/2020:16:49:08 -0300] "GET 
> http://localhost:6081/this_url_does_not_exist HTTP/1.1" 200 12 "-" 
> "curl/7.72.0"
>
>
> Of course that URL does not exist, but since its using the req.URL object 
> from which varnish recv flow started, it just keeps that for logging, and not 
> any later req.URL set.
>
> Here attached is the varnishlog output from a fresh start:
>
> *   << Request  >> 2
> -   Begin  req 1 rxreq
> -   Timestamp  Start: 1601409664.461391 0.00 0.00
> -   Timestamp  Req: 1601409664.461391 0.00 0.00
> -   VCL_useboot
> -   ReqStart   ::1 55180 a0
> -   ReqMethod  GET
> -   ReqURL /this_object_exists
> -   ReqProtocolHTTP/1.1
> -   ReqHeader  Host: localhost:6081
> -   ReqHeader  User-Agent: curl/7.72.0
> -   ReqHeader  Accept: */*
> -   ReqHeader  X-Forwarded-For: ::1
> -   VCL_call   RECV
> -   ReqHeader  X-Original-URL: /this_object_exists
> -   ReqURL /this_url_does_not_exist
> -   VCL_return restart
> -   VCL_call   HASH
> -   VCL_return lookup
> -   Timestamp  Restart: 1601409664.461429 0.37 0.37
> -   Link   req 3 restart
> -   End

The transaction above does not reply to the client, it's not picked up
by varnishncsa.

> *   << BeReq>> 4
> -   Begin  bereq 3 fetch
> -   VCL_useboot
> -   Timestamp  Start: 1601409664.461553 0.00 0.00
> -   BereqMethodGET
> -   BereqURL   /this_object_exists
> -   BereqProtocol  HTTP/1.1
> -   BereqHeaderUser-Agent: curl/7.72.0
> -   BereqHeaderAccept: */*
> -   BereqHeaderX-Forwarded-For: ::1
> -   BereqHeaderhost: localhost:6081
> -   BereqHeaderAccept-Encoding: gzip
> -   BereqHeaderX-Varnish: 4
> -   VCL_call   BACKEND_FETCH
> -   VCL_return fetch
> -   BackendOpen26 default 127.0.0.1 8000 127.0.0.1 59480 connect
> -   Timestamp  Bereq: 1601409664.462196 0.000642 0.000642
> -   Timestamp  Beresp: 1601409664.463115 0.001561 0.000918
> -   BerespProtocol HTTP/1.0
> -   BerespStatus   200
> -   BerespReason   OK
> -   BerespHeader   Server: SimpleHTTP/0.6 Python/3.8.6
> -   BerespHeader   Date: Tue, 29 Sep 2020 20:01:04 GMT
> -   BerespHeader   Content-type: application/octet-stream
> -   BerespHeader   Content-Length: 12
> -   BerespHeader   Last-Modified: Tue, 29 Sep 2020 19:36:31 GMT
> -   TTLRFC 120 10 0 1601409664 1601409664 1601409664 0 0 cacheable
> -   VCL_call   BACKEND_RESPONSE
> -   VCL_return deliver
> -   Filters
> -   Storagemalloc s0
> -   Fetch_Body 3 length stream
> -   BackendClose   26 default close
> -   Timestamp  BerespBody: 1601409664.463236 0.001682 0.000120
> -   Length 12
> -   BereqAcct  155 0 155 199 12 211
> -   End
>
> *   << Request  >> 3
> -   Begin  req 2 restart
> -   Timestamp  Start: 1601409664.461429 0.37 0.00
> -   ReqStart   ::1 55180 a0
> -   ReqMethod  GET
> -   ReqURL /this_url_does_not_exist
> -   ReqProtocolHTTP/1.1
> -   ReqHeader  Host: localhost:6081
> -   ReqHeader  User-Agent: curl/7.72.0
> -   ReqHeader  Accept: */*
> -   ReqHeader  X-Forwarded-For: ::1
> -   ReqHeader  X-Original-URL: /this_object_exists

This transaction replies to the client but starts with the wrong URL,
but we can find the correct one in the header above.
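If you need varnishncsa to report it, a custom format can log that
header instead of the URL, something like this (a sketch, using the
header name from your VCL):

varnishncsa -F '%h %l %u %t "%m %{X-Original-URL}i %H" %s %b'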

> -   VCL_call   RECV
> -   ReqURL /this_object_exists
> -   ReqUnset   X-Original-URL: /this_object_exists
> -   ReqUnset   Host: localhost:6081
> -   ReqHeader  host: localhost:6081
> -   VCL_return hash
> -   VCL_call   HASH
> -   VCL_return lookup
> -   VCL_call   MISS
> -   VCL_return fetch
> -   Link   bereq 4 fetch
> -   Timestamp  Fetch: 

Re: Question regarding vcl config

2020-09-27 Thread Dridi Boukelmoune
On Sat, Sep 26, 2020 at 8:31 PM Johan Hendriks  wrote:
>
> Thank you very much for your quick response, so the order of the rules matters 
> when you write vcl?
> I didn't know that, but it makes sense.

Yes, VCL is an imperative programming language and statements are
executed in order.

> I start some tests.
> Thanks again.

Please keep the mailing list CC'd if you have more questions.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Question regarding vcl config

2020-09-26 Thread Dridi Boukelmoune
On Sat, Sep 26, 2020 at 4:41 PM Johan Hendriks  wrote:
>
> Hello all, I have inherited a varnish server. And I am a little confused 
> about a few lines in that config.
>
> It has the following in vcl_recv
>
> # Don't cache if there are request cookies
> if (req.http.Cookie) {
> set req.http.X-Debug-Varnish-Nocache-Recv = "Got request cookie (" + 
> req.http.Cookie + ")";
>
> return (pass);
> }
>
> if (req.url ~ "\.(png|gif|jpg|jpeg|ico|swf|css|js|pdf|ico|js|ogg|mp4)$") {
> unset req.http.cookie;
> }
> if (req.url ~ "^/(includes|images|templates)") {
> unset req.http.cookie;
> }
>
> So the first if block tells request with req.http.Cookie to pass to the 
> backend.
> The second tells if the requested url ends with 
> .png|gif|jpg|jpeg|ico|swf|css|js|pdf|ico|js|ogg|mp4 and so on to unset the 
> cookie.
> But will these rules be executed or is the request already passed to the 
> backend?

The return(pass) statement will definitely prevent the following code
from being executed.

You might want to move the first if block after the two others, but
it's hard to tell without prior knowledge of how the backend
behaves...
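Assuming the backend copes with cookie-less requests for static files,
the reordering would look like this sketch (same rules, different
order):

sub vcl_recv {
    if (req.url ~ "\.(png|gif|jpg|jpeg|ico|swf|css|js|pdf|ogg|mp4)$" ||
        req.url ~ "^/(includes|images|templates)") {
        unset req.http.Cookie;
    }

    # only requests that still carry cookies bypass the cache
    if (req.http.Cookie) {
        set req.http.X-Debug-Varnish-Nocache-Recv =
            "Got request cookie (" + req.http.Cookie + ")";
        return (pass);
    }
}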

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Can't get "streaming" or pipe to work, Varnish still waits for the full response

2020-09-14 Thread Dridi Boukelmoune
On Sat, Sep 12, 2020 at 10:08 PM Batanun B  wrote:
>
> Hi,
>
> We have some old (legacy) internal admin pages that do some classic old 
> school processing while the page is loading, and outputting the current 
> status as it is working. When requesting these pages directly (through 
> Tomcat), I can see the results in the browser at the same time as the results 
> are written on the other end. But when I go through Varnish, no matter what I 
> try, I only see a blank page that is loading/waiting, and then when the 
> backend is done writing, then I get the entire result in one go.
>
> How can I configure Varnish to bring any content to the client the moment it 
> gets it from the backend, and not wait until the entire response is done?
>
> In vcl_backend_response I do this:
>   set beresp.do_stream = true;
>   set beresp.uncacheable = true;
>   return (deliver);

Streaming is on by default, you don't need to do anything.

> I have also tried returning (pipe) in vcl_recv (with and without do_stream 
> and uncacheable). And gzip is turn off. But nothing helps. What can I do 
> more? And how can I debug this? Varnishlog shows nothing that is telling me 
> that it is buffering, or waiting for the response, or anything like that.

It is indeed hard to get that information just from Varnish, you could
try to capture TCP packets to check how long it takes for backend
traffic to be forwarded to clients. It's not obvious why a response
would be delayed, but it could happen to be related to the
fetch_chunksize parameter.

However I have never come across a setup where we needed to
tune that knob...
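If you go the packet capture route, something as simple as this lets
you compare backend and client timings (assuming the backend listens
on port 8080, adjust to your setup):

tcpdump -n -i any port 80 or port 8080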

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Possible to detect a previous xkey softpurge?

2020-09-14 Thread Dridi Boukelmoune
On Sat, Sep 12, 2020 at 9:56 PM Batanun B  wrote:
>
> > Arguably, if you use Varnish to cache responses, you might as well
> > always tell your backend not to serve from cache. Because if a soft
> > purge moves you inside the grace period, there's no guarantee that the
> > next hit will happen before the object leaves the grace period. At
> > this point this will no longer trigger a background fetch...
>
> Well, the caching in the backend is not on the same level as the Varnish 
> cache. In Varnish, a single request results in a single object to cache. In 
> the backend, a single request can result in hundreds or thousands of separate 
> lookups (some involving separate http calls to other services), each cachable 
> with their own unique key. And most of these objects are reused from the 
> cache for other requests as well. And skipping that internal cache 
> completely, letting the backend do all these lookups and sub-requests every 
> single time a request comes in to the backend, that would be terrible for 
> performance. So we really only want to skip this internal cache in special 
> circumstances.

In this case you might want to coordinate your tiers with
Last-Modified headers. Cached objects are immutable with Varnish,
except for the timing attributes so my previous answer is still
applicable: you can't record that an object was soft-purged by a VMOD.

The bereq.is_bgfetch variable will tell you whether you are in the
grace period but that's about it.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Possible to detect a previous xkey softpurge?

2020-09-03 Thread Dridi Boukelmoune
On Thu, Sep 3, 2020 at 10:57 AM Batanun B  wrote:
>
> Hi,
>
> We sometimes have a problem with the backend using its internal cache for a 
> few seconds too long after something has been updated. We trigger a softpurge 
> (xkey vmod) in varnish, but if someone requests the page again very soon 
> after that, the data that Varnish gets from the backend might be old. In this 
> case, we would like to be able to tell the backend, maybe using an extra 
> header, that it should skip its internal caches and give us the updated 
> content.

You can't really do that, but the closest I can think of would look like this:

sub vcl_backend_fetch {
    if (bereq.is_bgfetch) {
        # tell the backend to somehow not serve from cache
    }
}
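To make the sketch concrete, and assuming your backend honors request
cache-control headers, the comment could become:

        set bereq.http.Cache-Control = "no-cache";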

Since a soft purge artificially moves you inside the grace period, any
hit on those objects would trigger a background fetch.

Arguably, if you use Varnish to cache responses, you might as well
always tell your backend not to serve from cache. Because if a soft
purge moves you inside the grace period, there's no guarantee that the
next hit will happen before the object leaves the grace period. At
this point this will no longer trigger a background fetch...

> But, I'm not sure how to archive this in Varnish. Is it possible to detect 
> that the page requested has been softpurged earlier? If yes, is it also 
> possible to see when that softpurge took place? Because we would only ever 
> need to do this if the softpurge happened less than let's say 30 seconds ago.
>
> And the reason that the backend data might be old after an update is that 
> what we call "the backend" (from a Varnish perspective) is actually a complex 
> setup of services. And there, the update happens in one place, and our 
> "backend" is actually a frontend server that sometimes don't get the 
> information about the update quick enough. I think that the implementation of 
> this system is a bit faulty, but it is what it is, and I would like to use 
> the power of Varnish to handle this, if possible.

Objects are immutable (modulus TTL) once they enter the cache so
there's nothing actionable to record that they were soft-purged.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Using user defined variable in backend definition?

2020-08-25 Thread Dridi Boukelmoune
On Tue, Aug 25, 2020 at 9:20 AM Batanun B  wrote:
>
> On Mon, Aug 24, 2020 at 11:07 PM Dridi Boukelmoune  wrote:
>
> > Hi,
> >
> > You can't do that, but you can move the backend definition inside
> > environment.vcl instead to keep your default.vcl the same across all
> > environments.
>
> Hi,
>
> Too bad... Strange thing to require hard coded strings. Is there a technical 
> reason for that?

Yes, definitions need to be constant!

> Yeah, I actually ended up doing just that, moving the entire backend 
> definition to the separate file. But I would have preferred having the base 
> structure in the main vcl file, and only the actual host names (and other 
> environment specific configuration) in the separate vcl file. Also, if they 
> were variables, I would be able to use them elsewhere in the vcl (like in 
> synth output, which was my main goal originally). If I want to do that now, I 
> would have to define the same host name multiple times.

I guess, what you want is constants:

https://github.com/varnishcache/varnish-cache/pull/3134

Specifically, from this example, but redacted to remove test-case syntax:

https://github.com/varnishcache/varnish-cache/pull/3134/files#diff-996416d1d725c18f8a7bf688cc2f4a52

environment.vcl:
const string be_host = "example.com";
const duration be_tmo = 3s;

main vcl:
vcl 4.1;

include "environment.vcl";

backend be {
    .host = be_host;
    .connect_timeout = be_tmo;
}

Right now you have the option of either moving the whole backend
definition inside environment.vcl or treat your VCL as a template and
populate the environment-specific parts at some stage of your
deployment pipeline.

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Error in varnish child

2020-08-03 Thread Dridi Boukelmoune
On Mon, Aug 3, 2020 at 11:28 AM hamidreza hosseini
 wrote:
>
> Hi
> I have this error on my varnish instance; is this important or can I ignore 
> it?:

No, you should promptly do something about it.

First off, Varnish 4.1 is no longer supported. You should be able to
update to 4.1.10 without a configuration change to benefit from years
of bug fixes, then consider upgrading to 6.0 sooner than later.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Detect request coalescing?

2020-07-30 Thread Dridi Boukelmoune
On Thu, Jul 30, 2020 at 7:30 PM Batanun B  wrote:
>
> Just a quick question that I wasn't able to find any information on using 
> regular google searches. Is it possible to detect request coalescing somehow? 
> Meaning that, when looking at the response headers, can I somehow see that 
> the request had to wait for another request for the same resource to finish, 
> before this request could return? Can I somehow detect that in VCL, and add a 
> custom header with that information?

Hi,

I don't think you can do that.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: VSM: Could not get hold of varnishd, is it running?

2020-07-20 Thread Dridi Boukelmoune
On Mon, Jul 20, 2020 at 8:12 AM Meken  wrote:
>
> Sorry!
>
> root@hosting:/var/lib/varnish# hostname
> hosting

So now the problem is that you removed the /var/lib/varnish/support
directory and we lost the opportunity to inspect it ¯\_(ツ)_/¯

Does the PID in /var/lib/varnish/hosting/_.pid match the root varnishd
process in the command output below?

ps f -C varnishd -C cache-main

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: VSM: Could not get hold of varnishd, is it running?

2020-07-20 Thread Dridi Boukelmoune
> So I rm -r /var/lib/varnish/support (it is wrong hostname), but nothing 
> changed?

You didn't share the output of the hostname command.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: VSM: Could not get hold of varnishd, is it running?

2020-07-20 Thread Dridi Boukelmoune
On Sat, Jul 18, 2020 at 8:50 PM Meken  wrote:
>
> I am running Ubuntu 20.04.
>
> A recent upgrade of bash (apt update && apt upgrade) causes 
> varnishlog/varnishstat/varnishtop to stop working:
> http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 bash amd64 
> 5.0-6ubuntu1.1
>
> varnishstat
> .
> VSM: Could not get hold of varnishd, is it running?
>
> Change to" sudo varnishstat" no use
>
> (but varnish is running)
>
> Tested on Varnish 6.2.1 also 6.4 master.
>
> So is it a bug in Ubuntu or in Varnish?
>
> The only fix is to roll back to cancel the upgrade.
>
> ps aux | grep varnish
>
> root 2042323  0.0  0.0   9032   736 pts/0S+   04:28   0:00 grep 
> --color=auto varnish
>
> vcache   4055139  0.0  0.2  19148  5120 ?SLs  Jul18   0:05 
> /usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f 
> /etc/varnish/default.vcl -S /etc/varnish/secret -t 120 -p thread_pool_min=500 
> -p thread_pool_max=5000 -p thread_pool_stack=4M -p feature=+http2 -s 
> malloc,20G -a [::1]:6086,PROXY
>
> vcache   4055284  0.0  4.1 4417784 84540 ?   SLl  Jul18   0:48 
> /usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f 
> /etc/varnish/default.vcl -S /etc/varnish/secret -t 120 -p thread_pool_min=500 
> -p thread_pool_max=5000 -p thread_pool_stack=4M -p feature=+http2 -s 
> malloc,20G -a [::1]:6086,PROXY
>
>
>
> Any idea? Thanks!

Can you share the output of the following commands?

find /var/lib/varnish -type f | sort
hostname

I suspect that your machine's hostname changed. If I'm correct, that
you can't rely on your machine's name for some reason, then it's your
job to consistently use a stable -n option across all varnish* programs.
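For example (a sketch, with an arbitrary fixed name):

varnishd -n /var/lib/varnish/myinstance [other options]
varnishstat -n /var/lib/varnish/myinstance
varnishlog -n /var/lib/varnish/myinstance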

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: solution for Active/Passive Backend

2020-07-04 Thread Dridi Boukelmoune
On Sat, Jul 4, 2020 at 3:54 PM Guillaume Quintard
 wrote:
>
> Hi,
>
> You will have to define a probe, otherwise Varnish will consider the backend 
> to be healthy by default. You would then be able to manually make it sick, 
> but for automatic health, you need probing.

Alternatively one can also use vmod_saintmode and retry failed fetches.
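For the probe-based route, an active/passive setup typically combines
probes with the bundled fallback director. A sketch with made-up
addresses and health endpoint:

import directors;

probe health {
    .url = "/";
    .interval = 5s;
}

backend active {
    .host = "192.0.2.10";
    .port = "80";
    .probe = health;
}

backend passive {
    .host = "192.0.2.11";
    .port = "80";
    .probe = health;
}

sub vcl_init {
    new fb = directors.fallback();
    fb.add_backend(active);
    fb.add_backend(passive);
}

sub vcl_recv {
    set req.backend_hint = fb.backend();
}

The fallback director always picks the first healthy backend in the
order they were added.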

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Caching OSM tiles - how to not overload backends?

2020-06-15 Thread Dridi Boukelmoune
On Sun, Jun 14, 2020 at 2:32 AM tranxene50
 wrote:
>
> Hello!
>
> Please forgive my bad English, I live in France.
>
> Summary: how to cache - with Varnish - Open Street Map PNG images without 
> overloading OSM tiles servers?
>
> The question seems related to Varnish backends and ".max_connections" 
> parameter.
>
> As far as I know, if ".max_connections" is reached for a backend, Varnish 
> sends 503 http errors.
>
> I understand the logic but would it be possible to queue these incoming 
> requests and wait until the selected backend is really available?
>
> backend a_tile  {
>   .host = "a.tile.openstreetmap.org";
>   .port = "80";
>   .max_connections  = 2;
> }
>
> If Varnish have, let's say 100 incoming requests in 1 second, how can I 
> handle this "spike" without overloading the backend?
>
> All my google searches were "dead ends" so I think the question is poorly 
> formulated.
>
> Note 1 : using [random|round_robin] directors could be a temporary solution
> Note 2 : libvmod-dynamic is great but does not limit backend simultaneous 
> connections
>
> Many thanks for your help!

Bonsoir,

Unfortunately we don't have any sort of queuing on the backend side,
so besides implementing your own backend transport from scratch in a
VMOD there is currently no solution.
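The round robin stopgap from your note 1 would look like the sketch
below; it spreads the load over the public a/b/c mirrors, but will
still return 503 once all three hit .max_connections:

import directors;

backend a_tile { .host = "a.tile.openstreetmap.org"; .port = "80"; .max_connections = 2; }
backend b_tile { .host = "b.tile.openstreetmap.org"; .port = "80"; .max_connections = 2; }
backend c_tile { .host = "c.tile.openstreetmap.org"; .port = "80"; .max_connections = 2; }

sub vcl_init {
    new tiles = directors.round_robin();
    tiles.add_backend(a_tile);
    tiles.add_backend(b_tile);
    tiles.add_backend(c_tile);
}

sub vcl_backend_fetch {
    set bereq.backend = tiles.backend();
}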

Cordialement,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: varnish node without gcc

2020-06-15 Thread Dridi Boukelmoune
On Mon, Jun 15, 2020 at 12:29 PM  wrote:
>
> Hi,
>
> is it possible to compile the final merged vcl configuration in a pre-
> step and then push it to the frontend nodes? So, no vcl config and
> gcc on the frontend. Just a binary blob and a restart of the service.

It is technically possible to pass a blob around, but Varnish insists
on compiling VCL locally.

> Can this "compile while restart" activity splitted into to seprated
> ones? Is that possible?

There is no restart involved, VCL is meant to be loadable on the fly.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: vmod_header (varnish-modules) on varnish cache 4.1

2020-05-25 Thread Dridi Boukelmoune
On Mon, May 25, 2020 at 1:19 PM Vlad Rusu  wrote:
>
> Hi all,
>
> A question for the maintainers of https://github.com/varnish/varnish-modules
>
> I need to use vmod_header to get control over the Set-Cookie response 
> headers. Feels like there is no other way in Varnish Cache.
>
> Looking at the branches I see version 6 only. Any reason to believe this 
> won’t work with varnish cache version 4.1.x?
>
> I know.. I should upgrade. But till then..
>
> Appreciate your support.

Hi Vlad,

Grab the 0.15.0 release from the download site:

https://download.varnish-software.com/varnish-modules/

It should work with Varnish 4.1 (otherwise try an older release).
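For the Set-Cookie use case, the usual shape is something like this
sketch (the cookie name is made up):

import header;

sub vcl_backend_response {
    # drop only the tracking cookie, keep any other Set-Cookie headers
    header.remove(beresp.http.Set-Cookie, "^tracker=");
}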

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Transparent hugepages on RHEL8

2020-05-25 Thread Dridi Boukelmoune
> are the implications that varnish build that way in this context
> is more resilient with hugepages enabled?

I have no idea, we didn't package the el8 varnish DNF module!
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Transparent hugepages on RHEL8

2020-05-25 Thread Dridi Boukelmoune
On Sun, May 24, 2020 at 8:12 AM Geoff Simmons  wrote:
>
> On 5/24/20 01:29, info+varn...@shee.org wrote:
> > This notes
> >
> > https://varnish-cache.org/docs/trunk/installation/platformnotes.html
> >
> > has a comment about "Transparent hugepages".
> >
> > Does this still apply to EL8?
>
> That's a good heads-up that those docs need to be updated -- they refer
> to RHEL6 and Linux kernel 3.2. If I'm not mistaken, enabling THP by
> default was fairly new at the time, but it's still the default and
> that's old news now, as your settings confirmed (just checked that it's
> also the default on my Debian stretch laptop).
>
> The issue is not really the distro or kernel version, but the use of the
> THP feature, and it's still a problem, probably always will be. AFAICT
> THP does nothing good for Varnish. It's harmless if you're lucky, but it
> can be very disruptive.
>
> I haven't tried it with RHEL8. The doc says that it "is known to cause
> sporadic crashes of Varnish", but while I haven't seen crashes, on RHEL7
> I've seen that the memory usage of the cache process bloats up
> enormously, orders of magnitude larger than the actual size of the cache
> and anything else in Varnish that occupies memory. After disabling THP
> for Varnish (as detailed below), I saw memory usage become much smaller,
> more like what you'd expect from the cache size and other overhead.
>
> There's an explanation for why THP causes that, but suffice it to say
> that THP creates trouble for a variety of apps that manage a lot of
> memory. MongoDB, Oracle, redis and many other projects advise you to
> turn it off. THP is inevitably a problem for the jemalloc memory
> allocator, which is invariably used with Varnish.

I just wanted to react to the "invariably" word here. This is not
accurate, it should read "by default" instead.

See ./configure --help:

> --with-jemalloc use jemalloc memory allocator. Default is yes on Linux, no 
> elsewhere

And considering that jemalloc is not available in el8, but epel8
instead. I suspect Red Hat ships a varnish package that uses glibc
with no custom allocator.

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish over memory allocation

2020-05-25 Thread Dridi Boukelmoune
On Mon, May 25, 2020 at 8:07 AM Alex Wakefield
 wrote:
>
> Whoops, knew I forgot to specify something!
>
> We're using malloc. Command line switch is specifically `-s malloc,24GB`

The -s option only specifies the storage size (HTTP responses with
some metadata). The rest of Varnish's memory footprint goes on top,
things like loaded VCLs, ongoing VCL transactions, all kinds of data
structures. VMODs like XKey may add their own footprint on top, the
list goes on.

Even on the storage side, if you only declare a malloc storage like
you did, you will get an unlimited Transient storage by default for
short-lived or uncacheable responses.
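If you want a bound on Transient too, declare it explicitly, for
example (sizes arbitrary):

-s malloc,24G -s Transient=malloc,256M

but keep in mind this still doesn't cap the process' total footprint.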

The only way today to tell a Varnish instance to limit itself to 24GB
(and still on a best-effort basis) is with Varnish Enterprise's memory
governor.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Detecting and fixing VSV00004 in older releases

2020-05-13 Thread Dridi Boukelmoune
> I tried to reproduce it myself today and I wasn't able to trigger the
> leak on the master branch's commit prior to the fix. I asked
> internally whether we have a reliable reproducer or if it's something
> that needs a consequential workload to be observable.

The step I was missing trying to reproduce this on my own was ensuring
that the error reason is far enough in the client workspace to be
leakable.

It turns out we had a test case covering all 3 scenarios that was
supposed to be pushed a while after the disclosure, but was forgotten.

You can use this test case now before and after applying the patch:

https://github.com/varnishcache/varnish-cache/commit/0c9c38513bdb7730ac886eba7563f2d87894d734

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Detecting and fixing VSV00004 in older releases

2020-05-12 Thread Dridi Boukelmoune
Hello Sylvain,



> >> Do we know in what version Trygve Tønnesland triggered the vulnerability?

It was first discovered on Varnish Enterprise, and once the origin of
the leak was identified we surveyed older and newer releases and fixed
the ones listed in the advisory.

> > To put it differently, how would one make sure that applying
> > bd7b3d6d47ccbb5e1747126f8e2a297f38e56b8c fixes the issue in a Debian
> > version not explicitly referenced in VSV00004, such as 6.1.1?

I tried to reproduce it myself today and I wasn't able to trigger the
leak on the master branch's commit prior to the fix. I asked
internally whether we have a reliable reproducer or if it's something
that needs a consequential workload to be observable.

> AFAICS no GNU/Linux distribution was able to fix their stable releases
> so far.

That's not too bad, there is a workaround and it is overall a niche
case. If I remember correctly when it was brought to us it wasn't a
security problem for the reporter but we recognized the bug as such.

Please note that in 2 of the 3 scenarios your VCL is incorrect in the
first place, so you have other problems to deal with more pressing
than the information leak.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Detecting and fixing VSV00004 in older releases

2020-04-22 Thread Dridi Boukelmoune
Bonjour Sylvain,

On Sat, Apr 18, 2020 at 7:18 PM Sylvain Beucler  wrote:
>
> Hi,
>
> I'm part of the Debian LTS (Long Term Support) team, I'm checking what
> Debian varnish packages are affected by CVE-2019-20637, and how to fix them.
>
> In particular, we ship 4.0.2 and 5.0.0, where cache_req_fsm.c is too
> different to apply the git patch with good confidence.
>
> I appreciate that these versions are not officially supported anymore by
> the Varnish project. Since it is common in GNU/Linux distros to provide
> security fixes to users of packaged releases when feasible, I'm
> classifying this vulnerability and looking for a fix.

EOL series are definitely not a priority and I have other things to
look at before I can dive into this. So I will eventually revisit this
thread, or maybe someone will beat me to it if you're lucky.

> Is there a patch for older Varnish releases, or failing that, a
> proof-of-concept that would help me trigger and fix the vulnerability?

Not that I'm aware of.

> Note: to determine whether the versions are affected, and possibly
> backport the patch, I tried to reproduce the issue following the
> detailed advisory but without success, including on a vanilla 6.0.4:

If the advisory is inaccurate we will definitely want to amend it.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Installation of Varnish 6.0LTS on Debian Buster

2020-04-02 Thread Dridi Boukelmoune
On Thu, Apr 2, 2020 at 8:36 AM Kristian Grønfeldt Sørensen
 wrote:
>
> Hi,
>
> It doesn't look like there's any Varnish 6.0LTS on packagecloud. Is
> there any plans for when they will be available, or is it just me who
> can't find them?
>
> Additionally I noticed that the link to
> https://varnish-cache.org/releases/rel6.0.2 from
> https://varnish-cache.org/docs/trunk/installation/install_debian.html
> results in a 404.   I'm not sure what it was supposed to point to, as the 
> changes.rst doesn't seem to contain any hints on this either.

Hi,

https://github.com/varnishcache/pkg-varnish-cache/issues/127
https://github.com/varnishcache/pkg-varnish-cache/issues/128

*channels guillaume*

I'm not sure what the current status is.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish not respecting pass to backend for specified http hosts

2020-03-31 Thread Dridi Boukelmoune
On Tue, Mar 31, 2020 at 9:58 PM Guillaume Quintard
 wrote:
>
> Hi,
>
> I think there is a bit of confusion regarding what "return(pass)" does.
>
> Basically, when a request comes in, you need to answer two questions:
> - do I want to cache it?
> - if I need data from the backend, well, what is my backend?
>
> "return(pass)" answers the first question with a big fat "no", and this is 
> what you see in your log ("VCL_call PASS" and "VCL_return pass"), the request 
> will be fetched from the backend, but won't be stored in cache.
>
> You then need to decide where to fetch the data from, but in your vcl, you 
> only have one backend, so everything comes from the same backend, what you 
> want is to create a second backend, and do
>
> if ((req.http.host ~ "(domain.com) || (dev.domain.com)")) {

No matter how you look at it this if statement is broken.

You can either go for this:

> if (req.http.host ~ "^(dev\.)?domain\.com")

Or you can do this:

> if (req.http.host == "domain.com" || req.http.host == "dev.domain.com")

The ~ operator in this context matches regular expressions, and your
regex doesn't make sense.

Dridi

>   set req.backend_hint = default;
> } else {
>   set req.backend_hint = other;
>   return (pass);
> }
>
>
> Hopefully that will clarify things a bit.
>
> Out of curiosity, are you using the official varnish image 
> (https://hub.docker.com/_/varnish) or something else?
>
> Cheers,
>
> --
> Guillaume Quintard
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Grace and misbehaving servers

2020-03-25 Thread Dridi Boukelmoune
> > A problem with the restart logic is the race it opens since you now
> > have two lookups, but overall, that's the kind of convoluted VCL that
> > should work. The devil might be in the details.
>
> Could you describe this race condition that you mean can happen? What could 
> the worst case scenario be? If it is just a guru meditation for this single 
> request, and it happens very rarely, then that is something I can live with. 
> If it is something that can cause Varnish to crash or hang, then it is not 
> something I can live with :)

In general by the time you get to the second lookup the state of the
cache may have changed. An object may go away in between, so a
restart would cause unnecessary processing that would likely lead to
an additional erroring fetch.

Using a combination of saint mode and req.grace to emulate
stale-if-error could in theory lead to something simpler.

At least it would if this change landed one way or the other:

https://github.com/varnishcache/varnish-cache/issues/3259

> > In this case you might want to combine your VCL restart logic with
> > vmod_saintmode.
>
> Yes, I have already heard some things about this vmod. I will definitely look 
> into it. Thanks.

It used to be a no brainer with Varnish 3, being part of VCL...

> > And you might solve this problem with vmod_xkey!
>
> We actually already use this vmod. But like I said, it doesn't solve the 
> problem with new content that affects existing pages.

Oh, now I get it! That's an interesting limitation I don't think I
ever considered. I will give it some thought!

> Several pages might for example include information about the latest objects 
> created in the system. If one of these pages were loaded and cached at time 
> T1, and then at T2 a new object O2 was created, an "xkey purge" with the key 
> "O2" will have no effect since that page was not associated with the "O2" key 
> at time T1, because O2 didn't even exist then.
>
> And since there is no way to know beforehand which these pages are, the only 
> bullet proof way I can see of handling this is to purge all pages* any time 
> any content is updated.
>
> * or at least a large subset of all pages, since the vast majority might 
> include something related to newly created objects

You can always use vmod_xkey to broadly tag responses. An example
I like to take to illustrate this is tagging a response as "article". If
you change the template for articles, you know you can [soft] purge
them all at once.
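
For the record, a minimal sketch of such broad tagging with vmod_xkey
(the PURGE plumbing below is illustrative, you'd want an ACL around
it) could look like this:

import xkey;

sub vcl_backend_response {
    # tag every article with a broad key on top of whatever
    # specific keys the backend already sends
    set beresp.http.xkey = beresp.http.xkey + " article";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        # soft purge everything tagged "article", for example
        # after a template change
        set req.http.n-gone = xkey.softpurge("article");
        return (synth(200, "Invalidated " + req.http.n-gone + " objects"));
    }
}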

That doesn't solve the invalidation using keys unknown (yet) to the
cache, but my take would be that if my application can know that, it
should be able to invalidate individual resources affected by their
new key (I'm aware it's not always that easy).

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Grace and misbehaving servers

2020-03-23 Thread Dridi Boukelmoune
Hi,

On Fri, Mar 20, 2020 at 10:14 PM Batanun B  wrote:
>
> On Thu , Mar 19, 2020 at 11:12 AM Dridi Boukelmoune  wrote:
> >
> > Not quite!
> >
> > ttl+grace+keep defines how long an object may stay in the cache
> > (barring any form of invalidation).
> >
> > The grace I'm referring to is beresp.grace,
>
> Well, when I wrote "if ttl + grace + keep is a low value set in 
> vcl_backend_response", I was talking about beresp.grace, as in beresp.ttl + 
> beresp.grace + beresp.keep.
>
>
> > it defines how long we might serve a stale object while a background fetch 
> > is in progress.
>
> I'm not really seeing how that is different from what I said. If beresp.ttl + 
> beresp.grace + beresp.keep is 10s in total, then a req.grace of say 24h 
> wouldn't do much good, right? Or maybe I just misunderstood what you were 
> saying here.

Or maybe *I* just misunderstood your understanding :)

> > As always in such cases it's not black or white. Depending on the
> > nature of your web traffic you may want to put the cursor on always
> > serving something, or never serving something stale. For example, live
> > "real time" traffic may favor failing some requests over serving stale
> > data.
>
> Well, I was thinking of the typical "regular" small/medium website, like 
> blogs, corporate profile, small town news etc.
>
>
> > I agree that on paper it sounds simple, but in practice it might be
> > harder to get right.
>
> OK. But what if I implemented it in this way, in my VCL?
>
> * In vcl_backend_response, set beresp.grace to 72h if status < 400
> * In vcl_backend_error and vcl_backend_response (when status >= 500), return 
> (abandon)
> * In vcl_synth, restart the request, with a special req header set
> * In vcl_recv, if this req header is present, set req.grace to 72h
>
> Wouldn't this work? If no, why? If yes, would you say there is something else 
> problematic with it? Of course I would have to handle some special cases, and 
> maybe check req.restarts and such, but I'm talking about the thought process 
> as a whole here. I might be missing something, but I think I would need 
> someone to point it out to me because I just don't get why this would be 
> wrong.

For starters, there currently is no way to know for sure that you
entered vcl_synth because of a return(abandon) transition. There are
plans to make it possible, but for now you can only do it with less
than 100% confidence.

A problem with the restart logic is the race it opens since you now
have two lookups, but overall, that's the kind of convoluted VCL that
should work. The devil might be in the details.
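
For what it's worth, a naive sketch of that dance (with the caveat
that the vcl_synth test below cannot actually tell an abandoned fetch
apart from any other 503) could look like:

sub vcl_recv {
    if (req.http.stale-allowed) {
        set req.grace = 72h;
    } else {
        set req.grace = 10s;
    }
}

sub vcl_backend_response {
    if (beresp.status >= 500) {
        return (abandon);
    }
}

sub vcl_synth {
    if (resp.status == 503 && req.restarts == 0) {
        set req.http.stale-allowed = "yes";
        return (restart);
    }
}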

> > Is it hurting you that less frequently requested contents don't stay
> > in the cache?
>
> If it results in people seeing error pages when a stale content would be 
> perfectly fine for them, then yes.
>
> And these less frequently requested pages might still be part of a group of 
> pages that all result in an error in the backend (while the health probe 
> still return 200 OK). So while one individual page might be visited 
> infrequently, the total number of visits on these kind of pages might be high.
>
> Lets say that there are 3.000 unique (and cachable) pages that are visited 
> during an average weekend. And all of these are in the Varnish cache, but 
> 2.000 of these have stale content. Now lets say that 50% of all pages start 
> returning 500 errors from the backend, on a Friday evening. That would mean 
> that about ~1000 of these stale pages would result in the error displayed to 
> the end users during that weekend. I would much more prefer if it were to 
> still serve them stale content, and then I could look into the problem on 
> Monday morning.

In this case you might want to combine your VCL restart logic with
vmod_saintmode.

https://github.com/varnish/varnish-modules/blob/6.0-lts/docs/vmod_saintmode.rst#vmod_saintmode

This VMOD allows you to create circuit breakers for individual
resources for a given backend. That will result in more complicated VCL
but will help you mark individual resources as sick, making the need
for a "special req header" redundant. And since vmod_saintmode marks
resources sick for a given time, it means that NOT ALL individual
clients will go through the complete restart dance during that window.

I think you may still have to do a restart in vcl_miss because only
then will you know the saint-mode health (you need both a backend and
a hash).
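
For reference, the basic saint mode wiring looks roughly like this,
with a made-up threshold and blacklist duration, and assuming a
backend named be:

import saintmode;

sub vcl_init {
    # consider the whole backend sick once 10 resources are
    # blacklisted on it
    new sm = saintmode.saintmode(be, 10);
}

sub vcl_backend_fetch {
    set bereq.backend = sm.backend();
}

sub vcl_backend_response {
    if (beresp.status >= 500) {
        # mark this resource sick on this backend for 20s
        saintmode.blacklist(20s);
        return (retry);
    }
}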

> > Another option is to give Varnish a high TTL (and give clients a lower
> > TTL) and trigger a form of invalidation directly from the backend when
> > you know a resource changed.
>
> Well, that is perfectly fine for pages that have a one-to-one

Re: Purging on PUT and DELETE

2020-03-19 Thread Dridi Boukelmoune
On Thu, Mar 19, 2020 at 10:28 AM Martynas Jusevičius
 wrote:
>
> Thank you  Dridi.
>
> But what I'm reading here
> https://docs.varnish-software.com/tutorials/cache-invalidation/
> > Unlike purges, banned content won’t immediately be evicted from cache 
> > freeing up memory, instead it will either stay in cache until its TTL 
> > expires, if we ban on req properties, or it will be evicted by a background 
> > thread, called ban_lurker, if we ban on the obj properties
>
> Which means that using your example, if immediately follow up
> PUT/DELETE with a GET, it is not certain to get a fresh copy? Because
> "banned content won’t immediately be evicted from cache"?

That's because bans using req criteria (as opposed to obj) need a
request to happen to test the ban on a given object. And even bans
with obj criteria don't happen immediately, they eventually happen in
the background.

But once a ban is in the list, an object is not served from cache
before confirming that it isn't invalidated by a newer ban during
lookup, so you shouldn't worry about that.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Purging on PUT and DELETE

2020-03-19 Thread Dridi Boukelmoune
On Thu, Mar 19, 2020 at 10:05 AM Martynas Jusevičius
 wrote:
>
> Hi,
>
> upon receiving a PUT or DELETE request, I'd like Varnish to invalidate
> the current object (and its variants) *and* to pass the request to the
> backend.
>
> Essentially the same question as here:
> https://serverfault.com/questions/399814/varnish-purge-on-post-or-put
> The answer seems outdated though.

I would do it like this:

> sub vcl_backend_response {
> if (beresp.status == 200 && bereq.method ~ "PUT|DELETE") {
> ban("req.url == " + bereq.url + " && req.http.host == " + 
> bereq.http.host);
> }
> }

Or at least, I would do it in vcl_backend_response, there's no point
in invalidating if the client wasn't allowed to change a resource for
example.

> I consider this a common use case for REST CRUD APIs, so I was
> surprised not to find a single VCL example mentioning it.

The problem is that so many things can go wrong. For example my
snippet doesn't allow the ban to be processed in the background, so
further adjustments are needed to make that happen. It also assumes
that bereq's URL and Host are identical to req's, and isn't subject to
client noise (spurious query parameters and whatnots).

So indeed, I wouldn't want to advertise that kind of snippet without a
heavy supply of red tape.
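
For completeness, the lurker-friendly variant of the same idea first
copies the criteria onto the object, at the cost of two bookkeeping
headers (the names are arbitrary) and with the same caveats as above:

sub vcl_backend_response {
    set beresp.http.x-url = bereq.url;
    set beresp.http.x-host = bereq.http.host;
    if (beresp.status == 200 && bereq.method ~ "PUT|DELETE") {
        # obj criteria allow the ban lurker to evaluate the ban
        # in the background
        ban("obj.http.x-url == " + bereq.url +
            " && obj.http.x-host == " + bereq.http.host);
    }
}

sub vcl_deliver {
    # don't leak the bookkeeping headers to clients
    unset resp.http.x-url;
    unset resp.http.x-host;
}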


Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Grace and misbehaving servers

2020-03-19 Thread Dridi Boukelmoune
On Tue, Mar 17, 2020 at 8:06 PM Batanun B  wrote:
>
> Hi Dridi,
>
> On Monday, March 16, 2020 9:58 AM Dridi Boukelmoune  wrote:
>
> > Not really, it's actually the other way around. The beresp.grace
> > variable defines how long you may serve an object past its TTL once it
> > enters the cache.
> >
> > Subsequent requests can then limit grace mode, so think of req.grace
> > as a req.max_grace variable (which maybe hints that it should have
> > been called that in the first place).
>
> OK. So beresp.grace mainly affects how long the object can stay in the cache? 
> And if ttl + grace + keep is a low value set in vcl_backend_response, then 
> vcl_recv is limited in how high the grace can be?

Not quite!

ttl+grace+keep defines how long an object may stay in the cache
(barring any form of invalidation).

The grace I'm referring to is beresp.grace, it defines how long we
might serve a stale object while a background fetch is in progress.

> And req.grace doesn't affect the time that the object is in the cache? Even 
> if req.grace is set to a low value on the very first request (ie the same 
> request that triggers the call to the backend)?

Right, req.grace only defines the maximum staleness tolerated by a
client. So if backend selection happens on the backend side, you can
for example adjust that maximum based on the health of the backend.
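
That's essentially the "misbehaving servers" example from the user
guide; assuming a probe is configured on the backend, it boils down
to:

import std;

sub vcl_recv {
    if (std.healthy(req.backend_hint)) {
        # healthy backend: tolerate only 10s of staleness
        set req.grace = 10s;
    }
    # sick backend: fall back to the full beresp.grace, e.g. 24h
}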

> > What you are describing is stale-if-error, something we don't support
> > but could be approximated with somewhat convoluted VCL. It used to be
> > easier when Varnish had saint mode built-in because it generally
> > resulted in less convoluted VCL.
> >
> > It's not something I would recommend attempting today.
>
> That's strange. This stale-if-error sounds like something pretty much 
> everyone would want, right? I mean, if there is is stale content available 
> why show an error page to the end user?

As always in such cases it's not black or white. Depending on the
nature of your web traffic you may want to put the cursor on always
serving something, or never serving something stale. For example, live
"real time" traffic may favor failing some requests over serving stale
data.

Many users want stale-if-error, but it's not trivial, and it needs to
be balanced against other aspects like performance.

> But maybe it was my want to "cache/remember" previous failed fetches and that 
> made it complicated? So if I loosen the requirements/wish-list a bit, into 
> this:
>
> Assuming that:
> * A request comes in to Varnish
> * The content is stale, but still in the cache
> * The backend is considered healthy
> * The short (10s) grace has expired
> * Varnish triggers a synchronus fetch in the backend
> * This fetch fails (timeout or 5xx error)
>
> I would then like Varnish to:
> * Return the stale content

I agree that on paper it sounds simple, but in practice it might be
harder to get right.

For example, "add HTTP/3 support" is a simple statement, but the work
it implies can be orders of magnitude more complicated. And
stale-if-error is one those tricky features: tricky for performance,
that must not break existing VCL, etc.

> Would this be possible using basic Varnish community edition, without a 
> "convoluted VCL", as you put it? Is it possible without triggering a restart 
> of the request? Either way, I am interested in hearing about how it can be 
> achieved. Is there any documentation or blog post that mentions this? Or can 
> you give me some example code perhaps? Even a convoluted example would be OK 
> by me.

I wouldn't recommend stale-if-error at all today, as I said in my first reply.

> Increasing the req.grace value for every request is not an option, since we 
> only want to serve old content if Varnish can't get hold of new content. And 
> some of our pages are visited very rarely, so we can't rely on a constant 
> stream of visitors keeping the content fresh in the cache.

Is it hurting you that less frequently requested contents don't stay
in the cache?

Another option is to give Varnish a high TTL (and give clients a lower
TTL) and trigger a form of invalidation directly from the backend when
you know a resource changed.
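
The TTL split could be as simple as this, with numbers for
illustration only:

sub vcl_backend_response {
    # let Varnish keep objects for a day...
    set beresp.ttl = 24h;
}

sub vcl_deliver {
    # ...while clients only cache for a minute
    set resp.http.Cache-Control = "max-age=60";
}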

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Fix incorrect Last-Modified from backend?

2020-03-19 Thread Dridi Boukelmoune
On Wed, Mar 18, 2020 at 2:17 PM Batanun B  wrote:
>
> Hi,
>
> Long story short, one of our backend systems serves an incorrect 
> Last-Modified response header, and I don't see a way to fix it at the source 
> (third party system, not based on Nginx/Tomcat/IIS or anything like that).
>
> So, I would like to "fix" it in Varnish, since I don't expect the maker of 
> that software being able to fix this within a reasonable time. Is there a 
> built in way in Varnish to make it generate it's own Last-Modified response 
> header? Something like:
>
> * If no stale object exists in cache, set Last-Modified to the value of the 
> Date response header
> * If a stale object exists in cache, and its body content is identical to 
> the newly fetched content, keep the Last-Modified from the step above
> * If a stale object exists in cache, but its body content is different from 
> the newly fetched content, set Last-Modified to the value of the Date 
> response header

I don't think you can do something like that without writing a module,
and even if you could you would still have a chicken-egg problem for
streaming deliveries when it comes to generating a header based on the
contents of the body (you would need trailers, but we don't support
them).

By the way, when it comes to revalidation based on the body, you
should use ETag instead of Last-Modified.

> Any suggestions on how to handle this situation? Any general Varnish 
> guidelines when working with a backend that acts like this?

I think that's a tough nut to crack. There are many things you can
work around from a misbehaving backend but this case is not trivial.
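
The only part that is easy in plain VCL is your first bullet point,
unconditionally papering over the bogus header (and losing any real
revalidation value in the process):

sub vcl_backend_response {
    # replace the backend's incorrect Last-Modified with the
    # response date
    set beresp.http.Last-Modified = beresp.http.Date;
}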

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Grace and misbehaving servers

2020-03-16 Thread Dridi Boukelmoune
Hi,

On Sun, Mar 15, 2020 at 9:56 PM J X  wrote:
>
> Hi,
>
> I'm currently setting up Varnish for a project, and the grace feature 
> together with health checks/probes seems to be a great savior when working 
> with servers that might misbehave. But I'm not really sure I understand how 
> to actually achive that, since the example doesn't really make sense:
>
> https://varnish-cache.org/docs/trunk/users-guide/vcl-grace.html
>
> See the section "Misbehaving servers". There the example does "set 
> beresp.grace = 24h" in vcl_backend_response, and "set req.grace = 10s" in 
> vcl_recv, if the backend is healthy. But since vcl_recv is run before 
> vcl_backend_response, doesn't that mean that the 10s grace value of vcl_recv 
> is overwritten by the 24h value in vcl_backend_response?

Not really, it's actually the other way around. The beresp.grace
variable defines how long you may serve an object past its TTL once it
enters the cache.

Subsequent requests can then limit grace mode, so think of req.grace
as a req.max_grace variable (which maybe hints that it should have
been called that in the first place).

> Also... There is always a risk of some URL's suddenly giving 500-error (or a 
> timeout) all while the probe still returns 200. Is it possible to have 
> Varnish behave more or less as if the backend is sick, but just for those 
> URL? Basically I would like this logic:
>
> If a healthy content exists in the cache:
> 1. Return the cached (and potentially stale) content to the client
> 2. Increase the ttl and/or grace, to keep the healthy content longer
> 3. Only do a bg-fetch if a specified time has past since the last attempt 
> (lets say 5s), to avoid hammering the backend
>
> If a non-health (ie 500-error) exists in the cache:
> 1. Return the cached 500-content to the client
> 2. Only do a bg-fetch if a specified time has past since the last attempt 
> (lets say 5s), to avoid hammering the backend

What you are describing is stale-if-error, something we don't support
but could be approximated with somewhat convoluted VCL. It used to be
easier when Varnish had saint mode built-in because it generally
resulted in less convoluted VCL.

It's not something I would recommend attempting today.

> If no content doesn't exists in the cache:
> 1. Perform a synchronous fetch
> 2. If the result is a 500-error, cache it with lets say ttl = 5s
> 3. Otherwise, cache it with a longer ttl
> 4. Return the result to the client
>
> Is this possible with the community edition of Varnish?

You can do that with plain VCL, but even better, teach your backend to
inform Varnish how to handle either case with the Cache-Control
response header.
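
For example, on cacheable responses, something along these lines
(recent Varnish versions import max-age/s-maxage into beresp.ttl, and
6.1 onwards imports stale-while-revalidate into beresp.grace, if
memory serves):

Cache-Control: max-age=60, stale-while-revalidate=259200

and a short max-age (or no-store) on the error responses you don't
want to keep around.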

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnishncsa Random Log Sampling ?

2020-03-02 Thread Dridi Boukelmoune
On Mon, Mar 2, 2020 at 11:15 AM Yassine Aouadi
 wrote:
>
>
> Hello,
>
> I  am sending my Varnish log to  to remote SAAS solution  I and want to 
> improve  logs costs by  implementing a server side sampling solution .
>
> First I splitted Varnishnncsa  into two service one for error logs and the 
> other for acces logs  :
>
> CGroup: /system.slice/varnishncsa-error.service
>└─18458 /usr/bin/varnishncsa -c -b -a -w 
> /var/log/varnish/varnishncsa-error.log -D -P 
> /run/varnishncsa/varnishncsa-error.pid -f 
> /etc/varnish/varnishncsa_logmatic.format -q *Status > 399
>
>   CGroup: /system.slice/varnishncsa.service
>└─18347 /usr/bin/varnishncsa -c -b -a -w 
> /var/log/varnish/varnishncsa-access.log -D -P 
> /run/varnishncsa/varnishncsa-access.pid -f 
> /etc/varnish/varnishncsa_logmatic.format -q *Status < 400
>
>
> Is there Any way to go further with  varnishncsa  and perform and random 
> sampling of my access logs ? for example write only 10 % of access logs
>
> If it's not possible with varnishncsa any   Suggestion ? I tried rsyslog 
> random sampling but  I am facing memory leaks while stress testing server 
> with high load

Hi,

I think the closest to what you want is rate limiting, see the
documentation for the varnishncsa -R option. Otherwise you can always
do the sampling one step downstream: instead of sending
varnishncsa-.log whenever logrotate triggers a rotation, you
run a script that sends 1 line out of every 10. But I think rate
limiting with -R is simpler, and instead of a percentage that depends
highly on your traffic you actually get a limit you can match to a
budget, since you wish to reduce costs.
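
For example (untested), keeping your format and your access/error
split, a budget of 10 records per second would look like:

varnishncsa -c -a -D -w /var/log/varnish/access-varnishncsa.log \
    -f /etc/varnish/varnishncsa_logmatic.format \
    -q '*Status < 400' -R 10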

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: How to disable core-dump of varnish?

2020-02-10 Thread Dridi Boukelmoune
On Mon, Feb 10, 2020 at 5:02 PM Veeresh Reddy  wrote:
>
> Any idea on how to turn it off except for panic messages?

This?

varnishadm param.set feature +no_coredump

Or during startup:

varnishd [...] -p feature=+no_coredump

See `man varnishd` for other feature flags.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish 6.3, Hitch & HTTP/2

2020-02-06 Thread Dridi Boukelmoune
On Thu, Feb 6, 2020 at 9:50 AM Admin Beckspaced  wrote:
>
> Hello Guillaume,
>
> thanks again for your reply
>
> ok ... did enable hitch ALPN
>
> alpn-protos = "http/2, http/1.1"
>
> did enable http/2 in varnish.
>
> I'm running opensuse and it has its configuration in /etc/sysnconfig/varnish
>
> VARNISHD_PARAMS="-j unix,user=varnish -f /etc/varnish/main.vcl -a :80 -a 
> 127.0.0.1:6086,PROXY -T localhost:6082 -s default=malloc,2G -s 
> static=file,/var/cache/varnish,5G -p feature=+http2"
>
> which then is loaded via systemd service
>
> [Service]
> EnvironmentFile=/etc/sysconfig/varnish
> PIDFile=/var/run/varnishd.pid
> ExecStart=/usr/sbin/varnishd -P /var/run/varnishd.pid -F $VARNISHD_PARAMS
>
> restart hitch & varnish
>
> if I look in the logs all looks fine
>
> Feb 06 10:07:12 cx40 systemd[1]: Starting hitch...
> Feb 06 10:07:12 cx40 hitch[1238]: Trying to initialize SSL contexts with your 
> certificates
> Feb 06 10:07:12 cx40 hitch[1238]: hitch configuration looks ok.
> Feb 06 10:07:13 cx40 systemd[1]: Started hitch.
>
> Feb 06 10:07:14 cx40 varnishd[1233]: Debug: Version: varnish-6.3.0 revision 
> 0c9a93f1b2c6de49b8c6ec8cefd9d2be50041d79
> Feb 06 10:07:14 cx40 varnishd[1233]: Debug: Platform: 
> Linux,4.12.14-lp151.28.36-default,x86_64,-junix,-smalloc,-sfile,-sdefault,-hcritbit
> Feb 06 10:07:14 cx40 varnishd[1233]: Version: varnish-6.3.0 revision 
> 0c9a93f1b2c6de49b8c6ec8cefd9d2be50041d79
> Feb 06 10:07:14 cx40 varnishd[1233]: Platform: 
> Linux,4.12.14-lp151.28.36-default,x86_64,-junix,-smalloc,-sfile,-sdefault,-hcritbit
> Feb 06 10:07:14 cx40 varnishd[1233]: Debug: Child (1619) Started
> Feb 06 10:07:14 cx40 varnishd[1233]: Child (1619) Started
> Feb 06 10:07:14 cx40 varnishd[1233]: Info: Child (1619) said Child starts
> Feb 06 10:07:14 cx40 varnishd[1233]: Info: Child (1619) said SMF.static 
> mmap'ed 5368709120 bytes of 5368709120
> Feb 06 10:07:14 cx40 varnishd[1233]: Child (1619) said Child starts
> Feb 06 10:07:14 cx40 varnishd[1233]: Child (1619) said SMF.static mmap'ed 
> 5368709120 bytes of 5368709120
> Feb 06 10:07:14 cx40 varnishncsa[742]: .
>
> if i then check if the website supports http/2
>
> my website is https://kohphangannews.org/
>
> https://tools.keycdn.com/http2-test
>
> https://http2.pro/check?url=https%3A//kohphangannews.org/
>
> it says that http/2 is not supported ;(
>
> what am I missing?

It's called h2 for ALPN, and with that I think you should be good.
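
In other words, in your hitch configuration:

alpn-protos = "h2, http/1.1"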

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Running Two varnishncsa Instances using Systemd : looking for best practice

2020-01-30 Thread Dridi Boukelmoune
On Wed, Jan 29, 2020 at 4:59 PM Yassine Aouadi
 wrote:
>
>
> Dridi,
>
> Thank you for the Catch !
>
> I was  going to correct with  "RespStatus > 399 or BerespStatus > 399" but 
> yours is better.

Pick whatever works best for you ;-)

> Splitting the unitsseems to look fine in Dev :
>
> ---
> ● varnishncsa-access.service - Varnish HTTP accelerator log daemon
>Loaded: loaded (/lib/systemd/system/varnishncsa-access.service; disabled; 
> vendor preset: enabled)
>Active: active (running) since Wed 2020-01-29 16:52:15 UTC; 5s ago
>  Docs: https://www.varnish-cache.org/docs/4.1/
>man:varnishncsa
>   Process: 10571 ExecStart=/usr/bin/varnishncsa -c -b -a -w 
> /var/log/varnish/access-varnishncsa.log -D -P 
> /run/varnishncsa/varnishncsa-access.pid -f 
> /etc/varnish/varnishncsa_logmatic.format -q *Status < 400
>  Main PID: 10575 (varnishncsa)
> Tasks: 1
>Memory: 188.0K
>   CPU: 44ms
>CGroup: /system.slice/varnishncsa-access.service
>└─10575 /usr/bin/varnishncsa -c -b -a -w 
> /var/log/varnish/access-varnishncsa.log -D -P 
> /run/varnishncsa/varnishncsa-access.pid -f 
> /etc/varnish/varnishncsa_logmatic.format -q *Status < 400
>
> Jan 29 16:52:15 LAB-*** systemd[1]: Starting Varnish HTTP accelerator log 
> daemon...
> Jan 29 16:52:15 LAB*** ystemd[1]: Started Varnish HTTP accelerator log daemon.
>
> ● varnishncsa-error.service - Varnish HTTP accelerator log daemon
>Loaded: loaded (/lib/systemd/system/varnishncsa-error.service; disabled; 
> vendor preset: enabled)
>Active: active (running) since Wed 2020-01-29 16:52:15 UTC; 5s ago
>  Docs: https://www.varnish-cache.org/docs/4.1/
>man:varnishncsa
>   Process: 10566 ExecStart=/usr/bin/varnishncsa -c -b -a -w 
> /var/log/varnish/error-varnishncsa.log -D -P 
> /run/varnishncsa/varnishncsa-error.pid -f 
> /etc/varnish/varnishncsa_logmatic.format -q *Status > 399
>  Main PID: 10574 (varnishncsa)
> Tasks: 1
>Memory: 312.0K
>   CPU: 46ms
>CGroup: /system.slice/varnishncsa-error.service
>└─10574 /usr/bin/varnishncsa -c -b -a -w 
> /var/log/varnish/error-varnishncsa.log -D -P 
> /run/varnishncsa/varnishncsa-error.pid -f 
> /etc/varnish/varnishncsa_logmatic.format -q *Status > 399
>
> Jan 29 16:52:15 LAB-*** systemd[1]: Starting Varnish HTTP accelerator log 
> daemon...
> Jan 29 16:52:15 LAB-*** systemd[1]: Started Varnish HTTP accelerator log 
> daemon.
> ---
>
>
> Would provide feedback once moving to prod.

Please note that you can also put the query in a file with Varnish
6.3, so in theory you could do something like this:

[Unit]
Description=Varnish HTTP accelerator log daemon
Documentation=https://www.varnish-cache.org/docs/4.1/ man:varnishncsa
After=varnish.service

[Service]
Type=forking
PIDFile=/run/varnishncsa/%i.pid
RuntimeDirectory=varnishncsa
User=varnishlog
Group=varnish
ExecStart=/usr/bin/varnishncsa -c -b -a -w
/var/log/varnish/%i-varnishncsa.log -D \
-f /etc/varnish/varnishncsa_logmatic.format -Q /etc/varnish/%i.vslq
ExecReload=/bin/kill -HUP $MAINPID
PrivateDevices=true
PrivateTmp=true
ProtectHome=true
ProtectSystem=full

[Install]
WantedBy=multi-user.target

Then you can manage both services from the same unit:

systemctl start varnishncsa@access.service varnishncsa@error.service

In /etc/varnish/access.vslq you would write:

*Status < 400

I'll leave the dirty systemd details to yourself, but that's how I'd
likely proceed :)

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Running Two varnishncsa Instances using Systemd : looking for best practice

2020-01-29 Thread Dridi Boukelmoune
Hi,

On Wed, Jan 29, 2020 at 3:09 PM Yassine Aouadi
 wrote:
>
> Hello ,
>
> I Would like to Split my Varnish logs into   access log and error log file :
>
> I Know I can reach my goal   by command line using   varnishncsa :
>
> ---
> usr/bin/varnishncsa  -c -b -a -w /var/log/varnish/access-varnishncsa.log -D  
> -f /etc/varnish/varnishncsa_logmatic.format -q 'RespStatus < 400' &&  
> /usr/bin/varnishncsa  -c -b -a -w /var/log/varnish/error-varnishncsa.log -D  
> -f /etc/varnish/varnishncsa_logmatic.format -q 'RespStatus > 399'
> ---

Since you are using both -c and -b you want both client and backend
transactions but your query will only capture client transactions, use
this instead:

-q '*Status < 400'
-q '*Status > 399'

This should work with either RespStatus or BerespStatus.

> I am looking now to edit my varnishncsa unit file so i can do the same using 
> systemd :
>
> This is my actual  unit file with one Varnishncsa instance :
> ---
> [Unit]
> Description=Varnish HTTP accelerator log daemon
> Documentation=https://www.varnish-cache.org/docs/4.1/ man:varnishncsa
> After=varnish.service
>
> [Service]
> Type=forking
> PIDFile=/run/varnishncsa/varnishncsa.pid
> RuntimeDirectory=varnishncsa
> User=varnishlog
> Group=varnish
> ExecStart=/usr/bin/varnishncsa  -c -b -a -w 
> /var/log/varnish/access-varnishncsa.log -D  -f 
> /etc/varnish/varnishncsa_logmatic.format
> ExecReload=/bin/kill -HUP $MAINPID
> PrivateDevices=true
> PrivateTmp=true
> ProtectHome=true
> ProtectSystem=full
>
> [Install]
> WantedBy=multi-user.target
>
> ---
>
> What is the best practice to do so ?
> Can I use the same  one unit file  and use a one shot exec start ? Or should 
> I split unit files and run two different systemd varnishncsa services 
> (instances)?

I think the simplest is to have multiple units.


Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: How to discover manager & worker PIDs given a Varnish Cache instance name?

2019-12-18 Thread Dridi Boukelmoune
Hello Carlos,

On Wed, Dec 18, 2019 at 4:04 PM Carlos Abalde  wrote:
>
> Hi,
>
> Simple question related with Varnish Cache and monitoring. Let's assume a 
> single server running one or more Varnish Cache instances. Given the name of 
> one instance (i.e. '-n' argument), is there any reasonable way (e.g. via 
> varnishadm) to discover the PIDs of the associated manager and worker 
> processes?
>
> The goal is to find those PIDs in order to fetch detailed memory usage stats 
> (virtual, resident, shared, etc.) of the associated Varnish Cache instance 
> and then feed the monitoring agent with that info.

You can try this:

> $ cat test.vtc
> varnishtest "dude, where's my pid?"
>
> varnish v1 -vcl {backend be { .host = "${bad_backend}"; }} -start
>
> shell {
> # manager
> cat ${v1_name}/_.pid
> echo
> # child
> awk '$1 == "#" {print $2}' ${v1_name}/_.vsm_child/_.index
> }
>
> $ varnishtest -v test.vtc | grep shell_out
>  top  shell_out|2038076
>  top  shell_out|2038088

Not the best interface for the child, I must admit :)
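
Outside of varnishtest the same files live in varnishd's working
directory, so with the default instance name (the host name) it would
look something like this, keeping in mind that the prefix depends on
how varnishd was built or packaged:

cat /var/lib/varnish/$(hostname)/_.pid
awk '$1 == "#" {print $2}' /var/lib/varnish/$(hostname)/_.vsm_child/_.index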

Dridi

> Thanks,
>
> --
> Carlos Abalde
>
> ___
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Caching a resource after specific number of requests

2019-12-05 Thread Dridi Boukelmoune
On Tue, Dec 3, 2019 at 6:04 PM al sefid  wrote:
>
> Hello there!
> Is there any functionality like proxy_cache_min_uses in the Varnish cache 
> that caches a resource after specific number of requests to that resource?
> Thank you!

Varnish is not capable of doing that by itself, but as Geoff pointed
out on github you could rely on some form of throttling to emulate
this functionality, but that requires a third-party module:

https://github.com/varnishcache/varnish-cache/issues/3150#issuecomment-561307014

Be careful though, his example was a bit simplistic (I assume on
purpose to show the idea) and shouldn't be used as-is.

It might also not work if you have an object in your cache that
outlives the throttling. Maybe the cat flap could work for this case,
but here be dragons...
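
To give an idea, a throttle-based sketch (in the spirit of, but not
identical to, Geoff's example) could use vmod_vsthrottle from
varnish-modules, with arbitrary numbers and not to be used as-is:

import vsthrottle;

sub vcl_backend_response {
    if (!vsthrottle.is_denied(bereq.url, 2, 60s)) {
        # fewer than 3 fetches of this URL in the last minute,
        # don't cache the response yet
        set beresp.uncacheable = true;
        set beresp.ttl = 2m;
    }
}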

I moved the discussion to the misc mailing list because this is not
(yet?) dev material.

While this may sound like a good idea, and while it works for nginx,
it might not be a good fit for varnish. Of course, unless we have that
feature in place, it's also hard to tell whether it would benefit
your workload; I suspect it would make things worse or force us to
make other changes. In particular, making this a global configuration
could be highly detrimental if you have some variety of contents in
your cache (small, big, short- and long-lived etc). The best place to
start is to have the backend tell you precisely which responses can
be cached, and how, instead of starting from the a priori criterion
that a response shouldn't be cached unless it's asked for at least 3
times (which is a very vague requirement).

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: unset X-Varnish header to the backend server but keep it in the response to client

2019-11-08 Thread Dridi Boukelmoune
Hi,

Thank you for taking the time to reach out to this list.

On Fri, Nov 8, 2019 at 8:39 AM EC DIGIT FPFIS  wrote:
>
> Dear all,
>
> Currently, I migrate a configuration from Varnish 3 to Varnish 6 but I have 
> an issue concerning unset a header to a backend but keep it in the resp.
>
> Indeed, I cannot use it in vcl_backend_response because it's unset before 
> (vcl_pass/vcl_backend_fetch)...
>
> In the documentation 
> (https://book.varnish-software.com/4.0/chapters/VCL_Subroutines.html), I can 
> see that "if you do not wish to send the X-Varnish header to the backend 
> server, you can remove it in vcl_miss or vcl_pass. For that case, you can use 
> unset bereq.http.x-varnish;." but I cannot use bereq in vcl_miss/vcl_pass.

This is a bug in the varnish book, it lives here:

https://github.com/varnish/varnish-book

> Do you have any idea how to keep this header in vcl_backend_response but 
> without send it to backend?
>
> In Varnish 3, I used it in vcl_miss/vcl_pass and the unset bereq was set in 
> vcl_fetch.

Nowadays you would do that in vcl_backend_fetch, but the tricky part
is that you no longer have access to the client context. So instead
you need to "pollute" your bereq to find that information or use a
different tool like vmod_var or something similar.

> Vcl code:
>
> vcl 4.1;
> import std;
>
> backend dev {
>   .host = "127.0.0.1";
>   .port = "8080";
> }
>
> sub vcl_recv {
>   set req.http.App="App1";
>   set req.backend_hint = dev;
>   return (hash);
> }
>
> sub vcl_miss {
>   unset req.http.App;
> }
>
> sub vcl_pass {
>   unset req.http.App;
> }

Don't do anything in vcl_miss or vcl_pass.

> sub vcl_backend_fetch {
>   unset bereq.http.App;
> }

Here you may do something like this:

sub vcl_backend_fetch {
  if (bereq.http.App) {
var.set("app", bereq.http.App);
unset bereq.http.App;
  }
}

> sub vcl_backend_response {
>   if (bereq.http.App) {
> set beresp.http.Debug = "test";
> set beresp.ttl = 10s;
> set beresp.grace = 10s;
> return (deliver); // not applied
>   }
> }

And here, something like that:

sub vcl_backend_response {
  if (var.get("app")) {
set beresp.ttl = 10s;
set beresp.grace = 10s;
return (deliver);
  }
}

> sub vcl_deliver {
>   set res.http.App;
> }
>
> Goal:
>
> Currently: App header in unset for backend & client (unable to use it in 
> vcl_backend_response)
> Goal: App header can be used for conditions in vcl_backend_response but not 
> sent to the backend

See https://github.com/varnish/varnish-modules/blob/master/docs/vmod_var.rst

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Conditional Logging ( using varnishncsa )

2019-11-06 Thread Dridi Boukelmoune
On Thu, Nov 7, 2019 at 6:34 AM Maninder Singh  wrote:
>
> Hi,
>
> We have logging turned on using varnishncsa.
>
> /usr/bin/varnishncsa -a -w /var/log/varnish/varnishncsa.log -D -f 
> /etc/sysconfig/varnishncsa
>
> Here is what's defined in varnishncsa
>
> %{X-Forwarded-For}i %l %u %t %D \"%r\" %s %b \"%{Referer}i\" 
> \"%{User-agent}i\" \"%{Host}i\" %{Varnish:hitmiss}x
>
> However, this would log EVERY request that goes through varnish.
>
> We have a monitoring server that hits it aggressively ( and also static files 
> ).
>
> x.x.x.x - - [07/Nov/2019:00:22:53 -0600] 2080 "GET http://localhost/index.php 
> HTTP/1.0" 200 8 "-" "HTTP-Monitor/1.1" "-" miss
> x.x.x.x - - [07/Nov/2019:00:22:58 -0600] 2472 "GET http://localhost/index.php 
> HTTP/1.0" 200 8 "-" "HTTP-Monitor/1.1" "-" miss
> x.x.x.x - - [07/Nov/2019:00:22:59 -0600] 1919 "GET http://localhost/index.php 
> HTTP/1.0" 200 8 "-" "HTTP-Monitor/1.1" "-" miss
>
> Is there a way in which I can exclude these from varnish logs ?
>
> In apache I would just do
>
>  SetEnvIf Request_URI 
> "\.(jpeg|jpg|xml|png|gif|ico|js|css|swf|woff|ttf|eot\?|js?.|css?.)$" DontLog
>  SetEnvIfNoCase User-Agent "(HTTP-Monitor)" DontLog
>  CustomLog /var/www/logs/access_80_log combined env=!DontLog
>
> This would otherwise just keep filling up the logs.

Do something like this with your command line:

> varnishncsa [...] -q 'not (ReqHeader:User-Agent ~ "HTTP-Monitor" or ReqURL ~ 
> "\.(jpeg|jpg|xml|png|gif|ico|js|css|swf|woff|ttf|eot\?|js?.|css?.)$")'

See man varnishncsa, man vsl and man vsl-query.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Affected 5.x versions of VSV00004 Workspace information leak

2019-10-23 Thread Dridi Boukelmoune
On Wed, Oct 23, 2019 at 9:38 AM Erik Wasser  wrote:
>
> Hello list,
>
> sometimes I'm confused about the supported versions of Varnish. This
> resulted in the post "LTS time frame for Varnish 6.0.X?" on
> https://varnish-cache.org/lists/pipermail/varnish-dist/2019-September/000173.html.
>
> But now I'm confused about the "VSV4 Workspace information leak"
> (https://varnish-cache.org/security/VSV4.html) and the affected
> versions.
>
> "VSV4 Workspace information leak" writes:
>
>  > Versions affected
>  >
>  >   5.0 and forward
>
> So the version 5.0, 5.1 and 5.2 are affected by VSV4, is that
> correct? The page http://varnish-cache.org/releases/index.html states
> that only versions 6.X are supported. So all varnish 5.X should update
> to 6.X?! Is that conclusion correct?

Correct, and if you want some stability I recommend the 6.0 LTS branch
that will be maintained for a while, like the previous 4.1 LTS branch
that reached EOL in March 2019.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish version 6.3 vcl test fails with custom http header starting with a number

2019-10-21 Thread Dridi Boukelmoune
> before submitting a bug report on github I wanted to check if someone is 
> familiar with this issue?

Can you tell us from which version you are upgrading to 6.3?

I know that at some point header name parsing changed so that they'd
have to be proper VCL symbols, leaving no room for otherwise valid
HTTP headers that don't happen to match VCL expectations. We have no
alternate syntax for "exotic" header names, I suggested req.http[7hello],
someone else suggested req.http."7hello" but nothing happened in this
direction. So your only solution today is to use vmod_header.

https://github.com/varnishcache/varnish-cache/issues/2573

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: How to send only whitelisted http headers to backend?

2019-10-18 Thread Dridi Boukelmoune
On Thu, Oct 17, 2019 at 12:50 PM Jeff Potter
 wrote:
>
>
> Thanks, Geoff and Dridi! We’ll give this a try.
>
> And Dridi, thanks also for maintaining varnish and this list — “long time 
> lurker; very rare poster” — since I have the microphone, just wanted to send 
> a short note of appreciation.

Very appreciated too, but you are crediting me much more than I deserve ;-)

PHK, Martin and Nils are the current maintainers and someone from
Uplex is maintaining this list.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: How to send only whitelisted http headers to backend?

2019-10-16 Thread Dridi Boukelmoune
On Wed, Oct 16, 2019 at 4:08 PM Geoff Simmons  wrote:
>
> On 10/15/19 16:21, Jeff Potter wrote:
> >
> > This seems like an easy task, but I haven’t been able to figure out
> > how to do it or find any posts online. Is there a way to only send
> > certain headers to a backend?
> >
> > I.e. in our application, we know we only need X-Forwarded-For and
> > Cookie headers. I know I can unset other known headers (User-Agent, etc)
> > — but how can I unset *all* other headers?
>
> VMOD re2 has the .hdr_filter() method for the set object:
>
> https://code.uplex.de/uplex-varnish/libvmod-re2
>
> https://code.uplex.de/uplex-varnish/libvmod-re2/blob/master/README.rst#L1775
>
> VOID myset.hdr_filter(HTTP, BOOL whitelist)
>
> The HTTP parameter can be one of req, resp, bereq or beresp. If the
> whitelist parameter is true (default true), then only matching headers
> are retained. Otherwise it's a blacklist -- matching headers are removed.
>
> So for your use case:
>
> sub vcl_init {
> new whitelist = re2.set(anchor=start, case_sensitive=false);
> whitelist.add("X-Forwarded-For:");
> whitelist.add("Cookie:");
> whitelist.add("Host:");
> whitelist.compile();
> }
>
> sub vcl_backend_fetch {
> whitelist.hdr_filter(bereq);
> }

TIL, thanks!

> I took the liberty of adding the Host header to your whitelist, since
> it's required since HTTP/1.1. Even if your backends "happen" to work
> without it, I wouldn't leave it out, since it's not well-formed HTTP
> otherwise (might stop working, for example, if the backend apps are
> upgraded).

Agreed, there are other control headers that one may want to keep in
the whitelist, otherwise you may break conditional or partial requests,
and everything else I don't remember off the top of my head.


Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: How to send only whitelisted http headers to backend?

2019-10-15 Thread Dridi Boukelmoune
On Tue, Oct 15, 2019 at 2:22 PM Jeff Potter
 wrote:
>
>
> Hi All,
>
> This seems like an easy task, but I haven’t been able to figure out how to do 
> it or find any posts online. Is there a way to only send certain headers to a 
> backend?
>
> I.e. in our application, we know we only need X-Forwarded-For and Cookie 
> headers. I know I can unset other known headers (User-Agent, etc) — but how 
> can I unset *all* other headers?
>
> (We’re on VCL format 4.0.)

Hi Jeff,

This is not doable in VCL, this kind of header whitelisting could be
implemented with a VMOD but I'm not aware of any one doing that.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Migration from 4.1 vcl

2019-10-09 Thread Dridi Boukelmoune
> > Probably mostly third-party VMODs, since Varnish 6.x still supports
> > the VCL 4.0 syntax.
>
> Any specific info on migration of vmods?
> I mostly use the "bundled" vmods[1], but occasionally others like 
> libvmod-sqlite3[2].
>
> If I wanted to convert or upgrade them, do you have any hints or info on what 
> is likely to break and why?
> Can these vmods still be built in the same way with 6.0?

The problem with third-party VMODs is that we don't control them, so
if some idiot [1] decides to change a VMOD's API then VCL making use
of it will need some amount of rewriting when upgrading. You shouldn't
have a problem with bundled VMODs, otherwise we made a mistake.

When it comes to third-party VMODS an upgrade from Varnish 4.1 to 6.x
needs to be studied on a case-by-case basis. Worst case scenario the
module is not even available for the Varnish version you with to
upgrade to.

Dridi

[1] https://github.com/dridi/libvmod-querystring
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Migration from 4.1 vcl

2019-10-09 Thread Dridi Boukelmoune
On Wed, Oct 9, 2019 at 9:27 AM Miguel Gonzalez
 wrote:
>
> Hi,
>
> I am migrating from 4.1 and I am scared that with latest version varnish 
> would break.
>
> What should I take into account when migrating from 4.1?

Probably mostly third-party VMODs, since Varnish 6.x still supports
the VCL 4.0 syntax. There may be other non-VCL breaking changes and I
recommend you go through the release notes from 5.0 to 6.x, whichever
you choose (latest is 6.3 and LTS is 6.0).

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish and failed upstream

2019-07-10 Thread Dridi Boukelmoune
On Tue, Jul 9, 2019 at 8:12 AM Nikolay Bogdanov
 wrote:
>
> Hello. I have one case and I can not find good solution for it.
> In varnish3 req.backend was saved during restarts, so I can compare old 
> backend property and set other backend after first restart.
> But in varnish 5 and newer req.backend_hint in vcl_recv is default always. I 
> can not use retry in vcl_backend_response, because this function does not get 
> called.
> How can I fix it?

What you are looking for is probably a circuit breaker, you can
implement one with vmod-saintmod:

https://github.com/varnish/varnish-modules/blob/master/docs/vmod_saintmode.rst#vmod_saintmode

However instead of using it with req, it's probably more efficient to
work on the backend side and retry from vcl_backend_fetch or
vcl_backend_error.
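
A rough equivalent of the Varnish 3 restart trick, assuming a
hypothetical fallback backend called other, could be:

sub vcl_backend_fetch {
    if (bereq.retries > 0) {
        set bereq.backend = other;
    }
}

sub vcl_backend_response {
    if (beresp.status >= 500 && bereq.retries == 0) {
        return (retry);
    }
}

sub vcl_backend_error {
    if (bereq.retries == 0) {
        return (retry);
    }
}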

Best,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish hit + pass + miss reaches less than 50% of all reqs

2019-04-18 Thread Dridi Boukelmoune
On Wed, Apr 17, 2019 at 6:23 PM Guillaume Quintard
 wrote:
>
> Hi there,
>
> So:
>
> MAIN.client_req 290152364 (aaall the requests)
>
> vs
>
> MAIN.cache_hit   7433491
> MAIN.cache_hit_grace 36319 (exclude these as they are already 
> accounted for in MAIN.cache_hit)
> MAIN.cache_hitpass   16003020 (exclude these as they are already 
> accounted for in MAIN.s_pass)
> MAIN.cache_miss  89526521
> MAIN.s_synth   11418599
> MAIN.s_pipe 216
> MAIN.s_pass   181773529
>
> the difference is now 8 requests, which is fairly reasonable (some requests 
> may be in flight, and threads don't necessarily push their stats after every 
> request)

Well, you can also return(synth) from almost anywhere, including after
a lookup where we bump one of the outcomes. This can create a bit of
double-accounting from the point of view of "summing the rest".

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish removing tags incorrectly fromURL

2019-03-29 Thread Dridi Boukelmoune
On Fri, Mar 29, 2019 at 8:18 AM  wrote:
>
> Hi,
>
>   It was just my mistake, you must download the tar.gz version from releases 
> tab : https://github.com/Dridi/libvmod-querystring/releases

I would have been surprised if a 1.x release didn't work for you :)

Thanks for confirming and don't forget to move to 6.0 LTS to get bug
fixes regularly.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: Varnish removing tags incorrectly fromURL

2019-03-27 Thread Dridi Boukelmoune
On Wed, Mar 27, 2019 at 2:33 PM  wrote:
>
> Hello,
>
>
>
>I would highly appreciate if I get some help on the following issu:

If you need to filter out or extract parameters from a query-string I
recommend this:

https://github.com/Dridi/libvmod-querystring/#vmod-querystring

If you are running Varnish below 6.0 I encourage you to upgrade but
meanwhile you also have this:

https://github.com/Dridi/libvmod-querystring/tree/v1.0.6#vmod-querystring

It should be much easier than using regular expressions, at the
expense of having to manage a vmod.
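
To give an idea of the interface, something along these lines, but
check the README for the authoritative details:

import querystring;

sub vcl_init {
    new qf = querystring.filter(sort = true);
    qf.add_string("gclid");
    qf.add_glob("utm_*");
}

sub vcl_recv {
    # strip the noise before the cache lookup
    set req.url = qf.apply(req.url);
}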


Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: reg: not getting ReqAcct tag in varnishlog

2019-03-20 Thread Dridi Boukelmoune
On Wed, Mar 20, 2019 at 10:40 AM Hardik  wrote:
>
> Hi Dridi,
>
> Do you need all timestamps or a specific metric?
> Regarding timestamp, want to read two tags,
> Timestamp  Start: 1516269224.184112 0.00 0.00
> Timestamp  Resp: 1516269224.184920 0.000808 0.87
>
> Do you need the BereqAcct records for all transactions? Including cache hits?
> Sorry it is my mistake. I am not reading any of the back-end records. So can 
> ignore BereqAcct.
> I need fields from Req records only.

Ok, in this case you can probably get away with just varnishncsa to
collect all you need.

No grouping (the default -g vxid), client mode (-c) only, with a
custom -F format to grab only what you need.

This should help reduce the churn below the point where you start
losing data.

If you struggle with this, I can help you later with that, but start
by reading the following manuals:

- varnishncsa
- vsl
- vsl-query

For example, the format for the timestamps you wish to collect would
look like this:

> %{VSL:Timestamp:Start[1]}x %{VSL:Timestamp:Resp[2]}x %{VSL:Timestamp:Resp[3]}x

Rinse and repeat for all the things you need to capture for the logs,
put them in the order you prefer and off you go. No need to write your
own utility.
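
Untested, but the complete command line could end up in this
vicinity:

varnishncsa -c -D -w /var/log/varnish/billing.log \
    -F '%{VSL:Timestamp:Start[1]}x %{VSL:Timestamp:Resp[2]}x %{VSL:Timestamp:Resp[3]}x "%r" %s %{VSL:ReqAcct[6]}x'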

> What does FD mean here? File descriptor? From ReqStart?
> Yes, it's the file descriptor. And yes, reading from ReqStart till ReqAcct. Using 
> switch case to read needed records.

If you already work with VXIDs, the FD becomes redundant.



Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: reg: not getting ReqAcct tag in varnishlog

2019-03-19 Thread Dridi Boukelmoune
Hi,

Thank you for the details, more questions to come.

On Tue, Mar 19, 2019 at 1:42 AM Hardik  wrote:
>
> Hi Dridi,
>
> Can you give me a list of log records you need to collect?
>
> SLT_Timestamp :

Do you need all timestamps or a specific metric?

> SLT_ReqStart :
> SLT_ReqMethod :
> SLT_ReqURL:
> SLT_ReqProtocol :
> SLT_RespStatus :
> SLT_ReqHeader :
> SLT_RespHeader :
> SLT_ReqAcct :
> SLT_BereqAcct :

Do you need the BereqAcct records for all transactions? Including cache hits?

This one is tricky in terms of billing.

> SLT_VCL_Log :
>
> And
> possibly how you are trying to group them if they come from different
> transactions?

You can do the grouping with the -g option, but that didn't go well
for you so that's what I'm trying to figure out.

> I am reading based xid ( by FD ). Means reading full records per fd.

What does FD mean here? File descriptor? From ReqStart?

> Please let me know if any other information I can provide..
>
>
> If this is not related to my problem still I am curious to know how grouping 
> is happening. You can point out some code or links with some details, I will 
> go through.

Well, utilities like varnishlog or varnishncsa accumulate transactions
via libvarnishapi in memory (which may take a long time) and then
libvarnishapi presents them in order. So utilities don't implement
this logic themselves and simply get the data presented to them in a
callback function.

That's where timeouts, overruns or transaction limits may result in
data loss since slow log consumers don't slow down Varnish, and
Varnish isn't slowed down by logs more than writing them to memory
requires.



Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: reg: not getting ReqAcct tag in varnishlog

2019-03-18 Thread Dridi Boukelmoune
On Mon, Mar 18, 2019 at 5:53 PM Hardik  wrote:
>
> Hi Dridi,
>
> I am reading a few tags for billing purposes. I have added a VMOD for this. In 
> the vmod I was passing the "-g session" option to the varnish callback 
> function. But I found out that I am not getting the ReqAcct tag. I am also 
> missing lots of the logs themselves.
>
> After with discussion with you guys it seems problem is due to "-g session" 
> option. So I removed that and tested again and looks better then before. 
> Reduced frequency of missing ReqAcct and Log loss decreased a lot.
>
> Now, my doubt, Is it better not to use any option and keep default setting, 
> or to use -c option ? So I can minimize log loss as many as I can.
>
> Here is a small excerpt from the vmod which is reading the shared memory (it 
> was a lot easier in varnish 3 because I was able to call the dispatch function 
> directly),
> vut =  VUT_Init(argv[0], 1, argv, );
> vut->dispatch_f = VarnishLog::handler;
> vut->dispatch_priv = this;
> vut->g_arg = 3;  <  I commented out this now ( -g session )
> vut->sighup = 1;
> vut->sighup_f = VarnishLog::sighup;
> VUT_Setup(vut);
> VUT_Main(vut);
> VUT_Fini();

So you wrote your own log utility in C++? Wow!

> I can not change above whole setup but can modify few things in that.
>
> Now, if you can answer the previous questions it will be really helpful. 
> Particularly how the -g session option is creating the problem?

Sorry but this is not what I was looking for. [1]

Can you give me a list of log records you need to collect? And
possibly how you are trying to group them if they come from different
transactions? There's a lot we can do without building a new utility.

Dridi

[1] http://xyproblem.info/
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: reg: not getting ReqAcct tag in varnishlog

2019-03-18 Thread Dridi Boukelmoune
On Sun, Mar 17, 2019 at 1:12 PM Hardik  wrote:
>
> Thanks a lot Dridi & Team for details..

Before I can answer your questions, can you explain exactly what you
are trying to do?

We could probably give you better advice if we knew what you need to collect.

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: reg: not getting ReqAcct tag in varnishlog

2019-03-13 Thread Dridi Boukelmoune
On Wed, Mar 13, 2019 at 3:56 PM Hardik  wrote:
>
> Hi Dridi,
>
> We should be able to recreate it with load and mobile requests. I have not tried 
> with 6.0.3.

I guess there's no need for that. Your varnishlog setup is stretched
too thin to cope with your load.

I was completely oblivious to the problems slink spotted right away...

> @Nils,
> We are seeing the issue with both varnishlog and varnishlog with the -g 
> option. But the problem here is, the shared memory itself does not have the 
> ReqAcct tag I think (please correct me if I am wrong), because all the clients 
> reading the shm are getting the same thing... means no ReqAcct. But yes, I 
> agree that the impact with "varnishlog -g session" is bigger.
>
> So if the shared memory itself has no ReqAcct tag then all clients will also 
> not get it, right? How to fix this problem? Please help with some details 
> which I can understand because we are losing bills for traffic which we are 
> serving...!

You could try avoiding grouping, using varnishncsa with a custom
format, and overall storing and processing less data in memory.

> Normal varnish command we use to grep running logs
> varnishlog -g request -q "ReqURL ~ '/abc/xyz'"
>
> command used to read the shared memory directly for billing
> varnishlog -g session
> --> we are already planning to use "varnishlog -g vxid" for the billing api, 
> because what I understood is, the -g session option is taking more time to 
> arrange records in a particular order and deliver the final output. Please 
> help with some more detail... It will be really helpful.

There are few practical uses of -g session for live traffic. This
works better on offline logs for example when examining traffic
post-mortem.

If only ReqAcct is important, and the rest of the information you need
is available on the request side, you should definitely stop using the
-g option, and use the -c option to further limit internal processing.

If you need information from the backend transactions too, try to
figure out how you could make this information available on the client
side.
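
For example, anything you set on beresp ends up in the client
transaction's RespHeader records:

sub vcl_backend_response {
    # surface the backend name on the client side of the logs
    set beresp.http.x-backend = beresp.backend.name;
}

You can then unset it in vcl_deliver if clients shouldn't see it.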

Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: ESI slowing down page loading

2019-03-12 Thread Dridi Boukelmoune
Hi Pinakee,

On Tue, Mar 12, 2019 at 12:41 PM Pinakee BIswas  wrote:
>
> Hi Dridi,
>
> Thanks for your response and the details.
>
> I did get the info on PESI from someone from the varnish plus team.
>
> Would certainly look into it but being a young company, would have also
> factor in the cost considering our web traffic.

That's perfectly understandable. You can also try Varnish Plus cloud
images to give a try to pESI without a significant upfront investment,
and see at the very least whether that brings an improvement to your
case. I think it's available for the 4.1 series too.

By the way, I encourage you to update to the latest 4.1 release
(4.1.11 if my memory serves well) and plan an upgrade to 6.0 LTS
because 4.1 is going EOL soon for Varnish Cache (in case you missed
the announcement).

Cheers,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc


Re: reg: not getting ReqAcct tag in varnishlog

2019-03-12 Thread Dridi Boukelmoune
On Tue, Mar 12, 2019 at 11:54 AM Hardik  wrote:
>
> Hi Dridi,
>
> Varnish version - 6.0.1
> OS - centos 7
>
> I checked out vanish 6.0.1 source code, built rpm and installed.
>
> Please let me know if more information is required.

Do you have a reliable way to reproduce this?

If so, can you describe it and does it still happen with 6.0.3?

Thanks,
Dridi
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc

