Hello Hardik,
> Is any change required regarding this? Can we change any tunable parameter?
> Can any VCL flow change help?
Can you please tell us more about which version of Varnish you are using?
$ varnishd -V
Please also specify which OS you are running Varnish on, and how you
Hello Pinakee,
On Tue, Mar 5, 2019 at 8:16 AM Pinakee BIswas wrote:
>
> Hi,
>
> We are using varnish 4.1.8 and Django 1.11 for our application.
>
> We are using ESI to break down caching on a page. Parts of pages which
> are dynamically loaded are fetched from the backend server using ESI but
>
On Tue, Feb 19, 2019 at 9:03 AM Hu Bert wrote:
>
> Good morning,
> I think we solved the problem: we ran into a systemd limit (4915 tasks):
>
> https://github.com/varnishcache/varnish-cache/issues/2822
>
Hello,
On Tue, Dec 25, 2018 at 6:23 PM wrote:
>
> Hello,
>
> I use Varnish 6.1 and I'm using the vcl.load technique to load custom
> VCL based on the domain name.
> Basically, I follow what is explained on this article :
> https://info.varnish-software.com/blog/one-vcl-per-domain
>
> I have a
On Fri, Sep 28, 2018 at 1:45 PM Tommy Becker wrote:
>
> Thanks! One last thing I will point out, is that in our case, 100-continue
> doesn't help since we're responding due to a problem with the body itself. My
> understanding of 100-continue is that it essentially okays the sender to
>
On Fri, Sep 28, 2018 at 3:08 AM Tommy Becker wrote:
>
> Hi Dridi,
> Thanks for the response. I’m curious what specifically you believe to be in
> violation of the spec here. There’s a lot of ambiguity to be had by my read,
> but the option to send responses at any point seems pretty clear. From
Hello,
On Wed, Sep 26, 2018 at 2:44 AM Tommy Becker wrote:
>
> We have an application that we front with Varnish 4.0.5. Recently, after an
> application upgrade in which we migrated from Jetty 9.2 to 9.4, we began
> noticing a lot of 503s being returned from Varnish on POST requests. We have
> What would be too many objects linked to a single key? To be honest, I don't
> know. For an answer we'll need to invoke the VMOD authors :)
Keys are just tags, so you could really tag responses in many ways.
For example, on a server delivering contents for multiple hosts, you
could use the
Found this unread thread in my inbox, I will reply here for completeness.
> Any idea about what could have happened?
Reported on GitHub and fixed in 6.0.1 and 6.1.0; it should also be fixed as
part of the next 4.1 release:
https://github.com/varnishcache/varnish-cache/issues/2681
Thanks for reporting!
On Fri, Sep 21, 2018 at 1:50 PM Winkelmann, Thomas (RADIO TELE FFH -
Online) wrote:
>
> I again did some testing by settings the params to
>
> -p thread_pools=4
> -p thread_pool_min=2000
> -p thread_pool_max=5000
> -p thread_pool_reserve=95
Please note that min/max/reserve is per pool: with thread_pools=4 and
thread_pool_min=2000 you start with 4 * 2000 = 8000 threads.
> The
On Thu, Sep 20, 2018 at 3:51 PM Winkelmann, Thomas (RADIO TELE FFH -
Online) wrote:
>
> Just tested these values. Varnish is nearly dead after restarting with these
> new values. Only a few requests will be served. I think, we have to wait :)
I guess now would be a good time to bring back
On Thu, Sep 20, 2018 at 4:14 PM Junaid Mukhtar wrote:
>
> Hi
>
> we are in the middle of upgrading from varnish 3.0.7 to varnish 4.1.10; but
> unfortunately all of the response times in the performance test are
> indicating an increase of at least 100%
>
> We have analyzed the logs and everything
On Thu, Sep 20, 2018 at 11:39 AM Winkelmann, Thomas (RADIO TELE FFH -
Online) wrote:
>
> Thanks again! We already use
>
> -p thread_pools=2
> -p thread_pool_min=200
> -p thread_pool_max=5000
>
> I think this results already in a high number of threads.
>
> So it's probably the best to wait until
On Thu, Sep 20, 2018 at 10:22 AM Winkelmann, Thomas (RADIO TELE FFH -
Online) wrote:
>
> Thanks for the feedback. We tried 6.0.1. So far H/2 seems to be working fine
> under normal conditions. But as soon as the requests increase (> 800
> req/s) the traffic stops. The same happens with HTTP/1.1.
On Wed, Sep 19, 2018 at 2:32 PM Winkelmann, Thomas (RADIO TELE FFH -
Online) wrote:
>
> Hello,
>
> just wanted to test Varnish 6.1 regarding H2 support, but I’m unable to
> compile the vmods on top of it.
>
> Is there a compatible version varnish-modules already available for 6.1?
We haven't
> Yes, but I'm sorry I'm unable to provide code. Is there any way to use
> something like a loop or recursion? In that case i would be able to
> modify a single set-cookie header like we need it.
A "foreach" construct would indeed help for the case of multiple
headers and it has been discussed in
On Tue, Aug 7, 2018 at 3:27 PM, Stefan Priebe - Profihost AG
wrote:
> Hello,
>
> our varnish VCL is broken after upgrading from 4.1 to 5.0, but I can't
> find any hint in the documentation, nor any solution.
>
> The old varnish conf uses std.collect to concatenate all Set-Cookie headers
>
> Any help in getting the libvmod_saintmode.so file would be really helpful.
It's part of varnish-modules now:
https://github.com/varnish/varnish-modules
You can find the manual here:
https://github.com/varnish/varnish-modules/blob/master/docs/vmod_saintmode.rst#vmod_saintmode
Cheers
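For reference, a minimal saintmode setup looks roughly like the following sketch; the backend definition, threshold, and blacklist duration are all assumptions, so check the linked manual for the exact API:

```vcl
import saintmode;

backend be { .host = "127.0.0.1"; .port = "8080"; }

sub vcl_init {
    # Consider the backend sick once 10 objects are blacklisted
    # (the threshold value is an assumption)
    new sm = saintmode.saintmode(be, 10);
}

sub vcl_backend_fetch {
    # Route fetches through the saintmode wrapper
    set bereq.backend = sm.backend();
}

sub vcl_backend_response {
    # Avoid this backend for 30s after a server error (duration is an assumption)
    if (beresp.status >= 500) {
        saintmode.blacklist(30s);
    }
}
```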
On Wed, Aug 1, 2018 at 1:05 PM, FULLER, David wrote:
> That’s great, thanks. Does the first VCL listed (labelled boot in column 4
> – ‘available cold/cold 0 boot’) need to stay loaded and is there
> a recommended amount of VCLs to keep loaded?
There's nothing special about "boot",
On Mon, Jul 30, 2018 at 2:50 PM, FULLER, David wrote:
> Thanks Dridi,
>
> After further investigation I found that we do have a python/cron job running
> that checks for backend changes and if so does a vcl.load. This has
> resulted in a growing number of VCLs being loaded, as you suspected.
On Thu, Jul 19, 2018 at 10:32 AM, Igor Zivkovic wrote:
> Hello,
>
> Is there a way to escape double quotes in probe requests? I need to send a
> JSON request and I've tried %22 but checking the packets with tcpdump it
> seems Varnish doesn't convert escapes back to double quotes. For example:
>
>
On Fri, Jul 27, 2018 at 4:41 PM, FULLER, David wrote:
> Thanks, we have 1 active VCL and around 35 available (of these 3 are
> auto/warm and the remainder cold/cold). The containers running Varnish had
> to be rebuilt a couple of hours ago, so I'll check varnishadm again on Monday
> to see
On Fri, Jul 27, 2018 at 3:04 PM, FULLER, David wrote:
> Hi Dridi,
>
> Thank you for your response, we do not have any cron jobs or schedules set
> up. Is there a way to check the number of VCLs currently loaded?
Something like varnishadm vcl.list | wc -l
On Thu, Jul 26, 2018 at 5:05 PM, FULLER, David wrote:
> Thanks for your reply.
>
> Am I correct in thinking that even though we’re not using Varnish for
> caching, objects are still stored using malloc? We have malloc set at
> 100MB, with 1GB allocated to the Varnish container. Based on the
Hello Thomas,
On Wed, Jul 4, 2018 at 10:43 AM, Winkelmann, Thomas (RADIO TELE FFH -
Online) wrote:
> Hello everybody,
>
> finally we got Varnish 6.0 + Vmods + Hitch TLS running on Ubuntu. So far
> everything works fine, also HTTP/2 Support.
>
>
> We had some similar problems in the past with
On Thu, May 17, 2018 at 6:03 PM, Prem Kumar wrote:
> Hit 527698198 -0.990171 0.00 0.00  << why is it a hit when the TTL is a
> negative value and grace is 0?
> - VCL_call HIT
>
> We set to zero.
Are you actively setting it to zero? The default is
On Fri, May 11, 2018 at 4:38 PM, Sergio Rus wrote:
> Varnish 3 is quite old, I know. But it's still supported by Ubuntu 14 LTS.
> That's why I'm still using it. I will move to a recent version soon.
I don't think Ubuntu supports Varnish in any way. In my book that
would
On Fri, May 11, 2018 at 3:58 PM, Sergio Rus wrote:
> I would like to validate my theory over a weird issue I had with Varnish 3.
It reached EOL a long while ago; we no longer support Varnish 3...
> So we have an endpoint on the server using Digest Authentication and
> I found some alternatives, though generally they feel uglier.
> Any ideas?
This has been discussed in the past, including the possibility to
return a value, but as of today you can't break out of a subroutine.
https://github.com/varnishcache/varnish-cache/wiki/VIP2%3A-VCL-typed-functions
With
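Since you can't break out of a subroutine, a common workaround is to set a marker header in the custom subroutine and test it from the caller. A sketch, with made-up header names and a deliberately simplistic check:

```vcl
sub check_digest_auth {
    # Hypothetical check; a real one would validate the Digest credentials
    if (!req.http.Authorization) {
        set req.http.X-Auth-Failed = "1";
    }
}

sub vcl_recv {
    call check_digest_auth;
    if (req.http.X-Auth-Failed) {
        # Clean up the internal marker before responding
        unset req.http.X-Auth-Failed;
        return (synth(401, "Unauthorized"));
    }
}
```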
> IIRC, on Linux, malloc always returns with an allocation, or OOM-kills you
That's the default behavior but I believe you can also change that.
___
varnish-misc mailing list
varnish-misc@varnish-cache.org
> What are the dependencies for successfully building the vmods on Ubuntu
> 17.10? Is it only the "varnish-dev" package out of the packagecloud
> repository? Or do I need anything else?
I'm more a Fedora person myself so I may get the package names wrong,
but you need pkg-config, python >= 2.7
On Wed, Apr 4, 2018 at 4:29 PM, Winkelmann, Thomas (RADIO TELE FFH -
Online) wrote:
> Hello,
>
> I want to test http/2 of Varnish 6.0 on Ubuntu. So far I managed to install
> the varnish packages from package cloud as described here:
>
>
On Fri, Mar 30, 2018 at 5:11 PM, AWA SOLUTIONS wrote:
>
> Hi there,
>
> While reading the docs, I found this in
> https://www.varnish-software.com/wiki/content/tutorials/varnish/sample_vclTemplate.html
> :
>
> HANDLING HTTP PURGE
>
> sub vcl_purge {
> # Only handle
On Tue, Feb 20, 2018 at 3:07 PM, Goswami, Naveen, Springer SBM DE
wrote:
> Dear Varnish-Team,
>
> First of all I would like to congratulate you for such an awesome product.
>
> We wanted to send traffic directly to go-routers, which require us to send
> traffic to
On Fri, Feb 16, 2018 at 11:21 PM, Miguel González
wrote:
> On 02/16/18 10:21 PM, Miguel González wrote:
>> Hi,
>>
>> I have changed an URL from:
>>
>> https://www.mysite.com/word1-word2/
>>
>> https://www.mysite.com/word3-word2/
>>
>> and changed my .htaccess
On Fri, Feb 16, 2018 at 10:56 AM, kokoniimasu wrote:
> Hi, Dridi.
>
> Thank you for your comment!
> I will create a pull request.
You may want to name the counter n_lru_limited or something similar to
be consistent with related counters.
Cheers
On Fri, Feb 16, 2018 at 7:42 AM, kokoniimasu wrote:
> Hi,
>
> I got a strange error the other day.
> The status code was 200, but the body was incomplete.
> This error occurred because nuke_limit was reached. (It does not return a
> 503, because beresp.do_stream=true is the default.)
> I think
On Mon, Feb 5, 2018 at 8:29 AM, Prem Kumar wrote:
> Just want to understand the behavior of varnish if range request while
> streaming of entire object in progress. lets say client1 fetches complete
> object of size of 300 bytes (do_stream == true) and client 2 is
On Thu, Feb 1, 2018 at 9:46 PM, Arvind Narayanan wrote:
> If busy, is the variable busy_found set to 0 or 1? For expired objects, it
> looks like if it is not busy, then you set the return value to HSH_EXPBUSY,
> shouldn't it be HSH_EXP? Can anyone help me understand this
> I mean I have my own VPS running Varnish on a dedicated server I own :)
> Were you meaning that someone could get information on cloud instances
> where Varnish is run for several cloud instances? I am not quite
> grasping what you mean by "neighbor's VM".
If my understanding is correct, you
On Mon, Jan 29, 2018 at 6:53 PM, Miguel González
wrote:
>
>> There are no plans to open source Varnish Total Encryption, and using
>> HTTPS by the means of a proxy on the same server as Varnish won't help
>> either. To mitigate Meltdown and Spectre, you need an updated
On Fri, Jan 26, 2018 at 6:15 PM, Nils Goroll wrote:
> Dridi, I completely agree with your response, except for one thing: IMHO, bans
> are in no way less real time than purges: While the ban *lurker* processing is
> deferred, actual ban checks at lookup time happen as
On Sat, Jan 27, 2018 at 8:37 PM, Miguel González
wrote:
> Dear all,
>
> I recently received an invitation to a webinar from Varnish about
> cache encryption in Varnish Total Encryption.
>
> I am concerned about how Varnish Cache is going to deal with this. Any
>
On Fri, Jan 26, 2018 at 4:12 PM, Alexandros Kechagias
wrote:
> Hi there,
> thanks for the replies. I see, I didn't give you enough details for you to
> be able to help me. Sorry for that, I had a little bit of tunnel vision and
> also the title is not optimal.
>
>
On Tue, Jan 23, 2018 at 5:12 PM, Alexandros Kechagias
wrote:
> Hi Experts,
> I am looking for a way to invalidate a cache based on a set of different
> "keys" that can be combined with each other with operators like "AND", "OR"
> or "NOT".
>
> So I am looking for
Hello Håvard, Luca,
On Fri, Jan 12, 2018 at 10:13 AM, Håvard Alsaker Futsæter
wrote:
> Hi! I have a problem with backend probes not being sent, that seems very
> similar to what Luca Gervasi has reported (see below).
> Luca: Did you ever figure out a fix?
Guillaume brought this
On Wed, Nov 29, 2017 at 12:35 AM, Hugues Alary wrote:
> Just realized this might be better as a bug report, I'll submit one if
> needed.
>
> Also, I just had another panic:
Hi,
You should sanitize the panic output to not disclose user cookies
publicly! Replace the value
>> I think you're getting a retry from varnishd, independent from VCL's
>> return(retry) which only happens on GET/HEAD requests that are
>> supposed to be side-effect-free and safely retry-able. On top of that
>> you might be hitting a bug where Varnish retries more than once.
>> Seeing some log
On Wed, Nov 22, 2017 at 6:49 AM, Prem Kumar wrote:
> Hi All,
>
>
> I'm using default headers specific to backend type in 3.1. For example
> backend 1 ,backend2 where default header for backend1 will be "xyz" and
> backend2 will be "abc".
> Now I don't find a way to add
On Sat, Nov 11, 2017 at 9:48 PM, wrote:
> I was observing a behavior of a PHP script on one backend server that got
> executed/requested
> twice (but not by the client/browser).
>
> The plain script just processes data and only outputs a response once it's
> done. The
On Tue, Nov 7, 2017 at 8:09 AM, Guillaume Quintard
wrote:
> Hello Jason,
>
> Just an idea, before patching: log obj.hits and obj.age to get an idea of
> what you are delivering.
Indeed, if you keep track of the relevant bits (like varnishncsa only keeps
track of
> Of the objects that are being nuked/evicted, were they close to their
> natural expiration time anyway? Were they ever used to serve a cache
> hit?
>
> Of the objects that have been nuked, are they requested shortly after,
> suggesting it may have been worth having a larger cache in order to
>
On Thu, Oct 12, 2017 at 2:31 PM, Mattias Geniar wrote:
>> You can indeed do it in pure VCL, but for long URLs it also means a
>> lot more workspace consumption.
>
> Oh absolutely, long-term vmod’s are the way to go, but depending
> on the server setup, those can be cumbersome
On Thu, Oct 12, 2017 at 2:10 PM, Mattias Geniar wrote:
>> Can I 'ignore' query string variables before pulling matching objects from
>> the cache, but not actually remove them from the URL to the end-user?
>
> The quickest ‘hack’ is to strip those parameters from the req.url,
On Thu, Oct 12, 2017 at 1:56 PM, Pinakee BIswas wrote:
> Hi,
>
> We are using varnish 4.1.2 for our website caching. We use a bunch of standard
> query parameters (like utm*) to track the channels for our website visits -
> this is quite standard in the web world.
You should
On Wed, Oct 11, 2017 at 11:59 PM, Cosimo Streppone wrote:
[...]
> This is a great step forward for vmod (and vut) development. Well done,
> Dridi!
Thanks!
> Admittedly, I always found the whole forking and modifying vmod-example
> too messy, so much that I tried to build a
Dear Varnish community,
This is rather for developers, so if you are only using Varnish it
will probably not be very interesting...
So, developers then: today I'm sharing a very recent project to get
Varnish projects started differently than how projects usually start.
Instead of forking
On Sun, Sep 24, 2017 at 6:14 AM, Mark Staudinger
wrote:
> Hi Folks,
>
> I'm evaluating the xkey module for use in my workflow and while I have
> tested successfully the basic functionality I am left with one question.
>
> Is there any way to provide criteria for matching
>> Did you try playing videos using HTTPS without enabling h2? As of 5.2.0 it
>> is still experimental.
>
>
> Yes, that is what I have deployed now. Without h2 enabled it works as expected
> (hitch; alpn-protos = "http/1.1"). Doing so I lose a bit of site performance
> but
> the functional
>>> Dridi, did you have time to assess this?
>>
>> No, we kind of have a major release this week, I don't have time for
>> much. But if I had to bet I'd blame the h2 feature.
>
>
> Thanks for the answer. Looking forward for the next release :-)
Hi Leon,
Did you try playing videos using HTTPS
On Wed, Sep 13, 2017 at 4:54 PM, wrote:
> Am 08.09.2017 um 18:54 schrieb info+varn...@shee.org:
>> Further investigation shows that the delivery via plain HTTP through
>> varnish works fine. So I then concentrated on
>> the TLS termination. Downgrading the protocol
On Mon, Sep 11, 2017 at 8:34 PM, Stefan Kmec wrote:
> Hello all,
>
>
> I am writing you regarding the issue I'm experiencing after one of the latest
> Red Hat releases.
>
> Regarding the configuration file "/usr/lib/systemd/system/varnish.service"
> varnish starts after network
On Mon, Sep 11, 2017 at 10:52 AM, Eyal Marantenboim
wrote:
> Hi,
>
> We are trying to have varnish respond to all OPTIONS method requests. These
> requests should not go to any backend. They should return the CORS headers
> with an empty body, always (cors headers
On Fri, Sep 8, 2017 at 2:55 PM, wrote:
> Thanks to pointing to this direction. Looking into it I see following
> (trimmed):
>
> Request 1
> ReqMethod GET
> ReqURL /test.mp4
> ReqHeader range: bytes=0-1
>
> VCL_call HIT
> RespStatus 200
>
On Thu, Sep 7, 2017 at 5:10 PM, wrote:
> Safari loads the file (progress bar shows up) and autoplays it directly:
> Connecting through varnish with return(pass):
> Safari starts loading the file, the progress bar does not show any progress,
> and the video does not get
On Mon, Sep 4, 2017 at 2:12 PM, Olivier Hanesse
wrote:
> Are the stats used by varnishstat about the lurker really updated every
> minute? The fact that the statistics were only updated once is kind of
> strange: the ban list size is higher than the cutoff value
On Wed, Aug 30, 2017 at 6:28 PM, wrote:
>
> Thanks, Guillaume and Dridi — I’m not seeing a way to limit concurrent
> requests however. We’re using vsthrottle to limit the total number of
> requests to a domain — i.e. “100 requests in 10 seconds; 600 requests in 5
>
On Wed, Aug 30, 2017 at 9:24 AM, wrote:
>
> Hi,
>
> Is there a way to limit the number of concurrent requests sent to a backend
> based on the requested domain name? Or more broadly, based on some key/value?
Have a look at the vsthrottle VMOD, it's usually used
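A per-domain sketch with vsthrottle might look like the following; the key prefix and the 100-requests-per-10-seconds limit are assumptions, not recommendations:

```vcl
import vsthrottle;

sub vcl_recv {
    # Deny when a given Host exceeds 100 requests in a 10-second window
    # (both numbers are assumptions to illustrate the call)
    if (vsthrottle.is_denied("host:" + req.http.Host, 100, 10s)) {
        return (synth(429, "Too Many Requests"));
    }
}
```

Note that vsthrottle counts requests rather than in-flight concurrency, which matches the original caveat in this thread.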
> I've already tried to change (increase/decrease) both "ban_lurker_batch" and
> "ban_lurker_sleep" parameters. Same things.
>
> I don't know how I can get more lurker logs to debug this.
> Is there a way to make it more aggressive ?
>
> Any ideas ?
Did you look at the ban_cutoff parameter?
I
On Mon, Aug 28, 2017 at 4:39 PM, John Cherouvim wrote:
> Yes, that worked. I am sorry for the confusion but, yes, it seems that my
> modified logrotate configuration broke the log:
>
>> /var/log/varnish/varnish.log {
>> compress
>> compresscmd /bin/gzip
>> uncompresscmd
On Mon, Aug 28, 2017 at 3:05 PM, John Cherouvim wrote:
>> 12 00 00 4c 63 b6 4e 40 72 65 71 20 35 31 35 38  |...Lc.N@req 5158|
This is weird, you are missing 4 magic bytes at the beginning of the
file. It could be that the logrotate integration is fundamentally
On Mon, Aug 28, 2017 at 2:18 PM, John Cherouvim wrote:
>> What is the output of `file some_extracted_file`?
>
> It's "data".
Ouch, what is the output of `hexdump -C some_extracted_file | head -1` then?
Dridi
On Mon, Aug 28, 2017 at 9:21 AM, John Cherouvim wrote:
> In /var/log/varnish/ I have some old varnish logs, produced by varnishlog
> which, due to logrotate, are now in gz format:
>
>> -rw-r--r-- 1 varnishlog varnish 143068514 Aug 10 23:59
>> varnish.log.2017-08-10.gz
>>
> I thought of a two-pattern solution.
>
> - Add vcl source index in vcl_trace.
> https://github.com/varnishcache/varnish-cache/compare/master...xcir:patch/modify_vcl_trace?expand=1
>
>    32770 VCL_trace      c  1 2.3.3
>    32770 VCL_trace      c  2 2.4.5
>
> $ sudo varnishadm backend.list
> PONG 1503057040 1.0
Hello,
This is a known bug, fixed a while ago:
https://github.com/varnishcache/varnish-cache/issues/2010
https://github.com/varnishcache/varnish-cache/pull/2019
You should upgrade to 4.1.8, your version has a major vulnerability
and known
On Wed, Aug 16, 2017 at 12:13 PM, Miguel González
wrote:
> I have found out how to make the Varnish HTTP purge plugin work with
> Varnish 4.x in this article (funny, it's on the Varnish blog)
Thanks for letting us know, and sorry for not replying earlier!
Dridi
> The Wordpress plugin is supposed to clean the whole cache for a website
> when clicking on a button saying "Clear cache".
>
> You say, from the varnishlog excerpt I sent, that that's not the correct way
> of doing so, or apparently what is logged by varnish is not a correct purge.
>
> So how do I
On Wed, Aug 9, 2017 at 5:50 PM, Miguel Gonzalez
wrote:
> And what would be the right way to purge all the objects in the url? Or it
> can only be done with ban?
I'm not sure what you mean by "all the objects in the url", can you
please give me an example involving
On Wed, Aug 9, 2017 at 4:46 PM, Admin Beckspaced wrote:
> Hello Varnish Community,
>
> I'm running an openSUSE 42.2 server with varnish 5.1.2
Update to 5.1.3, your Varnish instance can be DoS'd remotely!
> To me it looks like varnishlog.service is waiting for
On Wed, Aug 9, 2017 at 1:00 PM, Miguel González
<miguel_3_gonza...@yahoo.es> wrote:
> On 08/09/17 11:54 AM, Dridi Boukelmoune wrote:
>>> - ReqMethod PURGE
>>> - ReqURL /.*
>>
>> Hello Miguel,
>>
>> A purge expects an exact match
> - ReqMethod PURGE
> - ReqURL /.*
Hello Miguel,
A purge expects an exact match of an object hash, it doesn't work on
criteria like bans do. In order for a purge to succeed, you usually
need the exact Host header and URL in your purge request unless you
changed the default hash.
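In practice that means the purge must be issued against the exact Host and URL that were cached, with a vcl_recv hook along these lines (the ACL name and address are assumptions):

```vcl
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        # Restrict purging to trusted clients
        if (!client.ip ~ purgers) {
            return (synth(405, "Not allowed"));
        }
        # Purges the one object whose hash matches this exact Host + URL
        return (purge);
    }
}
```

A request like `curl -X PURGE http://www.example.com/page` then removes only that object; a regex such as `/.*` will simply never match a hash.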
On Tue, Aug 1, 2017 at 9:34 AM, Guillaume Quintard
wrote:
> Ah, right, I totally forgot about that, sorry.
>
> Sooo, there's no real clean way to do it (that I can see, smarter people
> than me may have a solution), but here's what I can offer.
First, I would
Setting it to false is a no-op, IIRC. Once true, you can no longer bend the
truth.
On Jul 21, 2017 17:29, "Guillaume Quintard"
wrote:
Ah, right, indeed. Unless you set it to true for some set of URLs, and to
false again for a subset of it. That could make your VCL
> Great blog post (that i haven't seen before), thanks for sharing its
> right on topic, looking forward to part 2
Thank you very much, really appreciated. The current draft for part 2
is complete but not really a good read. I'm having a hard time going
over arcane HTTP contradictions and I don't
> ==> Maybe modern web sites SHOULD NOT use cookies on every request!
> Because of the way cookies interfere with downstream caching.
Does it really matter? Cookies or not, they should rather *always*
include at least a Cache-Control header in responses.
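When the backend cannot be fixed to send one, a conservative fallback TTL can be applied in VCL; the 5-minute value below is an arbitrary assumption:

```vcl
sub vcl_backend_response {
    # Backend sent no caching policy; apply a short default TTL
    if (!beresp.http.Cache-Control) {
        set beresp.ttl = 5m;
    }
}
```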
> HTTP was conceived from the beginning to
> From what you're saying file method still uses resident memory for each
> object, so with my configuration (20G malloc, 75G file) i would need 95G RAM
> if all storage is used? (without swap being used)
Not RAM, virtual memory. But yes, this is the idea.
> If this is the case I'm curious what
On Mon, Jul 10, 2017 at 1:55 PM, Guillaume Quintard
wrote:
> Hi Charles,
>
> So, if I'm reading this right, there's no discrepancy. Varnish will malloc
> the full storage (malloc), and will mmap a file the size of the full storage
> (file). So even though the
> If anyone has seen TCP Incast Collapse with Varnish, were you able to work
> around it, and if so, how?
I don't know, but maybe this could help:
https://github.com/varnish/varnish-modules/blob/master/docs/vmod_tcp.rst#vmod_tcp
Dridi
> How can two copies of the same cache key exist (assuming the URL is the key
> here)? Won’t that create conflicts?
The new copy will prevail, while the older one may still be in use
(large objects or slow clients...)
Dridi
On Wed, Jul 5, 2017 at 10:38 AM, Quintin Par wrote:
>
> Does hash_always_miss invalidate the cache?
Not as such, it will fetch a new copy regardless and once cached it
will shadow the previous one (that will eventually go away).
There are other means of invalidation in
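The hash_always_miss behavior itself can be sketched as follows; the trigger header name is made up for illustration:

```vcl
sub vcl_recv {
    # Force a fresh backend fetch; on success the new object
    # shadows the cached copy, which eventually expires
    if (req.http.X-Refresh == "true") {
        set req.hash_always_miss = true;
    }
}
```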
On Wed, Jul 5, 2017 at 10:04 AM, Quintin Par wrote:
>
> Nginx has a nifty command for invalidating a specific cache
>
> proxy_cache_bypass $http_cachepurge;
>
> curl -I myapp.example.com/api/ping -H "cachepurge: true"
>
> Is there something equivalent in varnish?
Hi,
I'm
On Sat, Jun 10, 2017 at 12:16 AM, Remo Furlanetto
wrote:
> Hi Dridi,
>
> Thank you for your answer.
>
> I have found a solution. I had to execute the script "configure" with
> --exec-prefix
Oh yes, that too. Since you mentioned installing to a different prefix
on
On Fri, Jun 9, 2017 at 10:10 PM, Remo Furlanetto
wrote:
> Hi,
>
> Is there a way to configure varnish to read the VMOD libraries from a
> different directory?
You can use the `from` keyword:
import <name> from "<path>";
There is also a vmod_dir or vmod_path parameter depending
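Assuming the library lives outside the default VMOD directory, the `from` form looks like this (the path is an example, not a standard location):

```vcl
import std from "/opt/varnish/vmods/libvmod_std.so";
```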
>> Amazingly enough I never looked at the logs of a purge, maybe ExpKill
>> could give us a VXID to then check against the hit. If only SomeoneElse(tm)
>> could spare me the time and look at it themselves and tell us (wink wink=).
>
>
> I'm very happy to help in any way I can. Please let me know
> Sorry about missing the list off the CC, was an oversight on my part, must
> have hit the wrong button and missed that I did that.
No worries, reading my previous email I must say that remark doesn't
look nice. I wrote it in a rush like I often do on this list.
> It's not due to other requests
On Wed, May 31, 2017 at 6:25 PM, Guillaume Quintard
<guilla...@varnish-software.com> wrote:
>
>
> On Wed, May 31, 2017 at 4:49 PM, Dridi Boukelmoune <dr...@varni.sh> wrote:
>>
>> It is possible that while the purge is happening another client
>> requests the
On Tue, May 30, 2017 at 8:06 PM, Nigel Peck wrote:
>
> Thanks, I'll look at that to make sure. The situation is, if I purge an
> object, and that request is restarted in vcl_purge, it is by definition the
> same object, is it not? Since it is the exact same request. So
On Sat, May 27, 2017 at 9:30 PM, Nigel Peck wrote:
>
> Just another update on this, regarding how reliably I can reproduce. I have
> now updated my code for purging the cache, so it reports which purged
> objects received a HIT when they were retrieved with the restarted
On Tue, May 30, 2017 at 1:27 PM, Guillaume Quintard
wrote:
> Hello Danila,
>
> Since hashing only the url, I assumed the cookie was of no interest to the
> backend, so I'm nuking it to benefit from builtin.vcl, ie. I don't have to
> return(hash).
On that note, a
On Thu, May 18, 2017 at 9:20 PM, Ryan Burn wrote:
> I don't see how that could work. The context for the active span needs
> to be injected into the headers of any backend requests. The headers
> need to be modified from the varnishd process, right? And you can't
> inject a