re-adding the list

> I will reduce probe param 'interval' to, let's say, 10s. That sounds
> reasonable?
It would definitely make for a more reactive decision.

Shameless plug: I would recommend reading on that topic:
https://info.varnish-software.com/blog/backends-load-balancing
(man vcl, the probes section is of course a must-read)

--
Guillaume Quintard

On Tue, Feb 5, 2019 at 3:05 PM Hu Bert <[email protected]> wrote:

> Hi,
> i'll try these commands. No output so far, but i'll see.
>
> I will reduce probe param 'interval' to, let's say, 10s. That sounds
> reasonable?
>
>
> Hubert
>
> On Tue, Feb 5, 2019 at 2:35 PM Guillaume Quintard
> <[email protected]> wrote:
> >
> > Try something like that: varnishlog -q "Timestamp:Resp[2] > 7" -g request
> > (man vsl-query for more info)
> >
> > I just think your probe definition is pretty bad (a 1 minute interval is
> > going to yield some wonky results) and your varnish sees the backend as
> > healthy, tries to fetch, takes a long time, then the probe finally kicks in.
> > --
> > Guillaume Quintard
> >
> >
> > On Tue, Feb 5, 2019 at 1:47 PM Hu Bert <[email protected]> wrote:
> >>
> >> Hi,
> >> sry i can't reproduce, as i had to get the varnish running. Maybe i
> >> have to explain... :-)
> >>
> >> We once had a server with nginx (frontend), varnish and some other
> >> stuff, and as RAM became a tight resource, we got another server
> >> (server2) running, separately for varnish. That server then cached all
> >> the images and all other stuff (like css, js etc.) from the tomcat
> >> backends. So the vcl file contained the image backends and all the
> >> tomcat backends.
> >>
> >> We then moved the cache for "all the other stuff" to server3, and
> >> server2 only cached images from then on. But the vcl file stayed
> >> untouched, still containing all the backends & probes that actually
> >> weren't necessary for images - and now 2 of these backends (due to
> >> load) repeatedly answered 500/502 and have to be rebooted regularly
> >> (nothing can be done here at the moment).
> >>
> >> To get the varnish on server2 (images) running i simply removed all
> >> the unnecessary tomcat backends and restarted varnish, and now it's
> >> running really well. I still have the old vcl file running on server3;
> >> there i see that the 2 tomcat backends are switching between sick and
> >> healthy. Don't know if it might work there as well - i tried it but
> >> the output of 'varnishlog -g request' is massive. Something special i
> >> should grep for?
> >>
> >> Alternatively i could provide the vcl file, but i'm afraid that your
> >> eyes might explode ;-)
> >>
> >> Hubert
> >>
> >> On Tue, Feb 5, 2019 at 1:14 PM Guillaume Quintard
> >> <[email protected]> wrote:
> >> >
> >> > Hi,
> >> >
> >> > Can you try to set the backend health to sick using "varnishadm
> >> > backend.set_health" and try to reproduce?
> >> >
> >> > If you can reproduce, please pastebin the corresponding
> >> > "varnishlog -g request" block
> >> >
> >> > On Tue, Feb 5, 2019, 12:55 Hu Bert <[email protected]> wrote:
> >> >>
> >> >> Hi Guillaume,
> >> >>
> >> >> the backend config looks like this (just requesting a simple file
> >> >> from tomcat); maybe the params are wrong?
> >> >>
> >> >> backend tomcat_backend1 {
> >> >>     .host = "192.168.0.126";
> >> >>     .port = "8082";
> >> >>     .connect_timeout = 15s;
> >> >>     .first_byte_timeout = 60s;
> >> >>     .between_bytes_timeout = 15s;
> >> >>     .probe = {
> >> >>         .url = "/portal/info.txt";
> >> >>         .timeout = 10s;
> >> >>         .interval = 1m;
> >> >>         .window = 3;
> >> >>         .threshold = 1;
> >> >>     }
> >> >> }
> >> >>
> >> >> The backend is shown as 'sick', but the time until you get an answer
> >> >> from nginx/varnish differs, from below a second to 7 or more seconds -
> >> >> but the requested image is already in cache (hits >= 1).
> >> >>
> >> >> Imho the cache should work and deliver a cached file, independent from
> >> >> a (non) working backend. Maybe beresp.ttl is messed up?
> >> >>
> >> >>     else if (beresp.status < 300) {
> >> >>         [lots of rules]
> >> >>     } else {
> >> >>         # Use very short caching time for error messages - giving the
> >> >>         # system the chance to recover
> >> >>         set beresp.ttl = 10s;
> >> >>         unset beresp.http.Cache-Control;
> >> >>         return(deliver);
> >> >>     }
> >> >>
> >> >> Thx
> >> >> Hubert
> >> >>
> >> >> On Tue, Feb 5, 2019 at 12:33 PM Guillaume Quintard
> >> >> <[email protected]> wrote:
> >> >> >
> >> >> > Hi,
> >> >> >
> >> >> > Do you have probes set up? If you do, the backend will be declared
> >> >> > sick and varnish will reply instantly without even trying to
> >> >> > contact it.
> >> >> >
> >> >> > It sounds like, at the moment, varnish just tries to get whatever
> >> >> > it can, waiting for as long as authorized.
> >> >> >
> >> >> > Cheers,
> >> >> >
> >> >> > On Tue, Feb 5, 2019, 11:51 Hu Bert <[email protected]> wrote:
> >> >> >>
> >> >> >> Hey there,
> >> >> >>
> >> >> >> i hope i'm right here... i have the following setup to deliver
> >> >> >> images:
> >> >> >>
> >> >> >> nginx: https -> forward request to varnish 5.0
> >> >> >> if image is not in cache -> forward request to backend nginx
> >> >> >> backend nginx: delivers file to varnish if found on harddisk
> >> >> >> if backend nginx doesn't find it: forward request to 2 backend
> >> >> >> tomcats to calculate the desired image
> >> >> >>
> >> >> >> The 2 backend tomcats deliver another webapp (and are a varnish
> >> >> >> backend as well); at the moment they're quite busy and stop working
> >> >> >> due to heavy load (-> restart), the result is that varnish
> >> >> >> sees/thinks that the backends are sick. Somehow then even the cached
> >> >> >> images are delivered after a quite long waiting period, e.g. a 5 KB
> >> >> >> image takes more than 7 seconds.
> >> >> >>
> >> >> >> Is this the normal behaviour, that varnish answers slowly if some
> >> >> >> backends are sick?
> >> >> >>
> >> >> >> If any other information is needed i can provide the necessary
> >> >> >> stuff.
> >> >> >>
> >> >> >> Thx in advance
> >> >> >> Hubert
_______________________________________________
varnish-misc mailing list
[email protected]
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
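
For reference, the probe change discussed at the top of this thread would look
roughly like this in VCL. The host, port, URL and timeouts are copied from the
backend definition quoted above; the 10s interval is the value Hubert proposed,
and the wider window/threshold pair is only an illustrative tweak, not a tested
recommendation:

    backend tomcat_backend1 {
        .host = "192.168.0.126";
        .port = "8082";
        .connect_timeout = 15s;
        .first_byte_timeout = 60s;
        .between_bytes_timeout = 15s;
        .probe = {
            .url = "/portal/info.txt";
            .timeout = 5s;       # kept well below .interval
            .interval = 10s;     # was 1m; failures are detected much sooner
            .window = 5;         # illustrative: look at the last 5 probes...
            .threshold = 3;      # ...and require 3 good ones to stay healthy
        }
    }

With these values a backend that starts failing would be marked sick after
roughly three failed probes (about 30 seconds) rather than after the old
one-minute interval had run its course - the "more reactive decision"
mentioned above.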
