34 milliseconds? With that value your backend will get marked sick all the time on an Apache server. You just want to know whether it has gone away completely, so set the timeout to 5 seconds. You can do some beautiful tuning if you really understand your web app and want to move Varnish away from slow backends, but for now you just need to use both your backends and let Varnish protect them.
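Something like this, untested, reusing the hosts and healthcheck URL from your config (the interval/window/threshold values are just reasonable starting points, not magic numbers):

```vcl
# A more forgiving probe: 5s timeout means the backend is only marked
# sick when it is really unreachable, not when Apache is a few ms slow.
probe healthcheck {
    .url = "/healthcheck.html";
    .timeout = 5s;
    .interval = 5s;
    .window = 5;
    .threshold = 3;    # 3 of the last 5 probes must pass
}

backend web1 {
    .host = "10.10.1.94";
    .port = "80";
    .probe = healthcheck;
}

backend web2 {
    .host = "10.10.1.98";
    .port = "80";
    .probe = healthcheck;
}
```

Once a probe passes 3 times in the window, varnishlog's Backend_health lines should flip from "Still sick" to "Back healthy".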
----
Stefan Caunter
ScaleEngine Inc.
E: [email protected]
Toronto Canada

On Mon, May 12, 2014 at 11:03 PM, Tim Dunphy <[email protected]> wrote:
> Hi Nick,
>
> Thanks for your reply! I tried a different probe. No luck so far:
>
> backend web1 {
>   .host = "10.10.1.94";
>   .port = "80";
>   .probe = {
>     .url = "/healthcheck.html";
>     .timeout = 34 ms;
>     .interval = 1s;
>     .window = 10;
>     .threshold = 8;
>   }
> }
>
> backend web2 {
>   .host = "10.10.1.98";
>   .port = "80";
>   .probe = {
>     .url = "/healthcheck.html";
>     .timeout = 34 ms;
>     .interval = 1s;
>     .window = 10;
>     .threshold = 8;
>   }
> }
>
> director www client {
>   { .backend = web1; .weight = 2; }
>   { .backend = web2; .weight = 2; }
> }
>
> This is all the healthcheck file does:
>
> [root@beta:/var/www/jf-ref] #cat healthcheck.html
> good
>
> Drum roll please!! AAAAnnnnnd:
>
> [root@varnish1:/etc/varnish] #varnishlog | grep web2
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>     0 Backend_health - web2 Still sick ------- 0 8 10 0.000000 0.000000
>
> Bummer. :(
>
> No joy at all.
>
> Thanks for playing!
> Tim
>
>
> On Mon, May 12, 2014 at 10:03 PM, nick tailor <[email protected]> wrote:
>>
>> Try using a custom health check probe
>>
>> On May 9, 2014 10:00 PM, "Tim Dunphy" <[email protected]> wrote:
>>>
>>> Hey all,
>>>
>>> I have two web backends in my varnish config. And one node is reporting
>>> healthy and the other is being reported as 'sick'.
>>>
>>>    10 Backend      c 11 www web1
>>>     0 Backend_health - web1 Still healthy 4--X-RH 5 3 5 0.001130 0.001067 HTTP/1.1 200 OK
>>>     0 Backend_health - web1 Still healthy 4--X-RH 5 3 5 0.001231 0.001108 HTTP/1.1 200 OK
>>>     0 Backend_health - web1 Still healthy 4--X-RH 5 3 5 0.001250 0.001143 HTTP/1.1 200 OK
>>>     0 Backend_health - web1 Still healthy 4--X-RH 5 3 5 0.001127 0.001139 HTTP/1.1 200 OK
>>>     0 Backend_health - web1 Still healthy 4--X-RH 5 3 5 0.001208 0.001157 HTTP/1.1 200 OK
>>>     0 Backend_health - web1 Still healthy 4--X-RH 5 3 5 0.001562 0.001258 HTTP/1.1 200 OK
>>>     0 Backend_health - web1 Still healthy 4--X-RH 5 3 5 0.001545 0.001330 HTTP/1.1 200 OK
>>>     0 Backend_health - web1 Still healthy 4--X-RH 5 3 5 0.001363 0.001338 HTTP/1.1 200 OK
>>>    11 BackendClose b web1
>>>
>>> [root@varnish1:/etc/varnish] #varnishlog | grep web2
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>     0 Backend_health - web2 Still sick 4--X--- 0 3 5 0.000000 0.000000
>>>
>>> And I'm really at a loss to understand why. Both nodes should be
>>> completely identical, and the web roots on both are basically svn repos
>>> that are in sync.
>>>
>>> From web1:
>>>
>>> [root@beta:/var/www/jf-current] #svn info | grep -i revision
>>> Revision: 17
>>>
>>> To web2:
>>>
>>> [root@beta-new:/var/www/jf-current] #svn info | grep -i revision
>>> Revision: 17
>>>
>>> This is the part of my VCL file where I define the web backends:
>>>
>>> probe favicon {
>>>   .url = "/favicon.ico";
>>>   .timeout = 60ms;
>>>   .interval = 2s;
>>>   .window = 5;
>>>   .threshold = 3;
>>> }
>>>
>>> backend web1 {
>>>   .host = "xx.xx.xx.xx";
>>>   .port = "80";
>>>   .probe = favicon;
>>> }
>>>
>>> backend web2 {
>>>   .host = "xx.xx.xx.xx";
>>>   .port = "80";
>>>   .probe = favicon;
>>> }
>>>
>>> And the file that varnish is probing for is present on both:
>>>
>>> [root@beta:/var/www/jf-current] #ls -l /var/www/jf-current/favicon.ico
>>> -rwxrwxr-x 1 apache ftp 1150 Dec 22 00:53 /var/www/jf-current/favicon.ico
>>>
>>> I've also set up individual web URLs for each host that aren't cached in
>>> varnish so I can hit each one, and each site comes up OK. So I'm a little
>>> puzzled as to why the second web host is reporting 'sick' and what I can
>>> do to get it back into load balancing.
>>>
>>> Thanks for any help you can provide!
>>>
>>> Tim
>>>
>>> --
>>> GPG me!!
>>>
>>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>>
>>> _______________________________________________
>>> varnish-misc mailing list
>>> [email protected]
>>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
