Hi,

On 17-11-21 14:20:39, Daniel Schneller wrote:
> > On 21. Nov. 2017, at 14:08, Lukas Tribus <[email protected]> wrote:
> > [...]
> > Instead of hiding specific error counters, why not send an actual
> > HTTP request that triggers a 200 OK response? So health checking is
> > not exempt from the statistics and only generates error statistics
> > when actual errors occur?
> 
> Good point. However, I wanted to avoid having these “high level”
> health checks from the many sidecars routed through to the actual
> backends. Instead, I considered it enough to “only” check whether the
> central HAProxy is available. If it is, the sidecars rely on it doing
> the actual health checks of the backends, and on it responding with
> 503 or similar when all backends for a particular request happen to
> be down.
> 
> However, your idea and a little more Googling led me to this Github
> repo
> https://github.com/jvehent/haproxy-aws#healthchecks-between-elb-and-haproxy
> where they configure a dedicated “health check frontend” (albeit in
> their case to work around an AWS/ELB limitation regarding the PROXY
> protocol). I think I will adapt this and configure the sidecars to
> health check on a dedicated port like this.

monitor-uri [1] might help as well with this task.
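For example, a minimal dedicated health-check frontend using
monitor-uri could look something like this (port and URI are just
placeholders, adjust to taste):

    frontend health-check
        bind :8888
        mode http
        monitor-uri /haproxy_health

HAProxy then answers requests for that URI itself with "200 OK",
without forwarding them to any backend, and such monitor requests are
not logged, so the sidecar checks stay out of the regular traffic
statistics.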

Personally, I'm using Nginx behind HAProxy (on the same machine): a
dedicated domain is routed from HAProxy to Nginx, which has just one
location that returns '200 OK' to accomplish this. Of course, HAProxy
could be working while Nginx fails at the same time, but I've never
encountered that, especially with such a limited, static Nginx config.
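In case it helps, such a minimal Nginx config could look roughly like
this (port and path are made up for the example):

    server {
        listen 8080;

        location = /health {
            return 200;
        }
    }

The exact-match location keeps it cheap, and there is nothing dynamic
that could realistically break independently of the machine itself.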

Cheers,
Georg


[1] https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#monitor-uri
