Thanks!

That helped quite a lot with a 1s cache :)

Best regards
Daniel Ylitalo
System & Network manager

about.mytaste.com <http://about.mytaste.com>



"Experience is something you earn just right after you screwed up and were really in need of it"

On 2016-06-20 at 17:50, CJ Ess wrote:
We have pools of Haproxy talking to pools of Nginx servers with php-fpm backends. We were seeing 50-60 health checks per second, all of which had to be serviced by the php-fpm process, and which almost always returned the same result except for the rare memory or NIC failure. So we put Nginx's fastcgi cache with a 1 second TTL in front of our application's health check endpoint, so that the first request actually hits the backend and the other health check requests queue up behind the first (fastcgi_cache_lock). We set a 250ms timeout on the lock so that health checks don't queue forever (fastcgi_cache_lock_timeout).
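For reference, a minimal sketch of that setup (the cache path, zone name, endpoint, script path, and php-fpm socket are placeholders for whatever your deployment uses):

    http {
        fastcgi_cache_path /var/cache/nginx/health levels=1:2
                           keys_zone=health:1m inactive=10s;

        server {
            location = /health {
                fastcgi_cache              health;
                fastcgi_cache_key          $request_uri;  # all checks share one cache entry
                fastcgi_cache_valid 200    1s;            # 1 second TTL
                fastcgi_cache_lock         on;            # only the first request hits php-fpm
                fastcgi_cache_lock_timeout 250ms;         # don't let checks queue forever

                include       fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /srv/app/health.php;
                fastcgi_pass  unix:/run/php-fpm.sock;
            }
        }
    }

With fastcgi_cache_lock on, concurrent requests for the same uncached key wait for the first one to populate the cache instead of all hitting php-fpm at once; after the 250ms lock timeout a waiting request is passed through to the backend rather than queueing further.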

On Mon, Jun 20, 2016 at 7:44 AM, Daniel Ylitalo <daniel.ylit...@mytaste.com <mailto:daniel.ylit...@mytaste.com>> wrote:

    Hi!

    I haven't found anything about this topic anywhere, so I was hoping
    someone on the mailing list has dealt with this in the past :)

    We are at the size where we need to round-robin TCP-balance our
    incoming web traffic with pf to two HAProxy servers, both running
    with nbproc 28 for HTTP load balancing. However, this leads to 56
    health checks (2 servers x 28 processes) hitting each of our web
    nodes every second, which hammers them quite hard.
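
    For context, a minimal sketch of the kind of config that produces
    this (backend name, endpoint, and addresses are placeholders; with
    nbproc, every HAProxy process runs its own health checks):

        global
            nbproc 28

        backend web_nodes
            option httpchk GET /health
            # each of the 28 processes checks each server once per
            # interval, so 2 LBs x 28 procs = 56 checks/s per web node
            server web1 10.0.0.11:80 check inter 1s
            server web2 10.0.0.12:80 check inter 1s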

    How exactly are you guys solving this issue? At this size, the
    health checks start eating more CPU than they are worth.

    --
    Daniel Ylitalo
    System & Network manager

    about.mytaste.com <http://about.mytaste.com>



    "Experience is something you earn just right after you screwed up
    and were really in need of it"


