Hi,

In my initial testing of NGINX with and without a load balancer, one thing
I ran into was intermittent page load delays.  The browser would briefly
flash a "cannot load page" message (I don't recall the exact wording), then
quickly refresh, and the page would load fine.  Similarly, monitoring
tools would occasionally think a host was down, only to report soon after
that the host was fine.

After some poking around, I made two changes:

1. Use an IP address (e.g. 127.0.0.1) in the proxy_pass configuration
instead of a hostname to bypass DNS.
2. Drop the keepalive timeout from the default of 65s to 5s or so.

#1 was mostly a Hail Mary.  I'm pretty sure #2 had the bigger impact.
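For reference, the two changes together might look something like this in
nginx.conf (the backend port and addresses here are made up for
illustration; adjust to your setup):

```nginx
http {
    # Change #2: drop the keepalive timeout from the 65s default
    # so idle client connections are closed much sooner.
    keepalive_timeout 5s;

    server {
        listen 80;

        location / {
            # Change #1: proxy to a literal IP address rather than a
            # hostname, so nginx never has to resolve DNS for it.
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```

Note that nginx resolves a hostname in proxy_pass at startup anyway, so
using an IP mostly rules DNS out as a variable rather than changing
steady-state behavior.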

#2 is also important if you use the load balancer to support "detaching"
bricks for zero-downtime updates.  If the keepalive is too long, it takes a
long time for clients to disconnect after taking a brick out of rotation.
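To make that concrete: if the load balancer were nginx itself, detaching a
brick might look like the sketch below (upstream name and addresses are
hypothetical; the same draining logic applies whatever balancer you use).
New connections stop immediately, but existing keepalive connections
linger until the timeout expires, which is why a 65s keepalive makes
detaching slow.

```nginx
upstream bricks {
    server 192.168.1.10:80;
    # Detached for a zero-downtime update: no new connections go here,
    # but established keepalive connections persist until
    # keepalive_timeout expires.
    server 192.168.1.11:80 down;
}
```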

I could also imagine long keepalives causing the load balancer to keep more
idle connections open, thus using more memory.

-b

On Tue, Sep 3, 2019 at 9:52 AM Jason Stephenson <ja...@sigio.com> wrote:

> Josh,
>
> It's interesting to me that you reply to Martha's email today.
>
> We recently started using nginx proxy on the brick heads with ldirectord
> running on our load balancers.  We've had issues where ldirectord dies
> about once per week on each load balancer.  It appears that the load
> balancer is running out of memory.  I've mentioned this in IRC a couple
> of times but gotten no responses from anyone.
>
> I am going to look at alternatives, and I'm leaning toward setting up
> haproxy on the load balancers.
>
> Jason
>
>