On Thu, Apr 20, 2017 at 11:03:42PM +0800, jaseywang wrote:
> 1. Haproxy's backlog quickly becomes full and starts dropping new TCP
> connections as soon as peak traffic begins. Before the CDN, our
> net.core.somaxconn was 1024 and we used haproxy's default backlog, and
> everything performed well. After deploying the CDN, even with somaxconn and
> the backlog parameter raised to 40000, the queue soon fills up again and
> refuses new connections, so no regular HTTP requests reach haproxy and our
> clients can't open the webpage.
> 
> 2. Before the CDN, the TCP connection counts looked like:
> 7k ESTABLISHED, 700 FIN_WAIT1, 5k FIN_WAIT2, 14k TIME_WAIT, 10 CLOSE_WAIT,
> 230 LAST_ACK.
> Now they look like:
> 22k ESTABLISHED, 260 FIN_WAIT1, 1.6k FIN_WAIT2, 23k TIME_WAIT, 25k
> CLOSE_WAIT, 5k LAST_ACK.
> The strangest thing is the abnormal increase in CLOSE_WAIT; after a packet
> capture, most of the CLOSE_WAIT sockets are between haproxy and the CDN.
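
For reference, per-state counts like those quoted above can be collected with something along these lines (a sketch using ss; `ss -tan` prints the state in the first column):

```shell
# Sketch: count TCP sockets per state; skip the header line, then
# tally the first column (the state) and print one count per state.
ss -tan | awk 'NR > 1 { count[$1]++ } END { for (s in count) print s, count[s] }'
```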

This really sounds like the CDN trying to close some connections at moments
when it's not appropriate (eg: in the middle of a transfer), leaving a
CLOSE_WAIT between haproxy and the CDN, and an ESTABLISHED between haproxy
and nginx. The problem is to understand why haproxy remains in this
situation instead of either resetting the connection or finishing to drain
the pending data.

Could you please confirm that most of the CLOSE_WAIT are on the front side
and the ESTABLISHED on the backend side ? If that's the case, can you also
please verify if there are pending data in the send queue for CLOSE_WAIT
sockets (3rd column in netstat) ?
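
As a quick way to check, assuming a Linux box with netstat, something like this lists only the CLOSE_WAIT sockets that still have bytes queued:

```shell
# Sketch: show CLOSE_WAIT sockets with a non-empty send queue
# (Send-Q is the 3rd column of `netstat -nt` output, the state the 6th).
netstat -nt | awk '$6 == "CLOSE_WAIT" && $3 > 0'
```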

It would also help a lot to provide the output from this :

   echo "show sess all" | socat /var/run/haproxy.sock -

as well as the output from "netstat -nato" to compare them. Be careful, this
may reveal some confidential information such as private IP addresses, so
you may prefer not to post it to the list; it's up to you.
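
If you'd rather anonymize the dumps before posting, a rough sketch (the 10/8 pattern is only an example, adapt it to your address ranges):

```shell
# Sketch: capture both outputs, then mask 10.x.y.z addresses in place
# before sharing (illustrative pattern only, adjust to your ranges).
echo "show sess all" | socat /var/run/haproxy.sock - > sess.txt
netstat -nato > netstat.txt
sed -E -i 's/\b10(\.[0-9]{1,3}){3}\b/10.x.x.x/g' sess.txt netstat.txt
```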

Otherwise your config looks pretty clean and safe, so I don't think it's
a config issue you're facing.

Willy
