That's a command I didn't know about - thanks!
My traffic is bursty, rather than steadily high.
Whenever I manage to run "ss -tl" on the load balancer or the
backend, I see all zeros in the Recv-Q column, even when I've kept
the Starman workers occupied.
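In case it helps frame what I'm measuring: my understanding is that, for a listening socket, Recv-Q counts completed connections the kernel has queued but the server has not yet accept()ed. A minimal Python sketch of that mechanism (illustrative only, not Starman's code; I assume Starman ultimately hands its backlog value to listen(2) the same way):

```python
import socket

# Illustrative sketch: the kernel queues completed connections up to the
# listen() backlog even when the server never calls accept(). That queue
# depth is what ss(8) reports in Recv-Q for a listening socket.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # ephemeral port
srv.listen(5)                    # deliberately tiny backlog
port = srv.getsockname()[1]

# Three clients connect; none is accepted, so all three sit in the
# kernel's accept queue (a concurrent `ss -tl` would show Recv-Q = 3).
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(3)]
print(len(clients), "connections queued without accept()")

for c in clients:
    c.close()
srv.close()
```

So a non-zero Recv-Q only shows up during the instant the queue is actually occupied, which with bursty traffic is easy to miss with a one-off run of ss.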
To be sure, I'm not certain that I've occupied exactly five workers
at the instants I get the 502 errors. Serving a request involves
some rather heavy processing, and some other system limit, such as memory,
could be coming into play. However, I've never had problems hitting the
system process limit, or had running workers fail due to
memory problems. So it seems likely to me that Starman closes the
connection "voluntarily," based on its own decision that it can't serve the request.
In my nginx error.log on the load balancer, I see the message:
"*6528 upstream prematurely closed connection while reading response header from upstream"
Are there other conditions that could cause a connection to be closed prematurely?
Also, where in the source code should I look for this?
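For reference, here is a toy Python backend (hypothetical, not Starman's code) that reproduces the condition nginx is complaining about: the upstream accepts the connection but closes it before writing any part of the response header.

```python
import socket
import threading

def premature_close_backend(srv):
    # Accept exactly one connection, then close it without sending a
    # status line -- the behavior nginx logs as "upstream prematurely
    # closed connection while reading response header".
    conn, _ = srv.accept()
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=premature_close_backend, args=(srv,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
try:
    data = client.recv(4096)   # empty read: peer closed cleanly (FIN)
except ConnectionResetError:
    data = b""                 # RST: peer closed with unread request data
t.join()
client.close()
print("response bytes before close:", len(data))
```

Whether the proxy sees a clean close or a reset, the effect is the same: zero response bytes arrive before the connection ends, and nginx maps that to a 502.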
Starman itself appears to be pure Perl, and apart from parsing the
"backlog" parameter it doesn't seem to reference it anywhere. Which
underlying module is doing the heavy lifting, and where does it decide
to return a 502 or close a connection?
On Wednesday, January 4, 2017 at 10:00:31 PM UTC-5, Eric Wong wrote:
> Maybe all 128 slots in the queue are immediately used up by
> clients? It happens on busy sites...
> The ss(8) tool in Linux can show how much of the backlog you
> use in the Recv-Q (2nd) column.
> ss -tl