Thanks Roberto,

I've checked: we already use --http-socket.

I've just tried the --no-defer-accept option locally; it doesn't change
anything.

What I do to test:

$ uwsgi --http :9090 --wsgi-file test.py --single-interpreter --master \
    --die-on-term --enable-threads --pyhome ~/tmp/testenv --harakiri 28 \
    -l 3 --no-defer-accept

test.py:

import time

def application(env, start_response):
    start_response('200 OK', [('Content-Type','text/html')])
    time.sleep(25)
    return ["Hello World"]

Then I do a bunch of curl:

$ curl localhost:9090 -v

And I get the expected:

Tue Aug 19 10:57:31 2014 - *** uWSGI listen queue of socket "127.0.0.1:43920"
(fd: 3) full !!! (4/3) ***

And to be sure that curl is not doing anything weird (like automatically
retrying the TCP connection), I test with telnet to see whether I'm rejected
right away:

$ time telnet localhost 9090
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET /test HTTP/1.1

Connection closed by foreign host.

real    1m7.567s
user    0m0.005s
sys     0m0.003s

The load-balancer would expect to be rejected right away at that point.

So what is the backlog for, if uwsgi still accepts connections once it's
full? Is the log message just a warning?

Also, I've seen an email from you mentioning that using sleep() in a test
doesn't fake the right behavior, so I'm not sure whether it's the right way
to fill the backlog in my case.

Cheers


On Mon, Aug 18, 2014 at 11:02 PM, Roberto De Ioris <[email protected]> wrote:

>
> > Hi,
> >
> > we ran into an issue a few weeks ago and I don't know if it's an expected
> > behavior or not.
> >
> > Short version: uwsgi still accepts connections even when the backlog is
> > full. Is it normal? If it is, is there a way to refuse connection once
> > it's
> > full?
> >
> > Long version: we have several instances on Heroku serving our python app
> > with uwsgi. Their load-balancer has the following routing algorithm:
> >
> >
> >    1. Accept a new request for the app
> >    2. Look up the list of web dynos (instance name on Heroku) for the app
> >    3. Randomly select a dyno from that list
> >    4. Attempt to open a connection to that dyno's IP and port
> >    5. If the connection was successful, proxy the request to the dyno,
> and
> >    proxy the response back to the client
> >    6. If it takes more than 30 seconds, the request is killed.
> >
> >
> > If a wsgi worker gets stale, the backlog quickly fills up and then it
> > shouldn't accept the connection to let the router know that it should
> > route
> > the requests to another dyno. The problem is that it doesn't and all the
> > traffic going to a stale dyno/worker will get killed eventually.
> >
> > I've managed to reproduce the behavior locally.
> >
> > Thanks
> >
> >
>
> It is hard to say, but if the check is only an open/close connection
> without sending data, the deferred accept
> (
> http://www.techrepublic.com/article/take-advantage-of-tcp-ip-options-to-optimize-data-transmission/
> )
> could fake the proxy:
>
> --no-defer-accept
>
> will disable it
>
> On the other hand, if you are using --http instead of --http-socket (it is
> a common error) the http proxy is able to accept tons more connections
> than workers, so again this could cause the problem.
>
>
> --
> Roberto De Ioris
> http://unbit.it
> _______________________________________________
> uWSGI mailing list
> [email protected]
> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
>



-- 
Francois
