> Hi!
>
> We recently put a project live with uwsgi 1.2.3 behind nginx 1.2.1
> (through the
> uwsgi interface in nginx). A couple of days later, someone complains they
> can't
> access the site, but they manage to fix the problem by clearing their
> cookies.
>
> So after some digging, it turns out nginx tries to send more data than
> uwsgi is
> ready to receive, probably just the complete HTTP GET request:
>
> invalid request block size: 4675 (max 4096)...skip
> Mon Oct  8 09:16:51 2012 - error parsing request
>
> We google this, and it seems to be a common problem, with the remedy being
> setting the buffer-size to 32k. Indeed, raising this limit allows me to
> send a
> 5k GET parameter which otherwise fails.
>
> Now, this is fine and all, but it seems to me that I still have a bug,
> it's just
> harder to hit it. So my question is, is this a bug in uwsgi or in nginx?
>

It is not a bug; it is the expected behaviour of both projects :)

Check the last line of http://projects.unbit.it/uwsgi/wiki/ThingsToKnow

nginx has a 4k buffer limit (auto-tunable to 8k) for the request headers;
uWSGI has 4k by default, which is not auto-tunable (but can be raised up to 64k).

You have to choose the 'best' value for your app. In your case I would
use 8192 to be more nginx-friendly.
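As a minimal sketch of that suggestion (assuming an ini-style uWSGI configuration file; the option name `buffer-size` is the real uWSGI setting, the file name and surrounding options are illustrative only):

```ini
; uwsgi.ini -- illustrative fragment, not a complete configuration
[uwsgi]
; raise the request-header buffer from the 4k default to 8k,
; matching the maximum nginx will auto-tune its own header buffer to
buffer-size = 8192
```

The same value can be passed on the command line as `--buffer-size 8192` instead of via the ini file.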

If you are asking yourself why such limits exist, take into account that
the headers must be available for the whole management of the request, so
they have to be kept in memory. Now just imagine an evil user sending a
request with thousands of headers :)

-- 
Roberto De Ioris
http://unbit.it
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
