2013/12/13 Adam Chlipala <[email protected]>

> On 12/12/2013 07:17 AM, Vladimir Shabanov wrote:
>> 2013/12/12 Adam Chlipala <[email protected]>
>>>
>>> Interesting; so throwing at least one popular proxy in front doesn't
>>> bring magic performance improvements. That's at least comforting from
>>> the perspective of not challenging my mental model of how efficient
>>> the Ur/Web HTTP binaries are.
>>
>> I think nginx helped my benchmark only because it fixed non-working
>> keep-alive. Since nginx uses the same Ur/Web HTTP interface, there
>> shouldn't be any improvements on a local machine. On a faraway machine
>> it could help by reducing network latency, by not making a new
>> connection for each request.
>
> But would you expect that latency improvement now that keepalive is
> available?
Whoops, I forgot that keep-alive is working now. I think that explains why nginx+urweb shows only 70% of the performance (quite a major drop): AFAIK nginx doesn't use keepalive for upstreams by default. So yes, latency should of course improve.

Still, I wouldn't use the Ur/Web HTTP server for serving requests directly. There are many things that nginx does and Ur/Web doesn't (and shouldn't): static file serving, HTTP compression, SSL, GeoIP. And the most common thing proxies are useful for is handling slow connections: Ur/Web processes the request, sends the response to the proxy, and is then free (the DB transaction is finished, the socket/thread is released). The proxy can then take as long as it needs to deliver the response to the client.

Correct me if I'm wrong, but Ur/Web uses a fixed number of threads to handle requests, so if all threads are busy, request handling stalls. Just a few clients with bad connections can stall the whole app.

I also suspect there are many nasty HTTP protocol corner cases (buffer overruns and so on). Nginx is tested against most of them; Ur/Web isn't. So I think serving requests directly, without a proxy, should only be done for benchmarks.
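For reference, here is roughly the nginx config I mean; a sketch, not a tested setup, and the upstream address/port are made up. It enables keepalive to the Ur/Web backend (which nginx does not do by default) and keeps response buffering on so nginx, not the app thread, waits on slow clients:

```nginx
# Sketch: keepalive to a hypothetical Ur/Web backend on 127.0.0.1:8080.
upstream urweb_app {
    server 127.0.0.1:8080;
    keepalive 32;                       # pool of idle upstream connections
}

server {
    listen 80;

    location / {
        proxy_pass http://urweb_app;
        proxy_http_version 1.1;         # upstream keepalive needs HTTP/1.1
        proxy_set_header Connection ""; # drop the default "Connection: close"
        proxy_buffering on;             # nginx absorbs the response and
                                        # frees the backend thread quickly
    }
}
```

With `proxy_buffering on` (the default), nginx reads the whole response from the backend as fast as it can and then trickles it out to the client, which is exactly the "socket/thread freed" behavior above.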
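To illustrate the stall I mean (this is not Ur/Web code, just a hypothetical model of any server with a fixed worker pool): with 2 workers and two slow clients holding them, a fast request sits in the queue even though its handler would finish instantly.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle(client_delay):
    # The worker thread stays busy until the client has consumed the
    # response; a slow client pins the thread for the whole duration.
    time.sleep(client_delay)
    return time.monotonic()

pool = ThreadPoolExecutor(max_workers=2)   # fixed pool, like the app server
start = time.monotonic()
slow = [pool.submit(handle, 0.5) for _ in range(2)]  # two slow clients
fast = pool.submit(handle, 0.0)            # a fast request, forced to queue
fast_done = fast.result() - start
pool.shutdown()
print(f"fast request finished after {fast_done:.2f}s")  # ~0.5s, not ~0s
```

A buffering proxy in front removes the `client_delay` from the worker's critical section, so the pool only ever sees fast requests.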
_______________________________________________
Ur mailing list
[email protected]
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur
