On Thu, Jan 1, 2009 at 9:31 PM, Matthew Weigel <uni...@idempot.net> wrote:

>
> > I can see how this would be OK in cases where the app-level processing
> > is significantly more intensive than the HTTP parsing. But there are a
> > number of use-cases where this isn't the case. Take a comet server for
> > example: There's very little work for the other threads to do besides
> > idle for a while, then send back a small body. For these cases we need
> > *each* thread parsing HTTP requests.
>
> I'm a bit skeptical, to be honest.  Although libevent and lighttpd are two
> completely different codebases, both take the same general approach to HTTP
> as a single-threaded, single-process server that eschews (where possible)
> select() or poll().  I think the results are pretty clear: parsing HTTP is
> not the bottleneck. :-)


I'm not sure what you mean when you say that the results are clear.
Generally speaking, lighttpd deployments will use something like round-robin
FastCGI to allow the application logic to run in as many concurrent
(external) processes as is optimal, thus utilizing all of the available
cores. Additionally, lighttpd supports exactly the use case I outlined,
where multiple processes parse HTTP (though I'm not sure how they avoid the
thundering herd problem):

From: http://redmine.lighttpd.net/wiki/lighttpd/Docs:MultiProcessor
> "since 1.4.x we have support to spawn several processes to listen on a
> single socket"
>

Let me describe a typical Comet use case that defies the single-threaded
HTTP architecture. I don't mean to beat this into the ground; I just want to
make it clear why someone might desire a multi-threaded/process HTTP server
where each thread/process parses input.

Consider a live blogging server where users can stay at a webpage and
receive new posts/updates as the author makes them. If someone is covering a
live event, they might make a new post every 3 minutes on average. Each
user viewing the blog will make an HTTP request in the long-polling style,
asking for a new event. The server will hold that request open for up to 30
seconds if no new blog event occurs, and then send back an empty response.
The reason for the 30-second maximum has to do with firewalls and proxies
that shut connections down if they stay open much past 30 seconds. This
means that roughly 6 out of every 7 HTTP requests per user (about six
30-second timeouts plus one quick data-bearing reply per 3-minute post
interval) do absolutely nothing besides send back an empty response. Also
consider the case where half of your users have reached their browser
connection limit and must resort to normal polling (the connection doesn't
stay open) every 5 seconds. In that case those users will make ~35 out of
36 requests that do nothing besides HTTP parsing.

When the blog author finally does make a post, he'll POST the contents to
our server, which will then send a copy out to each open connection. There
is no real processing involved beyond shuffling the bytes around; this
server spends about 99.99% of its time on I/O and parsing/generating HTTP.
If we have only one thread handling HTTP, then our app will use only one
processor core, no matter how many additional threads we start up for other
tasks.

-Michael Carter
_______________________________________________
Libevent-users mailing list
Libevent-users@monkey.org
http://monkeymail.org/mailman/listinfo/libevent-users
