:Heh... well:
:- We only recently got proper O(1) event notifications,
:- We definitely don't have an asynchronous server, and
:- During the peak, our onlines run pretty darn close to full capacity.
:
:To expand on the second point a bit: when you make a request to our 
:server, that thread is tied up for the duration of your request.  Each 
:server has a fixed number of threads, so tying one of these up to handle 
:keep-alive would be a big deal.  Our way of gracefully handling a 
:request which holds the thread too long is to just kill it (and log tons 
:of errors and page a few engineers).  Yeah, ugly.
:...

    This is a good demonstration of one of the main issues for M:N vs 1:1 
    (or N:1) threading.   It all comes down to how much machine overhead
    is required to support the concept of a 'thread', traded off against
    performance.   Sometimes machine overhead is the bottleneck, sometimes
    performance is the bottleneck.

    In fact, if machine overhead is the biggest issue then the most extreme
    answer is to not use threading at all... instead, go with a strict
    kqueue poll for the core web server and fork one process for each CPU
    on the system.  Farm out anything that might block so the web server
    core operates in a purely non-blocking fashion.
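
    Purely as an illustration (not code from anyone's tree), a minimal
    sketch of that model might look like the following.  The port number
    and the handle_client() helper are invented for the example; a real
    server would fork one such process per CPU and hand anything that can
    block off to helper processes.

    /*
     * Sketch: single non-blocking process driven by a kqueue event loop.
     */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/event.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>
    #include <err.h>

    static void
    handle_client(int fd)
    {
        char buf[4096];
        ssize_t r = read(fd, buf, sizeof(buf));

        if (r <= 0)
            close(fd);      /* kqueue drops the fd's events on close */
        /* real request parsing / response generation elided */
    }

    int
    main(void)
    {
        struct sockaddr_in sin;
        struct kevent ev;
        int lsn, kq, n, i;

        lsn = socket(AF_INET, SOCK_STREAM, 0);
        if (lsn < 0)
            err(1, "socket");
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(8080);              /* example port */
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(lsn, (struct sockaddr *)&sin, sizeof(sin)) < 0)
            err(1, "bind");
        listen(lsn, 128);
        fcntl(lsn, F_SETFL, O_NONBLOCK);

        kq = kqueue();
        EV_SET(&ev, lsn, EVFILT_READ, EV_ADD, 0, 0, NULL);
        kevent(kq, &ev, 1, NULL, 0, NULL);

        for (;;) {
            struct kevent events[64];

            /* block until sockets become readable */
            n = kevent(kq, NULL, 0, events, 64, NULL);
            for (i = 0; i < n; ++i) {
                int fd = (int)events[i].ident;

                if (fd == lsn) {
                    int cfd = accept(lsn, NULL, NULL);
                    if (cfd >= 0) {
                        fcntl(cfd, F_SETFL, O_NONBLOCK);
                        EV_SET(&ev, cfd, EVFILT_READ, EV_ADD, 0, 0, NULL);
                        kevent(kq, &ev, 1, NULL, 0, NULL);
                    }
                } else {
                    handle_client(fd);   /* must never block */
                }
            }
        }
    }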

    Another major bottleneck for any web server is the buffer memory used
    for all the open TCP connections.  A server typically has considerable
    output bandwidth, but the client usually does not have matching input
    bandwidth, so the data the server pushes out to the client winds up
    sitting in the socket buffer for a very long time.  Using a large TCP
    socket buffer to improve bandwidth to the clients (due to the
    bandwidth-delay product) also greatly increases the memory footprint of
    the connection on the server.  You wind up with another trade-off:
    improved bandwidth to a smaller number of connections using larger TCP
    buffers, or moderate bandwidth to a much larger number of connections
    using smaller TCP buffers.
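
    The knob in question is the per-socket send buffer.  A rough sketch of
    how you'd adjust it, with purely illustrative numbers in the comments
    (not measurements from any particular setup):

    #include <sys/socket.h>

    static int
    set_send_buffer(int fd, int bytes)
    {
        /*
         * Bandwidth-delay product example: to keep a 10 Mbit/s client
         * busy over a 200 ms RTT you need roughly
         *     10,000,000 / 8 * 0.200 = 250,000 bytes
         * in flight, i.e. ~256 KB of send buffer per connection.  At
         * 10,000 concurrent connections that is ~2.5 GB of kernel
         * memory; with 16 KB buffers it is ~160 MB, but each client is
         * then limited to roughly 0.6 Mbit/s on the same RTT.
         */
        return setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes));
    }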

    This particular problem is completely independent of the keepalive
    model (it's a problem whether you use keepalive or not).  But if you
    think about it, keepalive only produces significantly better bandwidth
    characteristics than one-off connections when you use large TCP socket
    buffers, and as the previous paragraph points out, large TCP buffers
    severely limit the number of concurrent connections you can have.  So
    you are not necessarily able to reap any significant benefit in overall
    operation by using keepalive on a heavily loaded web server.

                                        -Matt
                                        Matthew Dillon 
                                        <[EMAIL PROTECTED]>
