On Tue, 2002-10-15 at 10:31, Bill Stoddard wrote:
> Something I've been hacking on (in the pejorative sense of the word 'hack').
> Look at the patch and you will see what I mean :-).  This should apply and
> serve pages on Linux, though the event_loop is clearly broken as it does not
> time out keep-alive connections and will hang on the apr_poll() (and hang the
> server) if a client leaves a keep-alive connection active but does not send
> anything on it. Scoreboard is broken, code structure is poor, yadda yadda. I
> plan to reimplement some of this more cleanly but no idea when I'll get
> around to it. Key points:
> 
> 1. routines to read requests must be able to handle getting APR_EAGAIN (or
> APR_EWOULDBLOCK)
> 2. ap_process_http_connection reimplemented to be state driven
> 3. event loop in worker_mpm to wait for pending i/o
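
To make point 1 concrete, here is roughly what I picture a request-read
routine doing on a non-blocking socket.  read_some() and the conn_state
enum are names I made up for illustration; only the APR calls are meant
to be real, and error/EOF handling is omitted:

/* Sketch: coping with APR_EAGAIN on a non-blocking socket. */
#include "apr_network_io.h"
#include "apr_errno.h"

typedef enum {
    CONN_STATE_READ_REQUEST,    /* keep parsing the current request */
    CONN_STATE_NEED_MORE_DATA   /* park the conn until the event loop says readable */
} conn_state_e;

static conn_state_e read_some(apr_socket_t *sock, char *buf,
                              apr_size_t bufsize, apr_size_t *nread)
{
    apr_status_t rv;

    *nread = bufsize;
    rv = apr_socket_recv(sock, buf, nread);   /* socket set APR_SO_NONBLOCK elsewhere */

    if (APR_STATUS_IS_EAGAIN(rv)) {
        /* No data right now: don't block here.  Give the connection back to
         * the event loop and resume in the same state when data arrives. */
        return CONN_STATE_NEED_MORE_DATA;
    }
    return CONN_STATE_READ_REQUEST;
}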


From a quick read through the patch, it looks like the
connection processing flow is:
   - listener thread accepts connection
   - listener passes connection to a worker through fd queue
   - worker wakes up, passes connection to event thread
     through event queue
   - worker goes back to sleep
   - event thread adds the connection to its fd set
   - when data is available on the connection, the event
     thread pushes the connection back onto the fd queue,
     waking up another worker to handle the connection

Did I get that right?  It seems like a lot of extra
context switching, compared to just having one event-loop
thread do all the reads.  On the other hand, having a
single thread doing all the reads would mean that we
couldn't spread the work of input processing across
multiple CPUs (except by adding more processes).
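
If I've got that flow right, the event thread's inner loop would look
something like the sketch below.  I'm using APR's pollset interface
rather than the raw apr_poll() call the patch hangs on, and
event_conn_t, conn_queue_pop(), and worker_queue_push() are placeholders
I invented for the patch's fd/event queues:

/* Sketch of the event thread's loop as I understand the patch. */
#include "apr_poll.h"
#include "apr_network_io.h"

typedef struct {
    apr_socket_t *sock;
    void *baton;              /* per-connection state carried between threads */
} event_conn_t;

extern event_conn_t *conn_queue_pop(void);        /* conns parked by workers */
extern void worker_queue_push(event_conn_t *c);   /* readable conns back to workers */

static void event_thread(apr_pool_t *pool)
{
    apr_pollset_t *pollset;
    const apr_pollfd_t *results;
    apr_int32_t nready, i;
    event_conn_t *c;

    apr_pollset_create(&pollset, 1024, pool, 0);

    for (;;) {
        /* Pick up connections that workers have handed to us. */
        while ((c = conn_queue_pop()) != NULL) {
            apr_pollfd_t pfd = { 0 };
            pfd.desc_type   = APR_POLL_SOCKET;
            pfd.desc.s      = c->sock;
            pfd.reqevents   = APR_POLLIN;
            pfd.client_data = c;
            apr_pollset_add(pollset, &pfd);
        }

        /* Wait for data.  (A real version needs a timeout here so that
         * idle keep-alive connections can be closed.) */
        if (apr_pollset_poll(pollset, -1, &nready, &results) != APR_SUCCESS) {
            continue;
        }

        for (i = 0; i < nready; i++) {
            c = results[i].client_data;
            apr_pollset_remove(pollset, &results[i]);
            worker_queue_push(c);    /* wake a worker to resume this request */
        }
    }
}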

I wonder if we'd get any better results by combining
the roles of the listener and event threads:
  - have 'n' listener/reader threads, and let them
    take turns accepting connections, sort of like the
    leader/followers design
  - each listener/reader thread can handle at most
    'm' connections at once.  As it accepts connections,
    it adds them to its pollset.
  - when one of the listener/reader threads finds data
    available on one of its connections, it does the
    socket read inline.  (Note: This implies that we
    can't do anything in an input filter that might
    take a really long time.)
  - when a listener/reader thread recognizes a complete
    request, it hands that connection off to a worker
    thread for processing.
  - when the worker has produced a response, it hands
    off the brigade to a completion thread to wait for
    the network writes to complete.  (I suppose this
    completion thread could be combined with the
    listener/reader thread.  That might actually improve
    performance on multiprocessors by helping to ensure
    that each connection's network I/O is handled by the
    same thread for the life of the request.)
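
For what it's worth, here is a rough sketch of one listener/reader
thread in that scheme.  The APR calls are real; accept_mutex,
listen_sock, MAX_CONNS_PER_THREAD, have_complete_request(), and
hand_off_to_worker() are all names I invented for illustration, and
error handling is left out:

/* Sketch of one listener/reader thread in the combined scheme. */
#include "apr_poll.h"
#include "apr_network_io.h"
#include "apr_thread_mutex.h"
#include "apr_time.h"

#define MAX_CONNS_PER_THREAD 64                     /* the 'm' above; value is arbitrary */

extern apr_thread_mutex_t *accept_mutex;            /* serializes the "leader" */
extern apr_socket_t *listen_sock;                   /* assumed non-blocking */
extern int have_complete_request(apr_socket_t *s);  /* hypothetical: inline read + parse */
extern void hand_off_to_worker(apr_socket_t *s);    /* hypothetical: queue for a worker */

static void listener_reader_thread(apr_pool_t *pool)
{
    apr_pollset_t *pollset;
    const apr_pollfd_t *results;
    apr_int32_t nready, i;
    int nconns = 0;

    apr_pollset_create(&pollset, MAX_CONNS_PER_THREAD, pool, 0);

    for (;;) {
        /* Take a turn as leader if we have room and nobody else is accepting. */
        if (nconns < MAX_CONNS_PER_THREAD
            && apr_thread_mutex_trylock(accept_mutex) == APR_SUCCESS) {
            apr_socket_t *csock;
            if (apr_socket_accept(&csock, listen_sock, pool) == APR_SUCCESS) {
                apr_pollfd_t pfd = { 0 };
                apr_socket_opt_set(csock, APR_SO_NONBLOCK, 1);
                pfd.desc_type = APR_POLL_SOCKET;
                pfd.desc.s    = csock;
                pfd.reqevents = APR_POLLIN;
                apr_pollset_add(pollset, &pfd);
                nconns++;
            }
            apr_thread_mutex_unlock(accept_mutex);
        }

        /* Do the socket reads inline on whichever connections are ready. */
        if (apr_pollset_poll(pollset, apr_time_from_sec(1), &nready, &results)
            != APR_SUCCESS) {
            continue;
        }
        for (i = 0; i < nready; i++) {
            apr_socket_t *csock = results[i].desc.s;
            if (have_complete_request(csock)) {
                /* Full request parsed: hand the connection to a worker thread. */
                apr_pollset_remove(pollset, &results[i]);
                hand_off_to_worker(csock);
                nconns--;
            }
        }
    }
}

The trylock is what makes the threads take turns as leader; a thread
that is already holding its 'm' connections just keeps polling and
reading on the ones it has.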

Brian

