Paul J. Reder wrote:

> Brian Pane wrote:
>
>> I've been thinking about strategies for building a
>> multiple-connection-per-thread MPM for 2.0. It's
>> conceptually easy to do this:
>>
>> * Start with worker.
>>
>> * Keep the model of one worker thread per request,
>>   so that blocking or CPU-intensive modules don't
>>   need to be rewritten as state machines.
>>
>> * In the core output filter, instead of doing
>>   actual socket writes, hand off the output
>>   brigades to a "writer thread."
>
> During a discussion today, the idea came up to have the
> code check if it could be written directly instead of
> always passing it to the writer. If the whole response
> is present and can be successfully written, why not save
> the overhead? If the write fails, or the response is too
> complex, then pass it over to the writer.

+1. In cases where the entire file can be delivered in one call to
sendfile/sendfilev, all we'll have to do in the writer thread is
close the connection once the write completes.
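Roughly what I'm picturing for that check (a sketch only;
try_nonblocking_write() and writer_enqueue() are placeholders for
whatever the writer-thread interface ends up looking like, not
existing httpd functions):

/* Sketch: try a direct nonblocking write before handing the brigade
 * to the writer thread.  try_nonblocking_write() and writer_enqueue()
 * are placeholders, not existing httpd functions.
 */
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t core_output_filter_sketch(ap_filter_t *f,
                                              apr_bucket_brigade *bb)
{
    conn_rec *c = f->c;
    apr_status_t rv;

    /* Cheap case: if the socket can take the whole brigade right now,
     * send it directly and never involve the writer thread. */
    rv = try_nonblocking_write(c, bb);
    if (rv == APR_SUCCESS && APR_BRIGADE_EMPTY(bb)) {
        return APR_SUCCESS;
    }

    /* Partial write, EWOULDBLOCK, or a response too complex to send in
     * one shot: queue the remaining buckets for the writer thread and
     * let this worker thread move on to its next request. */
    return writer_enqueue(c, bb);
}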
>> * As soon as the worker thread has sent an EOS
>>   to the writer thread, let the worker thread
>>   move on to the next request.
>
> I have a small concern here. Right now the writes are
> providing the throttle that keeps the system from generating
> so much queued output that we burn system resources. If
> we allow workers to generate responses without a throttle,
> it seems possible that the writer's queue will grow to the
> point that the system starts running out of resources.

Right. The solution I'd been thinking of is a variant of the current
worker MPM's "queue_info" struct: a central per-process structure that
keeps a count of open connections. The listener thread increments this
counter every time it does an accept, and the writer thread decrements
it every time a connection completes. If the count reaches a configured
maximum, the listener blocks on a condition variable until the writer
closes at least one connection and wakes the listener up.

Brian
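P.S. A rough sketch of that throttle, using APR thread primitives;
conn_throttle_t and the function names are illustrative only, not code
that exists in worker today:

#include "apr_thread_mutex.h"
#include "apr_thread_cond.h"

typedef struct {
    apr_thread_mutex_t *lock;      /* from apr_thread_mutex_create() */
    apr_thread_cond_t  *not_full;  /* from apr_thread_cond_create() */
    int open_conns;                /* accepted but not yet closed */
    int max_conns;                 /* configured ceiling */
} conn_throttle_t;

/* Listener thread: called around accept().  Blocks while the process
 * is already at its connection limit. */
static void throttle_conn_opened(conn_throttle_t *t)
{
    apr_thread_mutex_lock(t->lock);
    while (t->open_conns >= t->max_conns) {
        apr_thread_cond_wait(t->not_full, t->lock);
    }
    t->open_conns++;
    apr_thread_mutex_unlock(t->lock);
}

/* Writer thread: called once a connection's final brigade has been
 * written and the connection closed.  Wakes the listener if it is
 * blocked at the limit. */
static void throttle_conn_closed(conn_throttle_t *t)
{
    apr_thread_mutex_lock(t->lock);
    t->open_conns--;
    apr_thread_cond_signal(t->not_full);
    apr_thread_mutex_unlock(t->lock);
}

Using a condition variable rather than polling keeps the listener idle
until the writer actually frees a connection slot.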