On Jul 7, 2009, at 10:17 AM, Graham Leggett wrote:
Paul Querna wrote:
It breaks the 1:1 connection mapping to thread (or process) model
which is critical to low memory footprint, with thousands of
connections, maybe I'm just insane, but all of the servers taking
market share, like lighttpd,
2009/7/8 Graham Leggett minf...@sharp.fm:
Paul Querna wrote:
It breaks the 1:1 connection mapping to thread (or process) model
which is critical to low memory footprint, with thousands of
connections, maybe I'm just insane, but all of the servers taking
market share, like lighttpd, nginx,
On Wed, Jul 8, 2009 at 3:05 AM, Graham Dumpleton graham.dumple...@gmail.com wrote:
2009/7/8 Graham Leggett minf...@sharp.fm:
Paul Querna wrote:
It breaks the 1:1 connection mapping to thread (or process) model
which is critical to low memory footprint, with thousands of
connections, maybe
On 08.07.2009 15:55, Paul Querna wrote:
On Wed, Jul 8, 2009 at 3:05 AM, Graham Dumpleton graham.dumple...@gmail.com wrote:
2009/7/8 Graham Leggett minf...@sharp.fm:
Paul Querna wrote:
It breaks the 1:1 connection mapping to thread (or process) model
which is critical to low memory
2009/7/9 Rainer Jung rainer.j...@kippdata.de:
On 08.07.2009 15:55, Paul Querna wrote:
On Wed, Jul 8, 2009 at 3:05 AM, Graham Dumpleton graham.dumple...@gmail.com wrote:
2009/7/8 Graham Leggett minf...@sharp.fm:
Paul Querna wrote:
It breaks the 1:1 connection mapping to thread (or process)
On Wed, 2009-07-08 at 11:01 +1000, Bojan Smojver wrote:
So, the loop would be:
- poll()
- try assembling a full request from data read so far
- process if successful
- go back to poll() if not
Too naive?
I see that we'd most likely get stuck with the accept mutex (i.e. if
another
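The loop Bojan proposes can be sketched roughly as follows — Python for illustration only (the real MPM would be C), with a simplistic "full request" check that just looks for the end of the header block; all names here are hypothetical:

```python
import select

# Per-connection read buffers; a request counts as "complete" once the
# header terminator has arrived (body handling omitted for brevity).
buffers = {}

def try_assemble(fd):
    """Return a full request if one is buffered for fd, else None."""
    data = buffers.get(fd, b"")
    end = data.find(b"\r\n\r\n")
    if end == -1:
        return None                       # incomplete: go back to poll()
    request, rest = data[:end + 4], data[end + 4:]
    buffers[fd] = rest                    # keep any pipelined remainder
    return request

def event_loop(listener, handle_request):
    poller = select.poll()
    poller.register(listener, select.POLLIN)
    conns = {}
    while True:
        for fd, _events in poller.poll():
            if fd == listener.fileno():
                conn, _addr = listener.accept()
                conn.setblocking(False)
                conns[conn.fileno()] = conn
                poller.register(conn, select.POLLIN)
                continue
            chunk = conns[fd].recv(4096)
            if not chunk:                 # peer closed the connection
                poller.unregister(fd)
                buffers.pop(fd, None)
                conns.pop(fd).close()
                continue
            buffers[fd] = buffers.get(fd, b"") + chunk
            request = try_assemble(fd)    # try assembling a full request
            if request is not None:
                handle_request(conns[fd], request)   # process if successful
            # otherwise fall through: back to poll()
```

The key property is the one described above: a worker is never parked on a half-read request; an incomplete buffer simply waits for the next poll() wakeup.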
On Wed, Jul 8, 2009 at 9:11 PM, Bojan Smojver bo...@rexursive.com wrote:
On Wed, 2009-07-08 at 11:01 +1000, Bojan Smojver wrote:
So, the loop would be:
- poll()
- try assembling a full request from data read so far
- process if successful
- go back to poll() if not
Too naive?
I see
On Wed, 2009-07-08 at 22:53 -0400, Paul Querna wrote:
But the event mpm doesn't have an accept mutex :D
Yeah, I know. I was talking about making prefork async too.
--
Bojan
On Mon, Jul 6, 2009 at 10:50 PM, Justin Erenkrantz jus...@erenkrantz.com wrote:
On Mon, Jul 6, 2009 at 10:20 PM, Paul Querna p...@querna.org wrote:
I am looking for an alternative that doesn't expose all this craziness
of when to free, destruct, or lock things. The best idea I can come
up with
On Mon, Jul 6, 2009 at 10:56 PM, Mladen Turkmt...@apache.org wrote:
Paul Querna wrote:
Can't sleep, so finally writing this email I've been meaning to write
for about 7 months now :D
Pools don't help, but don't really make it worse, and are good enough
for the actual cleanup part -- the
Paul Querna wrote:
This deals with removing an event from the pollset, but what about an
event that had already fired, as I gave in the original example of a
timeout event firing the same time a socket close event happened?
In that case I suppose the only solution is to make the operations
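One way to make a stale event harmless — the timeout and the close racing in the same tick — is a generation counter snapshotted when the event is armed and checked before it acts. This is an illustration of that general technique, not the Event MPM's actual mechanism; all names are hypothetical:

```python
import itertools

_gen = itertools.count()

class Connection:
    def __init__(self):
        # Bumped every time the connection's life cycle changes, so
        # events queued against an earlier life cycle can be detected.
        self.generation = next(_gen)
        self.closed = False

    def close(self):
        self.closed = True
        self.generation = next(_gen)       # invalidate in-flight events

class TimeoutEvent:
    def __init__(self, conn):
        self.conn = conn
        self.generation = conn.generation  # snapshot at arm time

    def fire(self):
        # A timeout and a close may fire in the same poll tick; the
        # stale one sees the generation mismatch and becomes a no-op.
        if self.generation != self.conn.generation:
            return False                   # stale: connection closed/reused
        # ... real timeout handling would go here ...
        return True
```

The point is that the handler never has to dereference a freed object to decide staleness: the check happens against state it still validly holds.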
Paul Querna wrote:
Can't sleep, so finally writing this email I've been meaning to write
for about 7 months now :D
One of the challenges in the Simple MPM, and to a smaller degree in
the Event MPM, is how to manage memory allocation, destruction, and
thread safety.
A 'simple' example:
On Tue, Jul 7, 2009 at 7:34 AM, Graham Leggett minf...@sharp.fm wrote:
Paul Querna wrote:
I think it is possible to write a complete server that deals with all
these intricacies and gets everything just 'right', but as soon as you
introduce 3rd party module writers, no matter how 'smart' we
Paul Querna wrote:
Nah, 90% of what is done in modules today should be out of process aka
in FastCGI or another method, but out of process. (regardless of
MPM)
You're just moving the problem from one server to another, the problem
remains unsolved. Whether the code runs within httpd
On Tue, Jul 7, 2009 at 8:39 AM, Graham Leggett minf...@sharp.fm wrote:
Paul Querna wrote:
Nah, 90% of what is done in modules today should be out of process aka
in FastCGI or another method, but out of process. (regardless of
MPM)
You're just moving the problem from one server to
On Tue, Jul 7, 2009 at 10:01 AM, Graham Leggett minf...@sharp.fm wrote:
Paul Querna wrote:
Yes, but in a separate process it has fault isolation.. and we can
restart it when it fails, neither of which are true for modules using
the in-process API directly -- look at the reliability of QMail,
Paul Querna wrote:
Yes, but in a separate process it has fault isolation.. and we can
restart it when it fails, neither of which are true for modules using
the in-process API directly -- look at the reliability of QMail, or
the newer architecture of Google's Chrome, they are both great
Paul Querna wrote:
It breaks the 1:1 connection mapping to thread (or process) model
which is critical to low memory footprint, with thousands of
connections, maybe I'm just insane, but all of the servers taking
market share, like lighttpd, nginx, etc, all use this model.
It also prevents
On Tue, Jul 7, 2009 at 9:39 AM, Paul Querna p...@querna.org wrote:
On Tue, Jul 7, 2009 at 8:39 AM, Graham Leggett minf...@sharp.fm wrote:
Paul Querna wrote:
Nah, 90% of what is done in modules today should be out of process aka
in FastCGI or another method, but out of process.
This is how I envisioned the async stuff working.
-Async event thread is used only for input/output of httpd to/from network*
-After we read the headers, we pass the request/connection to the worker
threads. Each request is sticky to a thread. Request stuff may block,
etc, so this thread pool
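The design Brian describes — one async event thread doing network I/O, handing each parsed request to a worker pool where a request stays sticky to a single thread — might be sketched like this (Python for illustration; the pool size and queue layout are hypothetical):

```python
import queue
import threading

NUM_WORKERS = 4  # hypothetical pool size

# One queue per worker: routing a connection by its id modulo the pool
# size keeps every request on that connection "sticky" to one worker.
work_queues = [queue.Queue() for _ in range(NUM_WORKERS)]

def dispatch(conn_id, request):
    """Called from the async event thread once headers are read."""
    work_queues[conn_id % NUM_WORKERS].put((conn_id, request))

def worker(q, results):
    while True:
        item = q.get()
        if item is None:                  # shutdown sentinel
            break
        conn_id, request = item
        # Request handling may block here without stalling the event
        # thread, which only ever does network I/O.
        results.append((threading.get_ident(), conn_id, request))
```

Because each connection always hashes to the same queue, two requests on one connection are never processed concurrently by different workers, which sidesteps much of the locking the earlier messages worry about.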
On Tue, Jul 7, 2009 at 12:54 PM, Akins, Brian brian.ak...@turner.com wrote:
This is how I envisioned the async stuff working.
-Async event thread is used only for input/output of httpd to/from network*
-After we read the headers, we pass the request/connection to the worker
threads. Each
Akins, Brian wrote:
This is how I envisioned the async stuff working.
-Async event thread is used only for input/output of httpd to/from network*
-After we read the headers, we pass the request/connection to the worker
threads. Each request is sticky to a thread. Request stuff may block,
On 7/7/09 1:02 PM, Graham Leggett minf...@sharp.fm wrote:
Ideally any async implementation should be 100% async end to end. I
don't believe that it's necessary though for a single request to be
handled by more than one thread.
True. However, what about things that may be process intensive.
On 07/07/2009 07:02 PM, Graham Leggett wrote:
Ideally any async implementation should be 100% async end to end. I
don't believe that it's necessary though for a single request to be
handled by more than one thread.
I agree. I see no reason for multiple threads working on the same request at
On Tue, 2009-07-07 at 16:01 +0200, Graham Leggett wrote:
As is httpd prefork :)
Yeah, definitely my favourite MPM :-)
As far as I understand this, the deal is that we need to have a complete
request before we start processing it. Otherwise, we can get stuck and
one of our precious resources is
Can't sleep, so finally writing this email I've been meaning to write
for about 7 months now :D
One of the challenges in the Simple MPM, and to a smaller degree in
the Event MPM, is how to manage memory allocation, destruction, and
thread safety.
A 'simple' example:
- 1) Thread A: Client
On Mon, Jul 6, 2009 at 10:20 PM, Paul Querna p...@querna.org wrote:
I am looking for an alternative that doesn't expose all this craziness
of when to free, destruct, or lock things. The best idea I can come
up with is for each Connection, it would become 'semi-sticky' to a
single thread.
Paul Querna wrote:
Can't sleep, so finally writing this email I've been meaning to write
for about 7 months now :D
Pools don't help, but don't really make it worse, and are good enough
for the actual cleanup part -- the difficulty lies in knowing *when*
you can cleanup an object.
Pool pre
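The "knowing *when* you can clean up" problem is classically answered with reference counting: the object is destroyed only when the last holder lets go, regardless of which thread that happens on. A toy sketch of the idea — an illustration only, not APR's pool API:

```python
import threading

class RefCounted:
    """Free the underlying resource only when the last holder lets go."""

    def __init__(self, destructor):
        self._count = 1                    # creator holds one reference
        self._lock = threading.Lock()
        self._destructor = destructor

    def incref(self):
        with self._lock:
            self._count += 1

    def decref(self):
        with self._lock:
            self._count -= 1
            last = self._count == 0
        if last:
            self._destructor()             # safe: no other holder remains
        return last
```

Each thread or pending event that might touch the object takes a reference before queuing and drops it when done, so the close path and a late-firing timeout can race freely without either one freeing memory the other still needs.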