On Fri, 25 Jul 2008 11:06:44 +0300
"Harold J. Ship" <[EMAIL PROTECTED]> wrote:

> > That, on the other hand, sounds like solving a problem that isn't
> > there.  Unless you really need priority queues, you can just let
> > everything run in parallel.
> 
> Well, even a priority queue isn't enough. Suppose there are N threads
> for handling requests. If N+M "heavy" requests come in a burst, then
> there won't be any way to handle any "light" requests until at least
> M+1 "heavy" requests are completed. This would be the same even with a
> priority queue.

The Event MPM decouples threads and requests (though it's dealing
with a different issue) ...

I still don't see what you have in mind.  If your "heavy jobs" are
being submitted to a batch processing queue, why not say so?
But if they're being processed within the HTTP transaction,
then you're just describing an overloaded server that needs to
limit its load in some way (e.g. absolute number of 'heavies')
and return 503 to clients above that limit.  See for example
mod_load_average.

> > So that's a (reverse) proxy architecture.  Apache is happy with
> > that, and indeed it's a very common scenario.
> 
> Not exactly a reverse proxy in the usual sense, because the other
> server is not a web server. In fact on Windows it's another process
> connected using COM. We will probably wrap it with a web interface as
> an interim solution until the other process can also be ported to
> Linux.

OK, communicating with a backend using COM is a job for a module,
which could be a mod_proxy protocol module.  You might want to look 
at how mod_proxy(_balancer) and mod_dbd maintain connection pools.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/
