On Wed, May 15, 2013 at 06:52:52PM +0200, Patrik Rak wrote:

> I would also like to point out that in my case, the "slow mail" is
> not a slow mail as in "mail which goes to sites behind slow links".
> It is slow as in "it takes long time before the delivery agent times
> out".

Clear from the outset.

> Therefore, the 60:1 example is not unrealistic at all - in fact, as
> normal mail delivery gets faster, this ratio easily gets even
> worse, because (and as long as) the timeout remains the same.

My issue with 60:1 is not the latency ratio, but the assumption
that there is an unlimited supply of such mail to soak up as many
delivery agents as one may wish to add.  In practice the input rate
of such mail is finite; if the output rate (via high concurrency)
exceeds the input rate, there is no accumulation and no process
exhaustion.
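The rate argument above can be sketched numerically.  This is an
illustrative back-of-the-envelope model, not Postfix code; the
arrival rate, timeout, and concurrency figures are assumptions
chosen only to show the break-even point:

```python
def backlog_after(seconds, arrival_rate, concurrency, timeout):
    """Messages accumulated after `seconds`, assuming each slow
    delivery occupies one agent for `timeout` seconds before it
    fails, so the pool retires concurrency/timeout messages/sec."""
    service_rate = concurrency / timeout        # attempts retired per second
    growth = arrival_rate - service_rate        # net accumulation per second
    return max(0.0, growth * seconds)

# 1 slow message/sec arriving, 300s delivery timeout:
# 150 agents retire only 0.5 msg/sec, so the backlog grows;
# 600 agents retire 2 msg/sec, and nothing accumulates at all.
print(backlog_after(3600, 1.0, 150, 300))   # 1800.0
print(backlog_after(3600, 1.0, 600, 300))   # 0.0
```

Once concurrency pushes the service rate past the (finite) arrival
rate, the extra agents simply sleep; they cost memory, not backlog.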

> And that's also why throwing in more delivery agents in this case is
> such a waste - no matter how much I throw in, this mail doesn't get
> delivered, period. That's why I am reluctant to spend any extra
> resources on that.

It is not a waste: each message *will* eventually be allocated a
process and will be tried.  All I want to do is widen the pipe and
deal with congestion quickly.  If you keep the pipe narrow, you risk
overflowing the queue capacity, so a wider pipe is useful.  You want
to avoid starving new mail; we can do both.

In order to drain slow mail quickly (allocating a bunch of sleeping
processes via a bit of memory capacity, without thrashing) without
starving new mail, we need separate process pools for the slow and
fast paths, each of which can use the blocked-delivery-agent process
limit balloon.  Then there is never any contention between the two
flows.
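A conceptual sketch of the two-pool idea, in Python rather than
Postfix internals; the pool sizes, the `known_slow` routing flag,
and the delay figures are assumptions for illustration only:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Separate pools: slow deliveries can never occupy the workers
# that new mail depends on, so the two flows never contend.
fast_pool = ThreadPoolExecutor(max_workers=20)   # new mail
slow_pool = ThreadPoolExecutor(max_workers=200)  # mail expected to time out

def deliver(msg, delay):
    time.sleep(delay)          # stand-in for an actual delivery attempt
    return msg

def dispatch(msg, known_slow):
    # Route by destination history: known-slow destinations go to the
    # big, cheap (mostly sleeping) pool; everything else stays fast.
    pool = slow_pool if known_slow else fast_pool
    return pool.submit(deliver, msg, 1.0 if known_slow else 0.01)

f = dispatch("msg-1", known_slow=False)
print(f.result())   # msg-1
```

However large the slow pool's backlog grows, the fast pool's latency
for fresh mail is unchanged, which is the whole point of the split.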

Be careful not to starve the deferred queue, while still applying
back-pressure on new mail; let new mail find a less-congested MX host.

-- 
        Viktor.
