On 15.5.2013 17:44, Wietse Venema wrote:

Short of adding extra concurrency, nothing is going to clear a
persistent source of slow mail (or a sufficiently-large deferred
queue).  However, this is not the scenario that I have in mind,
and I think the same holds for Patrik.

Right.

We just don't want to dedicate too many mail delivery resources to
the slowest messages.  Faster messages (or an approximate proxy:
new mail) should be scheduled soon for delivery.  They should not
have to wait at the end of the line.

Exactly.

I would also like to point out that in my case, the "slow mail" is not slow as in "mail which goes to sites behind slow links". It is slow as in "it takes a long time before the delivery agent times out".

Therefore, the 60:1 example is not unrealistic at all - in fact, as normal mail delivery gets faster, this ratio easily gets even worse, because (and as long as) the timeout remains the same.
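
For illustration: if a normal delivery completes in about half a second while a timed-out one holds its delivery agent for, say, a 30-second timeout, that is already the 60:1 case; speed normal deliveries up to a quarter of a second and the same timeout makes it 120:1.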

And that's also why throwing in more delivery agents in this case is such a waste - no matter how many I throw in, this mail doesn't get delivered, period. That's why I am reluctant to spend any extra resources on it.
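
To put a number on the waste: if such undeliverable mail arrives at, say, two messages per second and each attempt ties up a delivery agent for a full 30-second timeout, then on average about 2 x 30 = 60 agents are doing nothing but waiting for timeouts - and that average stays the same however many extra agents I add, so the extra agents only buy headroom for the fast mail while those 60 stay parked.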

Now we could take advantage of the fact that in many cases the
"slow" and "fast" messages cluster around different sites, thus
their recipients will end up in different in-memory queues.  If
there were feedback of fine-grained delivery agent latencies to
qmgr(8), it could rank nexthop destinations.  Not to starve slow
mail, but only to ensure that slow mail does not starve new mail.

Ditto; in my case I don't really need to measure remote site speed, as there is often no site as such anyway - the delivery attempt just times out.

For normal sites, as long as the ratios of their respective delivery times are reasonable, none of them dominates the queue so badly that it would bother me that much...
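
For what it's worth, here is a toy Python sketch of the kind of ranking idea described above - it is not how qmgr(8) works, and the EWMA smoothing, the 20% share reserved for slow destinations, and all of the names are made up purely for illustration:

    import collections
    import random

    class LatencyRankedScheduler:
        """Toy model: per-nexthop queues ranked by observed delivery latency."""

        def __init__(self, slow_share=0.2, alpha=0.3):
            self.queues = collections.defaultdict(collections.deque)
            self.latency = {}             # nexthop -> smoothed latency (seconds)
            self.slow_share = slow_share  # fraction of picks reserved for slow mail
            self.alpha = alpha            # EWMA smoothing factor

        def enqueue(self, nexthop, message):
            self.queues[nexthop].append(message)

        def record_latency(self, nexthop, seconds):
            # Feedback from a delivery agent after each completed or failed attempt.
            prev = self.latency.get(nexthop, seconds)
            self.latency[nexthop] = (1 - self.alpha) * prev + self.alpha * seconds

        def next_delivery(self):
            # Hand the next (nexthop, message) pair to an idle delivery agent.
            pending = [hop for hop, q in self.queues.items() if q]
            if not pending:
                return None
            # Destinations with no history count as fast: new mail goes out soon.
            pending.sort(key=lambda hop: self.latency.get(hop, 0.0))
            if random.random() < self.slow_share:
                hop = pending[-1]   # occasionally serve the slowest; never starve it
            else:
                hop = pending[0]    # normally prefer the fastest-looking destination
            return hop, self.queues[hop].popleft()

    sched = LatencyRankedScheduler()
    sched.enqueue("fast.example", "msg-1")
    sched.enqueue("dead.example", "msg-2")
    sched.record_latency("fast.example", 0.5)
    sched.record_latency("dead.example", 30.0)   # delivery agent timed out
    print(sched.next_delivery())                 # usually ("fast.example", "msg-1")

The point of the slow_share knob is exactly the goal stated above: slow mail keeps getting a bounded slice of the delivery agents, but it can no longer push new mail to the end of the line.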

Patrik
