On 15.5.2013 19:04, Viktor Dukhovni wrote:
> My issue with 60:1 is not with the latency ratio, but with the
> assumption that there is an unlimited supply of such mail to soak
> up as many delivery agents as one may wish to add. In practice
> the input rate of such mail is finite; if the output rate (via high
> concurrency) exceeds the input rate, there is no accumulation and
> no process exhaustion.
No doubt about any of this.
No one is talking about unlimited, though. You need only a few times as
many deferred messages as you have delivery agents available to
experience delays in new mail deliveries. Probability-wise it all works
out, but in practice it does not.
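To make the concern concrete, here is a rough steady-state occupancy estimate using Little's law (busy agents ≈ arrival rate × service time). All numbers are hypothetical, chosen only to illustrate how a modest inflow of slow mail can tie up most of a shared process pool:

```python
# Rough steady-state occupancy estimate (Little's law: L = lambda * W).
# All rates and times below are hypothetical illustration values.

def busy_agents(arrival_rate, service_time):
    """Average number of delivery agents kept busy by one mail flow."""
    return arrival_rate * service_time

AGENTS = 100                          # total delivery agent process limit

fast = busy_agents(10.0, 0.5)         # 10 msg/s at 0.5 s each ->  5 agents
slow = busy_agents(2.0, 30.0)         # 2 msg/s at 30 s each   -> 60 agents

print(fast + slow)                    # 65.0: pool not exhausted
print(fast + busy_agents(2.0, 60.0))  # 125.0: at a 60:1 ratio, slow mail
                                      # alone would exceed the pool
```

With the slow flow at a 60:1 latency ratio, even 2 messages per second of slow mail wants 120 agents, so new (fast) mail queues behind it in a shared pool.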
> In order to drain slow mail quickly (allocate a bunch of sleeping
> processes via a bit of memory capacity, without thrashing) without
> starving new mail, we need separate process pools for the slow and
> fast paths, each of which can use the blocked delivery agent process
> limit balloon. Then there is never any contention between the two
> flows.
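For reference, the split described above maps onto standard Postfix configuration: a second smtp transport in master.cf with its own process limit, plus a transport map routing known-slow destinations to it. The transport name and limit here are illustrative, not a recommendation:

```
# master.cf: a second SMTP transport with its own process pool
# ("slow" and the limit of 20 are illustrative values)
smtp      unix  -       -       n       -       -       smtp
slow      unix  -       -       n       -       20      smtp

# main.cf: route known-slow destinations to the dedicated pool
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport (then run "postmap /etc/postfix/transport"):
#   slow-destination.example    slow:
```

Because each transport has its own process limit, slow mail can only exhaust its own pool, never the agents serving fresh mail.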
I think we are beyond this split model already. It increases the overall
resource cost and yet doesn't allow the groups to share those resources.
It also doesn't seem to deal as well with the situation where you
mis-classify something. I would say the shared resource pool is better.
Patrik