On Sun, May 12, 2013 at 11:22:22AM +0200, Patrik Rak wrote:

> The fact that qmgr doesn't know how many delivery agents for each
> transport are there doesn't help either. It only knows the
> var_proc_limit, which is not good enough for this. I recall we
> had a discussion with Wietse about this a long time ago, and IIRC
> we decided at the time that it was better if qmgr didn't depend
> on that value...

Yes, of course; as I covered in my earlier post, qmgr would need
to be told an upper bound on the number of processes for deferred
entries, leaving the rest for new entries.

        smtp_deferred_concurrency_limit = 0 | limit
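In main.cf that might look like the following (a hypothetical, proposed parameter; it does not exist in Postfix today, and the name and value are illustrative):

```
# Proposed: reserve at most 20 of the smtp transport's delivery
# agents for deferred mail; 0 disables the split entirely.
smtp_deferred_concurrency_limit = 20
```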

> >I sympathise with the concern about the internal cost, but if the
> >solution adds substantial user-visible complexity I contend that
> >it is pointless, and the users who need this (the sites that accept
> >subscriptions via HTTP, ...) can just create a multi-instance
> >config, it is simple enough to do.
> 
> Hmm, if the visible configuration is what bothers you, it would be
> equally trivial to implement it so qmgr splits the transport only
> internally, so that to the outside world it looks as if there were
> only one transport. But I considered this a worse solution, as it
> would do something behind the scenes without allowing it to be
> configured properly...

The "configuring it properly" part raises the complexity cost to a
level where I would suggest that the tiny fraction of sites taking
a high volume of new recipients via HTTP subscription forms can
implement a fallback instance.  The explicit parallel transports
are not much simpler.  A bulk mail MTA probably needs a fallback
instance anyway.

> >On Sat, May 11, 2013 at 06:33:22PM -0400, Wietse Venema wrote:
> >
> >>Even simpler: stop reading the deferred queue when more than N% of
> >>the maximal number of recipient slots is from deferred mail.
> >
> >This does not address Patrik's stated goal of avoiding process
> >saturation in the smtp transport by slow mail to bogus destinations.
> >(Similar to my 2001 analysis that motivated "relay" for inbound mail
> >and taught me the importance of recipient validation.)
> >
> >Rather, it addresses active limit exhaustion.  The idea is perhaps
> >a good one anyway.  Reserve some fraction of the active queue limits
> >for new mail, so that when enough deferred mail is in core, only new
> >mail is processed along with the already deferred mail.
> 
> I too agree that this one would be really nice to have.

We need to be a bit careful: starving the deferred queue can lead
to an ever-growing deferred queue, with more messages coming in,
getting deferred, and never being retried.  If we are to impose a
separate deferred queue ceiling while continuing to take in new
mail, we'll need a much stiffer coupling between the output rate
and the input rate, to avoid congested MTAs becoming bottomless
pits for ever more mail.

The current inflow_delay mechanism does not push back hard enough.
When the inflow_delay timer is exhausted, cleanup goes ahead and
accepts the message.  We could consider having cleanup tempfail
when deferred mail hits the ceiling in the active queue.

    - Suspend deferred queue scans when we hit a high water mark
      on deferred mail in the active queue.

    - Resume deferred queue scans when we hit a low water mark on
      deferred mail in the active queue.

    - On queue manager startup generate a set of default process
      limit tokens.

    - Generate one token per message moved from incoming into the
      active queue, provided deferred queue scans are not suspended.

    - Generate one token per message delivered or bounced (removed
      rather than deferred) when deferred queue scans are suspended.

    - Generate another set of default process limit tokens each
      time the queue manager completes a full scan of the incoming
      queue, provided deferred queue scans are not suspended.

    - Cleanup (based on a request flag from local, bounce, pickup vs.
      smtpd/qmqpd) either ignores inflow_delay (not much point in
      enforcing this with local sources; the mail is already in the
      queue) or tempfails after the inflow_delay timer expires.
      With remote sources, a full queue, as evidenced by lots of
      deferred mail in the active queue, exerts stiff back-pressure
      on the sending systems.

    - We probably need a longer token wait delay if the coupling
      is stiffer.  This would be a new parameter that turns on
      the new behaviour if set non-zero.
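For concreteness, the token and water-mark rules above could be modeled roughly as follows. This is a toy Python sketch, not qmgr code; the process limit and water-mark values are made-up numbers, and the class and method names are my own:

```python
# Toy model of the proposed token accounting (illustrative only).
DEFAULT_PROCESS_LIMIT = 100   # stand-in for the default process limit
HIGH_WATER = 500              # deferred entries in the active queue
LOW_WATER = 200

class QueueManagerModel:
    def __init__(self):
        # On startup, generate a set of default-process-limit tokens.
        self.tokens = DEFAULT_PROCESS_LIMIT
        self.deferred_in_active = 0
        self.deferred_scans_suspended = False

    def _update_water_marks(self):
        # Hysteresis: suspend at the high mark, resume at the low mark.
        if self.deferred_in_active >= HIGH_WATER:
            self.deferred_scans_suspended = True
        elif self.deferred_in_active <= LOW_WATER:
            self.deferred_scans_suspended = False

    def accept_new_message(self):
        # In the real design, cleanup would wait up to the token wait
        # delay and then tempfail; here we just report the decision.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

    def move_incoming_to_active(self):
        # One token per message moved from incoming into the active
        # queue, provided deferred queue scans are not suspended.
        if not self.deferred_scans_suspended:
            self.tokens += 1

    def move_deferred_to_active(self):
        self.deferred_in_active += 1
        self._update_water_marks()

    def remove_from_active(self, was_deferred, requeued):
        # One token per message delivered or bounced (removed rather
        # than deferred), but only while deferred scans are suspended.
        if was_deferred:
            self.deferred_in_active -= 1
            self._update_water_marks()
        if self.deferred_scans_suspended and not requeued:
            self.tokens += 1

    def incoming_scan_complete(self):
        # Refresh the token buffer on each full incoming queue scan,
        # again only while deferred scans are not suspended.
        if not self.deferred_scans_suspended:
            self.tokens += DEFAULT_PROCESS_LIMIT
```

Once enough deferred mail is in core, the model stops generating tokens from incoming-queue activity and only mints one per delivery or bounce, which is exactly the stiff input/output coupling described above.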

This is not yet a complete design, and requires more thought.  We
need to better understand how this behaves when the queue is not
congested and a burst of mail arrives from some source.  We also
need to understand how it behaves when the deferred queue is large
and the input rate is stiffly coupled to the output rate.

Unless the active queue is completely full, we're not coupled to
the output rate; rather, we're coupled to the queue manager's
ability to move mail from incoming into active, with excess tokens
acting as a buffer that is refreshed on each complete incoming
queue scan.  If this buffer is not too small (should it be a
multiple of the deferred process limit?) we should be able to
accommodate bursts of mail when not congested without tempfailing
any of them, but with some increase in latency to allow the queue
manager to keep up.

What changes is the behaviour when we already have lots of deferred
mail.  No new tokens are generated even when incoming queue scans
are completed, and the MTA only accepts as many messages as are
delivered until deferred messages in the active queue fall below
the low water mark.

The gap between the high and low water marks needs to be large
enough to not be significantly impacted by the minimum backoff time
quantization of deferred queue scans.

> >A separate mechanism is still needed to avoid using all ~100 smtp
> >transport delivery processes for deferred mail.  This means that
> >Patrik would need to think about whether the existing algorithm
> >can be extended to take limits on process allocation to deferred
> >mail into account.
> 
> I have really tried, but unless I separated the two internally
> considerably, I always wound up with the deferred recipients
> somehow affecting the normal recipients. There are so many memory
> limits to deal with, and once you let the deferred recipients
> in-core, it's hard to get rid of them. The "less input" approach
> is a solution here, but I am afraid it might affect the "real"
> deferred mail too adversely to be generally recommended...

Yes, the best internal implementation is to split each transport
for which one defines a process ceiling into two internal twins.
I arrived at the same conclusion yesterday, before reading your
post, so we're in agreement there.

My point is that this is by far the simpler interface for the user.

Also, with both internal logical transports talking to a single
pool of delivery agents, in the absence of deferred mail new mail
can use the full set of delivery agent processes.
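A minimal sketch of that sharing arrangement, assuming a hypothetical per-transport deferred ceiling (illustrative Python, with invented names; not Postfix internals):

```python
# Two logical transports (new vs. deferred mail) drawing from one
# pool of delivery agents; only deferred deliveries are capped.
class SharedAgentPool:
    def __init__(self, process_limit, deferred_ceiling):
        self.process_limit = process_limit        # total agents, e.g. 100
        self.deferred_ceiling = deferred_ceiling  # max agents for deferred mail
        self.busy_new = 0
        self.busy_deferred = 0

    def acquire(self, deferred):
        # Refuse if the pool is exhausted, or if this is a deferred
        # delivery and the deferred ceiling has been reached.
        if self.busy_new + self.busy_deferred >= self.process_limit:
            return False
        if deferred and self.busy_deferred >= self.deferred_ceiling:
            return False
        if deferred:
            self.busy_deferred += 1
        else:
            self.busy_new += 1
        return True

    def release(self, deferred):
        if deferred:
            self.busy_deferred -= 1
        else:
            self.busy_new -= 1
```

With no deferred mail in play, new mail can claim all 100 agents; deferred mail can never hold more than its ceiling, so slow deliveries to bogus destinations cannot saturate the transport.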

-- 
        Viktor.