On Mon, Aug 06, 2001 at 01:07:36PM +0100, Richard Underwood wrote:
> > From: Henning Brauer [mailto:[EMAIL PROTECTED]]
> >
> > On Mon, Aug 06, 2001 at 11:09:25AM +0100, Richard Underwood wrote:
> > > I've also noticed that if qmail tries to deliver (for example) 50
> > > messages to one host concurrently, perhaps 2 will get through. The rest
> > > will be retried, but unfortunately they tend to get retried at much the
> > > same time. Again, 2 messages get through, and the process repeats. This
> > > simply isn't efficient.
> >
> > This isn't qmail's fault but the fault of the remote host. There is room
> > for improvement - just not on qmail's side. The remote host MUST NOT
> > accept more connections than it can handle. If it does, the remote
> > recipients must live with the delays.
> >
> Read what I wrote again. It IS qmail's fault. One role I use qmail
> for is to accept mail which is then passed on to an exchange server on the
> same network. Here's an example of what can happen ...
> If the exchange server goes down, a large queue builds up. The
> exchange server accepts something like 20 concurrent connections before
> refusing to accept connections. This, as you say, is what the server should
> do.
> When the exchange server comes back up, I kick the qmail-send
> process to get it to deliver the queue. At this point I should be able to go
> off and do other things.
If you would simply let qmail do its job, this would not happen. Just do
nothing: qmail's backoff algorithm would not try to deliver all messages at
once if you hadn't sent a SIGALRM to qmail-send.
--
* Henning Brauer, [EMAIL PROTECTED], http://www.bsws.de *
* Roedingsmarkt 14, 20459 Hamburg, Germany *
Unix is very simple, but it takes a genius to understand the simplicity.
(Dennis Ritchie)