On Fri, Jan 26, 2001 at 11:56:54PM +0000, Mark Delany wrote:
> On Fri, Jan 26, 2001 at 05:46:51PM -0600, David L. Nicol wrote:
> > David Dyer-Bennet wrote:
> >
> > > Um, most reporting measured results from optimizing high-traffic
> > > qmail-based mail servers have found that disk activity on the queue
> > > disk is the first limit they hit.
> >
> >
> > How about, if the first delivery fails, pass it off to a server with
> > some disks. Why not pre-process with qmail-remote before queueing?
>
> qmail-remote is way too late as most of the I/O load is putting the
> mail in the queue, not getting it off. Besides, the symptom isn't
> delivery failure, it's slowness.

Actually, I was a little hasty on the first point. I now see what
you're getting at. One question is, how far do you let qmail-remote go
before deciding it will work? If it's past the MAIL FROM/RCPT TO part,
then why not complete the delivery rather than incur the double load
and double latency?

If you mean to try and completely do the remote delivery prior to
placing it in the queue, then this is the passthru idea that people
have suggested previously. It potentially has merit, with some added
complexity.
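
As a rough sketch of that passthru flow (all names and interfaces
below are made up for illustration, not qmail's actual API): attempt
the direct SMTP delivery first, and fall back to the ordinary disk
queue on any failure.

```python
import smtplib

QUEUE = []  # stand-in for the disk queue


def enqueue(message, sender, recipient):
    """Fallback path: hand the message to the normal disk queue."""
    QUEUE.append((sender, recipient, message))


def submit(message, sender, recipient, mx_host):
    """Hypothetical pass-through submission: try delivering directly
    to the remote MX; queue the message only if that fails."""
    try:
        with smtplib.SMTP(mx_host, 25, timeout=5) as smtp:
            smtp.sendmail(sender, [recipient], message)
        return "delivered"  # queue disk never touched
    except (OSError, smtplib.SMTPException):
        enqueue(message, sender, recipient)
        return "queued"
```

On the happy path the queue disk is never written, which is the whole
attraction; every failure path still ends up on disk, which is where
the complexity (and the latency questions below) comes in.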

One problem is that a busy system, such as a mailing list system, may
be running at its full concurrencyremote limit for extended periods of
time, in which case new submissions should not attempt qmail-remote
delivery - so you're back to square one.
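
That guard could look something like this - a hedged sketch assuming a
concurrencyremote-style slot limit (the class and method names are
invented): a new submission only tries the direct delivery if a slot
is free, and otherwise goes straight to the queue without waiting.

```python
import threading


class PassthruGate:
    """Illustrative concurrency gate for pass-through delivery.
    Tracks remote-delivery slots; when all are busy, callers should
    skip the direct attempt and queue the message immediately."""

    def __init__(self, limit):
        # one slot per allowed concurrent remote delivery
        self.slots = threading.BoundedSemaphore(limit)

    def try_acquire(self):
        # non-blocking: False means "at full concurrency, just queue"
        return self.slots.acquire(blocking=False)

    def release(self):
        # called when a direct delivery attempt finishes
        self.slots.release()
```

A submission path would call try_acquire() first and fall through to
the queue on False, so a saturated list server never stalls incoming
clients on pass-through attempts.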

Another problem is that the mail has to be stored somewhere while
qmail-remote attempts delivery. Well, unless you want the submitting
client to wait - but that may create a lot of confusing latency for,
e.g., people sitting on a PC using Eudora. And if the mail is stored
somewhere, you're starting to get back to a disk queue.

But it's not necessarily a silly idea. I believe that sendmail tries
to do something like this in certain circumstances. A monolithic
design has an advantage in this regard. Doing this in a nice
compartmentalized way with the current qmail wouldn't be a lot of fun.

Regards.