On Fri, 21 Jul 2000, Petr Novotny wrote:
> I really suggest you to sift through the archives first. My MTA really
> does faster, even in this situation: The round-trip times around here
> are too long. The less round-trips, the faster the mail gets through.
> Easy as that.
Hmm...RSET needs one roundtrip (C: RSET, S: OK). A new SMTP connection
needs 3 roundtrips: 1. C: TCP(SYN), S: TCP(SYN+ACK); 2. C: TCP(ACK),
S: server hello; 3. C: HELO, S: OK. Moreover, a typical TCP
implementation opens every new connection with most parameters (such as
the RTT estimate, window, and MTU) set to default values, and it takes a
while to adjust them. Your claim (fewer roundtrips, better performance)
is misleading at best...what is meant by "roundtrips" in that context
anyway?
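To put rough numbers on the 1-vs-3 comparison, here is a tiny Python
sketch. The roundtrip counts come from the exchange described above; the
200 ms RTT is an arbitrary illustration, not a measurement:

```python
# Round-trip cost of reusing an SMTP connection via RSET versus opening
# a brand-new TCP connection per message (counts as enumerated above).

RSET_ROUNDTRIPS = 1       # C: RSET, S: OK
NEW_CONN_ROUNDTRIPS = 3   # SYN/SYN+ACK, ACK/greeting, HELO/OK

def setup_latency(roundtrips, rtt):
    """Seconds spent waiting before MAIL FROM can be sent."""
    return roundtrips * rtt

rtt = 0.2  # 200 ms per roundtrip, purely illustrative
print(setup_latency(RSET_ROUNDTRIPS, rtt))      # ~0.2 s with RSET
print(setup_latency(NEW_CONN_ROUNDTRIPS, rtt))  # ~0.6 s with a fresh connection
```

So on a slow link the reconnect penalty is real, but it is a per-message
constant, not the whole story, as the points below show.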
The real reasons why qmail-style multiple connections are very likely to
outperform sendmail-or-whatever-style single connections are:
(Please note that the following explanations are intentionally
simplified for the sake of clarity and brevity.)
1. There are several "synchronization points" per one SMTP transaction,
let's say N (e.g. N=7 for qmail-remote and a server without pipelining:
TCP(SYN) & TCP(SYN+ACK), TCP(ACK) & server hello, HELO, MAIL, RCPT, DATA,
end of DATA (".")). At each of those points, there is one roundtrip wait
on the client's side. Let's say an average roundtrip is R seconds long,
the link can transmit B bytes per second, the client needs to transmit M
bytes per message and the traffic from the server to the client is
negligible. This means an average SMTP transaction is M/B + NR seconds
long, and an average link utilization during a single transaction is
U = M / (M + NRB). It is obvious that U < 1. If U << 1, the client spends
more of its time waiting for the server's responses than sending data. In
such a case, the performance grows more or less linearly with the number
of simultaneously running clients, as does the link utilization. Up to a
point: when the link is saturated, the performance cannot grow anymore,
and it even decreases gradually due to the congestion-induced overhead and
increased per-message latencies.
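The arithmetic in point 1 can be sketched like this. N=7 is the
qmail-remote example above; the message size, bandwidth, and RTT values
are made up purely for illustration:

```python
# Model from point 1: transaction time is M/B + N*R, and link
# utilization during one transaction is U = M / (M + N*R*B).

def transaction_time(M, B, N, R):
    """Seconds per SMTP transaction: transfer time plus N roundtrip waits."""
    return M / B + N * R

def utilization(M, B, N, R):
    """Fraction of the transaction the client spends actually sending data."""
    return M / (M + N * R * B)

M = 10_000    # bytes per message (illustrative)
B = 100_000   # bytes per second the link can carry (illustrative)
N = 7         # synchronization points (qmail-remote, no PIPELINING)
R = 0.2       # seconds per roundtrip (illustrative)

print(transaction_time(M, B, N, R))  # ~1.5 s per transaction
print(utilization(M, B, N, R))       # ~0.067: the client mostly waits
```

With U around 7%, roughly a dozen parallel clients would be needed to
fill this particular link, which is exactly the regime where multiple
connections win.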
2. Even when the link is congested, it might still be possible to
increase the amount of data you send...if there are other users using the
link you can steal some bandwidth from them. Let's assume the link can
transmit B bytes per second and the router on its end receives Y b/s
to be transmitted from you and O b/s from other users. A typical Internet
router will transmit a more or less randomly chosen set of approximately
B incoming bytes per second and drop the rest, i.e. approx. Y / (Y + O)
of the bandwidth will be allocated to you, and O / (Y + O) to others. If
O remains
fixed and Y grows, Y / (Y + O) grows as well. This is similar to the
"communistic" behaviour of the traditional unix CPU scheduler: the more
processes you run, the more CPU time you get...at the other users'
expense. Today, it pays off to be aggressive (OTOH, I am not sure it
will pay off tomorrow in a more QoS-aware Internet).
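A back-of-the-envelope sketch of the Y / (Y + O) allocation from point
2; the capacity and load figures are invented:

```python
# Share of a congested link: with roughly random drops, a sender
# offering Y bytes/s against others' O bytes/s gets about Y / (Y + O)
# of the link capacity B.

def my_share(B, Y, O):
    """Approximate bytes/s the router forwards for us on a saturated link."""
    return B * Y / (Y + O)

B = 1_000_000   # link capacity, bytes/s (illustrative)
O = 500_000     # others' offered load, bytes/s (illustrative)
for Y in (100_000, 500_000, 2_000_000):
    print(Y, my_share(B, Y, O))   # share grows monotonically with Y
```

With O fixed, pushing more traffic always buys a bigger share, which is
why the aggressive strategy wins on today's best-effort Internet.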
--Pavel Kankovsky aka Peak [ Boycott Microsoft--http://www.vcnet.com/bms ]
"Resistance is futile. Open your source code and prepare for assimilation."