>A lot of people say this and I don't see a particular reason why it
>should be the case. Both POP and SMTP do little more than read a file
>and write it to a socket. ftp does little more than read a file and
>write it to a socket. In the case of binary, the file will be encoded,
>big deal.
>
>Perhaps the problem is that email clients tend not to deal well with
>large files - that's a bug. The reality is that people find email an easy method
>to send things and I'm sure they will continue to do so. And, as people who
>provide email services perhaps we should be making sure it does what people
>want rather than bemoaning the fact that it lets technical novices exchange
>all sorts of unlikely data with each other.
I assume one problem is that the protocols you list in your first
paragraph all rely on a *continuous* connection being available for
a given transmission.
That is, if the connection is dropped for any reason -- and TCP/IP
protocols, as I understand them, are not designed to protect the
sanctity of a connection above all else -- then the entire transmission
(of that email, anyway) is lost, since, unlike some other protocols,
there isn't (as I understand it) a higher-level facility to restart
the transmission, after re-establishing the connection, from where
the previous attempt left off.
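To make that concrete, here's a minimal sketch (Python, using the
standard smtplib; the host and addresses are made up) of what an SMTP
client is reduced to when the connection dies partway through a
message: there's nothing to resume, so the only option is to
reconnect and resend the whole thing from byte zero.

    # Hypothetical sketch: retrying a whole-message SMTP send.
    import smtplib
    import time

    def send_with_retries(msg, tries=5):
        for attempt in range(tries):
            try:
                with smtplib.SMTP("mail.example.com") as s:
                    s.sendmail("a@example.com", ["b@example.com"], msg)
                return True          # server accepted the whole message
            except (smtplib.SMTPException, OSError):
                time.sleep(30)       # reconnect; all progress is lost
        return False                 # every retry started from byte 0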
In an extreme case, if a spurious disconnect of all TCP/IP connections
between machines K and R occurs every 10 minutes with 100% reliability,
and never occurs at any other time, then emails that take 10 or more
minutes to transmit will, no matter how often retries occur, *never*
be transmitted. Shorter emails will eventually get through, but the
number of retries they need grows (geometrically, or something like
that) as their transmission time approaches the 10-minute mark.
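A quick back-of-envelope simulation (Python; the 10-minute figure is
the hypothetical one above, and the random-retry-phase assumption is
mine) bears this out: a T-minute attempt succeeds with probability
(10 - T)/10, so the number of attempts is geometric with mean
10/(10 - T), and anything with T >= 10 never gets through.

    # Hypothetical model: disconnects hit at every 10-minute mark, and
    # a T-minute attempt succeeds only if it fits before the next one.
    import random

    CYCLE = 10.0                     # minutes between disconnects

    def attempts_until_success(t):
        if t >= CYCLE:
            raise ValueError("spans a disconnect: never succeeds")
        n = 1
        while random.random() * CYCLE < t:  # time left in cycle < t,
            n += 1                          # so this attempt fails
        return n

    # E[attempts] = CYCLE / (CYCLE - t): a 9-minute email needs about
    # 10 tries on average.
    trials = [attempts_until_success(9.0) for _ in range(10000)]
    print(sum(trials) / len(trials))        # prints roughly 10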
In a sense, designing a system for robustness at many levels can
end up making it seem less robust at those very levels, due to
problems that are on *other* levels (such as the disconnect problem),
as people using the system come to believe they can act in ways that
stress the robust levels until they are over-extended vis-a-vis the
non-robust ones.
I.e., if the networking software is designed to be so robust that
those every-10-minute outages basically go unnoticed, then as people
gain confidence in the 'net, they'll stress a system like SMTP, which
was not designed to be quite so robust (it can't resume transmission
after a broken connection), until it suddenly fails, and they'll think
the problem lies elsewhere (qmail, the networking software), when it's
first and foremost in the every-10-minute outage and only secondarily
in the SMTP protocol.
If I'm wrong that SMTP can't continue a broken transmission, or about
other details, apologies. And some of my points were already made in
other ways on this topic. I'm basically agreeing that the problem
isn't necessarily in qmail, but can *appear* to be there due to lots
of other factors.
Kinda like how people often believed GNU/Linux, especially through
1996 or so, was flakier than Windows 3.1 and then 95, because all you
had to do was use gcc to compile the kernel to crash lots of systems,
systems that ran Windows just fine. Yet in the vast majority of those
cases it was the hardware (e.g. the memory subsystem) that was
failing; GNU/Linux was simply the only thing that stressed it hard
enough to fail, something that was apparently very hard to do (under
normal circumstances) with Windows.
The lesson here is that even though POP/SMTP/whatever do "little more
than" read and write, the facilities they rely on in order to be that
simple must also be robust. If those facilities (e.g. TCP connections)
aren't sufficiently robust, then either improve those facilities
(often requiring lots of different approaches in different shops), or
improve POP/SMTP/whatever (requiring One Fix that everyone can copy,
except that it makes the code no longer so obviously simple).
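As an illustration of what that One Fix tends to look like, here's a
sketch of application-level restart in a protocol that did grow one:
HTTP's byte-range mechanism (Python again; the URL, filename, and
server behavior are assumptions on my part). The shape -- receiver
reports how much it already has, sender continues from there -- is
exactly what SMTP lacks for surviving dropped connections.

    # Hypothetical sketch: resuming a transfer instead of restarting
    # it, assuming the server honors Range requests (it may not).
    import os
    import urllib.request

    def resume_fetch(url, path):
        have = os.path.getsize(path) if os.path.exists(path) else 0
        req = urllib.request.Request(
            url, headers={"Range": "bytes=%d-" % have})
        with urllib.request.urlopen(req) as resp, open(path, "ab") as f:
            if resp.status != 206:   # server ignored Range: start over
                f.truncate(0)
            while True:
                block = resp.read(65536)
                if not block:
                    break
                f.write(block)       # a later drop resumes from here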
Ditto for GNU/Linux, which could have been "fixed" to get around those
hardware problems either by willfully running slower/leaner or by
including lots of special magic to trap and correctly resume after
most types of memory errors. The former, and even the latter, could be
viewed as crippling the software to accommodate the hardware. Ditto
for fixing POP/SMTP/whatever to cope better with dropped connections,
or for having qmail handle large emails specially. But sometimes that
sort of crippling is the best practical alternative.
tq vm, (burley)