On 17 October 2020, Demi M. Obenour wrote:
 
> > Postfix is not an HTTP server handling tens to hundreds of thousands
> > of requests per second, and does not benefit from the optimisations
> > needed for those kinds of workloads.  Premature optimisations that
> > sacrifice robustness and security for little gain are not part of
> > the design.
> 
> If one is Google or Microsoft and needs to process hundreds of millions
> of messages per day, then Postfix might not work.  But if one needs
> to handle that much mail, then one can probably afford to write a
> bespoke MTA.

A decade ago I helped create and run a mailbox hoster with a few million
active accounts.  We were nothing compared to gmail/hotmail, but we ran
our border MTAs using postfix (with custom smtp content filters and
custom LMTP services).  My memory is rusty, but given the amount of spam
we consumed, we were definitely doing tens to hundreds of millions of
messages per day (on the inbound side).  Postfix did great -- our choke
point was storage IOPS being saturated by spam that no one would
ultimately read, which is annoying, but that's a fact of life.

I no longer work in email, but I do work at a fairly large $MEGACORP and
I was discussing something the other day with a coworker: when you're
sitting on the internet with a service that needs to withstand downtime,
heavy load, etc., then having a service that fully supports the RFCs is
really important, because you can't be fielding postmaster@ emails from
rando operators because you're doing something dumb.

But once you're dealing with internal services, it's all custom code,
because you can just message the engineer responsible for whichever
subservice is acting up and sort it out ASAP.  As such, these things
tend to be much more narrowly focused in implementation, written with
narrowly scoped perf metrics in mind, and less robust (feature-wise)
than software like Postfix.
