antoine.depoisier--- via Mailman-users writes:

 > First, my nodes are all an instance of mailman-core only.
 > I have an existing MTA that handles all email and sends it to a
 > proxy, which dispatches the mail between the existing pods.  Then
 > the mail is handed back to the MTA and the MTA sends it out.

So you are distributing mailman core across multiple nodes?  And
"proxy" = "load balancer" (such as HAProxy)?

That is probably subject to a number of problems, unless you do things
that I'm not sure how to do.  For example, when a user account is
disabled by bounces, every so often Mailman will send a "hey, are you
alive again?" message to the user.  If enough of those bounce, the
user gets unsubscribed.  The problem is that that status is recorded
in the PostgreSQL database that all Mailman instances have access to,
and I think they'll probably all send those messages.  At best there's
a race condition where more than one might send the message.

Under some conditions moderators may get daily messages about
bounces. I suspect they would get one from each Mailman instance.

I think digests will be broken unless you share the lists subfolder,
because each Mailman instance will accumulate only the messages it
processes, so chances are good on an active list that digest
subscribers will get multiple partial digests when they should get
only one.

As I'll describe below, Mailman tends to spawn a lot of processes,
many of which don't have much work to do.  Now you're dividing very
little work across multiple nodes, which seems very wasteful.

You haven't said where your MTA lives.  If it's Postfix, it needs to
share the $data_dir folder with a Mailman instance that is responsible
for list creation.  Every time you restart Mailman it recreates the
files in that folder, so if it's shared among Mailman instances there
will be delays due to file locking.
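
For reference, the documented Postfix hookup points at the maps
Mailman generates in that shared folder; roughly, in main.cf (the
path below is only an example -- use your actual $data_dir):

transport_maps = hash:/opt/mailman/var/data/postfix_lmtp
local_recipient_maps = hash:/opt/mailman/var/data/postfix_lmtp
relay_domains = hash:/opt/mailman/var/data/postfix_domains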

So unless you have an insane amount of list traffic to process (say, a
G7 national government ;-), I wonder if the multi-instance approach is
necessary.  Mailman is not implemented to be operated that way --
you're asking for trouble.  The design is such that I imagine it can
be done with careful tuning, but the current default configuration
doesn't consider such a use case.  You don't need to answer, you know
your use
case and I don't, but you may save yourself a lot of trouble by just
running with a little more resources on a single node.

Mailman uses a lot of memory just to get started (about 2GB unless
you're really aggressive and do unsupported tuning of which runners
are started and what modules are loaded), but then it easily scales
without increasing resources except CPU to some extent.  For example
I've worked on a system that processes hundreds of thousands of
incoming posts a day on a single Linode (2vCPU, 16GB) running core,
Postorius, HyperKitty, Xapian, and nginx (PostgreSQL got its own VM
for some reason).  CPU usage on that system never gets above 25%;
active memory usage is generally 20-25%, and there's usually a
substantial amount free despite Linux's strategy of aggressively
caching files; load average is normally around 2.5 and never more
than 5.  The only tuning we did there was to bump the out queue's
slices to 8 and the in queue's to 2, all still running on that same
Linode (which pushes Mailman's memory usage to over 2GB).  Running
"ls -R queue" shows all the subfolders empty about 2/3 of the time.
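
In mailman.cfg terms that tuning amounts to something like this (the
'instances' key is explained below):

[runner.out]
instances: 8

[runner.in]
instances: 2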

 > For the folder queue, I'm not sure if I should share this folder
 > between all pods, because one instance of mailman core means one
 > worker, if I'm right,

No, one instance of Mailman core means a minimum of 15 processes (one
master and 14 runners) in the default configuration.  About half of
those have nothing to do most of the time.  You can probably whittle
that 15 down to 11 with some care, at the cost of a certain amount of
functionality.
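
If you do want to trim the runners you don't use, each [runner.*]
section has a start switch, if I remember the option name right.
For example, assuming you don't gateway to Usenet:

[runner.nntp]
start: no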

Most runners have their own queues as subfolders of 'queue'.  Each
queue consists of message files whose names are derived from the
creation timestamp and a hash.  Each runner processes its queue in
order of
the timestamps.  When its task is complete, it passes the message to
the next runner in sequence by moving the file into the appropriate
subfolder (with the same filename).
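
For instance (the name below is invented, and real hashes are much
longer), a post waiting in the 'in' queue and then handed on to the
pipeline queue is a single file (a .pck pickle, if memory serves)
moved between subfolders, something like:

queue/in/1718041573.0123+ab12cd34.pck        <- waiting for the in runner
queue/pipeline/1718041573.0123+ab12cd34.pck  <- same file, one step later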

By the nature of email, each message is independent of all the
others.  So we can process them in parallel, as long as there is a way
to assign one and only one runner to each message in a queue.  The way
we do that is to use the top N bits of the hash component of the
filename to assign the message to what we call a slice.  Thus each
queue has 1, 2, 4, 8, etc. slices -- always a power of two.  To
configure multiple slices for the out runner
(usually the bottleneck because it's the one that talks almost
directly to the network[1]), add

[runner.out]
instances: 4

to your mailman.cfg and restart.

That's what I recommend you do.
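
To make the slicing concrete: with instances: 4, each of the four out
runner processes claims the quarter of the hash space given by the top
2 bits of the filename's hash, so no two of them ever pick up the same
file:

hash starts with bits 00...  ->  slice 0
hash starts with bits 01...  ->  slice 1
hash starts with bits 10...  ->  slice 2
hash starts with bits 11...  ->  slice 3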

Footnotes: 
[1]  At least Postfix optimizes relaying by opening a connection to
the remote smtpd while still talking to Mailman, and only queues the
file on disk if the remote connection fails.
