This is also now possible with Red Hat Advanced Server - using shared
storage??? So will it run in a load-balanced manner there??
Presumably if it runs on a Tru64 cluster (which we have many of), it will
run on a number of servers sharing file systems via NFS??
Thoughts??
Dave.
Hello,
To make things simpler, you could limit users (and admins) to modifying the
list databases on only one of the two (or more) servers.
That's what I was going to do.
You could point all website directions to that one server (and
only one of the servers) and you could direct the
Are there any Mailman developers around here? A much better
solution would be to turn on a backup-server option in the
config file, specifying the e-mail address of the list server on
the backup server. What should then happen if this is on is that
whenever a list is changed, either with mail
On Thu, 2002-11-21 at 03:42, Ian Chilton wrote:
OK, I'll rsync them across every half hour or so and hope that the main
server doesn't become unavailable in the window between someone changing a
subscription and the next rsync :)
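A periodic sync like that could be wired up with cron; a minimal sketch, assuming the standard `/home/mailman` prefix (the backup hostname is a placeholder):

```shell
# Crontab entry on the primary server: push the list configuration
# databases to the backup every 30 minutes.  backup.example.com and the
# paths are illustrative assumptions, not values from the thread.
*/30 * * * * rsync -az /home/mailman/lists/ backup.example.com:/home/mailman/lists/
```

The half-hour window the message worries about is exactly the interval between two of these cron runs.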
Create a file in the locking directory:
qrunner.lock.moya.trilug.org.22845
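That name follows the pattern `<lockname>.<hostname>.<pid>`. A sketch of creating a lock file the same way (the directory is a stand-in for `~mailman/locks`, not Mailman's actual code):

```shell
#!/bin/sh
# Create a qrunner-style lock file named qrunner.lock.<hostname>.<pid>
# in a demo directory standing in for ~mailman/locks.
LOCKDIR=/tmp/mm-locks-demo
mkdir -p "$LOCKDIR"
LOCKFILE="$LOCKDIR/qrunner.lock.$(hostname).$$"
touch "$LOCKFILE"
echo "$LOCKFILE"
```

Because the hostname is embedded in the file name, two servers sharing the locks directory over NFS can tell whose process holds the lock.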
Using the LDAP master/slave paradigm. Even better would be to use the
Novell e-Directory method and have a 'peer' network. So any updates
against any system in the peer-ring, will update (and make sure it
happens - receive confirmation) every other member in the peer group.
Sounds good -
Hello,
There are several ways to set this up. Here are two examples that I've
done:
Thanks for the reply. However this doesn't really answer my origional
question - how do you keep the subscription information in sync across
the multiple servers?
Thanks!
Bye for Now,
Ian
I would worry about locking on the databases. Mailman handles this via
a lock file in a common directory, so it should work even in
active-active, but you would still only have one mailman process able to
run on a list at one time. In other words, I don't think you would get
much better
If you actually have individual databases driving each install (as
opposed to a shared database via NFS or some other tool), then you could
easily set up some scripts to keep the databases in sync. You could use
the log files as your trigger, or even simple periodic diffs of each
individual
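A periodic diff check along those lines might look like this; everything here is a local stand-in (in a real two-server setup the `cp` would be an rsync/scp to the peer host):

```shell
#!/bin/sh
# Illustrative sketch: mirror the list databases into a second directory
# only when a diff shows they have changed.  Paths are demo placeholders
# standing in for ~mailman/lists on each server.
SRC=/tmp/mm-demo/lists
DST=/tmp/mm-demo/mirror

mkdir -p "$SRC" "$DST"
echo "demo-list config" > "$SRC/demo-list"   # fake list database

# Only copy when the trees actually differ.
if ! diff -rq "$SRC" "$DST" >/dev/null 2>&1; then
    cp -R "$SRC/." "$DST/"
fi
```

Run from cron, this is the "periodic diffs" trigger the message describes: no change, no copy.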
It's not so much performance that will be a problem, but more of an
availability issue. Yes, it would be fun to try...
How about Ian's original idea, about keeping certain directories in sync??
Which directories would that have to be, and can a straight 'cp -R' achieve
the syncing??
Dave.
I would
If you are running two separate systems then you don't want to copy the
following directories under ~mailman/:
data - where moderated messages are held
locks - where the locking files are created
logs - the local logs for the running Mailman system
qfiles - queued up messages waiting to
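Given that list, an rsync along these lines would copy the shared state while skipping the host-local directories named above (the flags, paths, and hostname are assumptions, not from the thread):

```shell
# Sync ~mailman between two separate installs, excluding the directories
# that must stay local to each host: data, locks, logs, qfiles.
rsync -az \
    --exclude 'data/' \
    --exclude 'locks/' \
    --exclude 'logs/' \
    --exclude 'qfiles/' \
    /home/mailman/ backup.example.com:/home/mailman/
```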
Hello,
I have a domain with 2 master mx servers and I want to run mailman on
both so if 1 is down, the mailing lists still work. I have not setup
mailman yet, but have subscribed to quite a few lists using it.
Does anyone have a similar setup?
I was thinking the best way would be to alias
On Tue, 2002-11-19 at 04:17, Ian Chilton wrote:
Hello,
I have a domain with 2 master mx servers and I want to run mailman on
both so if 1 is down, the mailing lists still work. I have not setup
mailman yet, but have subscribed to quite a few lists using it.
Does anyone have a similar
You can also store the email addresses (one per line, and *only* the email
address) and then use user_add to add the list of users to an existing
mailing list.
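In Mailman 2.x the bundled script for this is `bin/add_members` (the "user_add" above may be an older or informal name for it); a hedged example, with the install prefix, file path, and list name as placeholders:

```shell
# members.txt contains one email address per line and nothing else.
# -r adds them as regular (non-digest) members; -w and -a control
# whether welcome and admin-notification mails are sent.
cd /home/mailman   # assumed Mailman install prefix
bin/add_members -r /tmp/members.txt -w n -a n mylist
```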
That sounds good - would they just get the default options?
What about a password for the web interface?
Using the web
I have a domain with 2 master mx servers and I want to run
mailman on both so if 1 is down, the mailing lists still work.
I have not setup mailman yet, but have subscribed to quite a
few lists using it.
Does anyone have a similar setup?
Most folks do this via NFS. They set up a
There are several ways to set this up. Here are two examples that I've
done:
- Front end the servers with an LVS cluster and run the servers in your
DMZ. The LVS cluster acts as a firewall and connects a user to an
active internal server (one of many in its list) based on either an ip
address
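An LVS director of that kind is configured with `ipvsadm`; a minimal sketch, where every address is a placeholder and the two real servers are the internal Mailman hosts:

```shell
# Define a virtual SMTP service on the LVS director and balance it
# round-robin across two real servers in the DMZ.  203.0.113.10 is the
# public VIP; 10.0.0.11/12 are hypothetical internal Mailman hosts.
ipvsadm -A -t 203.0.113.10:25 -s rr
ipvsadm -a -t 203.0.113.10:25 -r 10.0.0.11:25 -m   # -m = NAT/masquerading
ipvsadm -a -t 203.0.113.10:25 -r 10.0.0.12:25 -m
```

If one real server drops out of the list, the director keeps routing connections to the surviving one, which is the availability behaviour the thread is after.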
How about ACTIVE-ACTIVE sharing the same installation via an NFS mount (or
possibly shared storage in a cluster)?
D.
There are several ways to set this up. Here are two examples
that I've done:
- Front end the servers with an LVS cluster and run the servers
in your DMZ. The LVS cluster