We have a very similar setup.  We have a third machine as the MySQL master
which doesn't act as a mail server itself, but it makes the MySQL
database triple-redundant without sacrificing performance.

For load balancing/redundancy, we are using Foundry
(http://www.foundrynetworks.com/) switches with the SLB (Server Load
Balancing) code.  Very powerful; it not only detects whether the server is
running or listening on port 25/110, it actually understands SMTP and
POP3 and will make sure the SMTP and POP3 servers are responding properly as
part of its 'heartbeat' tests.  So, for example, if you had a machine that
had a problem and would still listen on port 25 but couldn't actually answer
with a '220', it would be treated as 'offline' instead of opening dead
connections for clients.  It also allows you to load balance between X
number of servers rather than just failing over in worst-case
situations.
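
For anyone without such a switch, here's a rough shell equivalent of that
banner test (purely illustrative, not Foundry syntax; 'mailhost' is a
placeholder, and nc flags vary between netcat versions):

    # Succeed only if the SMTP daemon actually greets with a 220 banner;
    # an open-but-dead port 25 fails this test, just like the SLB check.
    if printf 'QUIT\r\n' | nc -w 5 mailhost 25 | grep -q '^220'; then
        echo "mailhost SMTP OK"
    else
        echo "mailhost SMTP dead"
    fi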

Andre

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of
Super-User
Sent: Monday, December 30, 2002 12:33 PM
To: [EMAIL PROTECTED]
Subject: Re: [vchkpw] vpopmail clustering examples


Well, I don't have any "examples", but here is what we are doing (using
mysql for authentication):

Ingredients:

2 qmail/vpopmail/mysql servers (load balanced)
2 mysql servers
1 nfs server

Preheat the oven to 350...

mysql:
One mysql server is the primary, and one the secondary (the secondary is
not necessary, but I wanted a 'warm' standby).  In addition, the 2
vpopmail servers are running a local mysql daemon.  The secondary server
and the 2 vpopmail servers are replication slaves of the primary.  (For
replication information, see the documentation on the mysql site; it's
fairly straightforward.)
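
A minimal my.cnf sketch of that arrangement, using the old-style
(MySQL 3.23/4.0) replication options; hostnames, user, and password are
placeholders:

    # on the primary: enable the binary log so slaves can replicate
    [mysqld]
    log-bin
    server-id=1

    # on each slave (the secondary plus both vpopmail boxes);
    # every server-id must be unique, and the primary needs a
    # replication user granted for master-user to connect as
    [mysqld]
    server-id=2
    master-host=primary.example.com
    master-user=repl
    master-password=secret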

nfs:
The ~vpopmail/domains directory is nfs mounted from the nfs server.  The
/var/qmail/control directory and the /var/qmail/users directory are also
nfs mounted.  (I had some trouble getting nfs to work.  In the mount
options, I had to set anon=89 to get it to work properly.)
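
For reference, the mounts look roughly like this (server name and export
paths are placeholders, and 89 is vpopmail's uid on our boxes; check
yours with 'id vpopmail'):

    mount -o rw,anon=89 nfsserver:/export/domains       /home/vpopmail/domains
    mount -o rw,anon=89 nfsserver:/export/qmail-control /var/qmail/control
    mount -o rw,anon=89 nfsserver:/export/qmail-users   /var/qmail/users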

Vpopmail:
Edit the vmysql.h header file so that the MYSQL_UPDATE_SERVER is the
primary mysql server, and the MYSQL_READ_SERVER is localhost.  I'm sure
that you could set your read server to a remote host, but I think
localhost would be faster, and it reduces network traffic.
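
In vmysql.h that comes out to something like this (hostname and
credentials are placeholders; the user/password defines sit alongside
the server defines in the same header):

    /* writes go to the replication master, reads stay on the local slave */
    #define MYSQL_UPDATE_SERVER "primary.example.com"
    #define MYSQL_UPDATE_USER   "vpopmail"
    #define MYSQL_UPDATE_PASSWD "secret"

    #define MYSQL_READ_SERVER   "localhost"
    #define MYSQL_READ_USER     "vpopmail"
    #define MYSQL_READ_PASSWD   "secret"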

So basically, each qmail/vpopmail server is running its own queue.
Mysql updates are sent to the primary server, and replicated through to
the secondary and the local mysql daemons.  The Maildirs and the control
files are on nfs to ensure that both boxes are as similar as possible.

In the vipmap, for each domain we have the public IP and the 2 private
IPs set up.  I'm not certain whether having the public IP in the map is
necessary, but I don't think it hurts anything, so why not.
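
So for a single domain the map ends up with entries along these lines
(all addresses here are made up, and this is just the conceptual layout,
not necessarily the exact on-disk format):

    192.0.2.10   example.com    <- public, load-balanced IP
    10.0.0.1     example.com    <- private IP of vpopmail box 1
    10.0.0.2     example.com    <- private IP of vpopmail box 2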

If you are using cdb for authentication, my first suggestion would be to
try mounting it over nfs so that all clients are working with the same
info.  This could cause some problems, since multiple clients' updates
might stomp on each other.  I'm not a cdb guru, so maybe someone else has
some input?

By the way, in the near future we're considering replacing our current
Solaris nfs box with 2 Red Hat boxes, using heartbeat for failover, with
Fibre Channel for storage.  Is anyone doing something similar with
vpopmail or heartbeat?  If so, any input?


Thanks,
Duane Wylie

> -----Original Message-----
> From: John Runnels [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, December 29, 2002 8:40 PM
> To: [EMAIL PROTECTED]
> Subject: [vchkpw] vpopmail clustering examples
>
> is there anyone out there in vpopmail land that knows where there is any
> information on clustering or load balancing vpopmail. what I mean by this
> is having multiple copies of vpopmail running on more than one server.
>
> the reason why I am asking the group is I am running into problems where
> the systems are not in sync
>
> Help !!!
>
> I see the option in the compile but I have found no instructions on how to
> implement this.
>
> anyway thanks in advance for all of the responses.
>
> (Including the flames)

