Doug Clements wrote:

> On Fri, Sep 05, 2003 at 02:31:39PM -0500, Nick Harring wrote:
>> I'm about to tackle this same problem myself, since I'm about to ditch a pair of Solaris boxes for my NFS mounted mail spools and instead move to filers. My plan is to instead use Linux HA to have two machines as a failover cluster mounting the database via NFS. I think this'll be a lot cleaner, and will also integrate in a nicer fashion with the concept vpopmail has of mysql replication. Your other option, if you've got money to spend on the problem, is to get a database server which is both supported by vpopmail and supports more flexible replication. I'd recommend Sybase, since they support real two way replication, and their replication speed is amazing.
>
> Thanks for the input. I would assume Sybase also supports concurrent access to the DB files over NFS, since you're planning to use that. I know mysql specifically doesn't allow that (at least with MyISAM tables). Unfortunately, Sybase pricing is probably greater than the cost of labor in setting up a dedicated mysql server. What is a typical Sybase price for a setup like yours?
>
> --Doug

I've not yet actually priced out Sybase for this, and probably wouldn't do concurrent NFS access, since with two way replication it'd be easier to just have two servers with a virtual IP for failover (makes it totally transparent to the clients). Sybase licensing is typically on a per CPU basis though, so if you have low load the pricing shouldn't be too bad.
Also, with MySQL I'm not planning on doing concurrent access, but active/passive failover. The scenario plays out roughly like this:
1. Server A is primary, and thus active; Server B is secondary, and thus passive.
2. Both servers mount the NFS share, but only Server A starts MySQL and takes ownership of the virtual IP.
3. Server A chugs along happily for an arbitrary period of time, serving requests and keeping the table files up to date.
4. Server A fails. Server B stops receiving heartbeats, immediately assumes the virtual IP, sends a gratuitous ARP to notify the switch, and then starts MySQL, loading the tables via NFS.
5. When Server A comes back up, it sees that B is now primary and stays passive, waiting for a heartbeat failure before failing back over.
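The failover steps above could be sketched as a small watchdog script on the passive node. This is only a sketch: the interface name (eth0), addresses (10.0.0.50, 10.0.0.2), paths, and thresholds are invented for illustration, and a real deployment would use the Linux HA heartbeat daemon rather than ping as the heartbeat channel.

```shell
#!/bin/sh
# Watchdog sketch for the passive node. All names and addresses
# below (eth0, 10.0.0.50, 10.0.0.2, /nfs/mysql) are examples only.

VIP=10.0.0.50     # virtual IP the clients connect to
PEER=10.0.0.2     # real address of the active node
LIMIT=3           # consecutive missed heartbeats before taking over

# Decide the node's next action from the missed-heartbeat count.
decide() {
    if [ "$1" -ge "$LIMIT" ]; then echo takeover; else echo standby; fi
}

take_over() {
    ifconfig eth0:0 "$VIP" netmask 255.255.255.0 up
    arping -U -c 3 -I eth0 "$VIP"        # gratuitous ARP so the switch relearns the VIP
    mysqld_safe --datadir=/nfs/mysql &   # table files live on the NFS mount
}

# The loop only runs when the script is invoked as "watchdog.sh run",
# so the file can be sourced without side effects.
if [ "$1" = run ]; then
    missed=0
    while :; do
        if ping -c 1 -W 1 "$PEER" >/dev/null 2>&1; then
            missed=0
        else
            missed=$((missed + 1))
        fi
        if [ "$(decide "$missed")" = takeover ]; then
            take_over
            break
        fi
        sleep 2
    done
fi
```

The "break" matters: once a node has taken over it must never flap back on its own, or you risk both sides thinking they're active.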
The other option, one that I'm also considering, is to skip shared storage entirely and keep two configuration files on each server: one for running as the master, the other for running as a slave. You still use a virtual IP, but both servers run all the time. Whichever is the slave simply connects to the master and pulls updates from it, restarting in master mode only when it needs to serve updates properly to the rest of the cluster.
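To make the two-config idea concrete, the files might differ only in the replication stanza, along the lines of the sketch below. The server IDs, replication account, and addresses are invented for the example, and pointing the slave at the virtual IP is just one way to do it.

```ini
# my-master.cnf -- used while this box holds the virtual IP
[mysqld]
server-id = 1
log-bin   = mysql-bin        # binary log for the slave to pull from

# my-slave.cnf -- used while the other box is master
[mysqld]
server-id       = 2
master-host     = 10.0.0.50  # the virtual IP, so it always finds the current master
master-user     = repl       # hypothetical replication account
master-password = secret
```

The failover script then amounts to stopping mysqld and restarting it with the other config file.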
In the first scenario, if you're leery of NFS (which is a reasonable thing for databases), the other option is to hang one storage array with two SCSI host channels off the two boxes and have them mount/umount it based on their active or passive role. In either scenario you need to be 100% positive that a node will completely fail rather than have both sides become active, as split-brain syndrome is incredibly difficult to recover from. You might want to look at Veritas Cluster Services for this sort of setup, as they're pretty reasonable to manage and they offer the flexibility to do a lot of fail over scenarios.
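One way to guard against split brain in the shared-array variant is to make the takeover path fence the peer before touching the disk, and refuse to proceed otherwise. A rough sketch, where fence_peer is a stand-in for whatever can provably power off the other node (a remote power switch, a STONITH module, etc.), and the device and mount point are invented:

```shell
#!/bin/sh
# Takeover with fencing, sketched for the shared-SCSI scenario.
# /dev/sdb1 and /var/lib/mysql are example names.

fence_peer() {
    # Placeholder: a real implementation must only return success
    # once the peer is provably down (e.g. its power is cut).
    false
}

become_active() {
    if ! fence_peer; then
        echo "refusing takeover: cannot confirm peer is dead" >&2
        return 1
    fi
    fsck -y /dev/sdb1 &&
    mount /dev/sdb1 /var/lib/mysql &&  # exclusive mount of the shared array
    mysqld_safe &
}
```

Failing closed like this means a broken fencing path costs you availability, but that's far cheaper than two nodes scribbling on the same MyISAM files at once.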
I'd definitely advise looking at the Linux HA tools to see what kind of reliability and flexibility you need. I have had very, very positive experiences with both LVS and Piranha in the past, including doing it with MySQL. The upfront effort is a little on the high side, especially for dynamically reconfiguring MySQL servers to flip from master to slave and back, but it's well worth it when you get an essentially bulletproof cluster.