On 13 May 2010 11:45, Jake Anderson <[email protected]> wrote:
> Personally I'd go with the max memory setup you were talking about but I
> wouldn't bother with the NAS.
> With only 2 nodes DRBD is fairly easy to set up. It gives you complete
> synchronisation of partitions, i.e. a write in one place only comes back
> as ok once it has made it across the network and been written to disk on
> the remote machine (depending on settings). If you're ok with a manual
> change-over with a little downtime (in the case of an intentional
> transition between servers) I'd put something like ext4 on an LVM on top
> of the DRBD partition, mainly to keep things fairly simple. To migrate
> machines you shut down the guests, unmount the file system on host A,
> mount it on host B and start the guests there.
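That change-over can be sketched in a few commands. This is illustrative only - the resource name `r0`, mount point `/mnt/vmdata` and guest config `vm1` are made up, and the exact steps depend on your DRBD and Xen configuration:

```shell
# On host A (current primary): stop the guest and release the storage
xm shutdown vm1                 # cleanly shut down the Xen guest
umount /mnt/vmdata              # unmount the ext4-on-LVM filesystem
drbdadm secondary r0            # demote this node's DRBD resource

# On host B: take over the storage and restart the guest
drbdadm primary r0              # promote the DRBD resource on this node
mount /dev/drbd0 /mnt/vmdata    # mount the replicated filesystem
xm create /etc/xen/vm1.cfg      # start the guest here
```

The order matters: the filesystem must be unmounted before demoting, and the resource promoted before mounting, or DRBD will refuse the role change.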
We use Xen + CentOS 5 + DRBD + Linux-HA to achieve similar goals. We
actually build each side of the cluster separately using automatic
deployment tools (Puppet and some glue around it). We use ext3 on the
DRBD partition; the DRBD is actually managed from inside the Xen guests,
not the host (we have different DRBD partitions for different guests).
Linux-HA gives automatic fail-over (tested a few times "under fire" when
hardware failed - the other side took over automatically and all we saw
of it was an SMS from Nagios about the crashed server being down). But
DRBD can come at a performance cost; depending on how hard you push the
setup it can hurt, and we are looking at cheap SAN replacements for the
development/office stuff.

> If you want seamless transitions you're going to want something like
> OCFS or

We tried to set up GFS on top of DRBD (on top of Xen) in order to move
some of the functions to primary/primary mode, but the performance was
horrendous. Maybe we could have got it to work with more tweaking, but
we just switched back to the primary/secondary and ext3 setup for now.

> somesuch for the file system, which gives you the ability to have it
> mounted at both locations and hence live migration. You might be able
> to feed your VMs raw LVM partitions on the DRBD system and not bother
> with OCFS, which

Feeding the LVs to the Xen guests will work. You can set up the DRBD
partition as a PV if you like, or set up "PC-style" partitions on it, or
just use it straight. "kpartx" + "losetup" are very handy tools for such
games (mainly for accessing the Xen DomU's "disks" from the Dom0).
However, if you want to use DRBD in write/write mode and put an LVM on
top of it, then I think you'll have to use Clustered LVM. Not sure,
though.

> would make life easier but I haven't looked into that.
> Upside to this system is you don't have a NAS that can go down as a
> single point of failure.

Correct.
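The kpartx + losetup trick looks roughly like this. The image path and
partition names here are illustrative, and the loop device number may
differ on your system:

```shell
# Attach the DomU disk image to a loop device
losetup /dev/loop0 /var/lib/xen/images/vm1.img

# Create device-mapper entries for the partitions inside the image
# (-a = add mappings, -v = verbose; typically creates /dev/mapper/loop0p1 etc.)
kpartx -av /dev/loop0

# Mount a partition from inside the image and poke around
mount /dev/mapper/loop0p1 /mnt/inspect
# ... inspect or repair files ...

# Tear it all down again
umount /mnt/inspect
kpartx -d /dev/loop0        # remove the partition mappings
losetup -d /dev/loop0       # detach the loop device
```

Only do this while the guest is shut down; mounting a filesystem that a
running DomU also has mounted will corrupt it.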
Another option, brought up by a hosting provider we talked to, was to
set up a couple of CentOS servers (or FreeNAS/Openfiler, as was
mentioned before) that replicate the disks between them using DRBD and
serve access to the disks through iSCSI to the "application servers" -
effectively building a highly-available SAN cluster from existing
hardware. The possible advantage there is that you have hosts (CPU,
bus, disk controller) dedicated to disk access, so even though the
applications access the disks over a network it could still free up
other resources and make the apps actually run faster. If a 1Gb network
is not enough you can add NICs/cables and bond them together. Having at
least 2x1Gb cables and two separate network switches also stops the
switch from becoming a SPOF. This can be critical not only for plain
functionality but for avoiding the dreaded split-brain situation.
Whatever you do for HA - make sure you do the fencing right. DRBD is
very smart but the other parts should also work right.

> For your offsite backup I'd then snapshot the machines and LVMs and
> rsync them to your remote location.
> rsync of the memory snapshot could consume a decent amount of
> bandwidth; it's probably going to be pretty volatile. If you can shut
> down the guest, snapshot its disk, then boot it back up again, then
> the rsync traffic should only be a little over the quantity of changes
> made to the disk, i.e. files added/changed, so not much more than your
> existing offsite backup needs.

As far as I saw on the web (a bit to my surprise), ext3 journaling is
supposed to be good enough to allow live snapshots, so you don't have
to take the client down for this. Many people on the net report doing
backups that way. Windows NTFS might be different, but it might also be
good enough for such a trick.

In general, I try to stick to the tools which come bundled with CentOS.
It comes with Xen 3.0 so that's what we use.
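A minimal sketch of that snapshot-and-rsync backup, assuming a volume
group `vg0` with the guest's disk in LV `vm1-disk` and a remote backup
host - all names and sizes are illustrative:

```shell
# Take a copy-on-write snapshot of the live LV
# (the size is how much change the snapshot can absorb before it fills up)
lvcreate --snapshot --size 2G --name vm1-snap /dev/vg0/vm1-disk

# Mount the snapshot read-only and ship its contents offsite
mount -o ro /dev/vg0/vm1-snap /mnt/snap
rsync -a --delete /mnt/snap/ backup@remote:/backups/vm1/

# Clean up - snapshots cost write performance while they exist
umount /mnt/snap
lvremove -f /dev/vg0/vm1-snap
```

If the snapshot runs out of space before the rsync finishes it becomes
invalid, so size it generously relative to the guest's write rate.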
CentOS 6 is expected to support KVM, and then we'll gladly switch to it
(from what I've heard about its performance versus Xen, I'd love to).
libvirt should help avoid dependence on a particular virtualisation
solution, but so far we haven't got around to updating our home-grown
scripts to use it.

Cheers,

--Amos
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
