On 2011-10-10 00:26, Miles Fidelman wrote:
> Hi Folks,
>
> I've been running a 2-node, high-availability cluster for a while. I've
> just acquired 2 more servers, and I've been trying to figure out my
> options for organizing my storage configuration.
>
> Basic goal: provide a robust, high-availability platform for multiple
> Xen VMs.
>
> Current configuration (2 nodes):
> - 4 drives each (1TB/drive)
> - md software raid10 across the 4 drives on each machine
> -- md devices for Dom0 /boot, /, swap + one big device
> -- 2 logical volumes per VM (/ and swap)
> -- VM volumes replicated across both nodes, using DRBD
> -- pacemaker, heartbeat, etc. to migrate production VMs if a node fails
>
> I now have 2 new servers - each with a lot more memory, faster CPUs (and
> more cores), also 4 drives each. So I'm wondering what's my best option
> for wiring the 4 machines together as a platform to run VMs on.
>
> Seems like my first consideration is how to wire together the storage,
> within the following constraints:
>
> - want to use each node for both processing and storage (only have 4U of
> rackspace to play with, made the choice to buy 4 general-purpose
> servers, with 4 drives each, rather than using some of the space for a
> storage server)
>
> - 4 gigE ports per server - 2 reserved for primary/secondary external
> links, 2 reserved for storage & heartbeat comms.
>
> - total of 16 drives, in groups of 4 (if a node goes down, it takes 4
> drives with it) - so I can't simply treat this as 16 drives in one big
> array (I don't think)
>
> - want to make things just a bit easier to manage than manually setting
> up pairs of DRBD volumes per VM
>
> - would really like to make it easier to migrate a VM from any node to
> any other (for both load leveling and n-way fallback) - but DRBD seems
> to put a serious crimp in this
>
> - sort of been keeping my eyes on some of the emerging cloud
> technologies, but they all seem to be aimed at larger clusters
>
> - sheepdog seems like the closest thing to what I'm looking for, but it
> seems married at the hip to KVM (unless someone has ported it to
> support Xen while I wasn't looking)
>
> So... just wondering - anybody able to share some thoughts and/or
> experiences?
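For reference, the per-VM DRBD pairing described above amounts to one resource definition per VM volume, roughly along these lines (all resource names, devices, hostnames, and addresses here are hypothetical):

```
# Hypothetical /etc/drbd.d/vm-web.res -- one DRBD resource per VM
# volume, mirrored between the two nodes over the dedicated storage
# link. Names, devices, and addresses are made up for illustration.
resource vm-web-disk {
  protocol C;                        # synchronous replication
  on node-a {
    device    /dev/drbd10;
    disk      /dev/vg0/vm-web-disk;  # LV backing the VM's root volume
    address   10.0.0.1:7790;         # dedicated storage/heartbeat link
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd10;
    disk      /dev/vg0/vm-web-disk;
    address   10.0.0.2:7790;
    meta-disk internal;
  }
}
```

Each VM needs two such resources (root and swap), each pinned to a specific pair of nodes -- which is exactly the per-VM bookkeeping, and the migration restriction, the poster is trying to get away from.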
Yeah. Some of your goals are kind of contradictory, so I'm afraid it's
near impossible to meet all of them.

If you want something simple that Just Works™ and is easily expandable,
forget the idea of storage being distributed across all nodes. Instead,
pair 2 of your nodes as an iSCSI or NFS cluster with DRBD replication,
on a non-Xen kernel. Then run the other two nodes as your hypervisor
environment, acting as iSCSI initiators or NFS clients.

That way, you can migrate your domUs at will between the two Xen nodes
you've already got, and any further Xen nodes you might add in the
future -- you just can't migrate them to the storage nodes. You also
don't need pairs of DRBD volumes per VM.

It's all fairly straightforward to set up and integrate.

Cheers,
Florian

--
Need help with Pacemaker, virtualization and storage?
http://www.hastexo.com/now

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
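Florian's suggested split -- one DRBD-backed storage pair exporting NFS (or iSCSI), with the Xen nodes as plain clients -- could be sketched as a single Pacemaker configuration, roughly like the following crm shell fragment. All resource IDs, device paths, and the floating IP are hypothetical, and the NFS server is assumed to be the distribution's LSB init script:

```
# Sketch only: one big DRBD resource backing an NFS export,
# managed by Pacemaker on the two storage nodes.
primitive p_drbd_store ocf:linbit:drbd params drbd_resource=store \
  op monitor interval=29s role=Master \
  op monitor interval=31s role=Slave
ms ms_drbd_store p_drbd_store \
  meta master-max=1 clone-max=2 notify=true
primitive p_fs_store ocf:heartbeat:Filesystem \
  params device=/dev/drbd0 directory=/srv/store fstype=ext4
primitive p_ip_store ocf:heartbeat:IPaddr2 params ip=10.0.0.100
primitive p_nfs lsb:nfs-kernel-server
group g_store p_fs_store p_ip_store p_nfs
colocation c_store_on_master inf: g_store ms_drbd_store:Master
order o_drbd_before_store inf: ms_drbd_store:promote g_store:start
```

The Xen nodes would then simply mount the export from the floating IP (10.0.0.100:/srv/store in this sketch) and keep their domU disk images there, so live migration between Xen nodes no longer involves any per-VM DRBD resources at all.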
