I am sure I have blathered about this before...

Something I looked at at a previous job (they didn't bite) was replacing a hodgepodge of physically dying servers with a pair of modern servers, each big enough to carry the load on its own, but set up rather redundantly. More recently I was thinking I might do something related to replace my own basement server(s).

The rough idea:

- Each box does Linux software RAID 1 (on disks from different manufacturers) to cushion against a dead disk.
- On top of the RAID (and probably some LVM) is DRBD in a hot/spare arrangement, so a single filesystem can be seen from each box. (But not a clustered filesystem; there seems no point. Instead just shift which box gets to talk at any point.)
- Each box has a dedicated fast ethernet link to the other for DRBD syncing.
- Use KVM for virtualization of guest VMs, with live migration between the two boxes.
- Dual /-partitions on each box, so the host OS can get upgrades that can quickly be reverted with a reboot.
- If inside the VMs I want to do dual /-partitions, that might be smart, too.
- Some attention to physical dispersal: like external USB 3 or eSATA disks, so a single smoking event doesn't necessarily smoke everything. Put the second box at some physical distance, too.
- UPSes to keep the boxes running through power outages. Maybe hibernate to disk when the power runs out?
- And some thought about how to snapshot and back up VM data, even haul offsite if I want to be extra good. Ping-pong between two encrypted backup disks. I crafted that once, too.
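The DRBD hot/spare pair might then be described by a resource file along these lines (a sketch in DRBD 8-style syntax; the hostnames "box1"/"box2" and the 192.168.10.x addresses for the dedicated sync link are made up for illustration):

```
# Sketch of /etc/drbd.d/r0.res -- names and addresses are assumptions.
resource r0 {
  protocol C;                 # fully synchronous replication
  device    /dev/drbd0;
  disk      /dev/vg0/drbd0;   # the LV carved out of the RAID 1 array
  meta-disk internal;
  on box1 {
    address 192.168.10.1:7789;
  }
  on box2 {
    address 192.168.10.2:7789;
  }
}
```

With protocol C, a write isn't acknowledged until it has hit both boxes, which is what you want when only one box "gets to talk" at a time.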
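The RAID-plus-LVM bottom layer could be sketched roughly like this (device and volume names are illustrative, not from the original; this is an admin-command fragment, not a tested script):

```shell
# Sketch only: assumes two blank partitions /dev/sda1 and /dev/sdb1,
# one disk from each manufacturer as described above.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Carve an LVM volume group out of the array, with one logical volume
# set aside as the backing device DRBD will replicate.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n drbd0 vg0
```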
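Moving a guest between the boxes is then a one-liner with libvirt; the guest name "vm1" and peer hostname "box2" are placeholders (and this assumes the VM's disk image lives on the DRBD-backed filesystem both hosts can see):

```shell
# Sketch: live-migrate the running guest "vm1" to the peer box over SSH.
virsh migrate --live vm1 qemu+ssh://box2/system
```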
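The ping-pong between two encrypted backup disks could look something like this (a sketch, not my actual old script; device, mapper, and path names are invented, and whichever of the two LUKS disks is plugged in plays the same role):

```shell
# Sketch: unlock whichever encrypted USB disk is attached, sync the VM
# images over, then lock it again so it can be hauled offsite.
cryptsetup open /dev/sdc1 backup
mount /dev/mapper/backup /mnt/backup
rsync -a --delete /srv/vms/ /mnt/backup/vms/
umount /mnt/backup
cryptsetup close backup
```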

Not a high-performance model but a high-availability model, one that doesn't care much about what happens inside the VMs. A given VM that isn't otherwise interested in rebooting might run for years in such a rig.

Shuttle's DS61 V1.1 looks like a nice way to build a minimal array like this.


-kb

_______________________________________________
Discuss mailing list
[email protected]
http://lists.blu.org/mailman/listinfo/discuss
