DRBD is great for redundancy. However, judging by your specification, the DRBD
layer will most likely end up being the performance bottleneck, especially
once you put multiple VMs on it.
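If DRBD does turn out to be the limit, the replication protocol and the
resync rate are the usual first knobs. A rough drbd.conf-style sketch (the
device paths, addresses and rate below are illustrative only, not a
recommendation for your hardware):

```
resource r0 {
  protocol C;        # fully synchronous writes: safest, but highest latency
  syncer {
    rate 100M;       # cap background resync so it doesn't starve live I/O
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/md0;       # e.g. the underlying RAID 1 array
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Protocol A (asynchronous) trades some durability for lower write latency,
which can be an acceptable compromise on a saturated replication link.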

Bonding the two NICs can help, but it depends on how much traffic you are
expecting on the main link, since that traffic will compete with DRBD
replication.
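If you do bond, the mode matters as much as the extra NIC: balance-rr can
reorder TCP segments, and 802.3ad needs switch support. A rough Debian-style
/etc/network/interfaces sketch (interface names and addresses are
assumptions):

```
auto bond0
iface bond0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad     # LACP; requires a switch configured for it
    bond-miimon 100       # check link state every 100 ms
```

A cheaper alternative is a dedicated crossover link between the two nodes
just for DRBD, so replication never competes with client traffic at all.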

And, as others have suggested, what is your DR plan? If you back up your
data over the same network interfaces, it may degrade performance even
further.

On 3 June 2011 17:15, Jake Anderson <[email protected]> wrote:

> On 06/03/2011 04:04 PM, Kevin Saenz wrote:
>
>> A lot of the questions you have raised are usually answered in the
>> documentation for the virtual environment you are running.
>>
> The docs don't seem to talk about "optimum" setups or have reference
> numbers for different configurations of the same hardware.
> I suppose the tens of thousands of dollars you pay for VMware are good for
> something after all ;-P
>
>
>> Let me add a disclaimer that I have ceased supporting organisations
>> smaller than 2,000 staff, so my head may be in the clouds.
>>
>> The issue is that if you run RAID 1 you limit your redundancy, and you
>> will have only 1 TB of space. What VMs are you going to run? If it's a
>> mail server, SQL server and file server, I would hope the client is small
>> enough to grow into 1 TB of shared disk space over the next 3 years,
>> rather than outgrow it in 12 months. In my personal experience I have not
>> seen a small business use less than 1 TB for files, and that was back in
>> 2004-2005.
>>
> 1 TB is sufficient; they have ~200 GB of files at the moment and the
> growth rate is slow, and 2 TB disks are off-the-shelf items now, so I'm
> safe in that sense.
> My original plan was RAID 1 with a DRBD mirror of the lot, so in theory
> that could survive three disks failing with only a minimal performance
> impact. Now I'm wondering about the performance implications of that
> setup: I'm noticing iowait is burstier and higher than I expected on their
> existing setup, which is similar but uses whitebox hardware rather than a
> real "server".
>
>> How do you intend to support the clustered environment? Granted, I have
>> not played with Xen virtualisation, but I would expect similar rules to
>> apply in VMware, Microsoft and Citrix. I am making assumptions here: if
>> you are employing HA clustering, my question is, how will sharing disks
>> across nodes work when your VMs reside on node1's disk and, for some
>> stupid reason, node1 is dead? How will node2 access the VMs?
>>
> The whole point of DRBD is to replicate the hard disks across the
> network; think of it as RAID 1 over the LAN.
> So node 1 and node 2 both have the same disk content in their storage
> pools, and instances of the VMs can then be started on either node.
> Ganeti takes care of setting up DRBD and such, so I can tell it I want a
> new instance that primarily runs on node A, and it'll automagically start
> itself up on node B if node A dies; or, if I fail over node A, it'll
> live-migrate everything to B so I can start mucking with A without
> bothering the users too much.
> It really does take care of a lot of complexity in neat ways.
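>
> To give a feel for it, the day-to-day commands look roughly like this
> (written from memory against ganeti 2.x; exact option names may differ,
> and the node/instance names are made up):
>
> ```
> # Create a DRBD-backed instance, primary on nodeA, secondary on nodeB
> gnt-instance add -t drbd -n nodeA:nodeB -o debootstrap+default \
>     -s 20G myvm1
>
> # Drain nodeA for maintenance: live-migrate its primaries over to nodeB
> gnt-node migrate nodeA
>
> # If nodeA actually dies, restart its instances on their secondaries
> gnt-node failover nodeA
> ```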
>
> BTW, I'm using KVM, not Xen; KVM feels more Linuxy and seems better
> integrated (what with the makers being bought by Red Hat, and KVM being
> part of the mainline kernel and all).
> It was also a five-line install that just worked out of the box a few
> years ago, versus Xen, which still seems to be a pain in the butt to get
> working.
>
>
>> I wouldn't worry too much about striping vs mirroring until you consider
>> what type of cluster you are going to design. To have real HA you might
>> need to reconsider the idea of relying on any local disks, as you will
>> not have real HA in your design until the disks are external to both
>> cluster nodes. This is big money, but you get what you pay for. For a
>> small home/office, for my personal use, I purchased a DroboPro FS fully
>> populated with 2 TB drives, with a total capacity of 10 TB of storage,
>> for $6,000. The system is self-healing, you can intermix drive sizes,
>> and it uses cheap SATA drives. I use it as my test environment, running
>> VMs through vSphere, and I have true HA until I lose my disk subsystem.
>>
> I have no hardware SPOFs outside of the switch and power; those are a
> conscious choice, as those components have shown themselves to be reliable
> enough that the expense and complexity of making them HA isn't worth it.
> (They will have a UPS, but if the power goes out, all the desktops turn
> off anyway, so it's not a big deal.)
> I was mainly curious about performance vs reliability and stability.
>
>
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
>