With your setup, read performance is going to be limited by your RAID
selection. Be prepared to experiment, and to document the performance of
each configuration you try.
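
A quick and dirty way to document raw sequential read speed for each
layout you try (the device name is just a placeholder for whichever md
array you're testing):

    # flush the page cache so the numbers reflect the disks, not RAM
    sync; echo 3 > /proc/sys/vm/drop_caches

    # raw sequential read from the array
    dd if=/dev/md0 of=/dev/null bs=1M count=4096

    # hdparm's buffered read test gives a second opinion
    hdparm -t /dev/md0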

With a 1G interconnect, write performance will be dictated by network speed.
You'll want jumbo frames at a minimum, and might have to mess with buffer
sizes. Keep in mind that latency is just as important as throughput.
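
For what it's worth, here's a rough sketch of the kind of thing I mean,
assuming the cross-connect is eth1 and that both NICs and any switch in
between handle 9000-byte frames (the interface name, peer address, and
buffer values are just examples to tune against, not recommendations):

    # enable jumbo frames on the replication link (both nodes)
    ip link set dev eth1 mtu 9000

    # raise the kernel's TCP buffer limits (example values; measure!)
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.wmem_max=8388608
    sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"

    # check latency on the link; with protocol C every write waits for
    # the peer, so round-trip time matters as much as raw bandwidth
    ping -M do -c 10 -s 8972 192.168.1.2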

There is a performance tuning page on the LINBIT site. I spent a day
messing with various parameters, but found no appreciable improvement.
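
In case it's useful, here's roughly where those parameters live, in
DRBD 8.3-style drbd.conf syntax. The values are illustrative only, and
the barrier/flush options are only safe if the storage underneath has a
battery-backed write cache:

    resource r0 {
      net {
        max-buffers     8000;   # buffers for data in flight to the peer
        max-epoch-size  8000;   # write requests per replication epoch
        sndbuf-size     512k;   # TCP send buffer on the replication link
      }
      syncer {
        rate        100M;       # resync rate cap (not normal replication)
        al-extents  3389;       # bigger activity log, fewer metadata writes
      }
      disk {
        no-disk-barrier;        # ONLY with battery-backed write cache
        no-disk-flushes;
      }
    }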

With 4 drives, I think you'll get better performance with RAID 10 than
with RAID 6, especially on writes, since RAID 6 pays a double-parity
(read-modify-write) penalty on every small write.
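
For reference, a minimal sketch of building the 4-drive RAID 10 with
mdadm (device names and chunk size are placeholders, and of course this
wipes whatever is on those partitions):

    # 4-drive RAID 10: striping across two mirrored pairs
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          --chunk=256 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # watch the initial sync and check the layout
    cat /proc/mdstat
    mdadm --detail /dev/md0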

However, I think you'll need to install a benchmark like iozone, and spend a
lot of time doing before/after comparisons.
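
Something along these lines is what I have in mind (the flags and the
file path are just examples); pick a file size comfortably larger than
the DomU's RAM so you aren't just benchmarking the page cache:

    # sequential write/rewrite (-i 0), read/reread (-i 1), random (-i 2)
    # -e includes fsync in the timings, -I uses O_DIRECT to skip the cache
    iozone -e -I -a -s 2g -r 64k -i 0 -i 1 -i 2 \
           -f /mnt/test/iozone.tmp -b iozone-results.xls

Run the same invocation before and after each change, in a DomU and in
the Dom0, and the comparisons should show where the stack is losing the
most.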

Mike



On Sat, Jun 5, 2010 at 10:27 AM, Miles Fidelman
<[email protected]> wrote:

> Hi Folks,
>
> I've been doing some experimenting to see how far I can push some old
> hardware into a virtualized environment - partially to see how much use I
> can get out of the hardware, and partially to learn more about the behavior
> of, and interactions between, software RAID, LVM, DRBD, and Xen.
>
> What I'm finding is that it's really easy to get into a state where one of
> my VMs is spending all of its time in i/o wait (95%+).  Other times,
> everything behaves fine.
>
> So... I'm curious about where the bottlenecks are.
>
> What I'm running:
> - two machines, 4 disk drives each, two 1G ethernet ports (1 each to the
> outside world, 1 each as a cross-connect)
> - each machine runs Xen 3 on top of Debian Lenny (the basic install)
> - very basic Dom0s - just running the hypervisor and i/o (including disk
> management)
> ---- software RAID6 (md)
> ---- LVM
> ---- DRBD
> ---- heartbeat to provide some failure migration
> - each Xen VM uses 2 DRBD volumes - one for root, one for swap
> - one of the VMs has a third volume, used for backup copies of files
>
> What I'd like to dig into:
> - Dom0 plus one DomU running on each box
> - only one of the DomUs is doing very much - and it's running about 90%
> idle, the rest split between user cycles and wait cycles
> - start a disk intensive job on the DomU (e.g., tar a bunch of files on the
> root LV, put them on the backup LV)
> - i/o WAIT goes through the roof
>
> It's pretty clear that this configuration generates a lot of complicated
> disk activity.  Since DRBD is at the top of the disk stack, I figure this
> list is a good place to ask the question:
>
> Any suggestions on how to track down where the delays are creeping in, what
> might be tunable, and any good references on these issues?
>
> Thanks very much,
>
> Miles Fidelman
>
> --
> In theory, there is no difference between theory and practice.
> In practice, there is.   .... Yogi Berra
>
>
> _______________________________________________
> drbd-user mailing list
> [email protected]
> http://lists.linbit.com/mailman/listinfo/drbd-user
>



-- 
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing