[CentOS-virt] CentOS 6 kvm disk write performance

2012-08-10 Thread Julian price
I have 2 similar servers. Since upgrading one from CentOS 5.5 to 6, disk 
write performance in kvm guest VMs is much worse.


There are many, many posts about optimising kvm, many mentioning disk 
performance in CentOS 5 vs 6.  I've tried various changes to speed up 
write performance, but nothing has made a significant difference so far:


- Install virtio disk drivers in the guest
- Update the host software
- Update RAID firmware to the latest version
- Switch the host disk scheduler to deadline
- Increase host RAM from 8GB to 24GB
- Increase guest RAM from 2GB to 4GB
- Try different kvm cache options
- Switch the host from ext4 back to ext3
- Set noatime on the virtual disk image file
Note: There is no encryption or on-access virus scanner on any host or 
guest.
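For concreteness, here's roughly how the host-side tunings were applied (a sketch only; the device name, image path and guest name are placeholders for my setup, and the XML fragment is just one common way to set the cache mode):

```shell
# Placeholder device/guest names; adjust for your setup.
# Switch the host disk scheduler to deadline (sda assumed):
echo deadline > /sys/block/sda/queue/scheduler

# Mount the filesystem holding the guest images with noatime:
mount -o remount,noatime /var/lib/libvirt/images

# virtio + an explicit cache mode in the guest definition
# ("virsh edit guest1"), e.g.:
#   <disk type='file' device='disk'>
#     <driver name='qemu' type='raw' cache='none'/>
#     <source file='/var/lib/libvirt/images/guest1.img'/>
#     <target dev='vda' bus='virtio'/>
#   </disk>
```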


Below are some of the block write figures in MB/s from bonnie++ with 
various configurations:


First, figures for the hosts show that the CentOS 6 server is faster:

54  CentOS 5 host
50  CentOS 5 host
69  CentOS 6 host
70  CentOS 6 host

Figures for a CentOS 6 guest running on the CentOS 5 host show that the 
performance hit is less than 50%:


30  CentOS 6 guest on CentOS 5 host with no optimisations
27  CentOS 6 guest on CentOS 5 host with no optimisations
32  CentOS 6 guest on CentOS 5 host with no optimisations

Here are the figures for a CentOS 6 guest running on the CentOS 6 host 
with various optimisations.  Even with these optimisations, performance 
doesn't come close to the un-optimised guest running on the CentOS 5 host:


5   No optimisations (i.e. same configuration as on CentOS 5)
4   deadline scheduler
5   deadline scheduler
15   noatime,nodiratime
14   noatime,nodiratime
15   noatime
15   noatime + deadline scheduler
13   virtio
13   virtio
10   virtio + noatime
9   virtio + noatime

The CentOS 6 server has a better RAID card, different disks and more 
RAM, which might account for the better CentOS 6 host performance.  But 
why might the guest write performance be so much worse?


Is this a known problem?  If so, what's the cause?  If not, is there a 
way to locate the problem rather than using trial and error?
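One way to narrow it down without trial and error might be to run the same sequential write at each layer and see where the number collapses (a rough sketch; the file name is arbitrary, and iostat comes from the sysstat package):

```shell
# Write 16 MiB and flush it to disk, printing the throughput dd reports.
# Run this first on the host filesystem that holds the guest images,
# then inside the guest; a large gap between the two numbers points at
# the virtualisation layer rather than the disks.
dd if=/dev/zero of=ddtest.bin bs=1M count=16 conv=fdatasync 2>&1 | tail -n1

# While the guest copy runs, watch per-device utilisation and latency
# on the host:
command -v iostat >/dev/null && iostat -x 2 3 || echo "iostat not installed"
```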


Thanks,
Julian
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] RAID: by host or within KVM?

2012-08-10 Thread Kanwar Ranbir Sandhu
Hi Virtualizers, 

I just set up a CentOS 6 box (at home) to run as a KVM host. It's replacing
an absolutely ancient CentOS 5 server that's running Xen. I have one OS
drive, and two drives in RAID 1 with LVM on top which is being used as the
KVM storage pool. 

I created a KVM guest that will run OpenMediaVault (OMV). OMV requires an
OS drive (which is really an LVM volume), and one or more separate drives
to put all the media on. This is where I'm a little unsure how to proceed.
I think I have two options: 

1. Let the KVM host manage the drives (i.e. RAID with LVM on top) and just
assign the single volume to OMV. OMV will see it as one HD.
2. Assign the individual drives to the OMV KVM, and let OMV manage the
RAID creation, management, etc. 

I'm not sure which one will perform better. My hunch is that if the RAID
management is left at the host level, I'll see better overall performance.
Performance isn't exactly my number one goal here, but I don't want to
kill it completely either by going the wrong way. 

On the other hand, if I let OMV do the RAID management for the media
storage disks, I'll gain future flexibility because it'll be much easier
to move OMV to bare metal. 
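For what it's worth, option 1 would look roughly like this on the host (a sketch only; /dev/sdb, /dev/sdc, the volume group name, the LV size and the guest name "omv" are all placeholders):

```shell
# Mirror the two media disks and put LVM on top:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md1
vgcreate vg_media /dev/md1
lvcreate -L 500G -n lv_omv_media vg_media

# Hand the logical volume to the OMV guest as a single virtio disk:
virsh attach-disk omv /dev/vg_media/lv_omv_media vdb --persistent
```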

Which way should I go? What would you guys do? 

Regards, 

Ranbir  
-- 
Kanwar Ranbir Sandhu


Re: [CentOS-virt] RAID: by host or within KVM?

2012-08-10 Thread SilverTip257
I agree with Stephen.  Option #1 is the way to go.

On all of the KVM nodes I've personally built, I use a hardware RAID
controller and let it manage the array.  You could use software RAID
on the host OS, but there are advantages to using hardware RAID
(background array initialization, a battery-backed cache).

Keep the RAID management at the hardware _or_ host OS (software RAID)
level and it will simplify administration.
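To illustrate the single point of repair with host-managed software RAID, replacing a failed member is just (device and array names assumed; the guests never notice):

```shell
# Mark the failing member and pull it from the array:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# Swap the physical disk, recreate the partition layout, then:
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat    # watch the array rebuild in the background
```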

---~~.~~---
Mike
//  SilverTip257  //


On Fri, Aug 10, 2012 at 11:00 AM, Stephen Harris li...@spuddy.org wrote:
 On Fri, Aug 10, 2012 at 10:39:18AM -0400, Kanwar Ranbir Sandhu wrote:
 1. Let the KVM host manage the drives (i.e. RAID with LVM on top) and just
 assign the single volume to OMV. OMV will see it as one HD.
 2. Assign the individual drives to the OMV KVM, and let OMV manage the
 RAID
 creation, management, etc.

 I recommend option 1 simply because of recovery methodology.  If you
 lose a disk and replace it, if the host controls the RAID then you have
 one point of repair and the VMs don't even notice.  If, however, each
 VM does RAID itself then _each_ VM will need to perform disk replace
 and rebuild, which is a lot of admin overhead.  Also that could cause
 a lot of disk contention and slow down the rebuild.

 Today you only have one VM.  Tomorrow? :-)

 --

 rgds
 Stephen