On 19/04/2014 08:04, Richard Weinberger wrote:
Hi!
I hope this is the right place to ask. :)
On a rather recent x86_64 server I'm facing very bad write performance.
The server is an 8-core Xeon E5 with 64GiB of RAM.
Storage is an ext4 filesystem on top of LVM, which is backed by DRBD.
On the host side, dd can easily write to that ext4 filesystem at 100MiB/s.
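For reference, the host-side test was a plain dd sequential write, roughly
along these lines (the target path and size are just illustrative):

  # sequential write with direct I/O, bypassing the page cache
  dd if=/dev/zero of=/mnt/ext4/testfile bs=1M count=4096 oflag=direct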
The host OS is CentOS 6 with kernel 3.12.x.
Within a KVM Linux guest, sequential write throughput is always only
between 20 and 30MiB/s.
The guest OS is CentOS 6; it uses virtio-blk with cache=none, io=native,
and the deadline I/O scheduler.
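For completeness, the relevant part of the qemu command line looks roughly
like this (the image path is illustrative):

  # virtio-blk disk, host page cache bypassed, native Linux AIO
  -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=none,aio=native

and inside the guest the scheduler is set with:

  echo deadline > /sys/block/vda/queue/scheduler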
The worst part is that the total I/O bandwidth of KVM seems to be capped
at 30MiB/s.
If I run the same write benchmark in 5 guests in parallel, each one
achieves only 6 or 7MiB/s.
I see the same numbers when the guest writes directly to a block device
such as vdb.
Backing the guest disk directly with an LVM volume instead of a file on
ext4 didn't help either.
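The direct-to-device test inside the guest was roughly this (device name
as seen in the guest; note this overwrites whatever is on vdb):

  # write straight to the virtio disk, no guest filesystem involved
  dd if=/dev/zero of=/dev/vdb bs=1M count=4096 oflag=direct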
It really looks like 30MiB/s is the upper bound for KVM disk I/O.
As a first guess, can you try XFS or direct DRBD? There seems to be a
bug in ext4 that limits queue depth to a very low value.
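To see whether queue depth really is the limiter, something like fio with
libaio and a deeper queue should show it (paths and sizes illustrative):

  # run once on ext4, once on XFS, once against the raw DRBD device
  fio --name=seqwrite --filename=/mnt/test/fio.tmp --rw=write --bs=1M \
      --size=2G --ioengine=libaio --iodepth=32 --direct=1

Watching avgqu-sz in "iostat -x 1" on the backing device while the test
runs should also show how deep the queue actually gets.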
Paolo