On Thu, Oct 13, 2011 at 05:03:53PM +0200, benoit ROUSSELLE wrote:
> On Thu, Oct 13, 2011 at 4:27 PM, Stefan Hajnoczi <[email protected]> wrote:
> > dd performs buffered I/O by default. That means it just writes to the
> > page cache and the kernel decides when to write out dirty pages.
> >
> > So your host probably has a bunch more RAM than the guest - dd
> > write(2) calls are simply dirtying memory. Your guest has less RAM
> > and needs to do actual block I/O. That's why the results are so
> > different.
> >
> > You are not measuring virtio-blk performance here. Use dd
> > oflag=direct to bypass the page cache and actually do block I/O.
>
> You are right, the change is impressive, but there is still a big difference:
> dd oflag=direct bs=6M count=1000 if=/dev/zero of=titi.txt
>
> I get on the host:
> 6291456000 bytes (6.3 GB) copied, 29.8403 s, 211 MB/s
> and in the VM:
> 6291456000 bytes (6.3 GB) copied, 51.3302 s, 123 MB/s
The next step is trying QEMU's -drive aio=native, which uses Linux AIO
instead of a userspace thread pool for doing I/O. It is usually faster.
The libvirt domain XML is:
  <disk ...>
    <driver name='qemu' type='raw' io='native'/>
    ...
  </disk>
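In case you launch qemu-kvm directly rather than through libvirt, a rough
command-line equivalent (the image path here is just a placeholder) would
look something like:

  qemu-kvm ... -drive file=/path/to/disk.img,format=raw,cache=none,aio=native

Note that aio=native generally only takes effect together with cache=none,
since Linux AIO needs the file opened with O_DIRECT.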
If you have the time, trying a recent kernel and qemu-kvm could yield
better results. Your guest configuration is already reasonable and I
would choose a similar setup.
You mentioned vhost - it's an in-kernel virtio implementation, but so far
only vhost-net (for virtio-net) has made it into a kernel release. There
are experimental vhost-blk patches on the kvm mailing list, but they are
not merged and people are still benchmarking them to see how much
performance can be gained.
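If you want to check whether vhost-net is available on your host, the
in-kernel implementation lives in the vhost_net kernel module:

  $ lsmod | grep vhost_net

If the module is loaded (or can be loaded with modprobe), recent libvirt
versions should use it automatically for virtio-net interfaces.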
Stefan