No, it's an absurd assessment.

You have additional layers of caching happening because you're running a guest from a filesystem on the host.

Comments below.

A benchmark running under a guest that happens to be faster than on the host does not, by itself, indicate anything. The benchmark may simply be poorly written.

I believe I have taken the benchmark out of the discussion; we are now looking at the semantics of small writes followed by fdatasync.

What operation, specifically, do you think is not behaving properly under kvm? ext4 (karmic's default filesystem) does not enable barriers by default so it's unlikely this is anything barrier related.


Re-quoting myself from two replies ago.

===
I dug deeper into the actual syscalls being made by sqlite. The salient part of the behaviour is small sequential writes followed by a
fdatasync (effectively a metadata-free fsync).
===
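To make the pattern concrete, here is a minimal sketch of that syscall sequence using Python's os wrappers (os.fdatasync is Linux-only; the two 512-byte chunks are illustrative, not sqlite's actual write sizes):

```python
import os
import tempfile

# Small sequential writes followed by fdatasync: the data is flushed
# to stable storage, but clean metadata (e.g. mtime) need not be.
fd, path = tempfile.mkstemp()
for chunk in (b"a" * 512, b"b" * 512):
    os.write(fd, chunk)
os.fdatasync(fd)  # this is the call whose guest-side semantics are in question
size = os.fstat(fd).st_size
os.close(fd)
os.remove(path)
print(size)  # 1024
```

Under virtio without cache-flush support, the fdatasync above can return before the data has actually reached the host's disk, which is the behaviour being discussed.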

And quoting from Dustin

===
I have tried this, exactly as you have described.  The tests took:

 * 1162.08033204 seconds on native hardware
 * 2306.68306303 seconds in a kvm using if=scsi disk
 * 405.382308006 seconds in a kvm using if=virtio
===

And finally Christoph

===
Can't remember anything like that.  The "bug" was the complete lack of
cache flush infrastructure for virtio, and the lack of advertising a
volatile write cache on ide.
===

The _Operation_ that I believe is not behaving as expected is fdatasync under virtio. I understand your position that this is not a bug, but a configuration/packaging issue.

So I'll put it to you differently. When a Linux guest issues an fsync or fdatasync, what should occur?

o If the system has been configured in writeback mode, then getting the data to the physical disk is not a concern: once the hypervisor has received the data, the call can be considered complete.

o If the system is configured in writethrough mode, shouldn't the hypervisor get the data to disk as soon as possible? Whether that happens immediately or batched with other data, I'll leave to you guys.
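The two modes above map onto qemu's -drive cache= option. A hypothetical invocation (disk.img is a placeholder image name, and the behaviour notes reflect my understanding, not a specification):

```shell
# cache=writethrough: the host pushes each completed guest write to
# stable storage before reporting completion back to the guest.
qemu-system-x86_64 -drive file=disk.img,if=virtio,cache=writethrough

# cache=writeback: the host page cache absorbs writes, so the guest's
# fdatasync is only as durable as the host's own flush behaviour.
qemu-system-x86_64 -drive file=disk.img,if=virtio,cache=writeback
```

Which of these a distribution's management tools pick by default is exactly the configuration/packaging question raised below.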

As mentioned above, I am not saying this is a bug in KVM; it may well be a poor choice of configuration options within distributions. From what I can gather from the above, scsi with writethrough is the safest model. By extension, for enterprise workloads where data integrity is critical, the default KVM configuration under Ubuntu, and possibly other distributions, may be a poor choice.

Regards,

Matthew
--