On 10/15/2009 09:17 PM, Christoph Hellwig wrote:
So can we please get the detailed setup where this happens, that is:
Here's a setup where it doesn't happen (pwrite() + fdatasync() get to
the disk):
filesystem used in the guest
ext4
any volume manager / software raid used in
On Wed, Oct 14, 2009 at 05:54:23PM -0500, Anthony Liguori wrote:
Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode.
Which should be the default on Ubuntu's kvm that this report is
concerned with so I'm a bit confused.
So can we please get the
On Thu, Oct 15, 2009 at 02:17:02PM +0200, Christoph Hellwig wrote:
On Wed, Oct 14, 2009 at 05:54:23PM -0500, Anthony Liguori wrote:
Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode.
Which should be the default on Ubuntu's kvm that this report is
On 10/14/2009 07:37 AM, Christoph Hellwig wrote:
Christoph, wasn't there a bug where the guest didn't wait for requests
in response to a barrier request?
Can't remember anything like that. The bug was the complete lack of
cache flush infrastructure for virtio, and the lack of advertising
I understand. However the test itself is a fairly trivial
representation of a single-tier, high-transaction-load system (i.e.
a system that is logging a large number of events).
The Phoronix Test Suite simply hands over to a binary that uses SQLite
and does 25000 sequential inserts. The overhead of
On Wed, Oct 14, 2009 at 08:03:41PM +0900, Avi Kivity wrote:
Can't remember anything like that. The bug was the complete lack of
cache flush infrastructure for virtio, and the lack of advertising a
volatile write cache on ide.
By complete flush infrastructure, you mean host-side and
On 10/14/2009 10:41 PM, Christoph Hellwig wrote:
But can't this be also implemented using QUEUE_ORDERED_DRAIN, and on the
host side disabling the backing device write cache? I'm talking about
cache=none, primarily.
Yes, it could. But as I found out in a long discussion with Stephen
it's
On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:
Does virtio say it has a write cache or not (and how does one say it?)?
Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode. Since qemu git as of 4th September and Linux
2.6.32-rc there is a
Christoph Hellwig wrote:
On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:
Does virtio say it has a write cache or not (and how does one say it?)?
Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode.
Which should be the default on
On 10/15/2009 07:54 AM, Anthony Liguori wrote:
Christoph Hellwig wrote:
On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:
Does virtio say it has a write cache or not (and how does one say it?)?
Historically it didn't and the only safe way to use virtio was in
cache=writethrough
On Sun, Oct 11, 2009 at 11:16:42AM +0200, Avi Kivity wrote:
if scsi is used, you incur the cost of virtualization;
if virtio is used, your guest's fsyncs incur less cost.
So back to the question for the KVM team. It appears that with the
stock KVM setup customers who need higher data
Matthew Tippett wrote:
Thanks Duncan for reproducing the behavior outside myself and Phoronix.
I dug deeper into the actual syscalls being made by sqlite. The
salient part of the behaviour is small sequential writes followed by a
fdatasync (effectively a metadata-free fsync).
As Dustin
No, it's an absurd assessment.
You have additional layers of caching happening because you're running a
guest from a filesystem on the host.
Comments below.
A benchmark running under a guest that happens to be faster than the
host does not indicate anything. It could be that the
On Tue, Oct 13, 2009 at 9:09 PM, Matthew Tippett tippe...@gmail.com wrote:
I believe that I have removed the benchmark from discussion; we are now
looking at the semantics of small writes followed by
...
And quoting from Dustin
===
I have tried this, exactly as you have described. The tests
On 10/09/2009 09:06 PM, Matthew Tippett wrote:
Thanks Duncan for reproducing the behavior outside myself and Phoronix.
I dug deeper into the actual syscalls being made by sqlite. The
salient part of the behaviour is small sequential writes followed by a
fdatasync (effectively a metadata-free
On Wed, Oct 7, 2009 at 2:31 PM, Matthew Tippett tippe...@gmail.com wrote:
The benchmark used was the sqlite subtest in the phoronix test suite.
My awareness and involvement go beyond reading a magazine article; I can
elaborate if needed, but I don't believe it is necessary.
Process for
On Fri, Oct 9, 2009 at 6:25 AM, Matthew Tippett tippe...@gmail.com wrote:
Can I ask you to do the following...
1) Re-affirm that Ubuntu does not carry any non-upstream patches and
the build command and possibly any other unusual patches or
commandline options. This should push it back onto
Is that assessment correct?
Regards,
Matthew
Original Message
Subject: Re: sync guest calls made async on host - SQLite performance
From: Dustin Kirkland dustin.kirkl...@gmail.com
To: Matthew Tippett tippe...@gmail.com, Anthony Liguori
anth...@codemonkey.ws, Avi Kivity
be relevant for how they support their customers.
Regards,
Matthew
Original Message
Subject: Re: sync guest calls made async on host - SQLite performance
From: Anthony Liguori anth...@codemonkey.ws
To: Matthew Tippett tippe...@gmail.com
Cc: Avi Kivity a...@redhat.com, RW k
and
upstream changes that may be relevant for how they support their customers.
Regards,
Matthew
Original Message
Subject: Re: sync guest calls made async on host - SQLite performance
From: Anthony Liguori anth...@codemonkey.ws
To: Matthew Tippett tippe...@gmail.com
Cc: Avi Kivity
On Wed, Oct 7, 2009 at 11:53 AM, Matthew Tippett tippe...@gmail.com wrote:
When you indicated that you had attempted to reproduce the problem, what
mechanism did you use? Was it Karmic + KVM as the host and Karmic as
the guest? What test did you use?
I ran the following in several places:
Original Message
Subject: Re: sync guest calls made async on host - SQLite performance
From: Dustin Kirkland dustin.kirkl...@gmail.com
To: Matthew Tippett tippe...@gmail.com
Cc: Anthony Liguori anth...@codemonkey.ws, Avi Kivity
a...@redhat.com, RW k...@tauceti.net, kvm
Original Message
Subject: Re: sync guest calls made async on host - SQLite performance
From: Avi Kivity a...@redhat.com
To: Matthew Tippett tippe...@gmail.com
Cc: Dustin Kirkland dustin.kirkl...@gmail.com, Anthony Liguori
anth...@codemonkey.ws, RW k...@tauceti.net, kvm
Matthew Tippett wrote:
Hi,
I would like to call attention to the SQLite performance under KVM in
the current Ubuntu Alpha.
http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3
SQLite's benchmark as part of the Phoronix Test Suite is typically IO
limited and is affected by
Avi Kivity wrote:
On 09/24/2009 10:49 PM, Matthew Tippett wrote:
The test itself is a simple usage of SQLite. It is stock KVM as
available in 2.6.31 on Ubuntu Karmic. So it would be the environment,
not the test.
So assuming that KVM upstream works as expected that would leave
either 2.6.31
Matthew Tippett wrote:
First up, Phoronix hasn't tuned. It's observing the state delivered
by an OS vendor. I started with what I believe to be the starting
point - KVM.
So the position of the KVM team now is that it is either QEMU's
configuration or Ubuntu's configuration. No further guidance or
Matthew Tippett wrote:
I have created a launchpad bug against qemu-kvm in Ubuntu.
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/437473
Just re-iterating, my concern isn't so much performance, but integrity
of stock KVM configurations with server or other workloads that expect
sync
than railing against Phoronix or the results as
presented, ask questions to seek further information about what was
tested rather than writing off all of it as completely invalid.
Regards,
Matthew
Original Message
Subject: Re: sync guest calls made async on host - SQLite
On Tue, Sep 29, 2009 at 2:32 PM, Matthew Tippett tippe...@gmail.com wrote:
I would prefer, rather than railing against Phoronix or the results as
presented, ask questions to seek further information about what was tested
rather than writing off all of it as completely invalid.
Matthew-
If you
Matthew Tippett wrote:
Okay, bringing the leaves of the discussions onto this thread.
As per
http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=1&single=1
The host OS (as well as the guest OS when testing under KVM) was
running an Ubuntu 9.10 daily snapshot with the Linux
On 09/25/2009 10:00 AM, RW wrote:
I think ext3 with data=writeback in a KVM and KVM started
with if=virtio,cache=none is a little bit crazy. I don't know
if this is the case with the current Ubuntu Alpha but it looks
like it.
It's not crazy, qemu bypasses the cache with cache=none so the ext3
to be honored and synchronous to the underlying
physical disk.
(That and ensuring that sanity reigns where a benchmark doesn't show a
guest operating 10 times faster than a host for the same test :).
Regards,
Matthew
Original Message
Subject: Re: sync guest calls made async on host
I've read the article a few days ago and it was interesting.
As I upgraded from 2.6.29 to 2.6.30 (Gentoo) I also saw a dramatic
increase in disk and filesystem performance. But then I realized
that the default mode for ext3 changed to data=writeback.
So I changed that back to data=ordered and
On 09/24/2009 10:49 PM, Matthew Tippett wrote:
The test itself is a simple usage of SQLite. It is stock KVM as
available in 2.6.31 on Ubuntu Karmic. So it would be the environment,
not the test.
So assuming that KVM upstream works as expected that would leave
either 2.6.31 having an issue, or
First up, Phoronix hasn't tuned. It's observing the state delivered
by an OS vendor. I started with what I believe to be the starting
point - KVM.
So the position of the KVM team now is that it is either QEMU's
configuration or Ubuntu's configuration. No further guidance or
suggestions? Note that
On 09/25/2009 02:33 PM, Matthew Tippett wrote:
First up, Phoronix hasn't tuned. It's observing the state delivered
by an OS vendor. I started with what I believe to be the starting
point - KVM.
So the position of the KVM team now is that it is either QEMU's
configuration or Ubuntu's configuration.
On 09/23/2009 06:58 PM, Matthew Tippett wrote:
Hi,
I would like to call attention to the SQLite performance under KVM in
the current Ubuntu Alpha.
http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3
SQLite's benchmark as part of the Phoronix Test Suite is typically IO
Thanks Avi,
I am still trying to reconcile your statement with the potential
data risks and the numbers observed.
My read of your response is that the guest sees a consistent view -
the data is committed to the virtual disk device. Does a synchronous
write within the guest trigger a
On 09/24/2009 03:31 PM, Matthew Tippett wrote:
Thanks Avi,
I am still trying to reconcile your statement with the potential
data risks and the numbers observed.
My read of your response is that the guest sees a consistent view -
the data is committed to the virtual disk device. Does a
The test itself is a simple usage of SQLite. It is stock KVM as
available in 2.6.31 on Ubuntu Karmic. So it would be the environment,
not the test.
So assuming that KVM upstream works as expected that would leave
either 2.6.31 having an issue, or Ubuntu having an issue.
Care to make an
The Phoronix Test Suite is designed to test a (client) operating
system out of the box and it does a good job at that.
It's certainly valid to run PTS inside a virtual machine but
you're going to need to tune the host, in this case Karmic.
The way you'd configure a client operating system to
Subject: Re: sync guest calls made async on host - SQLite performance
From: Ian Woodstock ian.woodst...@gmail.com
To: kvm@vger.kernel.org
Date: 09/24/2009 10:11 PM
The Phoronix Test Suite is designed to test a (client) operating
system out of the box and it does a good job at that.
It's
Hi,
I would like to call attention to the SQLite performance under KVM in
the current Ubuntu Alpha.
http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3
SQLite's benchmark as part of the Phoronix Test Suite is typically IO
limited and is affected by both disk and filesystem