Re: sync guest calls made async on host - SQLite performance

2009-10-18 Thread Avi Kivity

On 10/15/2009 09:17 PM, Christoph Hellwig wrote:


So can we please get the detailed setup where this happens, that is:


Here's a setup where it doesn't happen (pwrite() + fdatasync() get to 
the disk):

filesystem used in the guest
    ext4

any volume manager / software raid used in the guest
    lvm over one physical device

kernel version in the guest
    2.6.30.8 (F11)

image format used
    qcow2

qemu command line including caching mode, using ide/scsi/virtio, etc
    -drive file=...,if=virtio

qemu/kvm version
    qemu-kvm.git master

filesystem used in the host
    ext4

any volume manager / software raid used in the host
    lvm over one device

kernel version in the host
    2.6.30.8 (F11)

So it seems this is no longer an issue, but may be an issue on older 
setups.  I suspect LVM as it used to swallow barriers.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: sync guest calls made async on host - SQLite performance

2009-10-15 Thread Christoph Hellwig
On Wed, Oct 14, 2009 at 05:54:23PM -0500, Anthony Liguori wrote:
 Historically it didn't and the only safe way to use virtio was in
 cache=writethrough mode.
 
 Which should be the default on Ubuntu's kvm that this report is 
 concerned with so I'm a bit confused.

So can we please get the detailed setup where this happens, that is:

filesystem used in the guest
any volume manager / software raid used in the guest
kernel version in the guest
image format used
qemu command line including caching mode, using ide/scsi/virtio, etc
qemu/kvm version
filesystem used in the host
any volume manager / software raid used in the host
kernel version in the host


 Avi's patch is a performance optimization, not a correctness issue?

It could actually minimally degrade performance.  For the existing
filesystems as the upper layer it does not improve correctness either.



Re: sync guest calls made async on host - SQLite performance

2009-10-15 Thread Christoph Hellwig
On Thu, Oct 15, 2009 at 02:17:02PM +0200, Christoph Hellwig wrote:
 On Wed, Oct 14, 2009 at 05:54:23PM -0500, Anthony Liguori wrote:
  Historically it didn't and the only safe way to use virtio was in
  cache=writethrough mode.
  
  Which should be the default on Ubuntu's kvm that this report is 
  concerned with so I'm a bit confused.
 
 So can we please get the detailed setup where this happens, that is:
 
 filesystem used in the guest
 any volume manager / software raid used in the guest
 kernel version in the guest
 image format used
 qemu command line including caching mode, using ide/scsi/virtio, etc
 qemu/kvm version
 filesystem used in the host
 any volume manager / software raid used in the host
 kernel version in the host

And very important the mount options (/proc/self/mounts) of both host
and guest.



Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Avi Kivity

On 10/14/2009 07:37 AM, Christoph Hellwig wrote:

Christoph, wasn't there a bug where the guest didn't wait for requests
in response to a barrier request?
 

Can't remember anything like that.  The bug was the complete lack of
cache flush infrastructure for virtio, and the lack of advertising a
volatile write cache on ide.


By complete flush infrastructure, you mean host-side and guest-side 
support for a new barrier command, yes?


But can't this be also implemented using QUEUE_ORDERED_DRAIN, and on the 
host side disabling the backing device write cache?  I'm talking about 
cache=none, primarily.




Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Matthew Tippett
I understand.  However, the test itself is a fairly trivial
representation of a single-tier, high-transaction-load system (i.e.
a system that is logging a large number of events).

The phoronix test suite simply hands over to a binary using sqlite and
does 25000 sequential inserts.  The overhead of the suite would be
measured in milliseconds at the start and end.  Over the life of the
test (100-2500 seconds), it becomes insignificant noise.

As I said, the relevant system calls made over the course of the
test are expressed as

write
write
write
fdatasync

The writes are typically small (5-100 bytes).

With that information, I believe the method of execution is mostly
irrelevant.  If people are still concerned, I can write a trivial
application that should reproduce the behaviour.
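Such a trivial application might look like the following sketch (file name, payload, and iteration count are illustrative choices, not from the thread):

```python
import os
import time

def timed_sync_inserts(path, n=100, payload=b"small-record\n"):
    """Issue n small writes, each followed by fdatasync(), and time them.

    This mimics the write/write/write/fdatasync pattern described above.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    t0 = time.monotonic()
    for _ in range(n):
        os.write(fd, payload)
        os.fdatasync(fd)
    elapsed = time.monotonic() - t0
    os.close(fd)
    os.unlink(path)
    return elapsed

if __name__ == "__main__":
    secs = timed_sync_inserts("fdatasync-repro.dat")
    # On bare metal each commit costs at least one real disk write;
    # sub-millisecond commits suggest the sync never reached the platter.
    print(f"100 commits in {secs:.3f}s ({secs * 10:.2f} ms per commit)")
```

On a host that honours fdatasync this runs at disk commit latency; inside a guest whose hypervisor completes the flush from cache it will finish orders of magnitude faster.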

It still ultimately comes down to the guest's expected semantics of
fdatasync, and the actual behaviour relative to the host's physical
device.  I am not saying that the current behaviour is wrong, I just
want a clear understanding of what is expected by the kvm team vs what
we are seeing.

Regards... Matthew


On 10/14/09, Dustin Kirkland kirkl...@canonical.com wrote:
 On Tue, Oct 13, 2009 at 9:09 PM, Matthew Tippett tippe...@gmail.com wrote:
 I believe that I have removed the benchmark from discussion, we are now
 looking at semantics of small writes followed by
 ...
 And quoting from Dustin

 ===
 I have tried this, exactly as you have described.  The tests took:

  * 1162.08033204 seconds on native hardware
  * 2306.68306303 seconds in a kvm using if=scsi disk
  * 405.382308006 seconds in a kvm using if=virtio

 Hang on now...

 My timings are from running the Phoronix test *as you described*.  I
 have not looked at what magic is happening inside of this Phoronix
 test.  I am most certainly *not* speaking as to the quality or
 legitimacy of the test.

 :-Dustin


-- 
Sent from my mobile device


Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Christoph Hellwig
On Wed, Oct 14, 2009 at 08:03:41PM +0900, Avi Kivity wrote:
 Can't remember anything like that.  The bug was the complete lack of
 cache flush infrastructure for virtio, and the lack of advertising a
 volatile write cache on ide.

 
 By complete flush infrastructure, you mean host-side and guest-side 
 support for a new barrier command, yes?

The cache flush command, not barrier command.  The new virtio code
implements barrier the same way we do for IDE and SCSI - all barrier
semantics are implemented by generic code in the block layer by draining
the queues, the only thing we send over the wire are cache flush
commands in strategic places.

 But can't this be also implemented using QUEUE_ORDERED_DRAIN, and on the 
 host side disabling the backing device write cache?  I'm talking about 
 cache=none, primarily.

Yes, it could.  But as I found out in a long discussion with Stephen
it's not actually necessary.  All filesystems do the right thing for
a device not claiming to support barriers if it doesn't have a write
cache, that is, they implement ordering internally.  So there is no urge
to set QUEUE_ORDERED_DRAIN for the case without a write cache.



Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Avi Kivity

On 10/14/2009 10:41 PM, Christoph Hellwig wrote:

But can't this be also implemented using QUEUE_ORDERED_DRAIN, and on the
host side disabling the backing device write cache?  I'm talking about
cache=none, primarily.
 

Yes, it could.  But as I found out in a long discussion with Stephen
it's not actually necessary.  All filesystems do the right thing for
a device not claiming to support barriers if it doesn't have a write
cache, that is, they implement ordering internally.  So there is no urge
to set QUEUE_ORDERED_DRAIN for the case without a write cache.


Does virtio say it has a write cache or not (and how does one say it?)?

According to the report, a write+fdatasync completes too fast, at least 
on Ubuntu's qemu.  So perhaps somewhere this information is lost.




Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Christoph Hellwig
On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:
 Does virtio say it has a write cache or not (and how does one say it?)?

Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode.  Since qemu git as of 4th September and Linux
2.6.32-rc there is a virtio-blk feature to communicate the existence
of a volatile write cache, and support for a cache flush command.
With the combination of these two, the cache=writeback and cache=none
modes are safe for the first time.
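As a rough illustration of the negotiation described here (a sketch, not the actual driver code; the feature-bit number is an assumption taken from the virtio-blk spec, where VIRTIO_BLK_F_FLUSH is bit 9):

```python
# Sketch of the guest-side decision: does the host advertise a volatile
# write cache plus a flush command, or must we assume write-through?
VIRTIO_BLK_F_FLUSH = 9  # feature bit per the virtio-blk specification

def guest_cache_mode(host_features: int) -> str:
    """Decide how the guest block layer should treat the virtio disk."""
    if host_features & (1 << VIRTIO_BLK_F_FLUSH):
        # Host has a volatile write cache and accepts flushes: the guest
        # must issue cache flush commands, and cache=writeback/none on the
        # host are safe.
        return "writeback-with-flush"
    # No flush feature: the guest must assume writes are durable when they
    # complete, which is only true with cache=writethrough on the host.
    return "assume-writethrough"
```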



Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Anthony Liguori

Christoph Hellwig wrote:

On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:
  

Does virtio say it has a write cache or not (and how does one say it?)?



Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode.


Which should be the default on Ubuntu's kvm that this report is 
concerned with so I'm a bit confused.


Avi's patch is a performance optimization, not a correctness issue?

Regards,

Anthony Liguori


Re: sync guest calls made async on host - SQLite performance

2009-10-14 Thread Avi Kivity

On 10/15/2009 07:54 AM, Anthony Liguori wrote:

Christoph Hellwig wrote:

On Thu, Oct 15, 2009 at 01:56:40AM +0900, Avi Kivity wrote:

Does virtio say it has a write cache or not (and how does one say it?)?


Historically it didn't and the only safe way to use virtio was in
cache=writethrough mode.


It didn't say?  So it's up to the default, which is what?



Which should be the default on Ubuntu's kvm that this report is 
concerned with so I'm a bit confused.


Avi's patch is a performance optimization, not a correctness issue?


If filesystems do drain by default, it should be a no-op on 
cache!=writeback.


However if lseek(0); write(1); fdatasync(); are faster than disk speed, 
then something in our assumptions has to be wrong.
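A back-of-envelope check of that assumption (illustrative numbers, not measurements from this thread): a rotational disk can complete at most about one durable commit per platter revolution, so an observed commit rate far above that bound means the sync is being absorbed by a cache somewhere.

```python
def max_commit_rate(rpm: int = 7200) -> float:
    """Upper bound on durable commits/sec: one platter revolution each."""
    return rpm / 60.0

def looks_cached(commits: int, seconds: float, rpm: int = 7200) -> bool:
    """True if the observed commit rate exceeds what the spindle allows,
    i.e. fdatasync is completing from a cache rather than the platter."""
    return commits / seconds > max_commit_rate(rpm)

# e.g. 25000 inserts finishing in ~60s is ~416 commits/s, well above the
# ~120/s a 7200 rpm disk can deliver, so the writes cannot all be durable.
```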




Re: sync guest calls made async on host - SQLite performance

2009-10-13 Thread Christoph Hellwig
On Sun, Oct 11, 2009 at 11:16:42AM +0200, Avi Kivity wrote:
if scsi is used, you incur the cost of virtualization,
if virtio is used, your guests fsyncs incur less cost.
 
 So back to the question to the kvm team.  It appears that with the 
 stock KVM setup customers who need higher data integrity (through 
 fsync) should steer away from virtio for the moment.
 
 Is that assessment correct?
 
 
 Christoph, wasn't there a bug where the guest didn't wait for requests 
 in response to a barrier request?

Can't remember anything like that.  The bug was the complete lack of
cache flush infrastructure for virtio, and the lack of advertising a
volatile write cache on ide.



Re: sync guest calls made async on host - SQLite performance

2009-10-13 Thread Anthony Liguori

Matthew Tippett wrote:

Thanks Duncan for reproducing the behavior outside myself and Phoronix.

I dug deeper into the actual syscalls being made by sqlite.  The 
salient part of the behaviour is small sequential writes followed by a 
fdatasync (effectively a metadata-free fsync).


As Dustin indicates,

   if scsi is used, you incur the cost of virtualization,
   if virtio is used, your guests fsyncs incur less cost.

So back to the question to the kvm team.  It appears that with the 
stock KVM setup customers who need higher data integrity (through 
fsync) should steer away from virtio for the moment.


Is that assessment correct?


No, it's an absurd assessment.

You have additional layers of caching happening because you're running a 
guest from a filesystem on the host.


A benchmark running under a guest that happens to be faster than the 
host does not indicate anything.  It could be that the benchmark is 
poorly written.


What operation, specifically, do you think is not behaving properly 
under kvm?  ext4 (karmic's default filesystem) does not enable barriers 
by default so it's unlikely this is anything barrier related.





Regards,

Anthony Liguori


Re: sync guest calls made async on host - SQLite performance

2009-10-13 Thread Matthew Tippett



No, it's an absurd assessment.

You have additional layers of caching happening because you're running a 
guest from a filesystem on the host.


Comments below.

A benchmark running under a guest that happens do be faster than the 
host does not indicate anything.  It could be that the benchmark is 
poorly written.


I believe that I have removed the benchmark from discussion; we are now 
looking at the semantics of small writes followed by fdatasync.


What operation, specifically, do you think is not behaving properly 
under kvm?  ext4 (karmic's default filesystem) does not enable barriers 
by default so it's unlikely this is anything barrier related.




Re-quoting me from two replies ago.

===
I dug deeper into the actual syscalls being made by sqlite.  The salient 
part of the behaviour is small sequential writes followed by a 
fdatasync (effectively a metadata-free fsync).
===

And quoting from Dustin

===
I have tried this, exactly as you have described.  The tests took:

 * 1162.08033204 seconds on native hardware
 * 2306.68306303 seconds in a kvm using if=scsi disk
 * 405.382308006 seconds in a kvm using if=virtio
===

And finally Christoph

===
Can't remember anything like that.  The bug was the complete lack of
cache flush infrastructure for virtio, and the lack of advertising a
volatile write cache on ide.
===

The _Operation_ that I believe is not behaving as expected is fdatasync 
under virtio. I understand your position that this is not a bug, but a 
configuration/packaging issue.


So I'll put it to you differently.  When a Linux guest issues a fsync or 
fdatasync what should occur?


o If the system has been configured in writeback mode then you don't 
worry about getting the data to the disk, so when the hypervisor has 
received the data, be happy with it.


o If the system is configured in writethrough mode, shouldn't the 
hypervisor look to get the data to disk ASAP?  Whether this is 
immediately, or batched with other data, I'll leave it to you guys.
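For context, the qemu cache= modes map onto host-side open(2) flags roughly as follows (a simplified sketch of qemu's behaviour around 0.11; the real logic lives in qemu's block layer and may differ in detail):

```python
import os

# Simplified mapping of -drive cache= modes to extra host open(2) flags.
CACHE_MODE_FLAGS = {
    "writethrough": os.O_DSYNC,   # every guest write is synced to disk
    "none":         os.O_DIRECT,  # bypass the host page cache entirely
    "writeback":    0,            # writes land in the host page cache
}

def host_open_flags(cache_mode: str) -> int:
    """Flags qemu would use when opening the image for a given cache mode."""
    return os.O_WRONLY | CACHE_MODE_FLAGS[cache_mode]
```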


As mentioned above, I am not saying it is a bug in KVM; it may well be 
a poor choice of configuration options within distributions.  From what 
I can interpret from the above, scsi with writethrough is the safest 
model to go for.  By extension, for enterprise workloads where data 
integrity is more critical, the default configuration of KVM under 
Ubuntu and possibly other distributions may be a poor choice.


Regards,

Matthew


Re: sync guest calls made async on host - SQLite performance

2009-10-13 Thread Dustin Kirkland
On Tue, Oct 13, 2009 at 9:09 PM, Matthew Tippett tippe...@gmail.com wrote:
 I believe that I have removed the benchmark from discussion, we are now
 looking at semantics of small writes followed by
...
 And quoting from Dustin

 ===
 I have tried this, exactly as you have described.  The tests took:

  * 1162.08033204 seconds on native hardware
  * 2306.68306303 seconds in a kvm using if=scsi disk
  * 405.382308006 seconds in a kvm using if=virtio

Hang on now...

My timings are from running the Phoronix test *as you described*.  I
have not looked at what magic is happening inside of this Phoronix
test.  I am most certainly *not* speaking as to the quality or
legitimacy of the test.

:-Dustin


Re: sync guest calls made async on host - SQLite performance

2009-10-11 Thread Avi Kivity

On 10/09/2009 09:06 PM, Matthew Tippett wrote:

Thanks Duncan for reproducing the behavior outside myself and Phoronix.

I dug deeper into the actual syscalls being made by sqlite.  The 
salient part of the behaviour is small sequential writes followed by a 
fdatasync (effectively a metadata-free fsync).


As Dustin indicates,

   if scsi is used, you incur the cost of virtualization,
   if virtio is used, your guests fsyncs incur less cost.

So back to the question to the kvm team.  It appears that with the 
stock KVM setup customers who need higher data integrity (through 
fsync) should steer away from virtio for the moment.


Is that assessment correct?



Christoph, wasn't there a bug where the guest didn't wait for requests 
in response to a barrier request?


--
error compiling committee.c: too many arguments to function



Re: sync guest calls made async on host - SQLite performance

2009-10-09 Thread Dustin Kirkland
On Wed, Oct 7, 2009 at 2:31 PM, Matthew Tippett tippe...@gmail.com wrote:
 The benchmark used was the sqlite subtest in the phoronix test suite.

 My awareness and involvement is beyond reading a magazine article, I can
 elaborate if needed, but I don't believe it is necessary.

 Process for reproduction, assuming Karmic,

        # apt-get install phoronix-test-suite

        $ phoronix-test-suite benchmark sqlite

 Answer the questions (test-names, etc, etc), it will download sqlite, build
 it and execute the test.  By default the test runs three times and averages
 the results.  The results experienced should be similar to the values
 identified at

 http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3

 Which is approximately 12 minutes for the native, and about 60 seconds for
 the guest.

I have tried this, exactly as you have described.  The tests took:

 * 1162.08033204 seconds on native hardware
 * 2306.68306303 seconds in a kvm using if=scsi disk
 * 405.382308006 seconds in a kvm using if=virtio

I am using an up-to-date Karmic amd64 system, running qemu-kvm-0.11.0,
on a Thinkpad x200, dual 2.4GHz, 4GB, and a somewhat slow 5400rpm SATA
disk.  The filesystem is ext4 in both the guest and the host.

So I'm not seeing a 10x or order-of-magnitude improvement by doing
this in the guest.  With a scsi interface, it's twice as slow.  With
virtio, it's a good bit faster, but not 10x faster.

That said, I don't know that I'm all that concerned about this, right
now.  I haven't looked in detail at what this test from phoronix is
actually doing (nor do I really have the time to do so).  Sorry.

:-Dustin


Re: sync guest calls made async on host - SQLite performance

2009-10-09 Thread Dustin Kirkland
On Fri, Oct 9, 2009 at 6:25 AM, Matthew Tippett tippe...@gmail.com wrote:
 Can I ask you to do the following...

  1) Re-affirm that Ubuntu does not carry any non-upstream patches and
 the build command and possibly any other unusual patches or
 commandline options.  This should push it back onto Avi and Anthony's
 plate.

I have put the patches we're carrying here, for your review:
 * http://rookery.canonical.com/~kirkland/patches

There's nothing exotic in here.  Most of these have been committed
upstream already.  All of them have been at least posted on these
lists.  None of these should affect your test case.

We configure with:
  ./configure --prefix=/usr --disable-blobs --audio-drv-list=alsa pa
oss sdl --audio-card-list=ac97 es1370 sb16 cs4231a adlib gus
--target-list=$(TARGET_SYSTEM_TCG) $(TARGET_LINUX_TCG)

We carry a number of compiler options, mostly in the interest of
hardening and security:
  * https://wiki.ubuntu.com/CompilerFlags

The #define'd variables on my local system (which should be similar,
though not identical to our build servers) can be seen here:
 * http://rookery.canonical.com/~kirkland/defined

  2) Carefully consider risks to virtualized environments in the
 server space and consider noting it in release notes.

Thank you for the suggestion.  I will take it under consideration.

:-Dustin


Re: sync guest calls made async on host - SQLite performance

2009-10-09 Thread Matthew Tippett

Thanks Duncan for reproducing the behavior outside myself and Phoronix.

I dug deeper into the actual syscalls being made by sqlite.  The salient 
part of the behaviour is small sequential writes followed by a fdatasync 
(effectively a metadata-free fsync).


As Dustin indicates,

   if scsi is used, you incur the cost of virtualization,
   if virtio is used, your guests fsyncs incur less cost.

So back to the question to the kvm team.  It appears that with the stock 
KVM setup customers who need higher data integrity (through fsync) 
should steer away from virtio for the moment.


Is that assessment correct?

Regards,

Matthew





Re: sync guest calls made async on host - SQLite performance

2009-10-07 Thread Matthew Tippett

I now have more information.

Dustin,

The version used was 0.11.0-rc2, from the 2009-09-11 karmic daily build. 
 The VM identifies itself as AMD QEMU Virtual CPU version 0.10.92 
stepping 03.


When you indicated that you had attempted to reproduce the problem, what 
mechanism did you use?  Was it Karmic + KVM as the host and Karmic as 
the guest?  What test did you use?


I will re-open the launchpad bug if you believe it makes sense to 
continue the discussions there.


Anthony,

Please suspend your disbelief for a short while and ask questions to 
clarify the details.  My only interest here is to understand the results 
presented by the benchmark and determine if there are data integrity risks.


Fundamentally, if there are modes of operation where applications can get 
a considerable performance boost by running the same OS under KVM, then 
there will be lots of happy people.  But realistically, if it is an 
indication of something wrong, misconfigured or just broken, it bears at 
least some discussion.


Bear in mind that upstream is relevant for KVM, but distributions 
shipping KVM may have secondary concerns about patchsets and 
upstream changes that are relevant for how they support their customers.


Regards,

Matthew



 Original Message  
Subject: Re: sync guest calls made async on host - SQLite performance
From: Anthony Liguori anth...@codemonkey.ws
To: Matthew Tippett tippe...@gmail.com
Cc: Avi Kivity a...@redhat.com, RW k...@tauceti.net, kvm@vger.kernel.org
Date: 09/29/2009 04:51 PM


Matthew Tippett wrote:

Your confidence is misplaced apparently.

and I have pieced together the following information.  I should be 
able to get the actual daily build number but broadly it looks like it 
was


  Ubuntu 9.10 daily snapshot (~ 9th - 21st September)
  Linux 2.6.31 (packaged as 2.6.31-10.30 to 2.6.31-10.32)
  qemu-kvm 0.11 (packaged as 0.11.0~rc2-0ubuntu to 0.11.0~rc2-0ubuntu5


That's extremely unlikely.

But, if it turned out to be Ubuntu 9.10, linux 2.6.31, qemu-kvm 0.11 
would there be any concerns?


It's not relevant because it's not qemu-kvm-0.11.





Re: sync guest calls made async on host - SQLite performance

2009-10-07 Thread Matthew Tippett

(Resending to the list without multi-part).

I now have more information.

Dustin,

The version used was 0.11.0-rc2, from the 2009-09-11 karmic daily build.
 The VM identifies itself as AMD QEMU Virtual CPU version 0.10.92
stepping 03.

When you indicated that you had attempted to reproduce the problem, what
mechanism did you use?  Was it Karmic + KVM as the host and Karmic as
the guest?  What test did you use?

I will re-open the launchpad bug if you believe it makes sense to
continue the discussions there.

Anthony,

If you can suspend your disbelief for a short while and ask questions to
clarify the details.  My only interest here is to understand the results
presented by the benchmark and determine if there are data integrity risks.

Fundamentally, if there are modes of operation that applications can get
a considerable performance boost by running the same OS under KVM then
there will be lots of people happy.  But realistically it is an
indication of something wrong, misconfigured or just broken it bears at
least some discussion.

Bear in mind that upstream is relevant for KVM, but for distributions
shipping KVM, they may have secondary concerns about patchesets and
upstream changes that may be relevant for how they support their customers.

Regards,

Matthew



 Original Message  
Subject: Re: sync guest calls made async on host - SQLite performance
From: Anthony Liguori anth...@codemonkey.ws
To: Matthew Tippett tippe...@gmail.com
Cc: Avi Kivity a...@redhat.com, RW k...@tauceti.net, kvm@vger.kernel.org
Date: 09/29/2009 04:51 PM


Matthew Tippett wrote:

Your confidence is misplaced apparently.

and I have pieced together the following information.  I should be 
able to get the actual daily build number but broadly it looks like it 
was


  Ubuntu 9.10 daily snapshot (~ 9th - 21st September)
  Linux 2.6.31 (packaged as 2.6.31-10.30 to 2.6.31-10.32)
  qemu-kvm 0.11 (packaged as 0.11.0~rc2-0ubuntu to 0.11.0~rc2-0ubuntu5


That's extremely unlikely.

But, if it turned out to be Ubuntu 9.10, linux 2.6.31, qemu-kvm 0.11 
would there be any concerns?


It's not relevant because it's not qemu-kvm-0.11.



--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: sync guest calls made async on host - SQLite performance

2009-10-07 Thread Dustin Kirkland
On Wed, Oct 7, 2009 at 11:53 AM, Matthew Tippett tippe...@gmail.com wrote:
 When you indicated that you had attempted to reproduce the problem, what
 mechanism did you use?  Was it Karmic + KVM as the host and Karmic as
 the guest?  What test did you use?


I ran the following in several places:
  a) on the system running on real hardware,
time dd if=/dev/zero of=$HOME/foo bs=1M count=500
524288000 bytes (524 MB) copied, 9.72614 s, 53.9 MB/s
  b) in a vm running on qemu-kvm-0.11 on Karmic
time dd if=/dev/zero of=$HOME/foo bs=1M count=500 oflag=direct
524288000 bytes (524 MB) copied, 31.6961 s, 16.5 MB/s
  c) in a vm running on kvm-84 on Jaunty
time dd if=/dev/zero of=$HOME/foo bs=1M count=500 oflag=direct
524288000 bytes (524 MB) copied, 22.2169 s, 23.6 MB/s

I'm comparing the time it takes to write a 500MB file to a real hard
disk against the time it takes inside of the VM.  If I were experiencing
the problem on Karmic, this dd of a 500MB file would have taken far, far
less time than writing that file to disk on the real hardware.  This was
not the case in my testing.
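The dd runs above can be approximated with a short program; this is an illustrative sketch (file name and sizes are arbitrary, and passing flags=os.O_DIRECT mirrors dd's oflag=direct on Linux):

```python
import mmap
import os
import time

def write_throughput(path, block_size=1 << 20, count=8, flags=os.O_DIRECT):
    """Write count blocks of block_size bytes and return MB/s."""
    # O_DIRECT requires an aligned buffer; anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | flags, 0o644)
    t0 = time.monotonic()
    for _ in range(count):
        os.write(fd, buf)
    os.fsync(fd)  # make sure the final blocks are on the device
    elapsed = time.monotonic() - t0
    os.close(fd)
    os.unlink(path)
    return (block_size * count) / elapsed / 1e6
```

Scale block_size * count up to 500MB to match the dd runs; an in-guest result far above the bare-metal figure would indicate the writes are being absorbed by the host page cache.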

 I will re-open the launchpad bug if you believe it makes sense to
 continue the discussions there.

Please re-open the bug if you can describe a real test case that you
used to demonstrate the problem.  Without being rude, it's hard for me
to work from a bug that says a magazine article says that there's a
bug in the Ubuntu distribution of qemu-kvm-0.11.

If you can provide clear steps that you have used to experience the
problem, then I will be able to take this issue seriously, reproduce
it myself, and develop a fix.

:-Dustin


Re: sync guest calls made async on host - SQLite performance

2009-10-07 Thread Matthew Tippett

The benchmark used was the sqlite subtest in the phoronix test suite.

My awareness and involvement go beyond reading a magazine article; I 
can elaborate if needed, but I don't believe it is necessary.


Process for reproduction, assuming Karmic,

# apt-get install phoronix-test-suite

$ phoronix-test-suite benchmark sqlite

Answer the questions (test-names, etc, etc), it will download sqlite, 
build it and execute the test.  By default the test runs three times and 
averages the results.  The results experienced should be similar to the 
values identified at


http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3

That is approximately 12 minutes for the native run, and about 60 seconds 
for the guest.


Given that the performance under the guest is expected to be around 60 
seconds, I would suggest confirming performance there first.


Regards,

Matthew





Re: sync guest calls made async on host - SQLite performance

2009-10-07 Thread Matthew Tippett

 Original Message  
Subject: Re: sync guest calls made async on host - SQLite performance
From: Avi Kivity a...@redhat.com
To: Matthew Tippett tippe...@gmail.com
Cc: Dustin Kirkland dustin.kirkl...@gmail.com, Anthony Liguori 
anth...@codemonkey.ws, RW k...@tauceti.net, kvm@vger.kernel.org

Date: 10/07/2009 04:12 PM



What is the data set for this benchmark?  If it's much larger than guest 
RAM, but smaller than host RAM, you could be seeing the effects of read 
caching.


2GB for the host, 1.7GB accessible for the guest (although it is highly 
unlikely that the memory usage went very high at all).




Another possiblity is barriers and flushing.



That is what I am expecting; remember that the host and the guest were 
the same OS, same config, nothing special.  So the variable in the mix 
is how Ubuntu Karmic interacts with the bare metal vs the qemu-kvm 
virtual metal.


The test itself is simply 12500 sequential inserts, designed to model a 
simple single-tier system under high transactional load.  I still have 
some investigations pending on how sqlite responds at the syscall level, 
but I believe it is requesting synchronous writes, and many of them.


The consequence of the structure of the benchmark is that if there is 
any caching occurring at all from the sqlite library down, then it tends 
to show.  And I believe that it is unexpectedly showing here (since the 
writes are expected to be synchronous to a physical disk).
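One crude way to test that hypothesis without tracing SQLite itself is to time a burst of one-sector synchronous writes (file name arbitrary; a sketch, not a rigorous benchmark):

```shell
# On a spinning disk each O_DSYNC write must wait for the media
# (roughly 8 ms per rotation at 7200 rpm), so 100 writes should take
# on the order of a second.  Finishing near-instantly suggests a
# volatile cache below the filesystem is absorbing the "sync" writes.
time dd if=/dev/zero of=./dsync_probe.bin bs=512 count=100 oflag=dsync
```

Running it both on the bare-metal host and inside the guest should mirror the SQLite anomaly if a guest finishes an order of magnitude faster.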


If there is a clear rationale that the KVM community is comfortable 
with, then it becomes a distribution or deployment issue relative to 
data integrity where a synchronous write within a guest may not be 
synchronous to a physical disk.  I assume this would concern commercial 
and server users of virtual machines.


Regards,

Matthew



Re: sync guest calls made async on host - SQLite performance

2009-09-29 Thread Anthony Liguori

Matthew Tippett wrote:

Hi,

I would like to call attention to the SQLite performance under KVM in 
the current Ubuntu Alpha.


http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3

SQLite's benchmark as part of the Phoronix Test Suite is typically IO 
limited and is affected by both disk and filesystem performance.


Gotta love Phoronix's transparent methodology...

Ubuntu's Karmic release has _not_ been released yet.  For this 
particular test, Phoronix was probably using an alpha drop before Ubuntu 
switched from kvm-84 to qemu-kvm-0.11.0.


Before 0.11.0, there were known issues with qcow2 and it was not 
recommended for use in production environments.  If you read the release 
notes for 0.10.0, we made this very clear.  Because of some performance 
problems, in 0.10.x we made cache=writeback the default for qcow2.  We 
document this pretty thoroughly.  See 
http://www.qemu.org/qemu-doc.html#SEC10  Some other distros that shipped 
0.10.x made cache=none the default in order to ensure data integrity (at 
the cost of performance).


For 0.11.0, Kevin Wolf has fixed the performance/reliability issues in 
qcow2 and we now set cache=writethrough for qcow2 by default.
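For reference, the cache mode can always be pinned explicitly on the command line rather than relying on the per-format default. A sketch (the image path is a placeholder, and the binary may be `kvm` or `qemu-system-x86_64` depending on distro and version):

```shell
# Explicit cache modes on the -drive option (disk.qcow2 is a placeholder):
#   qemu-system-x86_64 -drive file=disk.qcow2,if=virtio,cache=writethrough  # host writes synchronous: safe
#   qemu-system-x86_64 -drive file=disk.qcow2,if=virtio,cache=none          # O_DIRECT: bypasses host page cache
#   qemu-system-x86_64 -drive file=disk.qcow2,if=virtio,cache=writeback     # fast, flushed data may sit in host RAM
```

With cache=writeback, a guest fsync can return while the data still sits in the host page cache, which is exactly the integrity concern discussed in this thread.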


And FWIW, Karmic has been on the 0.11.0 tree now for at least a month.

Regards,

Anthony Liguori



Re: sync guest calls made async on host - SQLite performance

2009-09-29 Thread Anthony Liguori

Avi Kivity wrote:

On 09/24/2009 10:49 PM, Matthew Tippett wrote:

The test itself is a simple usage of SQLite.  It is stock KVM as
available in 2.6.31 on Ubuntu Karmic.  So it would be the environment,
not the test.

So assuming that KVM upstream works as expected that would leave
either 2.6.31 having an issue, or Ubuntu having an issue.

Care to make an assertion on the KVM in 2.6.31?  Leaving only Ubuntu's
installation.
   


kvm has nothing to do with it, it's purely qemu.  For a long time qemu 
has defaulted to write-through caching.  This can be overridden and 
maybe that's what Ubuntu or Phoronix do.



Can some KVM developers attempt to confirm that a 'correctly'
configured KVM will not demonstrate this behaviour?
http://www.phoronix-test-suite.com/ (it is already available in newer
distributions of Fedora, openSUSE and Ubuntu).
   


A correctly configured kvm will not demonstrate this behaviour.


It was a very old kvm version (kvm-84).  But of course, the version of 
kvm is not mentioned on the Phoronix site...


Regards,

Anthony Liguori




Re: sync guest calls made async on host - SQLite performance

2009-09-29 Thread Anthony Liguori

Matthew Tippett wrote:

First up, Phoronix hasn't tuned.  It's observing the delivered state
by an OS vendor.  I started with what I believe to be the starting
point - KVM.

So the position of the KVM now is that it is either QEMU's
configuration or Ubuntu's configuration.  No further guidance or
suggestions?  Note that the prevailing response here does not see the
10 fold sqlite performance with guest vs host as a problem.
  


Again, this isn't a problem.  Ubuntu updated to a newer package and the 
problem has long been resolved upstream.


Regards,

Anthony Liguori


Re: sync guest calls made async on host - SQLite performance

2009-09-29 Thread Anthony Liguori

Matthew Tippett wrote:

I have created a launchpad bug against qemu-kvm in Ubuntu.

https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/437473

Just re-iterating, my concern isn't so much performance, but integrity 
of stock KVM configurations with server or other workloads that expect 
sync fileIO requests to be honored and synchronous to the underlying 
physical disk.


(That and ensuring that sanity reigns where a benchmark doesn't show a 
guest operating 10 times faster than a host for the same test :).


And I've closed it.  In the future, please actually reproduce a bug 
before filing it.  Reading it on a website doesn't mean it's true :-)


Regards,

Anthony Liguori


Re: sync guest calls made async on host - SQLite performance

2009-09-29 Thread Matthew Tippett

Okay, bringing the leaves of the discussions onto this thread.

As per

http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=1&single=1

The host OS (as well as the guest OS when testing under KVM) was 
running an Ubuntu 9.10 daily snapshot with the Linux 2.6.31 (final) kernel


I am attempting to get the actual daily snapshot to provide the 
precise version.  I should have that information shortly.  It is likely 
that it was within 1-2 weeks prior to the article posting.



 Ubuntu's Karmic release has _not_ been released yet.  For this
 particular test, Phoronix was probably using an alpha drop before
 Ubuntu switched from kvm-84 to qemu-kvm-0.11.0.

The probably was addressed above - it was a snapshot after the 2.6.31 
final of September 9th; the article was published on September 21st, so 
there is a finite window.


I have high confidence in the testing that Phoronix has done and don't 
expect to need to confirm the results explicitly, and I have pieced 
together the following information.  I should be able to get the actual 
daily build number but broadly it looks like it was


  Ubuntu 9.10 daily snapshot (~ 9th - 21st September)
  Linux 2.6.31 (packaged as 2.6.31-10.30 to 2.6.31-10.32)
  qemu-kvm 0.11 (packaged as 0.11.0~rc2-0ubuntu to 0.11.0~rc2-0ubuntu5)

Once I get confirmation of the actual date, digging deeper can occur.



But, if it turned out to be Ubuntu 9.10, linux 2.6.31, qemu-kvm 0.11 
would there be any concerns?




I would prefer, rather than railing against Phoronix or the results as 
presented, to ask questions to seek further information about what was 
tested rather than writing all of it off as completely invalid.


Regards,

Matthew


Re: sync guest calls made async on host - SQLite performance

2009-09-29 Thread Dustin Kirkland
On Tue, Sep 29, 2009 at 2:32 PM, Matthew Tippett tippe...@gmail.com wrote:
 I would prefer, rather than railing against Phoronix or the results as
 presented, to ask questions to seek further information about what was tested
 rather than writing all of it off as completely invalid.

Matthew-

If you could please provide very specific instructions that you have
personally used to reproduce this problem on the latest Ubuntu Karmic
qemu-kvm-0.11.0 package, that would help very much.

I have personally tried to reproduce this problem and I don't see the
problem manifested in Ubuntu's qemu-kvm package.

For the technical people on the list, our configure line in Ubuntu
looks like this:

./configure --prefix=/usr --disable-blobs --audio-drv-list=alsa pa
oss sdl --audio-card-list=ac97 es1370 sb16 cs4231a adlib gus ...

I don't think we're doing anything exotic, and we're not carrying any
major patches.  I have submitted each patch that we are carrying
upstream to the KVM and/or QEMU mailing lists as appropriate.

:-Dustin


Re: sync guest calls made async on host - SQLite performance

2009-09-29 Thread Anthony Liguori

Matthew Tippett wrote:

Okay, bringing the leafs of the discussions onto this thread.

As per

http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=1&single=1 



The host OS (as well as the guest OS when testing under KVM) was 
running an Ubuntu 9.10 daily snapshot with the Linux 2.6.31 (final) 
kernel


I am attempting to get the actual daily snapshot to provide the 
precise version.  I should have that information shortly.  It is 
likely that it was within 1-2 weeks prior to the article posting.



 Ubuntu's Karmic release has _not_ been released yet.  For this
 particular test, Phoronix was probably using an alpha drop before
 Ubuntu switched from kvm-84 to qemu-kvm-0.11.0.

The probably was addressed above - it was a snapshot after the 
2.6.31 final of September 9th; the article was published on September 
21st, so there is a finite window.


I have high confidence in the testing that Phoronix has done and don't 
expect to need to confirm the results explicitly,


Your confidence is misplaced apparently.

and I have pieced together the following information.  I should be 
able to get the actual daily build number but broadly it looks like it 
was


  Ubuntu 9.10 daily snapshot (~ 9th - 21st September)
  Linux 2.6.31 (packaged as 2.6.31-10.30 to 2.6.31-10.32)
  qemu-kvm 0.11 (packaged as 0.11.0~rc2-0ubuntu to 0.11.0~rc2-0ubuntu5)


That's extremely unlikely.

But, if it turned out to be Ubuntu 9.10, linux 2.6.31, qemu-kvm 0.11 
would there be any concerns?


It's not relevant because it's not qemu-kvm-0.11.

Regards,

Anthony Liguori


Re: sync guest calls made async on host - SQLite performance

2009-09-27 Thread Avi Kivity

On 09/25/2009 10:00 AM, RW wrote:

I think ext3 with data=writeback in a KVM and KVM started
with if=virtio,cache=none is a little bit crazy. I don't know
if this is the case with current Ubuntu Alpha but it looks
like so.
   


It's not crazy, qemu bypasses the cache with cache=none so the ext3 
data= setting is immaterial.


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: sync guest calls made async on host - SQLite performance

2009-09-27 Thread Matthew Tippett

I have created a launchpad bug against qemu-kvm in Ubuntu.

https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/437473

Just re-iterating, my concern isn't so much performance, but integrity 
of stock KVM configurations with server or other workloads that expect 
sync fileIO requests to be honored and synchronous to the underlying 
physical disk.


(That and ensuring that sanity reigns where a benchmark doesn't show a 
guest operating 10 times faster than a host for the same test :).


Regards,

Matthew


Re: sync guest calls made async on host - SQLite performance

2009-09-25 Thread RW
I've read the article a few days ago and it was interesting.
As I upgraded from 2.6.29 to 2.6.30 (Gentoo) I also saw a dramatic
increase in disk and filesystem performance. But then I realized
that the default mode for ext3 changed to data=writeback.
So I changed that back to data=ordered and performance was
as it was with 2.6.29.
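A quick way to check which data= mode is actually in effect on a given system is to read the mount table rather than guessing:

```shell
# List mounted ext3 filesystems with their options; if data= is not
# shown among them, the kernel's compiled-in default applies (which
# changed to writeback in 2.6.30, as noted above).
grep ext3 /proc/mounts || echo "no ext3 filesystems mounted"
```

The same check works inside a guest, which matters here since the guest's journalling mode is one more variable in the benchmark.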

I think ext3 with data=writeback in a KVM and KVM started
with if=virtio,cache=none is a little bit crazy. I don't know
if this is the case with current Ubuntu Alpha but it looks
like so.

Regards,
Robert

 I would like to call attention to the SQLite performance under KVM in
 the current Ubuntu Alpha.

 http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3

 SQLite's benchmark as part of the Phoronix Test Suite is typically IO
 limited and is affected by both disk and filesystem performance.

 When comparing SQLite under the host against the guest OS, there is an
 order of magnitude _IMPROVEMENT_ in the measured performance of the guest.

 I am expecting that the host is doing synchronous IO operations but
 somewhere in the stack the calls are ultimately being made asynchronous
 or at the very least batched for writing.

 On the surface, this represents a data integrity issue and I am
 interested in the KVM community's thoughts on this behaviour.  Is it
 expected? Is it acceptable?  Is it safe?



Re: sync guest calls made async on host - SQLite performance

2009-09-25 Thread Avi Kivity

On 09/24/2009 10:49 PM, Matthew Tippett wrote:

The test itself is a simple usage of SQLite.  It is stock KVM as
available in 2.6.31 on Ubuntu Karmic.  So it would be the environment,
not the test.

So assuming that KVM upstream works as expected that would leave
either 2.6.31 having an issue, or Ubuntu having an issue.

Care to make an assertion on the KVM in 2.6.31?  Leaving only Ubuntu's
installation.
   


kvm has nothing to do with it, it's purely qemu.  For a long time qemu 
has defaulted to write-through caching.  This can be overridden and 
maybe that's what Ubuntu or Phoronix do.



Can some KVM developers attempt to confirm that a 'correctly'
configured KVM will not demonstrate this behaviour?
http://www.phoronix-test-suite.com/ (it is already available in newer
distributions of Fedora, openSUSE and Ubuntu).
   


A correctly configured kvm will not demonstrate this behaviour.




Re: sync guest calls made async on host - SQLite performance

2009-09-25 Thread Matthew Tippett
First up, Phoronix hasn't tuned.  It's observing the delivered state
by an OS vendor.  I started with what I believe to be the starting
point - KVM.

So the position of the KVM now is that it is either QEMU's
configuration or Ubuntu's configuration.  No further guidance or
suggestions?  Note that the prevailing response here does not see the
10 fold sqlite performance with guest vs host as a problem.

I'll move this discussion to qemu then, is there any kvm developers
who are willing to maintain this position in a discussion with QEMU?

Regards... Matthew


On 9/25/09, Avi Kivity a...@redhat.com wrote:
 On 09/24/2009 10:49 PM, Matthew Tippett wrote:
 The test itself is a simple usage of SQLite.  It is stock KVM as
 available in 2.6.31 on Ubuntu Karmic.  So it would be the environment,
 not the test.

 So assuming that KVM upstream works as expected that would leave
 either 2.6.31 having an issue, or Ubuntu having an issue.

 Care to make an assertion on the KVM in 2.6.31?  Leaving only Ubuntu's
 installation.


 kvm has nothing to do with it, it's purely qemu.  For a long time qemu
 has defaulted to write-through caching.  This can be overridden and
 maybe that's what Ubuntu or Phoronix do.

 Can some KVM developers attempt to confirm that a 'correctly'
 configured KVM will not demonstrate this behaviour?
 http://www.phoronix-test-suite.com/ (it is already available in newer
 distributions of Fedora, openSUSE and Ubuntu).


 A correctly configured kvm will not demonstrate this behaviour.

 --
 Do not meddle in the internals of kernels, for they are subtle and quick to
 panic.



-- 
Sent from my mobile device


Re: sync guest calls made async on host - SQLite performance

2009-09-25 Thread Avi Kivity

On 09/25/2009 02:33 PM, Matthew Tippett wrote:

First up, Phoronix hasn't tuned.  It's observing the delivered state
by an OS vendor.  I started with what I believe to be the starting
point - KVM.

So the position of the KVM now is that it is either QEMU's
configuration or Ubuntu's configuration.  No further guidance or
suggestions?  Note that the prevailing response here does not see the
10 fold sqlite performance with guest vs host as a problem.

I'll move this discussion to qemu then, is there any kvm developers
who are willing to maintain this position in a discussion with QEMU?
   


I suggest you start at the other end - verify with Ubuntu that their 
configuration is safe.





Re: sync guest calls made async on host - SQLite performance

2009-09-24 Thread Avi Kivity

On 09/23/2009 06:58 PM, Matthew Tippett wrote:

Hi,

I would like to call attention to the SQLite performance under KVM in 
the current Ubuntu Alpha.


http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3

SQLite's benchmark as part of the Phoronix Test Suite is typically IO 
limited and is affected by both disk and filesystem performance.


When comparing SQLite under the host against the guest OS,  there is 
an order of magnitude _IMPROVEMENT_ in the measured performance  of 
the guest.


I am expecting that the host is doing synchronous IO operations but 
somewhere in the stack the calls are ultimately being made 
asynchronous or at the very least batched for writing.


On the surface, this represents a data integrity issue and  I am 
interested in the KVM community's thoughts on this behaviour.  Is it 
expected? Is it acceptable?  Is it safe?


qemu defaults to write-through caching, so there is no data integrity 
concern.





Re: sync guest calls made async on host - SQLite performance

2009-09-24 Thread Matthew Tippett
Thanks Avi,

I am still trying to reconcile your statement with the potential
data risks and the numbers observed.

My read of your response is that the guest sees a consistent view -
the data is committed to the virtual disk device.  Does a synchronous
write within the guest trigger a synchronous write of the virtual
device within the host?

I don't think offering SQLite users a 10 fold increase in performance
with no data integrity risks just by using KVM is a sane proposition.

Regards... Matthew


On 9/24/09, Avi Kivity a...@redhat.com wrote:
 On 09/23/2009 06:58 PM, Matthew Tippett wrote:
 Hi,

 I would like to call attention to the SQLite performance under KVM in
 the current Ubuntu Alpha.

 http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3

 SQLite's benchmark as part of the Phoronix Test Suite is typically IO
 limited and is affected by both disk and filesystem performance.

 When comparing SQLite under the host against the guest OS,  there is
 an order of magnitude _IMPROVEMENT_ in the measured performance  of
 the guest.

 I am expecting that the host is doing synchronous IO operations but
 somewhere in the stack the calls are ultimately being made
 asynchronous or at the very least batched for writing.

 On the surface, this represents a data integrity issue and  I am
 interested in the KVM community's thoughts on this behaviour.  Is it
 expected? Is it acceptable?  Is it safe?

 qemu defaults to write-through caching, so there is no data integrity
 concern.

 --
 Do not meddle in the internals of kernels, for they are subtle and quick to
 panic.



-- 
Sent from my mobile device


Re: sync guest calls made async on host - SQLite performance

2009-09-24 Thread Avi Kivity

On 09/24/2009 03:31 PM, Matthew Tippett wrote:

Thanks Avi,

I am still trying to reconcile your statement with the potential
data risks and the numbers observed.

My read of your response is that the guest sees a consistent view -
the data is committed to the virtual disk device.  Does a synchronous
write within the guest trigger a synchronous write of the virtual
device within the host?
   


Yes.


I don't think offering SQLite users a 10 fold increase in performance
with no data integrity risks just by using KVM is a sane proposition.
   


It isn't, my guess is that the test setup is broken somehow.




Re: sync guest calls made async on host - SQLite performance

2009-09-24 Thread Matthew Tippett
The test itself is a simple usage of SQLite.  It is stock KVM as
available in 2.6.31 on Ubuntu Karmic.  So it would be the environment,
not the test.

So assuming that KVM upstream works as expected that would leave
either 2.6.31 having an issue, or Ubuntu having an issue.

Care to make an assertion on the KVM in 2.6.31?  Leaving only Ubuntu's
installation.

Can some KVM developers attempt to confirm that a 'correctly'
configured KVM will not demonstrate this behaviour?
http://www.phoronix-test-suite.com/ (it is already available in newer
distributions of Fedora, openSUSE and Ubuntu).

Regards... Matthew


On 9/24/09, Avi Kivity a...@redhat.com wrote:
 On 09/24/2009 03:31 PM, Matthew Tippett wrote:
 Thanks Avi,

 I am still trying to reconcile your statement with the potential
 data risks and the numbers observed.

 My read of your response is that the guest sees a consistent view -
 the data is committed to the virtual disk device.  Does a synchronous
 write within the guest trigger a synchronous write of the virtual
 device within the host?


 Yes.

 I don't think offering SQLite users a 10 fold increase in performance
 with no data integrity risks just by using KVM is a sane proposition.


 It isn't, my guess is that the test setup is broken somehow.

 --
 Do not meddle in the internals of kernels, for they are subtle and quick to
 panic.



-- 
Sent from my mobile device


Re: sync guest calls made async on host - SQLite performance

2009-09-24 Thread Ian Woodstock
The Phoronix Test Suite is designed to test a (client) operating
system out of the box and it does a good job at that.
It's certainly valid to run PTS inside a virtual machine but
you're going to need to tune the host, in this case Karmic.

The way you'd configure a client operating system is obviously
different from a server - for example, selecting the right I/O
elevator; in the case of KVM you'll certainly see benefits there.
You'd also want to make sure that the guest OS has been optimally
installed - for example in a VMware environment you'd install VMware
tools - in KVM you'd ensure that you're using VirtIO in the guest for
the same reason.
Then you'd also look at optimizations like cpu pinning, use of huge pages, etc.

Just taking a generic installation of Karmic out of the box and
running VMs isn't going to give you real insight into the performance
of KVM. When deploying Linux as a virtualization host you should be
tuning it.
It would certainly be appropriate to have a spin of Karmic that was
designed to run as a virtualization host.

Maybe it would be more appropriate to actually run the test in a tuned
environment and present some results rather than ask a developer to
prove KVM is working.



 The test itself is a simple usage of SQLite.  It is stock KVM as
 available in 2.6.31 on Ubuntu Karmic.  So it would be the environment,
 not the test.

 So assuming that KVM upstream works as expected that would leave
 either 2.6.31 having an issue, or Ubuntu having an issue.

 Care to make an assertion on the KVM in 2.6.31?  Leaving only Ubuntu's
 installation.

 Can some KVM developers attempt to confirm that a 'correctly'
 configured KVM will not demonstrate this behaviour?
 http://www.phoronix-test-suite.com/ (it is already available in newer
 distributions of Fedora, openSUSE and Ubuntu).

 Regards... Matthew


On 9/24/09, Avi Kivity a...@redhat.com wrote:
 On 09/24/2009 03:31 PM, Matthew Tippett wrote:
 Thanks Avi,

 I am still trying to reconcile your statement with the potential
 data risks and the numbers observed.

 My read of your response is that the guest sees a consistent view -
 the data is committed to the virtual disk device.  Does a synchronous
 write within the guest trigger a synchronous write of the virtual
 device within the host?


 Yes.

 I don't think offering SQLite users a 10 fold increase in performance
 with no data integrity risks just by using KVM is a sane proposition.


 It isn't, my guess is that the test setup is broken somehow.

 --
 Do not meddle in the internals of kernels, for they are subtle and quick to
 panic.



--
Sent from my mobile device
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: sync guest calls made async on host - SQLite performance

2009-09-24 Thread Matthew Tippett

Thanks for your response.

Remember that I am not raising questions about the relative performance 
of KVM guests in general.  The prevailing opinion is that guest 
performance ranges anywhere from considerably slower than native to 
around the same performance as native, depending on workload, guest 
tuning and configuration.


I am looking further into a particular anomalous result.  The result is 
that SQLite experiences an _order of magnitude_ (10x) advantage when 
running under KVM.


My only rationalization of this is, as the subject suggests, that 
somewhere between the host's HDD and the guest's file layer something 
is making synchronous calls asynchronous and batching writes together.
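
A quick way to test the swallowed-sync hypothesis (a sketch added here 
for illustration, not something from the original report; the file path 
is arbitrary and the millisecond threshold in the comment assumes 
rotational media) is to time fdatasync() inside the guest:

```python
import os
import tempfile
import time

def timed_fsyncs(path, count=200, block=b"x" * 4096):
    """Append `count` blocks to `path`, fdatasync() after each write,
    and return the mean seconds per fdatasync."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        start = time.monotonic()
        for _ in range(count):
            os.write(fd, block)
            os.fdatasync(fd)
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
    return elapsed / count

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        per_sync = timed_fsyncs(path)
    finally:
        os.unlink(path)
    # On a 7200 RPM disk an honest sync costs on the order of
    # milliseconds; consistently sub-millisecond numbers suggest the
    # syncs are being absorbed somewhere in the stack.
    print("%.3f ms per fdatasync" % (per_sync * 1000))
```

Running this on the host and in the guest on the same backing device 
should give numbers of the same order if syncs are really reaching the 
disk.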


Intuitively this suggests that running SQLite under a KVM-virtualized 
environment puts the data at considerably higher risk than a 
non-virtualized environment does in the case of system failure.  
Performance is inconsequential in this case.


Focusing in particular on one response

 Maybe it would be more appropriate to actually run the test in a tuned
 environment and present some results rather than ask a developer to
 prove KVM is working.

I am not asking for comparative performance results; I am looking for 
more data that indicates whether the anomalous performance increase is 
an Ubuntu+KVM+2.6.31 thing, a KVM+2.6.31 thing, or a KVM thing.


I am looking to the KVM developers to either confirm that the behaviour 
is safe and expected, or to provide other data points indicating that 
it is an Ubuntu+2.6.31 or a 2.6.31 thing by showing that in a properly 
configured KVM environment the performance sits in the expected range 
of considerably slower to around the same speed.
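
For reference, the host-side caching policy that determines whether a 
guest flush reaches stable storage is selected on the qemu command 
line (a sketch; the image path is hypothetical, the cache= option 
names are qemu's standard ones):

```shell
# cache=writethrough: the backing file is opened O_DSYNC, so writes the
# guest considers complete are on stable storage - the safe choice.
qemu-system-x86_64 -drive file=/path/guest.qcow2,if=virtio,cache=writethrough

# cache=writeback: the host page cache absorbs writes; durability then
# depends on guest flushes being honoured all the way down the stack.
qemu-system-x86_64 -drive file=/path/guest.qcow2,if=virtio,cache=writeback

# cache=none: O_DIRECT, bypassing the host page cache entirely.
qemu-system-x86_64 -drive file=/path/guest.qcow2,if=virtio,cache=none
```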


Regards,

Matthew

 Original Message  
Subject: Re: sync guest calls made async on host - SQLite performance
From: Ian Woodstock ian.woodst...@gmail.com
To: kvm@vger.kernel.org
Date: 09/24/2009 10:11 PM


[quoted message trimmed - quoted text appears in full above]



sync guest calls made async on host - SQLite performance

2009-09-23 Thread Matthew Tippett

Hi,

I would like to call attention to the SQLite performance under KVM in 
the current Ubuntu Alpha.


http://www.phoronix.com/scan.php?page=articleitem=linux_2631_kvmnum=3

SQLite's benchmark as part of the Phoronix Test Suite is typically IO 
limited and is affected by both disk and filesystem performance.


When comparing SQLite running on the host against the guest OS, there 
is an order of magnitude _IMPROVEMENT_ in the measured performance of 
the guest.


I am expecting that the host is doing synchronous I/O operations, but 
somewhere in the stack the calls are ultimately being made 
asynchronous, or at the very least batched for writing.
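
To illustrate why lost syncs translate into a large SQLite speedup 
(a sketch for illustration; PRAGMA synchronous is SQLite's own knob, 
used here only as a stand-in for what a misbehaving storage stack 
would effectively do), compare per-commit insert times with the fsync 
forced versus skipped:

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(path, synchronous, rows=100):
    """Time `rows` individually committed INSERTs under the given
    PRAGMA synchronous setting.  FULL fsyncs on every commit; OFF
    skips the fsync - roughly what a swallowed sync looks like."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA synchronous = %s" % synchronous)
    conn.execute("CREATE TABLE IF NOT EXISTS t (v INTEGER)")
    start = time.monotonic()
    for i in range(rows):
        conn.execute("INSERT INTO t VALUES (?)", (i,))
        conn.commit()  # each commit ends its own transaction
    conn.close()
    return time.monotonic() - start

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    full = timed_inserts(os.path.join(d, "full.db"), "FULL")
    off = timed_inserts(os.path.join(d, "off.db"), "OFF")
    print("FULL: %.2fs  OFF: %.2fs" % (full, off))
```

On rotational media the OFF case is typically an order of magnitude 
faster, which is exactly the shape of the anomaly reported here.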


On the surface this represents a data integrity issue, and I am 
interested in the KVM community's thoughts on this behaviour.  Is it 
expected?  Is it acceptable?  Is it safe?


Regards,

Matthew