On Fri, 18 Sep 2009 03:01:42 am Christoph Hellwig wrote:
Err, I'll take this one back for now pending some more discussion.
What we need more urgently is the writeback cache flag, which is now
implemented in qemu, patch following ASAP.
OK, still catching up on mail. I'll push them out of the
On 08/28/2009 04:15 AM, Rusty Russell wrote:
On Thu, 27 Aug 2009 08:34:19 pm Avi Kivity wrote:
There are two possible semantics to cache=writeback:
- simulate a drive with a huge write cache; use fsync() to implement
barriers
- tell the host that we aren't interested in data integrity, lie to the
guest to get best performance
On 08/27/2009 01:43 PM, Rusty Russell wrote:
Are you claiming qcow2 is unusual? I can believe snapshot is less common,
though I use it all the time.
You'd normally have to add a feature for something like this. I don't
think this is different.
Why do we need to add a feature for
I just wanted to get this small fix for the sane cache modes out ASAP.
Maybe the picture will be clearer once we also add support for
properly flagging volatile write caches.
This is what I currently have, including experimental support in qemu
that I'm going to send out soon:
Index:
On Fri, 21 Aug 2009 06:26:16 am Christoph Hellwig wrote:
Currently virtio-blk doesn't set any QUEUE_ORDERED_ flag by default, which
means it does not allow filesystems to use barriers. But the typical use
case for virtio-blk is to use a backend that uses synchronous I/O
Really? Does qemu open
On Thursday, 20 August 2009 22:56:16, Christoph Hellwig wrote:
Currently virtio-blk doesn't set any QUEUE_ORDERED_ flag by default, which
means it does not allow filesystems to use barriers. But the typical use
case for virtio-blk is to use a backend that uses synchronous I/O, and in
that case we can simply set QUEUE_ORDERED_DRAIN to make the block layer