On Tue, Nov 03, 2015 at 10:08:13AM +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2015-11-02 at 22:45 +0100, Arnd Bergmann wrote:
> > > Then I would argue for naming this differently. Make it an optional
> > > hint "DMA_ATTR_HIGH_PERF" or something like that. Whether this is
> > > achieved via
The whole series looks good to me. Thanks for picking this work up!
Reviewed-by: Christoph Hellwig <h...@lst.de>
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kern
On Fri, Aug 07, 2015 at 06:45:26PM +0200, Peter Zijlstra wrote:
It's just the swait_wake_all() that is not. The entire purpose of them
was to have something that allows bounded execution (RT and all).
Still not sure if that might be too big a burden for mainline, but at
least it's not as severe.
On Fri, Aug 07, 2015 at 01:14:15PM +0200, Peter Zijlstra wrote:
On that, we cannot convert completions to swait. Because swait wake_all
must not happen from IRQ context, and complete_all() typically is used
from just that.
If swait queues aren't usable from IRQ context they will be fairly
Hi Nic,
Al has been rewriting the vhost code to use iov_iter primitives,
can you please rebase it on top of that instead of using the obsolete
infrastructure?
+static int vhost_blk_req_submit(struct vhost_blk_req *req, struct file *file)
+{
+
+ struct inode *inode = file->f_mapping->host;
+ struct block_device *bdev = inode->i_bdev;
+ int ret;
Please just pass the block_device directly instead of a file struct.
+
+ ret =
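A minimal sketch of the signature change being asked for, with hypothetical names (kernel-context code, not compilable on its own; the lookup helper is a guess at where the caller would resolve the bdev):

```c
/* Hypothetical: the submit path takes the block_device directly,
 * so the per-request code never dereferences the file struct. */
static int vhost_blk_req_submit(struct vhost_blk_req *req,
				struct block_device *bdev)
{
	/* ... build and submit the bio against bdev ... */
}

/* Hypothetical: resolve the bdev once at backend-setup time. */
static struct block_device *vhost_blk_bdev(struct file *file)
{
	return file->f_mapping->host->i_bdev;
}
```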
Does anyone know when the kvm forum schedule for this year will be
published? I'm especially curious if Friday will be a full conference
day or if it makes sense to fly back in the afternoon.
At least after review is done I really think this patch should be folded
into the previous one.
Some more comments below:
@@ -58,6 +58,12 @@ struct virtblk_req
struct bio *bio;
struct virtio_blk_outhdr out_hdr;
struct virtio_scsi_inhdr in_hdr;
+ struct work_struct
On Tue, Aug 07, 2012 at 04:47:13PM +0800, Asias He wrote:
1) Ramdisk device
With bio-based IO path, sequential read/write, random read/write
IOPS boost : 28%, 24%, 21%, 16%
Latency improvement: 32%, 17%, 21%, 16%
2) Fusion IO device
With bio-based IO path,
On Thu, Aug 02, 2012 at 02:43:04PM +0800, Asias He wrote:
Even if it has a payload, waiting is highly suboptimal and it should
use a non-blocking sequencing like it is done in the request layer.
So, for REQ_FLUSH, what we need is to send out the VIRTIO_BLK_T_FLUSH and
not wait.
If it's
On Thu, Aug 02, 2012 at 02:25:56PM +0800, Asias He wrote:
We need to support both REQ_FLUSH and REQ_FUA for bio based path since
it does not get the sequencing of REQ_FUA into REQ_FLUSH that request
based drivers can request.
REQ_FLUSH is emulated by:
1. Send VIRTIO_BLK_T_FLUSH to device
On Mon, Jul 30, 2012 at 09:31:06AM +0200, Paolo Bonzini wrote:
You only need to add REQ_FLUSH support. The virtio-blk protocol does
not support REQ_FUA, because there's no easy way to do it in userspace.
A bio-based driver needs to handle both REQ_FLUSH and REQ_FUA as it does
not get the
On Mon, Jul 30, 2012 at 12:43:12PM +0800, Asias He wrote:
I think we can add REQ_FLUSH REQ_FUA support to bio path and that
deserves another patch.
Adding it is a requirement for merging the code.
On Mon, Jul 30, 2012 at 11:25:51AM +0930, Rusty Russell wrote:
I consider this approach a half-way step. Quick attempts on my laptop
and I couldn't find a case where the bio path was a loss, but in theory
if the host wasn't doing any reordering and it was a slow device, you'd
want the guest
On Wed, Jul 18, 2012 at 08:42:21AM -0500, Anthony Liguori wrote:
If you add support for a new command, you need to provide userspace
a way to disable this command. If you change what gets reported for
VPD, you need to provide userspace a way to make VPD look like what
it did in a previous
Please send a version that does direct block I/O similar to xen-blkback
for now. If we get proper in-kernel aio support one day you can add
back file backend support.
On Mon, Apr 16, 2012 at 09:34:41AM +0100, Stefan Hajnoczi wrote:
On Sun, Apr 15, 2012 at 5:16 AM, Ron Edison r...@idthq.com wrote:
The server is a Dell R710 with an H700 controller with 1gb of nvcache.
Writeback cache is enabled on the controller. There is a mix of linux and
windows
On Mon, Jan 02, 2012 at 05:12:00PM +0100, Paolo Bonzini wrote:
On 01/01/2012 05:45 PM, Stefan Hajnoczi wrote:
By the way, drivers for solid-state devices can set QUEUE_FLAG_NONROT
to hint that seek time optimizations may be sub-optimal. NBD and
other virtual/pseudo device drivers set this
On Sun, Jan 01, 2012 at 04:45:42PM +, Stefan Hajnoczi wrote:
win. The fact that you added batching suggests there is some benefit
to what the request-based code path does. So find out what's good
about the request-based code path and how to get the best of both
worlds.
Batching pretty
On Mon, Jan 02, 2012 at 05:18:13PM +0100, Paolo Bonzini wrote:
I tried a few times, and the only constant measurable
thing was that it regressed performance when used for rotating devices
in a few benchmarks.
Were you trying with cache=none or writeback? For cache=none,
that's exactly
On Thu, Dec 22, 2011 at 12:20:01PM +, Stefan Hajnoczi wrote:
virtblk_make_request() checks that the queue is not plugged. Do we
need to do that here too?
bio based drivers don't have a queue that could be plugged.
Thanks a lot Lucas,
I've applied the patches. And sorry for the delay, I'm pretty busy at the
moment.
On Fri, Nov 18, 2011 at 11:25:38AM -0500, Michael Waite wrote:
Hi,
The Open Virtualization Alliance is going to be having a webinar on
December 8th which is intended to help promote KVM as an enterprise
class hypervisor. I see so much great engineering work going on to
make KVM a really
On Tue, Nov 08, 2011 at 04:41:40PM +0200, Avi Kivity wrote:
On 11/06/2011 03:35 AM, Alexander Graf wrote:
To quickly get going, just execute the following as user:
$ ./Documentation/run-qemu.sh -r / -a init=/bin/bash
This will drop you into a shell on your rootfs.
Doesn't work
On Tue, Nov 08, 2011 at 04:57:04PM +0200, Avi Kivity wrote:
Running qemu -snapshot on the actual root block device is the only
safe way to reuse the host installation, although it gets a bit
complicated if people have multiple devices mounted into the namespace.
How is -snapshot any
On Tue, Nov 08, 2011 at 05:26:03PM +0200, Pekka Enberg wrote:
On Tue, Nov 8, 2011 at 4:52 PM, Christoph Hellwig h...@infradead.org wrote:
Nevermind that running virtfs as a rootfs is a really dumb idea. You
do not want to run a VM that has a rootfs that gets changed all the
time behind
On Fri, Nov 04, 2011 at 10:38:21AM +0200, Pekka Enberg wrote:
Hi Linus,
Please consider pulling the latest KVM tool tree from:
There still is absolutely zero reason to throw it in the kernel tree.
Please prepare a nice standalone git repository and tarball for it.
On Fri, Nov 04, 2011 at 02:35:18PM +0200, Pekka Enberg wrote:
We are reusing kernel code and headers and I am not interested in
copying them over. Living in the kernel tree is part of the design,
whether you like it or not.
That's pretty much a blanket argument for throwing everything into the
On Thu, Nov 03, 2011 at 06:12:49PM +1030, Rusty Russell wrote:
The old documentation is left over from when we used a structure with
strategy pointers.
Looks good,
Reviewed-by: Christoph Hellwig h...@lst.de
On Thu, Nov 03, 2011 at 06:12:50PM +1030, Rusty Russell wrote:
Remove wrapper functions. This makes the allocation type explicit in
all callers; I used GFP_KERNEL where it seemed obvious, left it at
GFP_ATOMIC otherwise.
Looks good,
Reviewed-by: Christoph Hellwig h...@lst.de
On Thu, Nov 03, 2011 at 06:12:51PM +1030, Rusty Russell wrote:
Based on patch by Christoph for virtio_blk speedup:
Please credit it to Stefan - he also sent a pointer to his original
version in reply to the previous thread.
Also shouldn't virtqueue_kick have kerneldoc comments?
I also noticed
On Wed, Nov 02, 2011 at 01:49:36PM +1030, Rusty Russell wrote:
I thought it was still a WIP?
The whole series - yes. This patch (and the serial number rewrite): no
- these are pretty much rock solid.
Since the problem is contention on the lock inside the block layer, the
simplest solution is
On Thu, Oct 06, 2011 at 12:51:39AM +0200, Boaz Harrosh wrote:
I have some questions.
- Could we later use this bio_map_sg() to implement blk_rq_map_sg() and
remove some duplicated code?
I didn't even think about that, but it actually looks very possible
to factor the meat in the for each
On Thu, Oct 06, 2011 at 12:22:14PM +1030, Rusty Russell wrote:
On Wed, 05 Oct 2011 15:54:08 -0400, Christoph Hellwig h...@infradead.org
wrote:
Add an alternate I/O path that implements ->make_request for virtio-blk.
This is required for high IOPs devices which get slowed down to 1/5th
Split virtqueue_kick to be able to do the actual notification outside the
lock protecting the virtqueue. This patch was originally done by
Stefan Hajnoczi, but I can't find the original one anymore and had to
recreate it from memory. Pointers to the original or corrections for
the commit
issues:
- it doesn't implement FUA and FLUSH requests yet
- it hardcodes which I/O path to choose
Signed-off-by: Christoph Hellwig h...@lst.de
Index: linux-2.6/drivers/block/virtio_blk.c
===
--- linux-2.6.orig/drivers/block
Signed-off-by: Christoph Hellwig h...@lst.de
Index: linux-2.6/drivers/block/virtio_blk.c
===
--- linux-2.6.orig/drivers/block/virtio_blk.c 2011-10-03 19:55:29.061215040
+0200
+++ linux-2.6/drivers/block/virtio_blk.c 2011-10
This patchset allows the virtio-blk driver to support much higher IOP
rates which can be driven out of modern PCI-e flash devices. At this
point it really is just a RFC due to various issues.
The first four patches are infrastructure that could go in fairly
soon as far as I'm concerned. Patch 5
If we want to do bio-based I/O in virtio-blk we have to implement reading
the serial attribute ourselves. Do that and also prepare struct virtblk_req
for dealing with different types of requests.
Signed-off-by: Christoph Hellwig h...@lst.de
Index: linux-2.6/drivers/block/virtio_blk.c
Add a helper to map a bio to a scatterlist, modelled after blk_rq_map_sg.
This helper is useful for any driver that wants to create a scatterlist
from its ->make_request method.
Signed-off-by: Christoph Hellwig h...@lst.de
Index: linux-2.6/block/blk-merge.c
On Wed, Oct 05, 2011 at 04:31:16PM -0400, Vivek Goyal wrote:
So you no longer believe that request queue overhead can be brought
down to mangeable levels for these fast devices. And instead go for
bio based drivers and give up on merging and implement own FLUSH/FUA
machinery.
Not in a
On Thu, Aug 11, 2011 at 04:40:59PM +1000, David Gibson wrote:
Linus, please apply
hugetlbfs tracks the current usage of hugepages per hugetlbfs
mountpoint. To correctly track this when hugepages are released, it
must find the right hugetlbfs super_block from the struct page
available in
Any progress on these patches?
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Mon, Aug 01, 2011 at 01:46:33PM +0800, Liu Yuan wrote:
- I focused on using vfs interfaces in the kernel, so that I can
use it for file-backed devices.
Our use-case scenario is mostly file-backed images.
The vhost-blk that uses Linux AIO also supports file-backed images.
Actually, I have
On Fri, Jul 29, 2011 at 03:59:53PM +0800, Liu Yuan wrote:
I noted bdrv_aio_multiwrite() does the merging job, but I am not sure
Just like I/O schedulers it's actually fairly harmful on high IOPS,
low latency devices. I've just started doing a lot of qemu benchmarks,
and disabling that multiwrite
On Thu, Jul 28, 2011 at 10:29:05PM +0800, Liu Yuan wrote:
From: Liu Yuan tailai...@taobao.com
Vhost-blk driver is an in-kernel accelerator, intercepting the
IO requests from KVM virtio-capable guests. It is based on the
vhost infrastructure.
This is supposed to be a module over latest
On Fri, Jul 22, 2011 at 04:51:17PM +0200, Hannes Reinecke wrote:
Not every command is supported for every device type. This patch adds
a check for rejecting unsupported commands.
Signed-off-by: Hannes Reinecke h...@suse.de
This seems to conflict with Markus' series. But if we want to invest
any
On Mon, Jul 25, 2011 at 10:14:13AM +0200, Alexander Graf wrote:
So instead of thinking a bit and trying to realize that there might be a
reason people don't want all their user space in the kernel tree you go ahead
and start your own crusade of creating a new user space. Great. That's how I
On Mon, Jul 25, 2011 at 01:08:10PM +0200, Ingo Molnar wrote:
Fact is that developing ABIs within an integrated project is
*amazingly* powerful. You should try it one day, instead of
criticizing it :-)
I've been doing this long before you declared it the Rosetta Stone. Some
of the worst ABIs
On Mon, Jul 25, 2011 at 01:34:25PM +0200, Olivier Galibert wrote:
You need someone with taste in the loop. But if you do, evolved is
always better than designed before you actually know what you need.
As I'm sure you perfectly know, for the matter.
Neither is actually helpful. You reall
On Tue, Jun 14, 2011 at 05:30:24PM +0200, Hannes Reinecke wrote:
Which is exactly the problem I was referring to.
When using more than one channel the request ordering
_as seen by the initiator_ has to be preserved.
This is quite hard to do from a device's perspective;
it might be able to
On Sun, Jun 12, 2011 at 10:51:41AM +0300, Michael S. Tsirkin wrote:
For example, if the driver is crazy enough to put
all write requests on one queue and all barriers
on another one, how is the device supposed to ensure
ordering?
There is no such things as barriers in SCSI. The thing that
On Wed, Jun 29, 2011 at 10:23:26AM +0200, Paolo Bonzini wrote:
I agree here, in fact I misread Hannes's comment as if a driver
uses more than one queue it is responsibility of the driver to
ensure strict request ordering. If you send requests to different
queues, you know that those requests
On Wed, Jun 29, 2011 at 10:39:42AM +0100, Stefan Hajnoczi wrote:
I think we're missing a level of addressing. We need the ability to
talk to multiple target ports in order for list target ports to make
sense. Right now there is one implicit target that handles all
commands. That means there
On Wed, Jun 29, 2011 at 12:23:38PM +0200, Hannes Reinecke wrote:
The general idea here is that we can support NPIV.
With NPIV we'll have several scsi_hosts, each of which is assigned a
different set of LUNs by the array.
With virtio we need to able to react on LUN remapping on the array
side,
On Sun, Jun 19, 2011 at 10:48:41AM +0300, Michael S. Tsirkin wrote:
diff --git a/block/blk-core.c b/block/blk-core.c
index 4ce953f..a8672ec 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -433,6 +433,8 @@ void blk_run_queue(struct request_queue *q)
On Thu, Jun 16, 2011 at 09:21:03AM +0300, Pekka Enberg wrote:
And btw, we use sync_file_range()
Which doesn't help you at all. sync_file_range is just a hint for VM
writeback, but never commits filesystem metadata nor the physical
disk's write cache. In short it's a completely dangerous
On Thu, Jun 16, 2011 at 12:34:04PM +0300, Pekka Enberg wrote:
Hi Christoph,
On Thu, Jun 16, 2011 at 09:21:03AM +0300, Pekka Enberg wrote:
And btw, we use sync_file_range()
On Thu, Jun 16, 2011 at 12:24 PM, Christoph Hellwig h...@infradead.org
wrote:
Which doesn't help you at all
On Thu, Jun 16, 2011 at 12:57:36PM +0300, Pekka Enberg wrote:
Uh-oh. Someone needs to apply this patch to sync_file_range():
There actually are a few cases where using it makes sense. It's just
the minority.
On Thu, Jun 16, 2011 at 01:22:30PM +0200, Ingo Molnar wrote:
Such as? I don't think apps can actually know whether disk blocks
have been 'instantiated' by a particular filesystem or not, so the
manpage:
In general they can't. The only good use case for sync_file_range
is to paper
On Thu, Jun 16, 2011 at 01:40:45PM +0200, Ingo Molnar wrote:
Filesystems that cannot guarantee that should map their
sync_file_range() implementation to fdatasync() or so, right?
Filesystems aren't even told about sync_file_range, it's purely a VM
thing, which is the root of the problem.
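The distinction this thread keeps circling back to can be shown as a short userspace sketch (illustrative helper, not from the thread; Linux assumed): fdatasync() is the integrity call, while sync_file_range() is only a writeback hint that commits neither filesystem metadata nor the disk's write cache.

```c
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Illustrative helper: write a buffer and make it durable.
 * fdatasync() commits the data, the metadata needed to read it back,
 * and flushes the device's volatile write cache on filesystems that
 * support it. sync_file_range() would do none of that -- it only
 * nudges VM writeback, which is why relying on it for integrity is
 * called dangerous above. */
int durable_write(const char *path, const void *buf, size_t len)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return -1;
	if (write(fd, buf, len) != (ssize_t)len || fdatasync(fd) != 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}
```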
On Wed, Jun 15, 2011 at 09:46:10AM -0400, Federico Simoncelli wrote:
qemu-img currently writes disk images using writeback and filling
up the cache buffers which are then flushed by the kernel preventing
other processes from accessing the storage.
This is particularly bad in cluster
On Fri, May 20, 2011 at 07:12:39PM +0200, Jan Kiszka wrote:
Upstream's and qemu-kvm's kvm_cpu_exec are not logically equivalent so
s/not/now/?
On Wed, May 11, 2011 at 08:46:56PM +1000, Paul Mackerras wrote:
arch/powerpc/sysdev/xics/icp-native.c.
What kernel tree do I need to actually have that file?
On Tue, May 03, 2011 at 04:05:37PM +0200, Jan Kiszka wrote:
This helps reducing our build-time checks for feature support in the
available Linux kernel headers. And it helps users that do not have
sufficiently recent headers installed on their build machine.
Header update is triggered via
On Tue, Apr 26, 2011 at 04:01:00PM +0200, Jan Kiszka wrote:
+static bool modifying_bit(uint64_t old, uint64_t new, uint64_t mask)
+{
+    return (old ^ new) & mask;
+}
+
A more usual name would be toggle_bit. But you're passing in a mask
to be modified, so it would be more a toggle_bits or
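For reference, the helper under review as a self-contained userspace snippet (the archive strips the `&` operator from the quoted patch; this is my reading of it):

```c
#include <stdbool.h>
#include <stdint.h>

/* As quoted in the thread: true iff old and new differ in at least
 * one of the bits selected by mask. */
bool modifying_bit(uint64_t old, uint64_t new, uint64_t mask)
{
	return (old ^ new) & mask;
}
```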
On Tue, Apr 26, 2011 at 07:06:34PM +0200, Jan Kiszka wrote:
On 2011-04-26 18:06, Christoph Hellwig wrote:
On Tue, Apr 26, 2011 at 04:01:00PM +0200, Jan Kiszka wrote:
+static bool modifying_bit(uint64_t old, uint64_t new, uint64_t mask)
+{
+    return (old ^ new) & mask
On Tue, Apr 12, 2011 at 10:42:00PM -0700, Josh Durgin wrote:
I suspect we only support the weird writing past size for the
file protocol, so we should only run the test for it.
Or does sheepdog do anything special about it?
Sheepdog supports it by truncating to the right size if a
On Wed, Apr 13, 2011 at 08:01:58PM +0100, Prasad Joshi wrote:
The patch only implements the basic read write support for QCOW version 1
images. Many of the QCOW features are not implemented, for example
What's the point? Qcow1 has been deprecated for a long time.
@@ -43,6 +43,10 @@ _supported_fmt raw
_supported_proto generic
_supported_os Linux
+# rbd images are not growable
+if [ "$IMGPROTO" = "rbd" ]; then
+    _notrun "image protocol $IMGPROTO does not support growable images"
+fi
I suspect we only support the weird writing past size for the
file
How do you plan to handle I/O errors or ENOSPC conditions? Note that
shared writeable mappings are by far the feature in the VM/FS code
that is most error prone, including the impossibility of doing sensible
error handling.
The version that accidentally used MAP_PRIVATE actually makes a lot of
On Tue, Feb 01, 2011 at 05:36:13PM +0100, Jan Kiszka wrote:
kvm_cpu_exec/kvm_run, and start wondering What needs to be done to
upstream so that qemu-kvm could use that implementation?. If they
differ, the reasons need to be understood and patched away, either by
fixing/enhancing upstream or
On Thu, Nov 11, 2010 at 01:47:21PM +, Stefan Hajnoczi wrote:
Some virtio devices are known to have guest drivers which expect a notify
to be processed synchronously and spin waiting for completion. Only enable
ioeventfd for virtio-blk and virtio-net for now.
Who guarantees that less
On Sun, Oct 31, 2010 at 09:06:29AM -0400, Christoph Hellwig wrote:
With Linus' git tree from today I can't boot qemu when using kvm. It
seems to do fine, just glacially slow, without -enable-kvm. The
simplest command line that fails is:
/opt/qemu/bin/qemu-system-x86_64 -enable
On Tue, Nov 02, 2010 at 11:59:48AM -0400, Avi Kivity wrote:
KVM: Fix fs/gs reload oops with invalid ldt
Interesting, I guess we corrupt %fs on x86_64.
Intel or AMD?
Intel:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name :
With Linus' git tree from today I can't boot qemu when using kvm. It
seems to do fine, just glacially slow, without -enable-kvm. The
simplest command line that fails is:
/opt/qemu/bin/qemu-system-x86_64 -enable-kvm
I tried to get a backtrace from gdb, but it looks like:
(gdb)
FYI, qemu 0.12.2 is missing:
block: fix sector comparism in multiwrite_req_compare
which in the past was very good at triggering XFS guest corruption.
Please try with the patch applied or even better latests qemu from git.
On Sat, Sep 25, 2010 at 05:40:34PM +0200, Peter Lieven wrote:
Am 25.09.2010 um 17:37 schrieb Christoph Hellwig:
FYI, qemu 0.12.2 is missing:
you mean 0.12.4 not 0.12.2, don't you?
Yes, sorry. (but 0.12.2 is of course missing it, too..)
which in the past was very good at triggering
On Fri, Sep 17, 2010 at 09:58:48AM -0500, Ryan Harper wrote:
Since __bio_map_kern() sets up bio->bi_end_io = bio_map_kern_endio
(which does a bio_put(bio)) doesn't that ensure we don't leak?
Indeed, that should take care of it.
On Thu, Sep 09, 2010 at 05:00:42PM -0400, Mike Snitzer wrote:
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 1260628..831e75c 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -199,6 +199,7 @@ static int virtblk_get_id(struct gendisk
On Sat, Aug 21, 2010 at 04:01:15PM -0700, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Greetings hch, tomo and Co,
What tree is this against? I can't see any specific BSG support in qemu.
Even more I think all this in the wrong place. The only reason
SG_IO
On Mon, Aug 16, 2010 at 03:34:12PM -0500, Anthony Liguori wrote:
On 08/16/2010 01:42 PM, Christoph Hellwig wrote:
On Mon, Aug 16, 2010 at 09:43:09AM -0500, Anthony Liguori wrote:
Also, ext4 is _very_ slow on O_SYNC writes (which is
used in kvm with default cache).
Yeah, we probably need
On Tue, Aug 17, 2010 at 12:23:01PM +0300, Avi Kivity wrote:
On 08/17/2010 12:07 PM, Christoph Hellwig wrote:
In short it's completely worthless for any real filesystem.
The documentation should be updated then. It suggests that it is
usable for data integrity.
The manpage has
On Tue, Aug 17, 2010 at 07:56:04AM -0500, Anthony Liguori wrote:
But assuming that you had a preallocated disk image, it would
effectively flush the page cache so it sounds like the only real
issue is sparse and growable files.
For preallocated as in using fallocate() we're still converting
On Tue, Aug 17, 2010 at 09:20:37AM -0500, Anthony Liguori wrote:
On 08/17/2010 08:07 AM, Christoph Hellwig wrote:
The point is that we don't want to flush the disk write cache. The
intention of writethrough is not to make the disk cache writethrough
but to treat the host's cache
On Tue, Aug 17, 2010 at 09:39:15AM -0500, Anthony Liguori wrote:
The type of cache we present to the guest only should relate to how
the hypervisor caches the storage. It should be independent of how
data is cached by the disk.
It is.
There can be many levels of caching in a storage
On Tue, Aug 17, 2010 at 09:44:49AM -0500, Anthony Liguori wrote:
I think the real issue is we're mixing host configuration with guest
visible state.
The last time I proposed to decouple the two you and Avi were heavily
opposed to it..
With O_SYNC, we're causing cache=writethrough to do
On Tue, Aug 17, 2010 at 09:54:07AM -0500, Anthony Liguori wrote:
This is simply unrealistic. O_SYNC might force data to be on a
platter when using a directly attached disk, but many NASes actually
do writeback caching and rely on having a UPS to preserve data
integrity. There's really no
On Tue, Aug 17, 2010 at 05:59:07PM +0300, Avi Kivity wrote:
I agree, but there's another case: tell the guest that we have a
write cache, use O_DSYNC, but only flush the disk cache on guest
flushes.
O_DSYNC flushes the disk write cache and any filesystem that supports
non-volatile cache. The
On Mon, Aug 16, 2010 at 09:43:09AM -0500, Anthony Liguori wrote:
Also, ext4 is _very_ slow on O_SYNC writes (which is
used in kvm with default cache).
Yeah, we probably need to switch to sync_file_range() to avoid the
journal commit on every write.
No, we don't. sync_file_range does not
On Thu, Jun 24, 2010 at 02:01:52PM -0500, Javier Guerra Giraldez wrote:
On Thu, Jun 24, 2010 at 1:32 PM, Freddie Cash fjwc...@gmail.com wrote:
* virt-manager, which requires X and seems to be more desktop-oriented;
don't know about the others, but virt-manager runs only on the admin
On Fri, Jun 18, 2010 at 01:38:02PM -0500, Ryan Harper wrote:
Create a new attribute for virtio-blk devices that will fetch the serial
number
of the block device. This attribute can be used by udev to create disk/by-id
symlinks for devices that don't have a UUID (filesystem) associated with
imply working barriers on old qemu versions or other
hypervisors that actually have a volatile write cache this is only a
cosmetic issue - these hypervisors don't guarantee any data integrity
with or without this patch, but with the patch we at least provide
data ordering.
Signed-off-by: Christoph
On Tue, Jun 15, 2010 at 08:18:12AM -0700, Chris Wright wrote:
KVM/qemu patches
- patch rate is high, documentation is low, review is low
- patches need to include better descriptions and documentation
- will slow down patch writers
- will make it easier for patch reviewers
What is the
On Mon, Jun 14, 2010 at 02:44:31AM -0700, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
This patch adds posix-aio-compat.c:paio_submit_len(), which is identical
to paio_submit() except that it expects nb_len instead of nb_sectors (* 512)
so that it can be used
On Sat, May 29, 2010 at 04:42:59PM +0700, Antoine Martin wrote:
Can someone explain the aio options?
All I can find is this:
# qemu-system-x86_64 -h | grep -i aio
[,addr=A][,id=name][,aio=threads|native]
I assume it means the aio=threads emulates the kernel's aio with
separate
On Sat, May 29, 2010 at 10:55:18AM +0100, Stefan Hajnoczi wrote:
I would expect that aio=native is faster but benchmarks show that this
isn't true for all workloads.
In what benchmark do you see worse results for aio=native compared to
aio=threads?
On Tue, May 25, 2010 at 02:25:53PM +0300, Avi Kivity wrote:
Currently if someone wants to add a new block format, they have to
upstream it and wait for a new qemu to be released. With a plugin API,
they can add a new block format to an existing, supported qemu.
So? Unless we want a
On Fri, May 21, 2010 at 09:49:56PM +0100, Stefan Hajnoczi wrote:
http://sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps
Requires kernel support - not sure if enough of utrace is in mainline
for this to work out-of-the-box across distros.
Nothing of utrace is in mainline, nevermind
On Tue, May 18, 2010 at 03:22:36PM +0200, Kevin Wolf wrote:
I think it's stuck here in an endless loop:
while (laiocb->ret == -EINPROGRESS)
    qemu_laio_completion_cb(laiocb->ctx);
Can you verify this by single-stepping one or two loop iterations? ret
and errno after the read call