On Tue, 2012-07-03 at 15:19 +0200, Paolo Bonzini wrote:
This patch adds support for the new VIRTIO_BLK_F_CONFIG_WCE feature,
which exposes the cache mode in the configuration space and lets the
driver modify it. The cache mode is exposed via sysfs.
Even if the host does not support the new …
On 8/14/2011 8:20 PM, Liu Yuan wrote:
On 08/13/2011 12:12 AM, Badari Pulavarty wrote:
On 8/12/2011 4:40 AM, Liu Yuan wrote:
On 08/12/2011 04:27 PM, Liu Yuan wrote:
On 08/12/2011 12:50 PM, Badari Pulavarty wrote:
On 8/10/2011 8:19 PM, Liu Yuan wrote:
On 08/11/2011 11:01 AM, Liu Yuan wrote:
It looks like the patch wouldn't work for testing multiple devices.
vhost_blk_open() does:
+ used_info_cachep = KMEM_CACHE(used_info, SLAB_HWCACHE_ALIGN | SLAB_PANIC);
This is weird. How do you open …
On Wed, 2011-08-10 at 10:19 +0800, Liu Yuan wrote:
On 08/09/2011 01:16 AM, Badari Pulavarty wrote:
On 8/8/2011 12:31 AM, Liu Yuan wrote:
On 08/08/2011 01:04 PM, Badari Pulavarty wrote:
On 8/7/2011 6:35 PM, Liu Yuan wrote:
On 08/06/2011 02:02 AM, Badari Pulavarty wrote:
On 8/5/2011 4:04 AM, Liu Yuan wrote:
On 08/05/2011 05:58 AM, Badari Pulavarty wrote:
Hi Liu Yuan,
I started testing your patches. I applied your kernel patch to 3.0
and applied QEMU to latest git.
I passed 6 blockdevices from the host to guest (4 vcpu, 4GB RAM).
I ran simple dd read tests from the guest on all block devices
(with various blocksizes, iflag=direct). …
Hi Liu Yuan,
I am glad to see that you started looking at vhost-blk. I made an
attempt a year ago to improve block performance using a vhost-blk approach:
http://lwn.net/Articles/379864/
http://lwn.net/Articles/382543/
I will take a closer look at your patchset to find differences and …
Hi All,
Here is the latest version of the vhost-blk implementation.
The major difference from my previous implementation is that I
now merge all contiguous requests (both reads and writes) before
submitting them. This significantly improved IO performance.
I am still collecting performance numbers; I …
On Mon, 2010-04-05 at 15:23 -0400, Christoph Hellwig wrote:
On Wed, Mar 24, 2010 at 01:22:37PM -0700, Badari Pulavarty wrote:
iovecs and buffers are user-space pointers (from the host kernel's
point of view); they are guest addresses, so I don't need to do any
set_fs tricks. …
On Mon, 2010-04-05 at 15:22 +0100, Stefan Hajnoczi wrote:
On Mon, Mar 29, 2010 at 4:41 PM, Badari Pulavarty pbad...@us.ibm.com wrote:
+static void handle_io_work(struct work_struct *work)
+{
+ struct vhost_blk_io *vbio;
+ struct vhost_virtqueue *vq;
+ struct vhost_blk
Hi Christoph,
I am wondering if you can provide your thoughts here.
I modified my vhost-blk implementation to offload work to
work queues instead of doing it synchronously. In fact, I tried
to spread the work across all the CPUs. But to my surprise,
this did not improve the performance compared to …
On Mon, 2010-03-29 at 23:37 +0300, Avi Kivity wrote:
On 03/29/2010 09:20 PM, Chris Wright wrote:
* Badari Pulavarty (pbad...@us.ibm.com) wrote:
I modified my vhost-blk implementation to offload work to
work queues instead of doing it synchronously. In fact, I tried
to spread the work …
Avi Kivity wrote:
On 03/24/2010 10:22 PM, Badari Pulavarty wrote:
Which caching mode is this? I assume data=writeback, because otherwise
you'd be doing synchronous I/O directly from the handler.
Yes, this is with the default (writeback) cache model. As mentioned
earlier, readahead is helping …
Christoph Hellwig wrote:
Inspired by vhost-net implementation, I did initial prototype
of vhost-blk to see if it provides any benefits over QEMU virtio-blk.
I haven't handled all the error cases, fixed naming conventions etc.,
but the implementation is stable to play with. I tried not to
Avi Kivity wrote:
On 03/23/2010 04:50 AM, Badari Pulavarty wrote:
Anthony Liguori wrote:
On 03/22/2010 08:45 PM, Badari Pulavarty wrote:
Anthony Liguori wrote:
On 03/22/2010 08:00 PM, Badari Pulavarty wrote:
Forgot to CC: KVM list earlier
These virtio results are still with a 2.6.18
Avi Kivity wrote:
On 03/23/2010 03:00 AM, Badari Pulavarty wrote:
Forgot to CC: KVM list earlier
[RFC] vhost-blk implementation.eml
Subject: [RFC] vhost-blk implementation
From: Badari Pulavarty pbad...@us.ibm.com
Date: Mon, 22 Mar 2010 17:34:06 -0700
To: virtualizat...@lists.linux …
Signed-off-by: Badari Pulavarty pbad...@us.ibm.com
---
 drivers/vhost/blk.c | 242
 1 file changed, 242 insertions(+)
Index: net-next/drivers/vhost/blk.c
===================================================================
--- /dev/null 1970-01-01 00:00 …
Anthony Liguori wrote:
On 03/22/2010 08:00 PM, Badari Pulavarty wrote:
Forgot to CC: KVM list earlier
These virtio results are still with a 2.6.18 kernel with no aio, right?
Results are on 2.6.33-rc8-net-next kernel. But not using AIO.
Thanks,
Badari