Hi Badari,

On 12/08/11 06:50, Badari Pulavarty wrote:
On 8/10/2011 8:19 PM, Liu Yuan wrote:
When opening the second device, we get a panic since used_info_cachep is
already created. Just to make progress I moved this call to
vhost_blk_init(). I don't see any host panics now. With single block …
On 8/10/2011 8:19 PM, Liu Yuan wrote:
On 08/11/2011 11:01 AM, Liu Yuan wrote:
It looks like the patch wouldn't work for testing multiple devices.
vhost_blk_open() does:

+ used_info_cachep = KMEM_CACHE(used_info, SLAB_HWCACHE_ALIGN | SLAB_PANIC);

This is weird. How do you open multiple devices? I just opened the device
with the following command:

-drive …
On 08/05/2011 05:58 AM, Badari Pulavarty wrote:
Hi Liu Yuan,
I started testing your patches. I applied your kernel patch to 3.0
and the QEMU patch to latest git.
I passed 6 block devices from the host to the guest (4 vcpu, 4GB RAM).
I ran simple dd read tests from the guest on all block devices
(with various block sizes, iflag=direct).
On Mon, Aug 01, 2011 at 01:46:33PM +0800, Liu Yuan wrote:
- I focused on using vfs interfaces in the kernel, so that I can
use it for file-backed devices.
Our use-case scenario is mostly file-backed images.

vhost-blk, which uses Linux AIO, also supports file-backed images.
Actually, I have …
On 08/01/2011 12:18 PM, Liu Yuan wrote:
Agree. vhost-net works around the lack of an async zero-copy networking
interface. Block I/O, on the other hand, does have such an interface,
and in addition transaction rates are usually lower. All we're
saving is the syscall overhead.
Personally I …
On 07/30/2011 02:12 AM, Badari Pulavarty wrote:
Hi Liu Yuan,
I am glad to see that you started looking at vhost-blk. I made an
attempt a year ago to improve block performance using a vhost-blk approach:

http://lwn.net/Articles/379864/
http://lwn.net/Articles/382543/

I will take a closer look at your patchset to find differences and …
Hi Stefan,

On 07/28/2011 11:44 PM, Stefan Hajnoczi wrote:
On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan namei.u...@gmail.com wrote:
Did you investigate userspace virtio-blk performance? If so, what
issues did you find?

Yes, in the performance table I presented, userspace virtio-blk lags …
On Fri, Jul 29, 2011 at 03:59:53PM +0800, Liu Yuan wrote:
I noted bdrv_aio_multiwrite() does the merging job, but I am not sure …

Just like I/O schedulers, it's actually fairly harmful on high-IOPS,
low-latency devices. I've just started doing a lot of qemu benchmarks,
and disabling that multiwrite …
On 07/29/2011 05:06 PM, Stefan Hajnoczi wrote:
I mean did you investigate *why* userspace virtio-blk has higher
latency? Did you profile it and drill down on its performance?

It's important to understand what is going on before replacing it with
another mechanism. What I'm saying is, if I …
On 07/29/2011 08:50 PM, Stefan Hajnoczi wrote:
I hit a weirdness yesterday, just want to mention it in case you notice it too.
When running vanilla qemu-kvm I forgot to use aio=native. When I
compared the results against virtio-blk-data-plane (which *always*
uses Linux AIO) I was surprised to find average 4k read latency was
lower and the …
On Fri, 2011-07-29 at 20:01 +0800, Liu Yuan wrote:
Looking at this long list, most are function pointers that cannot be
inlined, and the internal data structures used by these functions number
in the dozens. Leaving aside code complexity, this long code path would
really need a retrofit. As Christoph …
On Thu, Jul 28, 2011 at 3:29 PM, Liu Yuan namei.u...@gmail.com wrote:
Did you investigate userspace virtio-blk performance? If so, what
issues did you find?

I have a hacked-up world here that basically implements vhost-blk in
userspace: …