On 17/07/2012 10:29, Asias He wrote:
So, vhost-blk at least saves ~6 syscalls for us in each request.
Are they really 6? If I/O is coalesced by a factor of 3, for example
(i.e. each exit processes 3 requests), it's really 2 syscalls per request.
Also, is there anything we can improve?
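The amortization arithmetic above can be sketched numerically; `syscalls_per_request` is a hypothetical helper name for illustration, not anything from vhost-blk:

```python
# Toy model of syscall amortization under request coalescing:
# if one guest exit lets the host process `batch` requests, the
# fixed per-exit syscall cost is spread across the whole batch.

def syscalls_per_request(per_exit_syscalls: int, batch: int) -> float:
    """Amortized syscall count per I/O request for a given batch size."""
    return per_exit_syscalls / batch

# ~6 syscalls per exit with a coalescing factor of 3 gives
# 2 syscalls per request, as stated above.
print(syscalls_per_request(6, 3))  # 2.0
```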
On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
On 17/07/2012 10:29, Asias He wrote:
So, vhost-blk at least saves ~6 syscalls for us in each request.
Are they really 6? If I/O is coalesced by a factor of 3, for example
(i.e. each exit processes 3 requests), it's really 2 syscalls per
On 17/07/2012 11:21, Asias He wrote:
It depends. Like vhost-scsi, vhost-blk has the problem of a crippled
feature set: no support for block device formats, non-raw protocols,
etc. This makes it different from vhost-net.
Data-plane qemu also has this crippled feature set problem, no?
On Tue, Jul 17, 2012 at 10:52:10AM +0200, Paolo Bonzini wrote:
On 17/07/2012 10:29, Asias He wrote:
So, vhost-blk at least saves ~6 syscalls for us in each request.
Are they really 6? If I/O is coalesced by a factor of 3, for example
(i.e. each exit processes 3 requests), it's really
On Tue, Jul 17, 2012 at 11:32:45AM +0200, Paolo Bonzini wrote:
On 17/07/2012 11:21, Asias He wrote:
It depends. Like vhost-scsi, vhost-blk has the problem of a crippled
feature set: no support for block device formats, non-raw protocols,
etc. This makes it different from vhost-net.
On 17/07/2012 11:45, Michael S. Tsirkin wrote:
So it begs the question, is it going to be used in production, or just a
useful reference tool?
Sticking to raw already makes virtio-blk faster, doesn't it?
In that case, vhost-blk looks to me like just another optimization option.
Ideally I
On Tue, Jul 17, 2012 at 12:14:33PM +0200, Paolo Bonzini wrote:
On 17/07/2012 11:45, Michael S. Tsirkin wrote:
So it begs the question, is it going to be used in production, or just a
useful reference tool?
Sticking to raw already makes virtio-blk faster, doesn't it?
In that
On 17/07/2012 12:49, Michael S. Tsirkin wrote:
Ok, that would make more sense. One difference between vhost-blk and
vhost-net is that for vhost-blk there are also management actions that
would trigger the switch, for example a live snapshot.
So a prerequisite for vhost-blk would be that
On Tue, Jul 17, 2012 at 12:56:31PM +0200, Paolo Bonzini wrote:
On 17/07/2012 12:49, Michael S. Tsirkin wrote:
Ok, that would make more sense. One difference between vhost-blk and
vhost-net is that for vhost-blk there are also management actions that
would trigger the switch, for
On Tue, Jul 17, 2012 at 10:21 AM, Asias He as...@redhat.com wrote:
On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
On 17/07/2012 10:29, Asias He wrote:
So, vhost-blk at least saves ~6 syscalls for us in each request.
Are they really 6? If I/O is coalesced by a factor of 3, for example
On Tue, Jul 17, 2012 at 12:11:15PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 17, 2012 at 10:21 AM, Asias He as...@redhat.com wrote:
On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
On 17/07/2012 10:29, Asias He wrote:
So, vhost-blk at least saves ~6 syscalls for us in each request.
On Tue, Jul 17, 2012 at 9:29 AM, Asias He as...@redhat.com wrote:
On 07/16/2012 07:58 PM, Stefan Hajnoczi wrote:
Does the vhost-blk implementation do anything fundamentally different
from userspace? Where is the overhead that userspace virtio-blk has?
Currently, no. But we could play with
On Tue, Jul 17, 2012 at 12:26 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Tue, Jul 17, 2012 at 12:11:15PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 17, 2012 at 10:21 AM, Asias He as...@redhat.com wrote:
On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
On 17/07/2012 10:29, Asias He wrote
On Tue, Jul 17, 2012 at 12:42 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
On Tue, Jul 17, 2012 at 12:26 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Tue, Jul 17, 2012 at 12:11:15PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 17, 2012 at 10:21 AM, Asias He as...@redhat.com wrote:
On
On Tue, Jul 17, 2012 at 12:42:13PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 17, 2012 at 12:26 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Tue, Jul 17, 2012 at 12:11:15PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 17, 2012 at 10:21 AM, Asias He as...@redhat.com wrote:
On 07/17/2012
On Tue, Jul 17, 2012 at 12:54 PM, Michael S. Tsirkin m...@redhat.com wrote:
Knowing the answer to that is important before anyone can say whether
this approach is good or not.
Stefan
Why is it?
Because there might be a fix to kvmtool which closes the gap. It
would be embarrassing if
On 17/07/2012 14:48, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 01:03:39PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 17, 2012 at 12:54 PM, Michael S. Tsirkin m...@redhat.com wrote:
Knowing the answer to that is important before anyone can say whether
this approach is good or not.
On Tue, Jul 17, 2012 at 03:02:45PM +0200, Paolo Bonzini wrote:
On 17/07/2012 14:48, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 01:03:39PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 17, 2012 at 12:54 PM, Michael S. Tsirkin m...@redhat.com
wrote:
Knowing the answer to that is
On Fri, Jul 13, 2012 at 04:31:21PM +0800, zhenzhong.duan wrote:
When populating pages across a mem boundary at bootup, the populated
page count isn't correct. This is because pages populated into a
non-mem region are ignored.
The pfn range is also wrongly aligned when the mem boundary isn't page aligned.
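A rough userspace sketch of the accounting issue described above (names are illustrative, not the actual Xen code): pages that fall past the end of the memory region must not be credited to the populated count.

```python
# Toy model of the fix: clip a populate chunk to the containing
# memory region before counting how many pages were populated.
# All pfn values are hypothetical; this is not the Xen setup code.

def pages_populated(chunk_start, chunk_end, region_start, region_end):
    """Pages credited for a chunk, clipped to the memory region."""
    start = max(chunk_start, region_start)
    end = min(chunk_end, region_end)
    return max(0, end - start)

# A 100-pfn chunk that overhangs a region ending at pfn 60 should be
# credited only 60 pages, not 100.
print(pages_populated(0, 100, 0, 60))  # 60
```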
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Hi folks,
The following is an RFC-v2 series of tcm_vhost target fabric driver code
currently in-flight for-3.6 mainline code.
After last week's developments along with the
On Fri, Jul 13, 2012 at 04:55:06PM +0800, Asias He wrote:
Hi folks,
[I am resending to fix the broken thread in the previous one.]
This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk
device accelerator. Compared to the userspace virtio-blk implementation, vhost-blk
On Tue, 2012-07-17 at 09:04 -0700, Greg KH wrote:
On Sat, Jul 14, 2012 at 01:34:06PM -0700, K. Y. Srinivasan wrote:
Format GUIDS as per MSFT standard. This makes interacting with MSFT
tool stack easier.
[]
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
[]
@@ -147,7 +147,7 @@
On Tue, Jul 17, 2012 at 09:19:26AM -0700, Joe Perches wrote:
On Tue, 2012-07-17 at 09:04 -0700, Greg KH wrote:
On Sat, Jul 14, 2012 at 01:34:06PM -0700, K. Y. Srinivasan wrote:
Format GUIDS as per MSFT standard. This makes interacting with MSFT
tool stack easier.
[]
diff --git
This patch introduces the helper functions as well as the necessary changes
to teach compaction and migration bits how to cope with pages which are
part of a guest memory balloon, in order to make them movable by memory
compaction procedures.
Signed-off-by: Rafael Aquini aqu...@redhat.com
---
Memory fragmentation introduced by ballooning might significantly reduce
the number of 2MB contiguous memory blocks that can be used within a guest,
imposing performance penalties associated with the reduced number of
transparent huge pages available to the guest workload.
This
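As a toy illustration of that fragmentation effect (not the kernel's accounting), even a tiny number of scattered balloon pages can poison every 2MB-aligned block; the function and layout below are hypothetical:

```python
# Toy model: count 2MB-aligned blocks that contain no ballooned page,
# i.e. blocks still usable as transparent huge pages.
PAGE_SIZE = 4096
HUGE_PAGE = 2 * 1024 * 1024
PAGES_PER_HUGE = HUGE_PAGE // PAGE_SIZE  # 512 base pages per 2MB block

def free_huge_blocks(total_pages, ballooned_pfns):
    """2MB blocks with no ballooned page inside them."""
    blocks = total_pages // PAGES_PER_HUGE
    touched = {pfn // PAGES_PER_HUGE for pfn in ballooned_pfns}
    return blocks - len(touched)

# 1 GiB of memory is 512 huge blocks; 512 balloon pages scattered one
# per block (under 0.2% of memory) leave zero 2MB-contiguous blocks.
total = (1024 * 1024 * 1024) // PAGE_SIZE
scattered = [i * PAGES_PER_HUGE for i in range(512)]
print(free_huge_blocks(total, scattered))  # 0
```

This is why making balloon pages movable by compaction matters: migrating them out of a block restores it as a huge-page candidate.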
Besides making balloon pages movable at allocation time and introducing
the necessary primitives to perform balloon page migration/compaction,
this patch also introduces the following locking scheme to provide the
proper synchronization and protection for struct virtio_balloon elements
against
This patch is only for testing/reporting purposes and should be dropped
if the rest of this patchset is accepted for merging.
Signed-off-by: Rafael Aquini aqu...@redhat.com
---
drivers/virtio/virtio_balloon.c |1 +
include/linux/vm_event_item.h |2 ++
mm/compaction.c
On 07/17/2012 10:05 AM, Michael S. Tsirkin wrote:
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Hi folks,
The following is an RFC-v2 series of tcm_vhost target fabric driver code
currently in-flight for-3.6 mainline code.
On Tue, Jul 17, 2012 at 01:55:42PM -0500, Anthony Liguori wrote:
On 07/17/2012 10:05 AM, Michael S. Tsirkin wrote:
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Hi folks,
The following is a RFC-v2 series of tcm_vhost
On Tue, 2012-07-17 at 18:05 +0300, Michael S. Tsirkin wrote:
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Hi folks,
The following is an RFC-v2 series of tcm_vhost target fabric driver code
currently in-flight
On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
On Tue, 2012-07-17 at 18:05 +0300, Michael S. Tsirkin wrote:
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Hi folks,
The following is a
On Tue, 2012-07-17 at 13:55 -0500, Anthony Liguori wrote:
On 07/17/2012 10:05 AM, Michael S. Tsirkin wrote:
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
SNIP
It still seems not 100% clear whether this driver will have major
userspace using it. And if not, it
On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
Wrt staging, I'd like to avoid mucking with staging because:
*) The code has been posted for review
*) The code has been converted to use the latest target-core primitives
*) The code does not require cleanups between
On Wed, 2012-07-18 at 00:34 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
On Tue, 2012-07-17 at 18:05 +0300, Michael S. Tsirkin wrote:
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger
On Wed, 2012-07-18 at 00:58 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
Wrt staging, I'd like to avoid mucking with staging because:
*) The code has been posted for review
*) The code has been converted to use the latest
On Tue, Jul 17, 2012 at 03:02:08PM -0700, Nicholas A. Bellinger wrote:
On Wed, 2012-07-18 at 00:34 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
On Tue, 2012-07-17 at 18:05 +0300, Michael S. Tsirkin wrote:
On Wed, Jul 11, 2012 at
On Wed, 2012-07-18 at 01:18 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 03:02:08PM -0700, Nicholas A. Bellinger wrote:
On Wed, 2012-07-18 at 00:34 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
On Tue, 2012-07-17 at
On Tue, Jul 17, 2012 at 03:37:20PM -0700, Nicholas A. Bellinger wrote:
On Wed, 2012-07-18 at 01:18 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 03:02:08PM -0700, Nicholas A. Bellinger wrote:
On Wed, 2012-07-18 at 00:34 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at
On Wed, 2012-07-18 at 02:11 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 03:37:20PM -0700, Nicholas A. Bellinger wrote:
On Wed, 2012-07-18 at 01:18 +0300, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 03:02:08PM -0700, Nicholas A. Bellinger wrote:
On Wed, 2012-07-18 at
From: Nicholas Bellinger n...@linux-iscsi.org
Hi folks,
The following is the RFC-v3 series of tcm_vhost target fabric driver code
currently in-flight for-3.6 mainline code.
With the merge window opening soon, the tcm_vhost code has started seeing
time in linux-next. The v2 -> v3 changelog from
From: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
In order for other vhost devices to use the VHOST_FEATURES bits the
vhost-net specific bits need to be moved to their own VHOST_NET_FEATURES
constant.
(Asias: Update drivers/vhost/test.c to use VHOST_NET_FEATURES)
Signed-off-by: Stefan Hajnoczi
From: Stefan Hajnoczi stefa...@gmail.com
The vhost work queue allows processing to be done in vhost worker thread
context, which uses the owner process mm. Access to the vring and guest
memory is typically only possible from vhost worker context so it is
useful to allow work to be queued
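The pattern described above reads like a classic single-consumer work queue. A userspace analogy in Python (not the kernel vhost API) might look like this; all names here are illustrative:

```python
# Userspace analogy of the vhost work queue: callers enqueue work
# items, and one dedicated worker thread drains them, so all "vring"
# access happens in a single, consistent thread context.
import queue
import threading

work_q = queue.Queue()
results = []

def worker():
    while True:
        fn = work_q.get()
        if fn is None:      # sentinel: shut the worker down
            break
        fn()                # run the work item in worker context
        work_q.task_done()

t = threading.Thread(target=worker)
t.start()

# Queue work from another context, then stop the worker.
work_q.put(lambda: results.append("handled in worker"))
work_q.put(None)
t.join()
print(results)  # ['handled in worker']
```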
From: Nicholas Bellinger n...@risingtidesystems.com
This patch adds the initial vhost_scsi_ioctl() callers for
VHOST_SCSI_SET_ENDPOINT
and VHOST_SCSI_CLEAR_ENDPOINT respectively, and also adds struct
vhost_vring_target
that is used by tcm_vhost code when locating target ports during qemu setup.
From: Nicholas Bellinger n...@linux-iscsi.org
This patch adds the initial code for tcm_vhost, a vhost-level TCM
fabric driver for virtio SCSI initiators in KVM guests.
This code is currently up and running on v3.5-rc2 host+guest along
with the virtio-scsi vdev-scan() patch to allow a proper
On 07/18/2012 03:10 AM, Jeff Moyer wrote:
Asias He as...@redhat.com writes:
vhost-blk is an in-kernel virtio-blk device accelerator.
This patch is based on Liu Yuan's implementation with various
improvements and bug fixes. Notably, this patch makes guest notify and
host completion processing
On 07/17/2012 11:09 PM, Michael S. Tsirkin wrote:
On Fri, Jul 13, 2012 at 04:55:06PM +0800, Asias He wrote:
Hi folks,
[I am resending to fix the broken thread in the previous one.]
This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk
device accelerator. Compared to
When populating pages across a mem boundary at bootup, the populated
page count isn't correct. This is because pages populated into a
non-mem region are ignored.
The pfn range is also wrongly aligned when the mem boundary isn't page aligned.
-v2: If xen_do_chunk fails to populate, abort this chunk and any others.
From c40ea05842fec8f6caa053b2d58f54608ed0835f Mon Sep 17 00:00:00 2001
From: Zhenzhong Duan zhenzhong.d...@oracle.com
Date: Wed, 4 Jul 2012 14:08:10 +0800
Subject: [PATCH] xen: populate right count of pages when across mem boundary
When populate pages across a mem boundary at bootup, the page
Sorry, please ignore it. Tabs still get translated to spaces.
On 2012-07-18 11:08, zhenzhong.duan wrote:
From c40ea05842fec8f6caa053b2d58f54608ed0835f Mon Sep 17 00:00:00 2001
From: Zhenzhong Duan zhenzhong.d...@oracle.com
Date: Wed, 4 Jul 2012 14:08:10 +0800
Subject: [PATCH] xen: populate right count of
When populating pages across a mem boundary at bootup, the populated
page count isn't correct. This is because pages populated into a
non-mem region are ignored.
The pfn range is also wrongly aligned when the mem boundary isn't page aligned.
-v2: If xen_do_chunk fails to populate, abort this chunk and any others.