Tests on mlx4 are ongoing; I will post the results next week.
Jason Wang (3):
virtio: support for urgent descriptors
vhost: support urgent descriptors
virtio-net: conditionally enable tx interrupt
drivers/net/virtio_net.c | 164 ++-
driv
On 09/22/2014 02:55 PM, Michael S. Tsirkin wrote:
> On Mon, Sep 22, 2014 at 11:30:23AM +0800, Jason Wang wrote:
>> On 09/20/2014 06:00 PM, Paolo Bonzini wrote:
>>> Il 19/09/2014 09:10, Jason Wang ha scritto:
>>>>>>
>>>>>> -
On 09/20/2014 06:00 PM, Paolo Bonzini wrote:
> Il 19/09/2014 09:10, Jason Wang ha scritto:
>>>>
>>>> - if (!vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) {
>>>> + if (vq->urgent || !vhost_has_feature(vq, VIRTIO_RING_F_EVENT_IDX)) {
>> So the
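For readers skimming the thread: the hunk above changes vhost's decision about when to signal the guest. Below is a minimal userspace sketch of that decision, assuming the patch's hypothetical vq->urgent flag (vring_need_event() is the real helper from include/uapi/linux/virtio_ring.h):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* vring_need_event() per the virtio spec: signal only when new_idx has
 * just crossed the event index the guest published. */
static bool vring_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
{
	return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old);
}

int main(void)
{
	bool urgent = true;        /* hypothetical vq->urgent flag from the patch */
	bool has_event_idx = true; /* VIRTIO_RING_F_EVENT_IDX was negotiated */
	uint16_t used_event = 10, new_idx = 8, old_idx = 7;

	/* The hunk above: an urgent descriptor forces a signal even when
	 * event-index suppression would otherwise hold the interrupt back. */
	bool signal = urgent || !has_event_idx ||
		      vring_need_event(used_event, new_idx, old_idx);

	printf("signal guest: %s\n", signal ? "yes" : "no");
	return 0;
}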
On 07/01/2014 06:49 PM, Michael S. Tsirkin wrote:
> Signed-off-by: Michael S. Tsirkin
> ---
> drivers/vhost/vhost.h | 19 +--
> drivers/vhost/net.c | 30 +-
> drivers/vhost/scsi.c | 23 +++
> drivers/vhost/test.c | 5 +++--
> dr
>>>> >> > this patch
>>>> >> > solves this problem by scheduling a delayed work when the count of irqs
>>>> >> > injected
>>>> >> > during EOI broadcast exceeds a threshold value. After this patch, the
>>>> >>
On 08/29/2014 12:07 PM, Zhang, Yang Z wrote:
> Zhang Haoyu wrote on 2014-08-29:
>> > Hi, Yang, Gleb, Michael,
>> > Could you help review below patch please?
> I don't quite understand the background. Why is ioapic->irr set before
> EOI? It should be the driver's responsibility to clear the interru
alue. After this patch, the guest
>> can
>> make some progress when there's no suitable irq handler yet, in case it
>> registers one very soon; and for a guest that has a bad-irq detection
>> routine
>> (such as note_interrupt() in Linux), this bad irq would be reco
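A rough sketch of the mechanism described in the messages above (the threshold, field and function names are illustrative, not taken from the actual patch): count injections during EOI broadcast and, past a threshold, defer re-injection through a delayed work item so the guest gets a chance to run:

#define EOI_INJECT_THRESHOLD	10	/* hypothetical value */

static void kvm_ioapic_eoi_inject_work(struct work_struct *work)
{
	struct kvm_ioapic *ioapic = container_of(work, struct kvm_ioapic,
						 eoi_inject.work);
	/* re-service the still-pending level-triggered pins here */
}

/* in the EOI path, instead of always re-injecting immediately: */
if (++ioapic->irq_eoi >= EOI_INJECT_THRESHOLD) {
	ioapic->irq_eoi = 0;
	/* back off: let the guest register a handler or flag the bad irq */
	schedule_delayed_work(&ioapic->eoi_inject, HZ / 100);
} else {
	ioapic_service(ioapic, pin);	/* the usual immediate re-injection */
}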
On 08/27/2014 05:31 PM, Zhang Haoyu wrote:
>>> Hi, all
>>> >>
>>> >> I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter
>>> >> an e1000 NIC interrupt storm,
>>> >> because "if (!ent->fields.mask && (ioapic->irr & (1 << i)))" is
>>> >> always t
On 08/26/2014 05:28 PM, Zhang Haoyu wrote:
> Hi, all
>
> I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC
> interrupt storm,
> because "if (!ent->fields.mask && (ioapic->irr & (1 << i)))" is always
> true in __kvm_ioapic_update_eoi().
>
> A
On 08/25/2014 03:17 PM, Zhang Haoyu wrote:
>>> Hi, all
>>> >>
>>> >> I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC
>>> >> interrupt storm,
>>> >> because "if (!ent->fields.mask && (ioapic->irr & (1 << i)))" is always
>>> >> true in __kvm_ioapic_update_eoi().
>>> >>
>>>
On 08/25/2014 03:17 PM, Zhang Haoyu wrote:
>>> Hi, all
>>>
>>> I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC
>>> interrupt storm,
>>> because "if (!ent->fields.mask && (ioapic->irr & (1 << i)))" is always true
>>> in __kvm_ioapic_update_eoi().
>>>
>>> Any ideas?
>> We
On 08/23/2014 06:36 PM, Zhang Haoyu wrote:
> Hi, all
>
> I use qemu-1.4.1/qemu-2.0.0 to run a win7 guest, and encounter an e1000 NIC
> interrupt storm,
> because "if (!ent->fields.mask && (ioapic->irr & (1 << i)))" is always true
> in __kvm_ioapic_update_eoi().
>
> Any ideas?
We meet this several
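To make the failure mode concrete, here is a small userspace model of the check quoted above (a sketch with simplified field names, not the KVM code): each EOI clears remote_irr, and because the guest's driver never clears the interrupt cause, ioapic->irr stays set and the line is re-injected immediately:

#include <stdint.h>
#include <stdio.h>

/* one redirection-table entry, reduced to the two fields in the check */
struct rtbl_entry { int mask; int remote_irr; };

int main(void)
{
	struct rtbl_entry ent = { .mask = 0, .remote_irr = 1 };
	uint32_t irr = 1 << 0;	/* the e1000 pin: the guest never clears it */
	int i, injections = 0;

	for (i = 0; i < 5; i++) {	/* five EOIs from the guest */
		ent.remote_irr = 0;	/* EOI clears remote IRR ... */
		if (!ent.mask && (irr & (1 << 0))) {
			ent.remote_irr = 1;	/* ... and the still-set irr
						 * raises the line again */
			injections++;
		}
	}
	/* every EOI re-injects: that is the interrupt storm */
	printf("re-injections after 5 EOIs: %d\n", injections);
	return 0;
}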
On 08/22/2014 10:30 AM, Zhang Haoyu wrote:
> Hi, Krishna, Shirley
>
> How to get the latest patch of the M:N implementation of multiqueue?
>
> I am going to test the combination of "M:N implementation of multiqueue"
> and "vhost: add polling mode".
>
> Thanks,
> Zhang Haoyu
>
>
Just FYI. You
On 08/17/2014 06:22 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 15, 2014 at 10:55:32AM +0800, Jason Wang wrote:
>>>>> I wonder if k->set_guest_notifiers should be called after "hdev->started
>>>>> = true;" in vhost_dev_start.
>>>> Mic
On 08/17/2014 06:20 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 15, 2014 at 11:40:08AM +0800, Jason Wang wrote:
>> After the rx vq is enabled, we never stop polling its socket. This is
>> suboptimal
>> and may lead to unnecessary wake-ups after the rx net work has already been
size/sessions/+throughput/+cpu/+normalized thru per cpu
256/1/+1.9004%/-4.7985%/+7.0366%
256/25/-4.7366%/-11.0809%/+7.1349%
256/50/+3.9808%/-5.2037%/+9.6887%
4096/1/+2.1619%/-0.7303%/+2.9134%
4096/25/-13.1836%/-14.7298%/+1.8134%
4096/50/-11.1990%/-15.4763%/+5.0605%
Signed-off-by: Jason Wang
---
dri
On 08/14/2014 06:02 PM, Michael S. Tsirkin wrote:
> On Thu, Aug 14, 2014 at 04:52:40PM +0800, Jason Wang wrote:
>> On 08/07/2014 08:47 PM, Zhangjie (HZ) wrote:
>>> On 2014/8/5 20:14, Zhangjie (HZ) wrote:
>>>> On 2014/8/5 17:49, Michael S. Tsirkin wrote:
>>&g
On 08/07/2014 08:47 PM, Zhangjie (HZ) wrote:
> On 2014/8/5 20:14, Zhangjie (HZ) wrote:
>> On 2014/8/5 17:49, Michael S. Tsirkin wrote:
>>> On Tue, Aug 05, 2014 at 02:29:28PM +0800, Zhangjie (HZ) wrote:
Jason is right; the new order is not the cause of the network being unreachable.
Changing order s
On 07/23/2014 04:48 PM, Abel Gordon wrote:
> On Wed, Jul 23, 2014 at 11:42 AM, Jason Wang wrote:
>> >
>> > On 07/23/2014 04:12 PM, Razya Ladelsky wrote:
>>> > > Jason Wang wrote on 23/07/2014 08:26:36 AM:
>>> > >
>>>> > >
On 07/23/2014 04:12 PM, Razya Ladelsky wrote:
> Jason Wang wrote on 23/07/2014 08:26:36 AM:
>
>> From: Jason Wang
>> To: Razya Ladelsky/Haifa/IBM@IBMIL, kvm@vger.kernel.org, "Michael S.
>> Tsirkin" ,
>> Cc: abel.gor...@gmail.com, Joel Nider/Haifa/IBM@IB
On 07/21/2014 09:23 PM, Razya Ladelsky wrote:
> Hello All,
>
> When vhost is waiting for buffers from the guest driver (e.g., more
> packets
> to send in vhost-net's transmit queue), it normally goes to sleep and
> waits
> for the guest to "kick" it. This kick involves a PIO in the guest, and
> t
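The polling alternative Razya describes can be sketched in a few lines of userspace C (illustrative only: the guest publishes avail->idx in the vring, and spin_budget is a made-up knob):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Busy-check the avail index the guest publishes instead of sleeping
 * until its kick (which costs the guest a PIO exit). */
static bool poll_for_work(_Atomic uint16_t *avail_idx,
			  uint16_t last_avail_idx, int spin_budget)
{
	while (spin_budget-- > 0) {
		if (atomic_load_explicit(avail_idx, memory_order_acquire)
		    != last_avail_idx)
			return true;	/* new buffers, no kick needed */
	}
	return false;	/* budget spent: fall back to sleep-and-kick */
}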
On Thu, 2014-04-10 at 17:27 +0800, Fam Zheng wrote:
> On Fri, 03/21 17:41, Jason Wang wrote:
> > This patch adds a simple Python script to display vhost statistics; the code
> > is based on the kvm_stat script from qemu. As the work function has been recorded,
> > filters could b
On Tue, 2014-04-08 at 16:49 -0400, Simon Chen wrote:
> A little update on this..
>
> I turned on multiqueue of vhost-net. Now the receiving VM is getting
> traffic over all four queues - based on the CPU usage of the four
> vhost-[pid] threads. For some reason, the sender is now pegging 100%
> on
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 1 +
drivers/vhost/vhost.h | 3 +++
2 files changed, 4 insertions(+)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a0fa5de..85d666c 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -708,6 +708,7 @@ static int
Jason Wang (4):
vhost: introduce queue_index for tracing
vhost: basic tracepoints
vhost_net: add basic tracepoints for vhost_net
tools: virtio: add a top-like utility for displaying vhost statistics
drivers/vhost/net.c | 7 +
drivers/vhost/net_trace.h | 53 +++
drivers
To help with performance analysis and debugging, this patch introduces
tracepoints for vhost_net. Two tracepoints were introduced: packet
sending and receiving.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 5 +
drivers/vhost/net_trace.h | 53
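The net_trace.h from the diffstat is not shown in this excerpt; a hedged sketch of what one of the two tracepoints could look like (event and field names are illustrative):

#include <linux/tracepoint.h>

TRACE_EVENT(vhost_net_tx,
	TP_PROTO(unsigned int qnum, unsigned int len),
	TP_ARGS(qnum, len),
	TP_STRUCT__entry(
		__field(unsigned int, qnum)
		__field(unsigned int, len)
	),
	TP_fast_assign(
		__entry->qnum = qnum;
		__entry->len = len;
	),
	TP_printk("queue %u sent a %u byte packet",
		  __entry->qnum, __entry->len)
);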
To help with performance optimization and debugging, this patch introduces tracepoints
for vhost. Two kinds of activities are traced: virtio and vhost work
queuing/wakeup.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 1 +
drivers/vhost/trace.h | 175
) 707 0
vhost_work_queue_wakeup(rx_kick) 9 0
Signed-off-by: Jason Wang
---
tools/virtio/vhost_stat | 375
1 file changed, 375 insertions(+)
create mode 100755 tools/virtio/vhost_stat
diff --git a/tools/virtio
On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>> > We used to stop the handling of tx when the number of pending DMAs
>> > exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>> > of b
On 03/08/2014 05:39 AM, David Miller wrote:
> From: Jason Wang
> Date: Fri, 7 Mar 2014 13:28:27 +0800
>
>> This is because the delay added by htb may delay the completion
>> of DMAs and cause the pending DMAs for tap0 to exceed the limit
>> (VHOST_MAX_PEND). In th
when unlimited sndbuf. We still need a
solution for limited sndbuf.
Cc: Michael S. Tsirkin
Cc: Qin Chuanyu
Signed-off-by: Jason Wang
---
Changes from V1:
- Remove VHOST_MAX_PEND and switch to use half of the vq size as the limit
- Add cpu utilization in commit log
---
drivers/vhost/net.c | 19
On 02/26/2014 07:16 PM, Michael S. Tsirkin wrote:
> Please see MAINTAINERS and copy all relevant lists.
>
> On Wed, Feb 26, 2014 at 05:20:09PM +0800, Qin Chuanyu wrote:
>> guest kick host base on avail_ring flags value and get perfermance
> typo
>
>> improved, vhost_zerocopy_callback could do the s
On 02/26/2014 05:23 PM, Michael S. Tsirkin wrote:
> On Wed, Feb 26, 2014 at 03:11:21PM +0800, Jason Wang wrote:
>> > On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
>>> > >On 2014/2/26 13:53, Jason Wang wrote:
>>>> > >>On 02/25/2014 09:57 PM, Michael S. T
On 02/26/2014 02:32 PM, Qin Chuanyu wrote:
On 2014/2/26 13:53, Jason Wang wrote:
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
We used to stop the handling of tx when the number of pending DMAs
exceeds VHOST_MAX_PEND. This is
- Original Message -
> The guest kicks vhost based on the vring flag status, and performance
> improves; vhost_zerocopy_callback could do this in the same way. As
> virtqueue_enable_cb needs one more check after changing the status of the
> avail_ring flags, vhost also does the same thing after vhost_ena
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote:
On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote:
We used to stop the handling of tx when the number of pending DMAs
exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
of both host and guest. But it was too aggressive
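The check being discussed sits in handle_tx() and looks roughly like this (a simplified sketch; the exact arithmetic in drivers/vhost/net.c differs):

/* stop queueing new zerocopy packets once too many DMAs are
 * outstanding; upend_idx runs ahead of done_idx as DMAs are submitted
 * and completed (sketch, not the exact upstream test) */
if (zcopy && (nvq->upend_idx - nvq->done_idx + UIO_MAXIOV) % UIO_MAXIOV
	     >= VHOST_MAX_PEND)
	break;	/* revisit tx once some pending DMAs have completed */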
On 02/25/2014 04:56 PM, Qin Chuanyu wrote:
> On 2014/2/25 16:13, Jason Wang wrote:
>> On 02/25/2014 03:53 PM, Qin Chuanyu wrote:
>>> On 2014/2/25 15:38, Jason Wang wrote:
>>>> On 02/25/2014 02:55 PM, Qin Chuanyu wrote:
>>>>> guest kick vhost
On 02/25/2014 03:53 PM, Qin Chuanyu wrote:
> On 2014/2/25 15:38, Jason Wang wrote:
>> On 02/25/2014 02:55 PM, Qin Chuanyu wrote:
>>> the guest kicks vhost based on the vring flag status and performance
>>> improves;
>>> vhost_zerocopy_callback could do this in the
On 02/25/2014 02:55 PM, Qin Chuanyu wrote:
> The guest kicks vhost based on the vring flag status, and performance
> improves; vhost_zerocopy_callback could do this in the same way. As
> virtqueue_enable_cb needs one more check after changing the status of the
> avail_ring flags, vhost also does the same thing aft
: Michael S. Tsirkin
Cc: Qin Chuanyu
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a0fa5de..3e96e47 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
On 02/24/2014 09:12 PM, Qin Chuanyu wrote:
> With vhost tx zero_copy, the guest NIC might hang when the host holds on
> to an skb delivered by the guest in a socket queue; the case has been solved
> in tun, and the fix is also needed by bridge. This could easily happen when a
> TCP connection in LAST_ACK state occurs between gues
ef);
> + atomic_inc(&ubufs->refcount);
> nvq->upend_idx = (nvq->upend_idx + 1) % UIO_MAXIOV;
> } else {
> msg.msg_control = NULL;
> @@ -785,7 +784,7 @@ static void vhost_net_flush(struct vhost_net *n)
>
t_release(struct inode *inode, struct
> file *f)
> fput(tx_sock->file);
> if (rx_sock)
> fput(rx_sock->file);
> + /* Make sure no callbacks are outstanding */
> + synchronize_rcu_bh();
> /* We do an extra flush before freeing memo
On 02/12/2014 03:38 PM, Qin Chuanyu wrote:
> On 2013/8/30 12:29, Jason Wang wrote:
>> We used to poll the vhost queue before marking DMA as done; this is racy
>> because if the vhost
>> thread is woken up before DMA is marked as done, the
>> signal can
>> be missed.
On 02/12/2014 02:46 PM, Qin Chuanyu wrote:
> On 2014/2/12 13:28, Jason Wang wrote:
>
>> A question: without NAPI weight, could this starve other net devices?
> tap xmits skbs in thread context; the poll function of the physical nic driver
> could be called in softirq context without chan
On 02/12/2014 02:26 PM, Eric Dumazet wrote:
> On Wed, 2014-02-12 at 13:50 +0800, Jason Wang wrote:
>> On 02/12/2014 01:47 PM, Eric Dumazet wrote:
>>> On Wed, 2014-02-12 at 13:28 +0800, Jason Wang wrote:
>>>
>>>> A question: without NAPI weight, could this s
)
Cc: Michael S. Tsirkin
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 9a68409..06268a0 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -525,7 +525,8 @@ static
On 02/12/2014 01:47 PM, Eric Dumazet wrote:
> On Wed, 2014-02-12 at 13:28 +0800, Jason Wang wrote:
>
>> A question: without NAPI weight, could this starve other net devices?
> Not really, as net devices are serviced by softirq handler.
>
>
Yes, then the issue is tun could be
On 02/11/2014 10:25 PM, Qin Chuanyu wrote:
> we could xmit directly instead of going through softirq, gaining improved
> throughput and latency.
> test model: VM-Host-Host, transmit only, with the vhost thread and nic
> interrupt bound to cpu1. netperf does the throughput test and qperf does the
> latency test.
> Ho
> On Wed, Jan 22, 2014 at 12:22 PM, Stefan Hajnoczi wrote:
>> On Tue, Jan 21, 2014 at 04:06:05PM -0200, Alejandro Comisario wrote:
>>
>> CCed Michael Tsirkin and Jason W
On 01/22/2014 11:22 PM, Stefan Hajnoczi wrote:
> On Tue, Jan 21, 2014 at 04:06:05PM -0200, Alejandro Comisario wrote:
>
> CCed Michael Tsirkin and Jason Wang who work on KVM networking.
>
>> Hi guys, we had in the past when using physical servers, several
>> through
g cpu hotplug")
Cc: sta...@vger.kernel.org
Signed-off-by: Asias He
Reviewed-by: Paolo Bonzini
Signed-off-by: Jason Wang
---
Changes from V1:
- Add "Fixes" line
- CC stable
---
drivers/scsi/virtio_scsi.c | 15 ++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff
On 12/17/2013 11:09 AM, Rusty Russell wrote:
> Jason Wang writes:
>> > On 10/28/2013 04:01 PM, Asias He wrote:
>>> >> vqs are freed in virtscsi_freeze but the hotcpu_notifier is not
>>> >> unregistered. We will have a use-after-free usage when th
On 10/28/2013 04:01 PM, Asias He wrote:
> vqs are freed in virtscsi_freeze but the hotcpu_notifier is not
> unregistered. We will have a use-after-free usage when the notifier
> callback is called after virtscsi_freeze.
>
> Signed-off-by: Asias He
> ---
> drivers/scsi/virtio_scsi.c | 15 +
On 11/24/2013 05:22 PM, Razya Ladelsky wrote:
> Hi all,
>
> I am Razya Ladelsky; I work in the IBM Haifa virtualization team, which
> developed Elvis, presented by Abel Gordon at the last KVM forum:
> ELVIS video: https://www.youtube.com/watch?v=9EyweibHfEs
> ELVIS slides: https://drive.google.com/
On 11/04/2013 12:35 PM, Jason Wang wrote:
> On 11/03/2013 04:07 PM, wangsitan wrote:
>> Hi all,
>>
>> A virtual net interface using virtio_net with TSO on may send big TCP
>> packets (up to 64KB). The receiver will get big packets if it's virtio_net,
>&g
On 11/03/2013 04:07 PM, wangsitan wrote:
> Hi all,
>
> A virtual net interface using virtio_net with TSO on may send big TCP packets
> (up to 64KB). The receiver will get big packets if it's virtio_net, too. But
> it will get regular packets (sized according to the MTU) if the receiver is e1000 (which
> re-p
st_priv(sh);
> + int err;
> +
> + err = virtscsi_init(vdev, vscsi);
> + if (err)
> + return err;
> +
> + err = register_hotcpu_notifier(&vscsi->nb);
> + if (err)
> + vdev->config->del_vqs(vdev);
>
> - return virtscsi_ini
On 10/20/2013 04:04 PM, Sahid Ferdjaoui wrote:
> Hi all,
>
> I'm working on creating a large number of TCP connections on a guest;
> The environment is on OpenStack:
>
> Host (dedicated compute node):
> OS/Kernel: Ubuntu/3.2
> Cpus: 24
> Mems: 128GB
>
> Guest (alone on the Host):
> OS/Kernel:
On 09/26/2013 12:30 PM, Jason Wang wrote:
> On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote:
>> > On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote:
>>>> >> > On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
>>>>>> >>> &g
On 09/23/2013 03:16 PM, Michael S. Tsirkin wrote:
> On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote:
>> > On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
>>> > > On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
>>>> > >> Curr
On 09/04/2013 07:59 PM, Daniel Borkmann wrote:
> On 09/04/2013 01:27 PM, Eric Dumazet wrote:
>> On Wed, 2013-09-04 at 03:30 -0700, Eric Dumazet wrote:
>>> On Wed, 2013-09-04 at 14:30 +0800, Jason Wang wrote:
>>>
>>>>> And tcpdump would
On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote:
> On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote:
>> Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
>> upend_idx != done_idx we still set zcopy_used to true and roll back this
>> choice
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers/vhost/net.c
+++ b
check based on Michael's suggestion.
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() return void
vhost_net: use vhost_add_used_and_signal_n() in
vhost_zerocopy_signal_used()
vhost: switch to use vhost_add_used_n()
vhost_net: determine whether or not to use zerocopy a
far fewer used-index
updates and memory barriers.
A 2% performance improvement was seen on the netperf TCP_RR test.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 13 -
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
upend_idx != done_idx we still set zcopy_used to true and roll back this choice
later. This could be avoided by determining zerocopy once, checking all
the conditions up front.
Signed-off-by: Jason Wang
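Roughly, the rework collapses the decision into a single predicate evaluated before the descriptor is consumed (a sketch along the lines of the final code in drivers/vhost/net.c; the posted patch may differ in detail):

/* decide zerocopy exactly once, instead of choosing it and rolling
 * the choice back later */
zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN
	     && (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx
	     && vhost_net_tx_select_zcopy(net);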
into main loop. Tests show about a 5%-10%
improvement in per-cpu throughput for guest tx.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 18 +++---
1 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 8e9dc55..831eb4f 1
Let vhost_add_used() use vhost_add_used_n() to reduce the code
duplication. To avoid the overhead brought by __copy_to_user(), we will use
put_user() when a single used element needs to be added.
Signed-off-by: Jason Wang
---
drivers/vhost/vhost.c | 54 ++--
1
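The single-element fast path described above could look like this (a sketch loosely modeled on __vhost_add_used_n(); error handling trimmed):

/* 'used' points into the guest's used ring, 'heads' holds the
 * elements to publish */
if (count == 1) {
	/* one vring_used_elem: two put_user()s beat a __copy_to_user() */
	if (put_user(heads[0].id, &used->id) ||
	    put_user(heads[0].len, &used->len))
		return -EFAULT;
} else if (__copy_to_user(used, heads, count * sizeof(*used))) {
	return -EFAULT;
}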
We used to poll the vhost queue before marking DMA as done; this is racy because if the vhost
thread is woken up before DMA is marked as done, the signal can
be missed. Fix this by always polling the vhost queue after marking DMA as done.
Signed-off-by: Jason Wang
---
- The patch is needed for stable
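The ordering that matters, sketched from vhost_zerocopy_callback() in drivers/vhost/net.c (details trimmed): the completion must be recorded before the vhost thread can be woken, otherwise a thread that wakes early sees the DMA still pending and the completion goes unsignaled:

/* mark this descriptor's DMA as finished FIRST ... */
vq->heads[ubuf->desc].len = success ?
	VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;

/* ... and only THEN wake the vhost thread; a wakeup landing before
 * the store above could run, find nothing to signal, and go back to
 * sleep with the completion unreported */
vhost_poll_queue(&vq->poll);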
On 09/02/2013 02:30 PM, Jason Wang wrote:
> On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote:
>> > On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote:
>>> >> As Michael pointed out, we used to limit the max pending DMAs to get
>>> >> better cac
On 09/02/2013 01:56 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 30, 2013 at 12:29:22PM +0800, Jason Wang wrote:
>> As Michael pointed out, we used to limit the max pending DMAs to get better
>> cache
>> utilization. But it was not done correctly since it was only done when
On 09/02/2013 01:50 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 30, 2013 at 12:29:18PM +0800, Jason Wang wrote:
>> > We tend to batch the used adding and signaling in vhost_zerocopy_callback(),
>> > which may result in more than 100 used buffers being updated in
>> > v
On 09/02/2013 01:51 PM, Michael S. Tsirkin wrote:
> tweak subj s/returns/return/
>
> On Fri, Aug 30, 2013 at 12:29:17PM +0800, Jason Wang wrote:
>> > None of its callers use its return value, so let it return void.
>> >
>> > Signed-off-by: Jason Wang
&g
On 08/31/2013 12:45 PM, Qin Chuanyu wrote:
> On 2013/8/30 0:08, Anthony Liguori wrote:
>> Hi Qin,
>
>>> By changing the memory copy and notify mechanism, currently
>>> virtio-net with
>>> vhost_net can run on Xen with good performance.
>>
>> I think the key in doing this would be to implement a pro
On 08/31/2013 02:35 AM, Sergei Shtylyov wrote:
> Hello.
>
> On 08/30/2013 08:29 AM, Jason Wang wrote:
>
>> Currently, even if the packet length is smaller than
>> VHOST_GOODCOPY_LEN, if
>> upend_idx != done_idx we still set zcopy_used to true and roll back
>>
On 08/31/2013 12:44 AM, Ben Hutchings wrote:
> On Fri, 2013-08-30 at 12:29 +0800, Jason Wang wrote:
>> We used to poll the vhost queue before marking DMA as done; this is racy because if the vhost
>> thread is woken up before DMA is marked as done, the signal
>> can
>> be
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers/vhost/net.c
+++ b
!= done_idx
to (upend_idx + 1) % UIO_MAXIOV == done_idx.
- Switch to use put_user() in __vhost_add_used_n() if there's only one used element
- Keep the max pending check based on Michael's suggestion.
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() returns void
vhos
far fewer used-index
updates and memory barriers.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 13 -
1 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 280ee66..8a6dd0d 100644
--- a/drivers/vhost/net.c
Let vhost_add_used() use vhost_add_used_n() to reduce the code duplication.
Signed-off-by: Jason Wang
---
drivers/vhost/vhost.c | 54 ++--
1 files changed, 12 insertions(+), 42 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
upend_idx != done_idx we still set zcopy_used to true and roll back this choice
later. This could be avoided by determining zerocopy once, checking all
the conditions up front.
Signed-off-by: Jason Wang
---
drivers
We used to poll the vhost queue before marking DMA as done; this is racy because if the vhost
thread is woken up before DMA is marked as done, the signal can
be missed. Fix this by always polling the vhost queue after marking DMA as done.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |9
into main loop. Tests show about a 5%-10%
improvement in per-cpu throughput for guest tx, but a 5% drop in per-cpu
transaction rate for a single-session TCP_RR.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 15 ---
1 files changed, 4 insertions(+), 11 deletions(-)
diff --
On 08/25/2013 07:53 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 23, 2013 at 04:55:49PM +0800, Jason Wang wrote:
>> On 08/20/2013 10:48 AM, Jason Wang wrote:
>>> On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
>>>>> On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jas
On 08/20/2013 10:48 AM, Jason Wang wrote:
> On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
>> > On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
>>> >> We used to limit the max pending DMAs to prevent the guest from pinning too
>>> >> many
>
On 08/20/2013 10:33 AM, Jason Wang wrote:
> On 08/16/2013 05:54 PM, Michael S. Tsirkin wrote:
>> On Fri, Aug 16, 2013 at 01:16:26PM +0800, Jason Wang wrote:
>>>> Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to
>>>> vhost_add_used_and_signal(
On 08/16/2013 06:02 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:30PM +0800, Jason Wang wrote:
>> We used to limit the max pending DMAs to prevent the guest from pinning too many
>> pages. But this could be removed since:
>>
>> - We have the sk_wmem_alloc c
On 08/16/2013 06:00 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:29PM +0800, Jason Wang wrote:
>> We used to poll the vhost queue before marking DMA as done; this is racy because if the vhost
>> thread is woken up before DMA is marked as done, the signal
>>
On 08/16/2013 05:56 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:27PM +0800, Jason Wang wrote:
>> > Let vhost_add_used() use vhost_add_used_n() to reduce the code
>> > duplication.
>> >
>> > Signed-off-by: Jason Wang
> Does compiler
On 08/16/2013 05:54 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 16, 2013 at 01:16:26PM +0800, Jason Wang wrote:
>> > Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to
>> > vhost_add_used_and_signal(). With the patch we will call at most 2 times
>>
Switch to use vhost_add_used_and_signal_n() to avoid multiple calls to
vhost_add_used_and_signal(). With the patch we will call it at most 2 times
(considering done_idx wrap-around), compared to N times without this patch.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c | 13 -
1 files
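The "at most 2 times" comes from the used ring wrapping: a batch of N completions is flushed in one or two contiguous chunks (a sketch modeled on vhost_add_used_n()):

/* flush 'count' used elements, splitting once at the ring boundary */
start = vq->last_used_idx % vq->num;
n = vq->num - start;
if (n < count) {
	r = __vhost_add_used_n(vq, heads, n);	/* chunk 1: up to ring end */
	if (r < 0)
		return r;
	heads += n;
	count -= n;
}
return __vhost_add_used_n(vq, heads, count);	/* chunk 2 (or the only one) */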
We used to poll the vhost queue before marking DMA as done; this is racy because if the vhost
thread is woken up before DMA is marked as done, the signal can
be missed. Fix this by always polling the vhost queue after marking DMA as done.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |9
Let vhost_add_used() use vhost_add_used_n() to reduce the code duplication.
Signed-off-by: Jason Wang
---
drivers/vhost/vhost.c | 43 ++-
1 files changed, 2 insertions(+), 41 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if
upend_idx != done_idx we still set zcopy_used to true and roll back this choice
later. This could be avoided by determining zerocopy once, checking all
the conditions up front.
Signed-off-by: Jason Wang
---
drivers
oming from the guest. The guest can easily exceed the limitation.
- We've already checked upend_idx != done_idx and switched to non-zerocopy then. So
even if all vq->heads were used, we can still do the packet transmission.
So remove this check completely.
Signed-off-by: Jason Wang
---
driver
None of its callers use its return value, so let it return void.
Signed-off-by: Jason Wang
---
drivers/vhost/net.c |5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 969a859..280ee66 100644
--- a/drivers/vhost/net.c
+++ b
Hi all:
This series tries to unify and simplify the vhost code, especially for zerocopy.
Please review.
Thanks
Jason Wang (6):
vhost_net: make vhost_zerocopy_signal_used() returns void
vhost_net: use vhost_add_used_and_signal_n() in
vhost_zerocopy_signal_used()
vhost: switch to use