From: Yuval Shaia
Date: Wed, 3 Apr 2019 11:20:45 +0300
> This header is not in use - remove it.
>
> Signed-off-by: Yuval Shaia
Applied to net-next
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundatio
From: Yuval Shaia
Date: Wed, 3 Apr 2019 12:10:13 +0300
> Signed-off-by: Yuval Shaia
Applied to net-next
From: Si-Wei Liu
Date: Mon, 8 Apr 2019 19:45:27 -0400
> When a netdev appears through hot plug then gets enslaved by a failover
> master that is already up and running, the slave will be opened
> right away after getting enslaved. Today there's a race that userspace
> (udev) may fail to rename t
From: Jason Wang
Date: Tue, 9 Apr 2019 12:10:25 +0800
> We used to accept a zero size iova range, which will lead to an infinite loop
> in translate_desc(). Fix this by failing the request in this case.
>
> Reported-by: syzbot+d21e6e297322a900c...@syzkaller.appspotmail.com
> Fixes: 6b1e6cc7 ("vhost
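The range check described above can be sketched in userspace C (names are illustrative, not the actual vhost code): reject an IOTLB message whose range is empty or wraps around, so the translation loop can never spin forever over a zero-size region.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the fix: fail the request for an empty or
 * wrapping iova range instead of letting translate_desc() loop forever. */
static bool iotlb_range_valid(uint64_t start, uint64_t size)
{
    if (size == 0)
        return false;           /* zero-size range: fail the request */
    if (start + size - 1 < start)
        return false;           /* range wraps past the end of the space */
    return true;
}
```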
From: Stefano Garzarella
Date: Fri, 10 May 2019 14:58:37 +0200
> @@ -827,12 +827,20 @@ static bool virtio_transport_close(struct vsock_sock
> *vsk)
>
> void virtio_transport_release(struct vsock_sock *vsk)
> {
> + struct virtio_vsock_sock *vvs = vsk->trans;
> + struct virtio_vsock_bu
From: Jason Wang
Date: Mon, 13 May 2019 01:27:45 -0400
> Vhost log dirty pages directly to a userspace bitmap through GUP and
> kmap_atomic() since kernel doesn't have a set_bit_to_user()
> helper. This will cause issues for the arch that has virtually tagged
> caches. The way to fix is to keep u
From: Stefano Garzarella
Date: Fri, 17 May 2019 16:45:43 +0200
> When the socket is released, we should free all packets
> queued in the per-socket list in order to avoid a memory
> leak.
>
> Signed-off-by: Stefano Garzarella
Applied and queued up for -stable.
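The fix above amounts to draining the per-socket list on release; a minimal userspace sketch (hypothetical types, not the vsock structures):

```c
#include <stdlib.h>

/* Sketch: when the socket goes away, pop and free every packet still
 * linked on its per-socket list so nothing leaks. */
struct pkt {
    struct pkt *next;
};

struct pkt_list {
    struct pkt *head;
    int         len;
};

static void release_drain(struct pkt_list *q)
{
    while (q->head) {
        struct pkt *p = q->head;
        q->head = p->next;   /* unlink first ... */
        free(p);             /* ... then free the packet */
        q->len--;
    }
}
```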
From: "Jorge E. Moreira"
Date: Thu, 16 May 2019 13:51:07 -0700
> Avoid a race in which static variables in net/vmw_vsock/af_vsock.c are
> accessed (while handling interrupts) before they are initialized.
...
> Fixes: 22b5c0b63f32 ("vsock/virtio: fix kernel panic after device hot-unplug")
> Cc: S
From: Jason Wang
Date: Fri, 17 May 2019 00:29:48 -0400
> Hi:
>
> This series tries to prevent guest-triggerable CPU hogging through
> the vhost kthread. This is done by introducing and checking the weight
> after each request. The patch has been tested with the reproducer of
> vsock and virtio-net. Onl
From: Stefano Garzarella
Date: Tue, 28 May 2019 12:56:20 +0200
> @@ -68,7 +68,13 @@ struct virtio_vsock {
>
> static struct virtio_vsock *virtio_vsock_get(void)
> {
> - return the_virtio_vsock;
> + struct virtio_vsock *vsock;
> +
> + mutex_lock(&the_virtio_vsock_mutex);
> + vs
From: Jason Wang
Date: Fri, 24 May 2019 04:12:12 -0400
> This series tries to access virtqueue metadata through kernel virtual
> addresses instead of the copy_user() friends, since those carry too much
> overhead from checks, spec barriers or even hardware feature
> toggling like SMAP. This is done throug
From: "Michael S. Tsirkin"
Date: Thu, 30 May 2019 14:13:28 -0400
> On Thu, May 30, 2019 at 11:07:30AM -0700, David Miller wrote:
>> From: Jason Wang
>> Date: Fri, 24 May 2019 04:12:12 -0400
>>
>> > This series tries to access virtqueue metadata throug
From: Stefano Garzarella
Date: Fri, 31 May 2019 15:39:51 +0200
> @@ -434,7 +434,9 @@ void virtio_transport_set_buffer_size(struct vsock_sock
> *vsk, u64 val)
> if (val > vvs->buf_size_max)
> vvs->buf_size_max = val;
> vvs->buf_size = val;
> + spin_lock_bh(&vvs->rx_l
From: Sunil Muthuswamy
Date: Thu, 13 Jun 2019 03:52:27 +
> The current vsock code for removal of socket from the list is both
> subject to race and inefficient. It takes the lock, checks whether
> the socket is in the list, drops the lock and if the socket was on the
> list, deletes it from t
From: Jason Wang
Date: Mon, 17 Jun 2019 05:20:54 -0400
> Vhost_net was known to suffer from HOL[1] issues which are not easy to
> fix. Several downstreams disable the feature by default. What's more,
> the datapath was split and datacopy path got the support of batching
> and XDP support recently w
From: Stefano Garzarella
Date: Fri, 5 Jul 2019 13:04:51 +0200
> During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
> before registering the driver", Stefan pointed out some possible issues
> in the .probe() and .remove() callbacks of the virtio-vsock driver.
...
Series ap
From: Arnd Bergmann
Date: Tue, 30 Jul 2019 21:50:28 +0200
> Each of these drivers has a copy of the same trivial helper function to
> convert the pointer argument and then call the native ioctl handler.
>
> We now have a generic implementation of that, so use it.
>
> Acked-by: Greg Kroah-Hartma
From: Jason Wang
Date: Wed, 7 Aug 2019 03:06:08 -0400
> This series tries to fix several issues introduced by the metadata
> acceleration series. Please review.
...
My impression is that patch #7 will be changed to use spinlocks so there
will be a v5.
From: Jason Wang
Date: Mon, 12 Aug 2019 10:44:51 +0800
> On 2019/8/11 1:52 AM, Michael S. Tsirkin wrote:
>> At this point how about we revert
>> 7f466032dc9e5a61217f22ea34b2df932786bbfc
>> for this release, and then re-apply a corrected version
>> for the next one?
>
> If possible, consider we've
From: "Michael S. Tsirkin"
Date: Tue, 3 Sep 2019 03:38:16 -0400
> The comment we have is just repeating what the code does.
> Include the *reason* for the condition instead.
>
> Cc: Stefano Garzarella
> Signed-off-by: Michael S. Tsirkin
Applied.
From: Jason Wang
Date: Fri, 6 Sep 2019 18:02:35 +0800
> On 2019/9/5 9:59 PM, Jason Gunthorpe wrote:
>> I think you should apply the revert this cycle and rebase the other
>> patch for next..
>>
>> Jason
>
> Yes, the plan is to revert in this release cycle.
Then you should reset patch #1 all by i
From: Jason Wang
Date: Tue, 23 Jan 2018 17:27:25 +0800
> We used to call mutex_lock() in vhost_dev_lock_vqs(), which tries to
> hold the mutexes of all virtqueues. This may confuse lockdep into reporting a
> possible deadlock because of trying to hold locks belonging to the same
> class. Switch to use mutex_lock_
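The idea behind the fix can be sketched in userspace with POSIX mutexes (illustrative only): take all per-virtqueue locks in a fixed index order. In the kernel the same loop passes the index as the lockdep subclass, `mutex_lock_nested(&vq->mutex, i)`, so each nesting level is treated as a distinct class rather than a self-deadlock.

```c
#include <pthread.h>

#define NVQS 4

/* Userspace sketch: per-virtqueue locks taken in ascending index
 * order; the kernel version uses mutex_lock_nested(&vq->mutex, i)
 * to give lockdep a distinct nesting level per lock. */
static pthread_mutex_t vq_lock[NVQS] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

static void lock_all_vqs(void)
{
    for (int i = 0; i < NVQS; i++)
        pthread_mutex_lock(&vq_lock[i]);    /* always ascending order */
}

static void unlock_all_vqs(void)
{
    for (int i = NVQS - 1; i >= 0; i--)
        pthread_mutex_unlock(&vq_lock[i]);
}
```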
From: "Michael S. Tsirkin"
Date: Wed, 24 Jan 2018 23:46:19 +0200
> On Wed, Jan 24, 2018 at 04:38:30PM -0500, David Miller wrote:
>> From: Jason Wang
>> Date: Tue, 23 Jan 2018 17:27:25 +0800
>>
>> > We used to call mutex_lock() in vhost_dev_lock_vqs()
From: Jason Wang
Date: Thu, 25 Jan 2018 22:03:52 +0800
> We don't stop the device before resetting the owner; this means we could try to
> serve any virtqueue kick before resetting dev->worker. This will result in a
> warning since the work was pending at the llist during owner resetting. Fix
> this by stopping the device dur
From: Jason Wang
Date: Wed, 28 Feb 2018 18:20:04 +0800
> We try to disable NAPI to prevent a single XDP TX queue from being used by
> multiple cpus. But we don't check whether the device is up (NAPI enabled);
> this could result in a stall because of an infinite wait in
> napi_disable(). Fix this by checking dev
From: Jason Wang
Date: Fri, 2 Mar 2018 17:29:14 +0800
> XDP_REDIRECT support for mergeable buffer was removed since commit
> 7324f5399b06 ("virtio_net: disable XDP_REDIRECT in receive_mergeable()
> case"). This is because we don't reserve enough tailroom for struct
> skb_shared_info which breaks
From: Jason Wang
Date: Mon, 5 Mar 2018 10:43:41 +0800
>
>
> On 2018/03/05 07:38, David Miller wrote:
>> From: Jason Wang
>> Date: Fri, 2 Mar 2018 17:29:14 +0800
>>
>>> XDP_REDIRECT support for mergeable buffer was removed since commit
>>> 7324f
From: Vaibhav Murkute
Date: Fri, 9 Mar 2018 08:26:03 +0530
> Fixed a coding style issue.
>
> Signed-off-by: Vaibhav Murkute
Applied to net-next, thanks.
From: Jason Wang
Date: Fri, 9 Mar 2018 14:50:31 +0800
> This small series tries to fix several bugs of ptr_ring usage in
> vhost_net. Please review.
Series applied, thanks Jason.
From: Jason Wang
Date: Mon, 26 Mar 2018 16:10:23 +0800
> We try to hold TX virtqueue mutex in vhost_net_rx_peek_head_len()
> after RX virtqueue mutex is held in handle_rx(). This requires an
> appropriate lock nesting notation to calm down deadlock detector.
>
> Fixes: 0308813724606 ("vhost_net:
From: Jason Wang
Date: Tue, 27 Mar 2018 20:50:52 +0800
> We tried to remove the vq poll from the wait queue, but did not check whether
> or not it was in a list before. This will lead to a double free. Fix
> this by switching to vhost_poll_stop(), which zeros poll->wqh after
> removing the poll from the waitqueu
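Why zeroing the pointer matters can be shown with a tiny userspace sketch (illustrative types, not the vhost structures): NULLing the waitqueue pointer after removal makes a second stop call a harmless no-op instead of a second list removal.

```c
#include <stddef.h>

/* Sketch: idempotent stop. The counter stands in for the real
 * wait-queue removal so we can observe how often it happens. */
struct waitqueue { int unused; };

struct poll_entry {
    struct waitqueue *wqh;
};

static int removals;   /* counts how often we really unlink */

static void poll_stop(struct poll_entry *p)
{
    if (p->wqh) {
        removals++;      /* stands in for the real list removal */
        p->wqh = NULL;   /* makes repeated calls safe */
    }
}
```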
From: "Michael S. Tsirkin"
Date: Thu, 29 Mar 2018 15:48:22 +0300
> On Thu, Mar 29, 2018 at 04:00:04PM +0800, Jason Wang wrote:
>> Vq log_base is the userspace address of bitmap which has nothing to do
>> with IOTLB. So it needs to be validated unconditionally otherwise we
>> may try use 0 as log_
From: Vladislav Yasevich
Date: Mon, 2 Apr 2018 09:40:01 -0400
> Now that we have SCTP offload capabilities in the kernel, we can add
> them to virtio as well. First step is SCTP checksum.
Vlad, the net-next tree is closed, please resubmit this when the merge window
is over and the net-next tre
From: David Ahern
Date: Wed, 4 Apr 2018 11:21:54 -0600
> It is a netdev so there is no reason to have a separate ip command to
> inspect it. 'ip link' is the right place.
I agree on this.
What I really don't understand still is the use case... really.
So there are control netdevs, what exactly
From: David Ahern
Date: Wed, 4 Apr 2018 11:37:52 -0600
> Networking vendors have out of tree kernel modules. Those modules use a
> netdev (call it a master netdev, a control netdev, cpu port, whatever)
> to pull packets from the ASIC and deliver to virtual netdevices
> representing physical ports
From: Siwei Liu
Date: Fri, 6 Apr 2018 19:32:05 -0700
> And I assume everyone here understands the use case for live
> migration (in the context of providing cloud service) is very
> different, and we have to hide the netdevs. If not, I'm more than
> happy to clarify.
I think you still need to cl
From: haibinzhang(张海斌)
Date: Fri, 6 Apr 2018 08:22:37 +
> handle_tx will delay rx for tens or even hundreds of milliseconds when tx is busy
> polling udp packets with small lengths (e.g. 1-byte udp payload), because setting
> VHOST_NET_WEIGHT takes into account only sent bytes but no single packet
From: haibinzhang(张海斌)
Date: Mon, 9 Apr 2018 07:22:17 +
> handle_tx will delay rx for tens or even hundreds of milliseconds when tx is busy
> polling udp packets with small lengths (e.g. 1-byte udp payload), because setting
> VHOST_NET_WEIGHT takes into account only sent bytes but no single packet
From: Jason Wang
Date: Tue, 10 Apr 2018 14:40:10 +0800
> On 2018/04/10 13:26, Stefan Hajnoczi wrote:
>> v2:
>> * Rewrote the conditional to make the vq access check clearer [Linus]
>> * Added Patch 2 to make the return type consistent and harder to misuse
>> * [Linus]
>>
>> The first patch
From: "Michael S. Tsirkin"
Date: Wed, 11 Apr 2018 16:24:02 +0300
> On Wed, Apr 11, 2018 at 10:35:39AM +0800, Stefan Hajnoczi wrote:
>> v3:
>> * Rebased onto net/master and resolved conflict [DaveM]
>>
>> v2:
>> * Rewrote the conditional to make the vq access check clearer [Linus]
>> * Added P
From: Jason Wang
Date: Fri, 13 Apr 2018 14:58:25 +0800
> We tend to batch submitting packets during XDP_TX. This requires us to
> kick the virtqueue after a batch; we tried to do it through
> xdp_do_flush_map(), which only makes sense for devmap, not XDP_TX. So
> explicitly kick the virtqueue in this cas
From: Mikulas Patocka
Date: Wed, 18 Apr 2018 12:44:25 -0400 (EDT)
> The structure net_device is followed by arbitrary driver-specific data
> (accessible with the function netdev_priv). And for virtio-net, these
> driver-specific data must be in DMA memory.
And we are saying that this assumptio
From: Eric Dumazet
Date: Wed, 18 Apr 2018 09:51:25 -0700
> I suggest that virtio_net clearly identifies which part needs a specific
> allocation
> and does its itself, instead of abusing the netdev_priv storage.
>
> Ie use a pointer to a block of memory, allocated by virtio_net, for
> virtio_n
From: Stephen Hemminger
Date: Fri, 20 Apr 2018 08:28:02 -0700
> On Thu, 19 Apr 2018 18:42:04 -0700
> Sridhar Samudrala wrote:
>
>> Use the registration/notification framework supported by the generic
>> failover infrastructure.
>>
>> Signed-off-by: Sridhar Samudrala
>
> Do what you want to o
From: "Michael S. Tsirkin"
Date: Fri, 20 Apr 2018 18:43:54 +0300
> On Fri, Apr 20, 2018 at 08:28:02AM -0700, Stephen Hemminger wrote:
>> Plus, DPDK is now dependent on existing model.
>
> DPDK does the kernel bypass thing, doesn't it? Why does the kernel care?
+1
From: Paolo Abeni
Date: Tue, 24 Apr 2018 10:34:36 +0200
> Similar to commit a2ac99905f1e ("vhost-net: set packet weight of
> tx polling to 2 * vq size"), we need a packet-based limit for
> handler_rx, too - elsewhere, under rx flood with small packets,
> tx can be delayed for a very long time, ev
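The packet-based limit described in this thread can be sketched as a small budget check (constants and names are illustrative, not the vhost code): the handler keeps both a byte budget and a packet budget and yields the worker when either is exhausted, so a flood of tiny packets can no longer starve the other direction.

```c
#include <stdbool.h>
#include <stddef.h>

#define NET_WEIGHT     0x80000  /* byte budget per handler run */
#define NET_PKT_WEIGHT 256      /* packet budget per handler run */

/* Sketch: stop the handler when either the byte or the packet
 * budget is used up. */
static bool exceeds_weight(int pkts, size_t total_len)
{
    return total_len >= NET_WEIGHT || pkts >= NET_PKT_WEIGHT;
}
```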
From: "Michael S. Tsirkin"
Date: Fri, 27 Apr 2018 19:02:05 +0300
> There's a 32 bit hole just after type. It's best to
> give it a name; this way the compiler is forced to initialize
> it with the rest of the structure.
>
> Reported-by: Kevin Easton
> Signed-off-by: Michael S. Tsirkin
Who applied thi
From: "Michael S. Tsirkin"
Date: Fri, 27 Apr 2018 19:02:05 +0300
> There's a 32 bit hole just after type. It's best to
> give it a name; this way the compiler is forced to initialize
> it with the rest of the structure.
>
> Reported-by: Kevin Easton
> Signed-off-by: Michael S. Tsirkin
Michael, will y
From: "Michael S. Tsirkin"
Date: Tue, 1 May 2018 20:19:19 +0300
> On Tue, May 01, 2018 at 11:28:22AM -0400, David Miller wrote:
>> From: "Michael S. Tsirkin"
>> Date: Fri, 27 Apr 2018 19:02:05 +0300
>>
>> > There's a 32 bit hole just
From: "Michael S. Tsirkin"
Date: Wed, 2 May 2018 16:36:37 +0300
> Ouch. True - and in particular the 32 bit ABI on 64 bit kernels doesn't
> work at all. Hmm. It's relatively new and maybe there aren't any 32 bit
> users yet. Thoughts?
If it's been in a released kernel version, we really aren't
From: "Michael S. Tsirkin"
Date: Wed, 2 May 2018 17:19:05 +0300
> This reverts commit 93c0d549c4c5a7382ad70de6b86610b7aae57406.
>
> Unfortunately the padding will break 32 bit userspace.
> Ouch. Need to add some compat code, revert for now.
>
> Signed-off-by: Michael S. Tsirkin
Applied, thank
From: Jason Wang
Date: Tue, 22 May 2018 11:44:27 +0800
> Please review the patches that try to fix several issues of
> virtio-net mergeable XDP.
>
> Changes from V1:
> - check against 1 before decreasing instead of resetting to 1
> - typo fixes
Series applied and queued up for -stable.
From: Jason Wang
Date: Tue, 22 May 2018 19:58:57 +0800
> DaeRyong Jeong reports a race between vhost_dev_cleanup() and
> vhost_process_iotlb_msg():
>
> Thread interleaving:
> CPU0 (vhost_process_iotlb_msg)       CPU1 (vhost_dev_cleanup)
> (In the case of both VHOST_IOTLB_UPDATE
From: Jason Wang
Date: Tue, 22 May 2018 19:58:57 +0800
> DaeRyong Jeong reports a race between vhost_dev_cleanup() and
> vhost_process_iotlb_msg():
>
> Thread interleaving:
> CPU0 (vhost_process_iotlb_msg)       CPU1 (vhost_dev_cleanup)
> (In the case of both VHOST_IOTLB_UPDATE
From: Sridhar Samudrala
Date: Thu, 24 May 2018 09:55:12 -0700
> The main motivation for this patch is to enable cloud service providers
> to provide an accelerated datapath to virtio-net enabled VMs in a
> transparent manner with no/minimal guest userspace changes. This also
> enables hypervisor
From: Jason Wang
Date: Tue, 29 May 2018 14:18:19 +0800
> After commit e2b3b35eb989 ("vhost_net: batch used ring update in rx"),
> we tend to batch updating used heads. But it doesn't flush batched
> heads before trying to do busy polling, this will cause vhost to wait
> for guest TX which waits f
From: Wei Yongjun
Date: Thu, 31 May 2018 02:05:07 +
> Fix to return a negative error code from the failover create fail error
> handling case instead of 0, as done elsewhere in this function.
>
> Fixes: ba5e4426e80e ("virtio_net: Extend virtio to use VF datapath when
> available")
> Signed-
From: Jason Wang
Date: Fri, 8 Jun 2018 11:50:42 +0800
> This feature bit is duplicated with VIRTIO_F_ANY_LAYOUT; this means if
> userspace wants to enable VIRTIO_F_ANY_LAYOUT,
> VHOST_NET_F_VIRTIO_NET_HDR will be implied too. This is wrong and will
> break networking. Fix this by safely remo
From: Jason Wang
Date: Thu, 21 Jun 2018 13:11:31 +0800
> Sock will be NULL if we pass -1 to vhost_net_set_backend(), but when
> we meet errors during ubuf allocation, the code does not check for
> NULL before calling sockfd_put(); this will lead to a NULL
> dereference. Fix this by checking the sock point
From: Toshiaki Makita
Date: Tue, 3 Jul 2018 16:31:30 +0900
> Under heavy load vhost tx busypoll tends not to suppress vq kicks, which
> causes poor guest tx performance. The detailed scenario is described in
> commitlog of patch 2.
> Rx seems not to have that serious problem, but for consistency
From: Tiwei Bie
Date: Wed, 11 Jul 2018 10:27:06 +0800
> Hello everyone,
>
> This patch set implements packed ring support in virtio driver.
>
> Some functional tests have been done with Jason's
> packed ring implementation in vhost:
>
> https://lkml.org/lkml/2018/7/3/33
>
> Both of ping and n
From: "Michael S. Tsirkin"
Date: Mon, 16 Jul 2018 15:49:04 +0300
> I'm not sure I understand this approach. Packed ring is just an
> optimization. What value is there in merging it if it does not help
> speed?
So it seems that both Tiwei's and Jason's packed patch sets are kind
of in limbo due
From: Jason Wang
Date: Fri, 20 Jul 2018 08:15:12 +0800
> This series implements batch updating of the used ring for TX. This helps to
> reduce the cache contention on the used ring. The idea is to first split the
> datacopy path from zerocopy, and do batching only for datacopy. This
> is because zerocopy had alrea
From: "Michael S. Tsirkin"
Date: Sun, 22 Jul 2018 17:37:05 +0300
> On Fri, Jul 20, 2018 at 08:15:12AM +0800, Jason Wang wrote:
>> Hi:
>>
>> This series implements batch updating of the used ring for TX. This helps to
>> reduce the cache contention on the used ring. The idea is to first split the
>> datacopy path
From: Toshiaki Makita
Date: Mon, 23 Jul 2018 23:36:03 +0900
> From: Toshiaki Makita
>
> Add some ethtool stat items useful for performance analysis.
>
> Signed-off-by: Toshiaki Makita
Michael and Jason, any objections to these new stats?
From: "Michael S. Tsirkin"
Date: Wed, 25 Jul 2018 12:40:12 +0300
> On Mon, Jul 23, 2018 at 11:36:03PM +0900, Toshiaki Makita wrote:
>> From: Toshiaki Makita
>>
>> Add some ethtool stat items useful for performance analysis.
>>
>> Signed-off-by: Toshiaki Makita
>
> Series:
>
> Acked-by: Mich
From: xiangxia.m@gmail.com
Date: Sat, 21 Jul 2018 11:03:58 -0700
> From: Tonghao Zhang
>
> These patches improve the guest receive performance.
> On the handle_tx side, we poll the sock receive queue
> at the same time. handle_rx does that in the same way.
>
> For more performance report, see
From: Jason Wang
Date: Tue, 31 Jul 2018 17:43:38 +0800
> Commit 5b8f3c8d30a6 ("virtio_net: Add XDP related stats") tries to
> count TX XDP stats in virtnet_receive(). This will cause several
> issues:
>
> - virtnet_xdp_sq() was called without checking whether or not XDP is
> set. This may caus
From: Jason Wang
Date: Tue, 31 Jul 2018 17:43:39 +0800
> We don't maintain tx counters in rx stats any more. There's no need
> for an extra container of rq stats.
>
> Cc: Toshiaki Makita
> Signed-off-by: Jason Wang
Applied.
From: Jason Wang
Date: Fri, 3 Aug 2018 15:04:51 +0800
> So fix this by introducing a new message type with an explicit
> 32bit reserved field after type like:
>
> struct vhost_msg_v2 {
> int type;
> __u32 reserved;
Please use fixed sized types consistently. Use 's32' instead of
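The reviewer's point can be sketched with userspace stand-ins for the kernel's `__s32`/`__u32` (an illustrative struct, not the actual uapi definition): with fixed-width fields and the 32-bit hole after `type` named explicitly, the prefix has the same 8-byte layout for 32- and 64-bit userspace.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch: fixed-width fields give an identical layout on every ABI,
 * and naming the padding means it gets initialized and copied. */
struct msg_v2_sketch {
    int32_t  type;
    uint32_t reserved;   /* names the 32-bit hole after 'type' */
    /* the union of message payloads follows in the real struct */
};
```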
From: "Gustavo A. R. Silva"
Date: Sat, 4 Aug 2018 21:42:05 -0500
> In preparation to enabling -Wimplicit-fallthrough, mark switch cases
> where we are expecting to fall through.
>
> Addresses-Coverity-ID: 1402059 ("Missing break in switch")
> Addresses-Coverity-ID: 1402060 ("Missing break in swi
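The annotation style being added can be shown with a small example (illustrative function, not from the patch): an explicit fall-through comment (or the `fallthrough;` macro in current kernels) marks the spots where omitting `break` is deliberate, so `-Wimplicit-fallthrough` flags only the genuine bugs.

```c
/* Sketch: type 2 deliberately falls through into type 1 handling. */
static int classify(int type)
{
    int flags = 0;

    switch (type) {
    case 2:
        flags |= 4;
        /* fall through */
    case 1:
        flags |= 1;
        break;
    default:
        flags = -1;
    }
    return flags;
}
```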
From: Jason Wang
Date: Mon, 6 Aug 2018 11:17:47 +0800
> We used to have a message like:
>
> struct vhost_msg {
> int type;
> union {
> struct vhost_iotlb_msg iotlb;
> __u8 padding[64];
> };
> };
>
> Unfortunately, there will be a hole of 32bit in 64bi
From: Andrei Vagin
Date: Mon, 6 Aug 2018 21:14:54 -0700
> This patch adds an ability to call __netif_set_xps_queue under
> cpu_read_lock().
Please don't add conditional locking using a boolean argument.
Simply wrap calls to __netif_set_xps_queue() with cpu read
lock held by the caller or simil
From: Jason Wang
Date: Wed, 8 Aug 2018 11:43:04 +0800
> We need to reset the metadata cache during new IOTLB initialization,
> otherwise stale pointers to the previous IOTLB may still be accessed,
> which will lead to a use-after-free.
>
> Reported-by: syzbot+c51e6736a1bf614b3...@syzkaller.appspotmail.
From: YueHaibing
Date: Mon, 13 Aug 2018 14:13:15 +0800
> Remove duplicated include linux/netdevice.h
>
> Signed-off-by: YueHaibing
Applied.
From: Jason Wang
Date: Fri, 24 Aug 2018 16:53:13 +0800
> We don't wake up the virtqueue if the first byte of the pending iova range
> is the last byte of the range we just updated. This will lead a
> virtqueue to wait for IOTLB updating forever. Fix this by correcting the
> check and waking up the virtqu
From: "Michael S. Tsirkin"
Date: Mon, 3 Sep 2018 23:11:11 -0400
> On Tue, Sep 04, 2018 at 11:08:40AM +0800, Jason Wang wrote:
>>
>>
>> On 2018/09/04 10:22, Michael S. Tsirkin wrote:
>> > On Mon, Sep 03, 2018 at 08:59:13PM +0300, Gleb Fotengauer-Malinovskiy
>> > wrote:
>> > > The _IOC_READ fla
From: xiangxia.m@gmail.com
Date: Sun, 9 Sep 2018 04:51:21 -0700
> From: Tonghao Zhang
>
> These patches improve the guest receive performance.
> On the handle_tx side, we poll the sock receive queue
> at the same time. handle_rx does that in the same way.
>
> For more performance report, see
From: Jason Wang
Date: Wed, 12 Sep 2018 11:16:58 +0800
> This series tries to batch submitting packets to underlayer socket
> through msg_control during sendmsg(). This is done by:
...
Series applied, thanks Jason.
From: "Michael S. Tsirkin"
Date: Thu, 13 Sep 2018 14:02:13 -0400
> On Thu, Sep 13, 2018 at 09:28:19AM -0700, David Miller wrote:
>> From: Jason Wang
>> Date: Wed, 12 Sep 2018 11:16:58 +0800
>>
>> > This series tries to batch submitting packets to un
From: Jason Wang
Date: Thu, 13 Sep 2018 13:35:45 +0800
> Toggle tx napi through a bit in tx-frames.
This is not what the code implements as the interface any more.
Please fix the commit message to match the code.
Thanks.
From: xiangxia.m@gmail.com
Date: Tue, 25 Sep 2018 05:36:48 -0700
> From: Tonghao Zhang
>
> These patches improve the guest receive performance.
> On the handle_tx side, we poll the sock receive queue
> at the same time. handle_rx does that in the same way.
>
> For more performance report, see
From: Jason Wang
Date: Tue, 9 Oct 2018 10:06:26 +0800
> Implement ethtool .set_coalesce (-C) and .get_coalesce (-c) handlers.
> Interrupt moderation is currently not supported, so these accept and
> display the default settings of 0 usec and 1 frame.
>
> Toggle tx napi through setting tx-frames
From: Ake Koomsin
Date: Wed, 17 Oct 2018 19:44:12 +0900
> Commit 713a98d90c5e ("virtio-net: serialize tx routine during reset")
> introduces netif_tx_disable() after netif_device_detach() in order to
> avoid use-after-free of tx queues. However, there are two issues.
>
> 1) Its operation is redu
From: Sebastian Andrzej Siewior
Date: Thu, 18 Oct 2018 10:43:13 +0200
> on 32bit, lockdep notices that virtnet_open() and refill_work() invoke
> try_fill_recv() from process context while virtnet_receive() invokes the
> same function from BH context. The problem that the seqcounter within
> u64_s
From: Jason Wang
Date: Tue, 30 Oct 2018 14:10:49 +0800
> The idx in vhost_vring_ioctl() was controlled by userspace, hence a
> potential exploitation of the Spectre variant 1 vulnerability.
>
> Fixing this by sanitizing idx before using it to index d->vqs.
>
> Cc: Michael S. Tsirkin
> Cc: Josh
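What sanitizing the index achieves can be sketched in userspace (a simplified stand-in for the kernel's `array_index_nospec()`, which is arch-tuned; this is only the idea): compute a branchless mask that is all-ones when `idx < size` and zero otherwise, so even a misspeculated out-of-range index selects element 0 rather than attacker-chosen memory.

```c
#include <stdint.h>

/* Sketch: branchless clamp of an untrusted index. The arithmetic
 * right shift of a negative value is assumed to sign-extend, as it
 * does on gcc/clang. */
static uint32_t index_nospec(uint32_t idx, uint32_t size)
{
    uint32_t mask = (uint32_t)(((int64_t)idx - (int64_t)size) >> 63);
    return idx & mask;   /* idx if in range, 0 otherwise */
}
```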
From: Jason Wang
Date: Thu, 15 Nov 2018 17:43:09 +0800
> We do a get_page(), which involves an atomic operation. This patch tries
> to mitigate the per-packet atomic operation by maintaining a reference
> bias which is initially USHRT_MAX. Each time a page is got, instead of
> calling get_page() we d
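The biasing scheme can be sketched in userspace (single page, no concurrency, illustrative names): pre-charge the shared refcount by a large bias once, hand out references from a cheap local counter, and only touch the shared counter again when the bias runs out; one real refcount update per `PAGE_BIAS` gets instead of one per packet.

```c
#define PAGE_BIAS 65535  /* USHRT_MAX, as in the description above */

/* Sketch: the shared counter stands in for the page's atomic
 * refcount; the local counter is the bias handed out per packet. */
static int shared_refcnt = 1;  /* initial page reference */
static int local_bias;         /* references still available locally */

static void page_get_biased(void)
{
    if (local_bias == 0) {
        shared_refcnt += PAGE_BIAS;  /* the rare "real" refcount update */
        local_bias = PAGE_BIAS;
    }
    local_bias--;                    /* the common, atomic-free path */
}
```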
From: Jason Wang
Date: Thu, 15 Nov 2018 17:43:10 +0800
> Thanks to the batched XDP buffs through msg_control. Instead of
> calling put_page() for each page, which involves an atomic operation,
> let's batch them by recording the last page that needs to be freed and
> its refcnt, and free them in
From: "Michael S. Tsirkin"
Date: Wed, 21 Nov 2018 07:20:27 -0500
> Dave, given the holiday, attempts to wrap up the 1.1 spec and the
> patchset size I would very much appreciate a bit more time for
> review. Say until Nov 28?
Ok.
From: Jason Wang
Date: Thu, 22 Nov 2018 14:36:30 +0800
> We don't disable VIRTIO_NET_F_GUEST_CSUM if XDP was set. This means we
> can receive partially csumed packets with metadata kept in the
> vnet_hdr. This may have several side effects:
>
> - It could be overridden by header adjustment, thus i
From: Jason Wang
Date: Thu, 22 Nov 2018 14:36:31 +0800
> We don't support partially csumed packets since their metadata will be lost
> or incorrect during XDP processing. So fail the XDP set if the guest_csum
> feature is negotiated.
>
> Fixes: f600b6905015 ("virtio_net: Add XDP support")
> Reported-by:
From: "Michael S. Tsirkin"
Date: Tue, 27 Nov 2018 01:08:08 -0500
> On Wed, Nov 21, 2018 at 06:03:17PM +0800, Tiwei Bie wrote:
>> Hi,
>>
>> This patch set implements packed ring support in virtio driver.
>>
>> A performance test between pktgen (pktgen_sample03_burst_single_flow.sh)
>> and DPDK v
From: Jason Wang
Date: Thu, 29 Nov 2018 13:53:16 +0800
> We copy the vnet header unconditionally in page_to_skb(); this is wrong
> since XDP may modify the packet data. So let's keep a zeroed vnet
> header to avoid confusing the conversion between vnet header and skb
> metadata.
>
> In the future, we
From: Jean-Philippe Brucker
Date: Fri, 30 Nov 2018 16:05:53 +
> Commit 78139c94dc8c ("net: vhost: lock the vqs one by one") moved the vq
> lock to improve scalability, but introduced a possible deadlock in
> vhost-iotlb. vhost_iotlb_notify_vq() now takes vq->mutex while holding
> the device's
From: Jason Wang
Date: Mon, 10 Dec 2018 17:44:50 +0800
> This series tries to fix various issues of vhost:
>
> - Patch 1 adds a missing write barrier between used idx updating and
> logging.
> - Patch 2-3 brings back the protection of device IOTLB through vq
> mutex, this fixes possible use
From: Jason Wang
Date: Thu, 13 Dec 2018 10:53:36 +0800
> This series tries to fix various issues of vhost:
>
> - Patch 1 adds a missing write barrier between used idx updating and
> logging.
> - Patch 2-3 brings back the protection of device IOTLB through vq
> mutex, this fixes possible use
From: jiangyiwen
Date: Wed, 12 Dec 2018 17:28:16 +0800
> +static int fill_mergeable_rx_buff(struct virtio_vsock *vsock,
> + struct virtqueue *vq)
> +{
> + struct page_frag *alloc_frag = &vsock->alloc_frag;
> + struct scatterlist sg;
> + /* Currently we don't use ewma len,
From: jiangyiwen
Date: Wed, 12 Dec 2018 17:29:31 +0800
> diff --git a/include/uapi/linux/virtio_vsock.h
> b/include/uapi/linux/virtio_vsock.h
> index 1d57ed3..2292f30 100644
> --- a/include/uapi/linux/virtio_vsock.h
> +++ b/include/uapi/linux/virtio_vsock.h
> @@ -63,6 +63,11 @@ struct virtio_vso
From: Jason Wang
Date: Wed, 12 Dec 2018 18:08:15 +0800
> This series tries to fix various issues of vhost:
>
> - Patch 1 adds a missing write barrier between used idx updating and
> logging.
> - Patch 2-3 brings back the protection of device IOTLB through vq
> mutex, this fixes possible use
From: jiangyiwen
Date: Thu, 13 Dec 2018 11:11:48 +0800
> I hope the Host can fill fewer bytes into the rx virtqueue, so
> I keep the structure virtio_vsock_mrg_rxbuf_hdr one-byte
> aligned.
The question is whether this actually matters.
Do you know?
If the object this is embedded inside of is at least 2 byte