On 2016-12-01 10:48, wangyunjian wrote:
-Original Message-
From: Michael S. Tsirkin [mailto:m...@redhat.com]
Sent: Wednesday, November 30, 2016 9:41 PM
To: wangyunjian
Cc: jasow...@redhat.com; netdev@vger.kernel.org; linux-ker...@vger.kernel.org;
caihe
Subject: Re: [PATCH net]
s from guest with this patch.
Acked-by: Jason Wang <jasow...@redhat.com>
On 2016-12-01 11:21, Michael S. Tsirkin wrote:
On Thu, Dec 01, 2016 at 02:48:59AM +0000, wangyunjian wrote:
-Original Message-
From: Michael S. Tsirkin [mailto:m...@redhat.com]
Sent: Wednesday, November 30, 2016 9:41 PM
To: wangyunjian
Cc: jasow...@redhat.com; netdev@vger.kernel.org;
On 2017-01-03 03:44, John Fastabend wrote:
Add support for XDP adjust head by allocating a 256B header region
that XDP programs can grow into. This is only enabled when a XDP
program is loaded.
In order to ensure that we do not have to unwind queue headroom push
queue setup below
On 2017-01-03 06:30, John Fastabend wrote:
XDP programs can not consume multiple pages so we cap the MTU to
avoid this case. Virtio-net however only checks the MTU at XDP
program load and does not block MTU changes after the program
has loaded.
This patch sets/clears the max_mtu value at XDP
On 2017-01-03 06:43, John Fastabend wrote:
On 16-12-23 06:37 AM, Jason Wang wrote:
Commit f600b6905015 ("virtio_net: Add XDP support") leaves the case of
small receive buffer untouched. This will confuse the user who wants to
set XDP but use small buffers. Other than forbid XD
On 2017-01-05 02:58, John Fastabend wrote:
[...]
@@ -393,34 +397,39 @@ static u32 do_xdp_prog(struct virtnet_info *vi,
struct bpf_prog *xdp_prog,
void *data, int len)
{
-int hdr_padded_len;
struct xdp_buff xdp;
-void *buf;
On 2017-01-05 11:18, Michael S. Tsirkin wrote:
On Wed, Jan 04, 2017 at 07:11:18PM -0800, John Fastabend wrote:
XDP programs can not consume multiple pages so we cap the MTU to
avoid this case. Virtio-net however only checks the MTU at XDP
program load and does not block MTU changes after the
On 2017-01-05 02:57, John Fastabend wrote:
[...]
On 2017-01-04 00:48, John Fastabend wrote:
On 17-01-02 10:14 PM, Jason Wang wrote:
On 2017-01-03 06:30, John Fastabend wrote:
XDP programs can not consume multiple pages so we cap the MTU to
avoid this case. Virtio-net however only
On 2017-01-03 21:33, Stefan Hajnoczi wrote:
On Wed, Dec 28, 2016 at 04:09:31PM +0800, Jason Wang wrote:
+static int tun_rx_batched(struct tun_file *tfile, struct sk_buff *skb,
+			  int more)
+{
+	struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
+	str
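The control flow quoted above can be modeled in plain user-space C. This is an illustrative sketch, not the kernel code: the names (`RX_BATCHED`, `rx_batched_model`, `flush_model`) are made up, and it only captures the policy described in the cover letter — hold packets while the sender hints "more", flush once the hint clears or the backlog reaches the batch limit.

```c
#include <assert.h>

/* Illustrative model of tun rx batching; names are invented. */
#define RX_BATCHED 16

static int queued;      /* packets held back, waiting for a flush */
static int delivered;   /* packets handed to the network stack */

static void flush_model(void)
{
	delivered += queued;
	queued = 0;
}

/* Queue one packet; flush when the producer stops hinting "more"
 * or when the backlog reaches the batch limit. */
static void rx_batched_model(int more)
{
	queued++;
	if (!more || queued >= RX_BATCHED)
		flush_model();
}
```

With this policy a burst of 20 hinted packets causes one flush at 16 and the remainder drains as soon as the hint clears, so batching never delays delivery past the end of a burst.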
On 2017-01-04 00:40, John Fastabend wrote:
On 17-01-02 10:16 PM, Jason Wang wrote:
On 2017-01-03 06:43, John Fastabend wrote:
On 16-12-23 06:37 AM, Jason Wang wrote:
Commit f600b6905015 ("virtio_net: Add XDP support") leaves the case of
small receive buffer untouched. This wi
case.
On 2017-01-04 00:48, John Fastabend wrote:
On 17-01-02 10:14 PM, Jason Wang wrote:
On 2017-01-03 06:30, John Fastabend wrote:
XDP programs can not consume multiple pages so we cap the MTU to
avoid this case. Virtio-net however only checks the MTU at XDP
program load and does
On 2017-01-04 00:54, John Fastabend wrote:
+/* Changing the headroom in buffers is a disruptive operation because
+ * existing buffers must be flushed and reallocated. This will happen
+ * when a xdp program is initially added or xdp is disabled by removing
+ * the xdp
On 2017-01-04 00:57, John Fastabend wrote:
+/* Changing the headroom in buffers is a disruptive operation because
+ * existing buffers must be flushed and reallocated. This will happen
+ * when a xdp program is initially added or xdp is disabled by removing
+ * the xdp
the limitation of batched packets from vhost to tuntap
Please review.
Thanks
Jason Wang (3):
vhost: better detection of available buffers
vhost_net: tx batching
tun: rx batching
drivers/net/tun.c | 50 --
drivers/vhost/net.c | 23
%
rx_batched=16 0.98 +8.9%
rx_batched=32 1.03 +14.4%
rx_batched=48 1.09 +21.1%
rx_batched=64 1.02 +13.3%
The maximum number of batched packets is specified through a module
parameter.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tun.
tep.
This patch is needed for batching support, which needs to peek at
whether or not there are still available buffers in the ring.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/vhost/vhost.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/
This patch tries to utilize tuntap rx batching by peeking at the tx
virtqueue during transmission; if there are more available buffers in
the virtqueue, it sets the MSG_MORE flag as a hint for the backend
(e.g. tuntap) to batch the packets.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/vhost
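The hint mechanism can be sketched in user-space C. MSG_MORE is the real Linux `sendmsg()` flag; everything else here (`struct toy_vq`, the counters standing in for the virtqueue peek) is invented for this illustration.

```c
#include <assert.h>
#include <sys/socket.h>   /* MSG_MORE */

/* Toy virtqueue: avail is what the guest produced, used is what we
 * have consumed so far. */
struct toy_vq {
	unsigned int avail;
	unsigned int used;
};

static int vq_more_avail(const struct toy_vq *vq)
{
	return vq->avail > vq->used;
}

/* Consume one buffer and compute sendmsg() flags for it: if the ring
 * still holds buffers, hint the backend with MSG_MORE so it can keep
 * batching instead of processing each packet immediately. */
static int tx_flags_for_next(struct toy_vq *vq)
{
	int flags = 0;

	vq->used++;
	if (vq_more_avail(vq))
		flags |= MSG_MORE;
	return flags;
}
```

The last packet of a burst goes out without MSG_MORE, which is exactly the signal the rx-batching side uses to flush.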
On 2016-12-30 00:35, David Miller wrote:
From: Jason Wang <jasow...@redhat.com>
Date: Wed, 28 Dec 2016 16:09:31 +0800
+	spin_lock(&queue->lock);
+ qlen = skb_queue_len(queue);
+ if (qlen > rx_batched)
+ goto drop;
+ __skb_queue_tai
On 2017-01-01 01:31, David Miller wrote:
From: Jason Wang <jasow...@redhat.com>
Date: Fri, 30 Dec 2016 13:20:51 +0800
@@ -1283,10 +1314,15 @@ static ssize_t tun_get_user(struct tun_struct *tun,
struct tun_file *tfile,
skb_probe_transport_header(skb, 0);
rxhash = skb_ge
On 2017-01-01 05:03, Stephen Hemminger wrote:
On Fri, 30 Dec 2016 13:20:51 +0800
Jason Wang <jasow...@redhat.com> wrote:
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index cd8e02c..a268ed9 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -75,6 +75,10 @@
#i
On 2017-01-07 03:47, Michael S. Tsirkin wrote:
+static int tun_get_coalesce(struct net_device *dev,
+			    struct ethtool_coalesce *ec)
+{
+	struct tun_struct *tun = netdev_priv(dev);
+
+	ec->rx_max_coalesced_frames = tun->rx_batched;
+
+	return 0;
+}
+
On 2017-01-07 03:55, Michael S. Tsirkin wrote:
On Fri, Jan 06, 2017 at 10:13:15AM +0800, Jason Wang wrote:
This patch tries to do several tweaks on vhost_vq_avail_empty() for
better performance:
- check cached avail index first which could avoid userspace memory access.
- using unlikely
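The first tweak (check the cached avail index before touching guest memory) can be modeled minimally; `read_guest_avail_idx()` stands in for the expensive userspace access, and all names here are invented for the sketch, not the vhost code.

```c
#include <assert.h>

static int guest_reads;                 /* counts simulated uaccesses */
static unsigned short guest_avail_idx;  /* stands in for guest memory */

static unsigned short read_guest_avail_idx(void)
{
	guest_reads++;
	return guest_avail_idx;
}

struct vq_model {
	unsigned short avail_idx;       /* cached copy of the guest index */
	unsigned short last_avail_idx;  /* next entry we will consume */
};

/* Return 1 if no buffers are available. The cached index is checked
 * first; only when it shows nothing new do we pay for the (simulated)
 * guest-memory read and refresh the cache. */
static int vq_avail_empty(struct vq_model *vq)
{
	if (vq->avail_idx != vq->last_avail_idx)
		return 0;
	vq->avail_idx = read_guest_avail_idx();
	return vq->avail_idx == vq->last_avail_idx;
}
```

On the common path where the cache already shows pending buffers, no guest access happens at all, which is where the performance win comes from.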
to change the per-device batched packets through
ethtool -C rx-frames. NAPI_POLL_WEIGHT is used as the upper limit
to prevent bh from being disabled for too long.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tun.c | 76 ++-
1 file c
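The cap described above can be sketched as a tiny set_coalesce handler. This is a user-space model with invented names (`struct tun_model`, `set_rx_frames`); whether the real patch rejects over-limit values or clamps them is an assumption of this sketch.

```c
#include <assert.h>
#include <errno.h>

#define NAPI_POLL_WEIGHT 64   /* the kernel's default NAPI weight */

struct tun_model {
	unsigned int rx_batched;
};

/* Model of the ethtool -C rx-frames handler: values above
 * NAPI_POLL_WEIGHT are rejected so rx batching cannot keep bh
 * disabled for too long. */
static int set_rx_frames(struct tun_model *tun, unsigned int frames)
{
	if (frames > NAPI_POLL_WEIGHT)
		return -EINVAL;
	tun->rx_batched = frames;
	return 0;
}
```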
tep.
This patch is needed for batching support, which needs to peek at
whether or not there are still available buffers in the ring.
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/vhost/vhost.c | 8 ++--
1 file changed, 6 i
improvement on available buffer detection
- move the limitation of batched packets from vhost to tuntap
Please review.
Thanks
Jason Wang (3):
vhost: better detection of available buffers
vhost_net: tx batching
tun: rx batching
drivers/net/tun.c | 76
This patch tries to utilize tuntap rx batching by peeking at the tx
virtqueue during transmission; if there are more available buffers in
the virtqueue, it sets the MSG_MORE flag as a hint for the backend
(e.g. tuntap) to batch the packets.
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Signed-off-by:
io-net create skbs during refill; this is suboptimal and could
be optimized in the future.
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/virtio_net.c | 112 ---
1 file chan
We drop csumed packets when doing XDP. This breaks
XDP_PASS when GUEST_CSUM is supported. Fix this by allowing the csum
flag to be set. With this patch, simple TCP works for XDP_PASS.
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
misconfiguration.
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/virtio_net.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 77ae358..c1
Since we in fact don't allow XDP for big packets, remove its code.
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/virtio_net.c | 44 +++-
1 file changed, 3 insertions(+), 41 deleti
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/virtio_net.c | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 08327e0..1067253 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/
We don't update the EWMA rx buf size in the case of XDP. This leads to
underestimation of the rx buf size, which causes the host to produce
more than one buffer. This greatly increases the possibility of XDP
page linearization.
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jaso
. With this patch, we won't get OOM after linearizing a
huge number of packets.
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/virtio_net.c | 19 +++
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a
uot; works for XDP_PASS.
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/virtio_net.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
After we linearize the page, we should xmit this page instead of the
page of the first buffer, which may lead to unexpected results. With
this patch, we can see correct packets during XDP_TX.
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
when GUEST_UFO is support
- remove big packet XDP support
- add XDP support for small buffer
Please see individual patches for details.
Thanks
Jason Wang (9):
virtio-net: remove the warning before XDP linearizing
virtio-net: correctly xmit linearized page on XDP_TX
virtio-net: fix page
tep.
This patch is needed for batching support, which needs to peek at
whether or not there are still available buffers in the ring.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/vhost/vhost.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/
%
rx_batched=16 0.98 +8.9%
rx_batched=32 1.03 +14.4%
rx_batched=48 1.09 +21.1%
rx_batched=64 1.02 +13.3%
The maximum number of batched packets is specified through a module
parameter.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tun.
%
rx_batched=64 1.02 +13.3%
Changes from V1:
- drop NAPI handler since we don't use NAPI now
- fix the issues that may exceed the max pending of zerocopy
- more improvement on available buffer detection
- move the limitation of batched packets from vhost to tuntap
Please review.
Thanks
Jason Wang (3
This patch tries to utilize tuntap rx batching by peeking at the tx
virtqueue during transmission; if there are more available buffers in
the virtqueue, it sets the MSG_MORE flag as a hint for the backend
(e.g. tuntap) to batch the packets.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/vhost
On 2016-12-24 03:31, Daniel Borkmann wrote:
Hi Jason,
On 12/23/2016 03:37 PM, Jason Wang wrote:
Since we use EWMA to estimate the size of the rx buffer, when the rx
buffer size is underestimated, it's usual to have a packet spanning
more than one buffer. Considering this is not a bug, remove the warning
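A toy integer EWMA with weight 1/4 illustrates why this happens (virtio-net's actual estimator, `ewma_pkt_len`, uses its own weight and fixed-point helpers; this sketch only shows the lag): after a run of small packets the average reacts slowly to large ones, so the next buffer posted is too small and the packet spans several buffers.

```c
#include <assert.h>

/* avg <- 3/4 * avg + 1/4 * sample, seeded by the first sample. */
static unsigned int ewma_update(unsigned int avg, unsigned int sample)
{
	if (!avg)
		return sample;
	return (avg * 3 + sample) / 4;
}
```

For example, with the average sitting at 100 bytes, a 9000-byte GSO packet only pulls the estimate up to 2325 — still far below the packet size, hence the multi-buffer case is expected behaviour, not a bug.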
arbox.net>
Cc: John Fastabend <john.r.fastab...@intel.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
include/linux/filter.h | 1 -
net/core/filter.c | 6 --
2 files changed, 7 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 7023142..
On 2016-12-23 23:54, John Fastabend wrote:
On 16-12-23 06:37 AM, Jason Wang wrote:
We don't put the page during linearizing, which would cause a leak
when xmitting through XDP_TX or when the packet exceeds PAGE_SIZE. Fix
this by putting the page accordingly. Also decrease the number of
buffers during linearizing
On 2016-12-23 23:57, John Fastabend wrote:
On 16-12-23 06:37 AM, Jason Wang wrote:
When XDP_PASS is determined for linearized packets, we try to get
new buffers from the virtqueue and build skbs from them. This is wrong;
we should create skbs based on the existing buffers instead. Fixing them
On 2016-12-24 01:10, John Fastabend wrote:
On 16-12-23 06:37 AM, Jason Wang wrote:
Merry Xmas and a Happy New year to all:
This series tries to fix several issues for virtio-net XDP which
could be categorized into several parts:
- fix several issues during XDP linearizing
- allow csumed
On 2016-12-24 00:10, John Fastabend wrote:
On 16-12-23 08:02 AM, John Fastabend wrote:
On 16-12-23 06:37 AM, Jason Wang wrote:
When VIRTIO_NET_F_GUEST_UFO is negotiated, the host could still send
UFO packets that exceed a single page, which could not be handled
correctly by XDP. So this patch
On 2017-01-16 08:01, John Fastabend wrote:
Add support for XDP adjust head by allocating a 256B header region
that XDP programs can grow into. This is only enabled when a XDP
program is loaded.
In order to ensure that we do not have to unwind queue headroom push
queue setup below
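The headroom scheme described above can be sketched as follows. This is an illustrative user-space model with invented names (`struct xdp_buf` here is not the kernel's `struct xdp_buff`, and `adjust_head` only models `bpf_xdp_adjust_head()`): packet data is placed 256 bytes into the buffer so a program can grow the head without reallocation.

```c
#include <assert.h>
#include <stddef.h>

#define XDP_HEADROOM 256   /* the 256B region from the patch */

/* Illustrative buffer layout, not the driver's structs. */
struct xdp_buf {
	unsigned char *hard_start;  /* start of the reserved region */
	unsigned char *data;        /* start of packet data */
	size_t len;
};

/* Receive-side setup: packet data begins XDP_HEADROOM bytes into the
 * buffer, leaving room for the head to grow. */
static void setup_buf(struct xdp_buf *b, unsigned char *page, size_t pkt_len)
{
	b->hard_start = page;
	b->data = page + XDP_HEADROOM;
	b->len = pkt_len;
}

/* Model of bpf_xdp_adjust_head(): a negative offset grows the packet
 * into the headroom, a positive one trims it; out-of-range moves are
 * rejected. */
static int adjust_head(struct xdp_buf *b, int offset)
{
	unsigned char *data = b->data + offset;

	if (data < b->hard_start || data > b->data + b->len)
		return -1;
	b->len -= offset;
	b->data = data;
	return 0;
}
```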
On 2017-01-16 07:59, John Fastabend wrote:
This has a fix to handle small buffer free logic correctly and then
also adds adjust head support.
I pushed adjust head at net (even though it's rc3) to avoid having
to push another exception case into virtio_net to catch if the
program uses
On 2017-01-16 08:01, John Fastabend wrote:
In virtio_net we need to do a full reset of the device to support
queue reconfiguration and also we can trigger this via ethtool
commands. So instead of open coding this in net driver push this
into generic code in virtio. This also avoids exporting a
On 2017-03-22 21:43, Michael S. Tsirkin wrote:
On Tue, Mar 21, 2017 at 12:04:40PM +0800, Jason Wang wrote:
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
include/linux/ptr_ring.h | 65
1 file changed, 65 insertions(+)
diff
On 2017-03-22 22:16, Michael S. Tsirkin wrote:
On Tue, Mar 21, 2017 at 12:04:46PM +0800, Jason Wang wrote:
We used to dequeue one skb during recvmsg() from the skb_array; this
could be inefficient because of the bad cache utilization and the
spinlock touched for each packet. This patch tries
msg_control for underlayer socket to finish the
userspace copying.
Tests were done by XDP1:
- small buffer:
Before: 1.88Mpps
After : 2.25Mpps (+19.6%)
- mergeable buffer:
Before: 1.83Mpps
After : 2.10Mpps (+14.7%)
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/vhost/net.
This patch makes tap_recvmsg() able to receive an skb from its caller
through msg_control. Vhost_net will be the first user.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tap.c | 12
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/net/t
This patch exports skb_array through tun_get_skb_array(). Caller can
then manipulate skb array directly.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tun.c | 13 +
include/linux/if_tun.h | 5 +
2 files changed, 18 insertions(+)
diff --git a/drive
This patch introduces a batched version of consuming: the consumer can
dequeue more than one pointer from the ring at a time. We don't care
about the reorder of reading here, so no compiler barrier is needed.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
include/linux/ptr_ring.
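The batched-consume idea can be sketched in user-space C. As in ptr_ring, a NULL slot means "empty"; everything else (`struct ring_model`, function names, the missing producer locking) is invented for this sketch and is not the kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* User-space model of a pointer ring: a NULL slot means "empty". */
struct ring_model {
	void **queue;
	int size;
	int head;   /* next slot to consume */
};

static void *ring_consume(struct ring_model *r)
{
	void *ptr = r->queue[r->head];

	if (ptr) {
		r->queue[r->head] = NULL;
		r->head = (r->head + 1) % r->size;
	}
	return ptr;
}

/* The batched variant: one call pulls up to n pointers, so the caller
 * pays for locking and cache misses once per batch instead of once
 * per packet. Returns the number actually dequeued. */
static int ring_consume_batched(struct ring_model *r, void **array, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		void *ptr = ring_consume(r);

		if (!ptr)
			break;
		array[i] = ptr;
	}
	return i;
}
```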
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
include/linux/skb_array.h | 25 +
1 file changed, 25 insertions(+)
diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index f4dfade..90e44b9 100644
--- a/include/linux/skb_array.h
+++ b/include
, so
it's not safe to call lockless one
Jason Wang (7):
ptr_ring: introduce batch dequeuing
skb_array: introduce batch dequeuing
tun: export skb_array
tap: export skb_array
tun: support receiving skb through msg_control
tap: support receiving skb from msg_control
vhost_net: try batch
This patch makes tun_recvmsg() able to receive an skb from its caller
through msg_control. Vhost_net will be the first user.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tun.c | 18 ++
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/drive
This patch exports skb_array through tap_get_skb_array(). Caller can
then manipulate skb array directly.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tap.c | 13 +
include/linux/if_tap.h | 5 +
2 files changed, 18 insertions(+)
diff --git a/drive
On 2017-03-29 20:07, Michael S. Tsirkin wrote:
On Tue, Mar 21, 2017 at 12:04:47PM +0800, Jason Wang wrote:
For the socket that exports its skb array, we can use lockless polling
to avoid touching the spinlock during busy polling.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
d
On 2017-03-30 10:33, Michael S. Tsirkin wrote:
On Thu, Mar 30, 2017 at 10:16:15AM +0800, Jason Wang wrote:
On 2017-03-29 20:07, Michael S. Tsirkin wrote:
On Tue, Mar 21, 2017 at 12:04:47PM +0800, Jason Wang wrote:
For the socket that exports its skb array, we can use lockless polling
On 2017-03-29 20:37, Michael S. Tsirkin wrote:
On xdp error we try to free head_skb without having
initialized it, that's clearly bogus.
Fixes: f600b6905015 ("virtio_net: Add XDP support")
Cc: John Fastabend
Signed-off-by: Michael S. Tsirkin
---
On 2017-03-29 20:38, Michael S. Tsirkin wrote:
If one enables e.g. jumbo frames without mergeable
buffers, packets won't fit in 1500 byte buffers
we use. Switch to big packet mode instead.
TODO: make sizing more exact, possibly extend small
packet mode to use larger pages.
Signed-off-by:
On 2017-03-30 01:42, Michael S. Tsirkin wrote:
When ring size is small (<32 entries) making buffers smaller means a
full ring might not be able to hold enough buffers to fit a single large
packet.
Make sure a ring full of buffers is large enough to allow at least one
packet of max size.
wn(dev);
_remove_vq_common(vi);
- dev->config->reset(dev);
virtio_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE);
virtio_add_status(dev, VIRTIO_CONFIG_S_DRIVER);
Acked-by: Jason Wang <jasow...@redhat.com>
On 2017-03-30 04:48, Michael S. Tsirkin wrote:
We are going to add more parameters to find_vqs, let's wrap the call so
we don't need to tweak all drivers every time.
Signed-off-by: Michael S. Tsirkin
---
A quick glance and it looks ok, but what's the benefit of this series,
On 2017-03-29 18:46, Pankaj Gupta wrote:
Hi Jason,
On 2017-03-23 13:34, Jason Wang wrote:
+{
+	if (rvq->rh != rvq->rt)
+		goto out;
+
+	rvq->rh = rvq->rt = 0;
+	rvq->rt = skb_array_consume_batched_bh(rvq->rx_array, rvq->rxq,
+
On 2017-03-23 13:34, Jason Wang wrote:
+{
+	if (rvq->rh != rvq->rt)
+		goto out;
+
+	rvq->rh = rvq->rt = 0;
+	rvq->rt = skb_array_consume_batched_bh(rvq->rx_array, rvq->rxq,
+					       VHOST_RX_BATCH);
A comment explaining why it is
On 2017-03-30 23:06, Michael S. Tsirkin wrote:
On Thu, Mar 30, 2017 at 03:22:28PM +0800, Jason Wang wrote:
This patch makes tun_recvmsg() able to receive an skb from its caller
through msg_control. Vhost_net will be the first user.
Signed-off-by: Jason Wang<jasow...@redhat.com>
Do w
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
include/linux/skb_array.h | 25 +
1 file changed, 25 insertions(+)
diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
index f4dfade..90e44b9 100644
--- a/include/linux/skb_array.h
+++ b/include
This patch makes tun_recvmsg() able to receive an skb from its caller
through msg_control. Vhost_net will be the first user.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tun.c | 18 ++
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/drive
msg_control for underlayer socket to finish the
userspace copying.
Tests were done by XDP1:
- small buffer:
Before: 1.88Mpps
After : 2.25Mpps (+19.6%)
- mergeable buffer:
Before: 1.83Mpps
After : 2.10Mpps (+14.7%)
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/vhost/net.
This patch exports skb_array through tun_get_skb_array(). Caller can
then manipulate skb array directly.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tun.c | 13 +
include/linux/if_tun.h | 5 +
2 files changed, 18 insertions(+)
diff --git a/drive
This patch makes tap_recvmsg() able to receive an skb from its caller
through msg_control. Vhost_net will be the first user.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tap.c | 12
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/net/t
This patch exports skb_array through tap_get_skb_array(). Caller can
then manipulate skb array directly.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/net/tap.c | 13 +
include/linux/if_tap.h | 5 +
2 files changed, 18 insertions(+)
diff --git a/drive
.
Thanks
Jason Wang (8):
ptr_ring: introduce batch dequeuing
skb_array: introduce batch dequeuing
tun: export skb_array
tap: export skb_array
tun: support receiving skb through msg_control
tap: support receiving skb from msg_control
vhost_net: try batch dequeuing from skb array
vhost_net
For the socket that exports its skb array, we can use lockless polling
to avoid touching the spinlock during busy polling.
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
drivers/vhost/net.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/net.c b/d
On 2017-03-21 18:25, Sergei Shtylyov wrote:
Hello!
On 3/21/2017 7:04 AM, Jason Wang wrote:
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
include/linux/ptr_ring.h | 65
1 file changed, 65 insertions(+)
diff --git a/include
On 2017-03-30 23:03, Michael S. Tsirkin wrote:
On Thu, Mar 30, 2017 at 03:22:29PM +0800, Jason Wang wrote:
This patch makes tap_recvmsg() able to receive an skb from its caller
through msg_control. Vhost_net will be the first user.
Signed-off-by: Jason Wang<jasow...@redhat.com>
---
d
On 2017-03-31 12:02, Jason Wang wrote:
On 2017-03-30 22:21, Michael S. Tsirkin wrote:
On Thu, Mar 30, 2017 at 03:22:30PM +0800, Jason Wang wrote:
We used to dequeue one skb during recvmsg() from the skb_array; this
could be inefficient because of the bad cache utilization
which cache does
On 2017-03-31 22:31, Michael S. Tsirkin wrote:
On Fri, Mar 31, 2017 at 11:52:24AM +0800, Jason Wang wrote:
On 2017-03-30 21:53, Michael S. Tsirkin wrote:
On Thu, Mar 30, 2017 at 03:22:24PM +0800, Jason Wang wrote:
This patch introduces a batched version of consuming: the consumer can
dequeue
On 2017-04-20 21:58, Willem de Bruijn wrote:
On Thu, Apr 20, 2017 at 2:27 AM, Jason Wang <jasow...@redhat.com> wrote:
On 2017-04-19 04:21, Willem de Bruijn wrote:
+static void virtnet_napi_tx_enable(struct virtnet_info *vi,
+ struct virtque
On 2017-04-20 23:34, Vlad Yasevich wrote:
On 04/17/2017 11:01 PM, Jason Wang wrote:
On 2017-04-16 00:38, Vladislav Yasevich wrote:
Currently the virtio net header is fixed size and adding things to it is
rather difficult to do. This series attempts to add the infrastructure as
well as some
On 2017-04-16 00:38, Vladislav Yasevich wrote:
This is the basic skeleton which will be fleshed out by individual
extensions.
Signed-off-by: Vladislav Yasevich
---
drivers/net/virtio_net.c| 21 +
include/linux/virtio_net.h | 12
On 2017-04-17 07:19, Michael S. Tsirkin wrote:
Applications that consume a batch of entries in one go
can benefit from ability to return some of them back
into the ring.
Add an API for that - assuming there's space. If there's no space
naturally we can't do this and have to drop entries, but
erated compared to the
current model, which keeps interrupts disabled as long as the ring
has enough free descriptors. Keep tx napi optional and disabled for
now. Follow-on patches will reduce the interrupt cost.
Signed-off-by: Willem de Bruijn <will...@google.com>
Signed-off-by: Jason Wang <jaso
On 2017-04-19 04:21, Willem de Bruijn wrote:
From: Willem de Bruijn
Tx napi mode increases the rate of transmit interrupts. Suppress some
by masking interrupts while more packets are expected. The interrupts
will be reenabled before the last packet is sent.
This
On 2017-04-19 04:21, Willem de Bruijn wrote:
+static void virtnet_napi_tx_enable(struct virtnet_info *vi,
+ struct virtqueue *vq,
+ struct napi_struct *napi)
+{
+ if (!napi->weight)
+ return;
+
+ if
On 2017-04-21 21:08, Vlad Yasevich wrote:
On 04/21/2017 12:05 AM, Jason Wang wrote:
On 2017-04-20 23:34, Vlad Yasevich wrote:
On 04/17/2017 11:01 PM, Jason Wang wrote:
On 2017-04-16 00:38, Vladislav Yasevich wrote:
Currently the virtio net header is fixed size and adding things
On 2017-03-03 22:39, Willem de Bruijn wrote:
From: Willem de Bruijn
Convert virtio-net to a standard napi tx completion path. This enables
better TCP pacing using TCP small queues and increases single stream
throughput.
The virtio-net driver currently cleans tx
On 2017-03-03 22:39, Willem de Bruijn wrote:
+void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq);
+static enum hrtimer_restart vhost_coalesce_timer(struct hrtimer *timer)
+{
+ struct vhost_virtqueue *vq =
+ container_of(timer, struct vhost_virtqueue,
On 2017-03-03 22:39, Willem de Bruijn wrote:
From: Willem de Bruijn
Amortize the cost of virtual interrupts by doing both rx and tx work
on reception of a receive interrupt. Together with VIRTIO_F_EVENT_IDX
and vhost interrupt moderation, this suppresses most explicit tx
On 2017-03-07 01:31, Willem de Bruijn wrote:
On Mon, Mar 6, 2017 at 4:28 AM, Jason Wang <jasow...@redhat.com> wrote:
On 2017-03-03 22:39, Willem de Bruijn wrote:
+void vhost_signal(struct vhost_dev *dev, struct vhost_virtqueue *vq);
+static enum hrtimer_restart vhost_coalesce_timer(
On 2017-03-30 22:32, Michael S. Tsirkin wrote:
On Thu, Mar 30, 2017 at 02:00:08PM +0800, Jason Wang wrote:
On 2017-03-30 04:48, Michael S. Tsirkin wrote:
We are going to add more parameters to find_vqs, let's wrap the call so
we don't need to tweak all drivers every time.
Signed-off
On 2017-03-30 21:53, Michael S. Tsirkin wrote:
On Thu, Mar 30, 2017 at 03:22:24PM +0800, Jason Wang wrote:
This patch introduces a batched version of consuming: the consumer can
dequeue more than one pointer from the ring at a time. We don't care
about the reorder of reading here so no need
On 2017-03-30 22:21, Michael S. Tsirkin wrote:
On Thu, Mar 30, 2017 at 03:22:30PM +0800, Jason Wang wrote:
We used to dequeue one skb during recvmsg() from the skb_array; this
could be inefficient because of the bad cache utilization
which cache does this refer to btw?
Both icache and dcache
On 2017-04-16 00:38, Vladislav Yasevich wrote:
Currently the virtio net header is fixed size and adding things to it is
rather difficult to do. This series attempts to add the infrastructure as
well as some extensions that try to resolve some deficiencies we
currently have.
First, vnet header
On 2017-04-16 00:38, Vladislav Yasevich wrote:
This extension allows us to pass vlan ID and vlan protocol data to the
host hypervisor as part of the vnet header and lets us take advantage
of HW accelerated vlan tagging in the host. It requires support in the
host to negotiate the feature.
On 2017-08-16 11:55, Michael S. Tsirkin wrote:
On Tue, Aug 15, 2017 at 08:45:20PM -0700, Eric Dumazet wrote:
On Fri, 2017-08-11 at 19:41 +0800, Jason Wang wrote:
In the past we used tun_alloc_skb(), which calls sock_alloc_send_pskb(),
to allocate skbs. This socket based method is not suitable