with copying large files w/ and w/o migration in both Linux and Windows
guests.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c |2 +-
drivers/vhost/vhost.c | 49 -
drivers/vhost/vhost.h | 18 --
3
Tested-by: Jason Wang jasow...@redhat.com
- Michael S. Tsirkin m...@redhat.com wrote:
In mergeable buffer case, we use headcount, log_num
and seg as indexes in same-size arrays, and
we know that headcount = seg and
log_num equals either 0 or seg.
Therefore, the right thing to do
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c |2 +-
drivers/vhost/vhost.h |2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index d10da28..14fc189 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost
When counting pages we should advance the page index by 1 instead of by VHOST_PAGE_SIZE,
and also make log_write() correctly handle requests that span
pages when write_address does not start at a page boundary.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/vhost.c | 20
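For illustration, a simplified sketch of the per-page dirty-log walk this fix describes (the real vhost log_write() sets bits in a user-space bitmap; only the loop structure and the two corrected points are shown here):

static int log_write(void __user *log_base, u64 write_address, u64 write_length)
{
	u64 write_page = write_address / VHOST_PAGE_SIZE;

	if (!write_length)
		return 0;
	/* The request may not start on a page boundary: extend the length by
	 * the in-page offset so the loop also covers the first partial page. */
	write_length += write_address % VHOST_PAGE_SIZE;
	for (;;) {
		/* ... mark bit number 'write_page' in the bitmap at log_base ... */
		if (write_length <= VHOST_PAGE_SIZE)
			break;
		write_length -= VHOST_PAGE_SIZE;
		write_page += 1;	/* one page is one bit, so advance by 1 */
	}
	return 0;
}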
Michael S. Tsirkin writes:
On Mon, Nov 29, 2010 at 01:48:20PM +0800, Jason Wang wrote:
When counting pages we should advance the page index by 1 instead of by VHOST_PAGE_SIZE,
and also make log_write() correctly handle requests that span
pages when write_address does not start at a page boundary
No need to check for mergeable buffer support inside the receive
loop, as the whole handle_rx() runs in the read critical region. So
this patch moves the check ahead of the receiving loop.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c |5 +++--
1 files changed, 3
buffers, the quota is just 1), and then the
previous handle_rx_mergeable() could be reused for big buffers as well.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c | 128 +++
1 files changed, 7 insertions(+), 121 deletions
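For reference, a sketch of the quota idea as it ended up looking in the unified receive path (a call-site excerpt only, not the full patch; the variable names follow vhost-net's handle_rx() and may differ slightly from this exact series):

	/* With a quota, one helper serves both cases: when mergeable RX
	 * buffers are not negotiated, at most one (big) buffer may be used
	 * per packet, so the quota is 1; otherwise up to UIO_MAXIOV heads. */
	headcount = get_rx_bufs(vq, vq->heads, vhost_len, &in,
				vq_log, &log,
				likely(mergeable) ? UIO_MAXIOV : 1);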
We can use lock_sock_fast() instead of lock_sock() in order to speed up
peek_head_len().
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index c32a2e4
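A sketch of the resulting helper, assuming the vhost-net code of that period (lock_sock_fast() avoids the full socket-lock slow path when the lock is uncontended):

#include <net/sock.h>
#include <linux/skbuff.h>

static int peek_head_len(struct sock *sk)
{
	struct sk_buff *head;
	int len = 0;
	bool slow = lock_sock_fast(sk);	/* cheaper than lock_sock() when uncontended */

	head = skb_peek(&sk->sk_receive_queue);
	if (head)
		len = head->len;
	unlock_sock_fast(sk, slow);	/* pairs with lock_sock_fast() */

	return len;
}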
Michael S. Tsirkin writes:
On Mon, Jan 17, 2011 at 04:11:08PM +0800, Jason Wang wrote:
Code duplication was found between the handling of mergeable and big
buffers, so this patch tries to unify them. This could be easily done
by adding a quota to get_rx_bufs(), which is used
Michael S. Tsirkin writes:
On Mon, Jan 17, 2011 at 04:10:59PM +0800, Jason Wang wrote:
No need to check for mergeable buffer support inside the receive
loop, as the whole handle_rx() runs in the read critical region. So
this patch moves the check ahead of the receiving loop.
Signed
Michael S. Tsirkin writes:
On Tue, Jan 18, 2011 at 11:05:33AM +0800, Jason Wang wrote:
Michael S. Tsirkin writes:
On Mon, Jan 17, 2011 at 04:11:08PM +0800, Jason Wang wrote:
Code duplication was found between the handling of mergeable and big
buffers, so this patch tries
Michael S. Tsirkin writes:
On Tue, Jan 18, 2011 at 12:26:17PM +0800, Jason Wang wrote:
Michael S. Tsirkin writes:
On Mon, Jan 17, 2011 at 04:10:59PM +0800, Jason Wang wrote:
No need to check for mergeable buffer support inside the receive
loop, as the whole handle_rx()
could be measured by netperf.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/macvtap.c |2 ++
drivers/net/tun.c |2 ++
drivers/net/virtio_net.c |2 ++
include/linux/virtio_net.h |1 +
net/packet/af_packet.c |2 ++
5 files changed, 9 insertions
We need to set the log when updating the flags of the used ring; otherwise the
updates may be missed after migration. A helper is introduced to write used_flags
back to guest memory and update the log if necessary.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/vhost.c | 26
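A condensed sketch of such a helper (field names follow drivers/vhost/vhost.h; the exact error handling and layout of the actual patch may differ):

static int vhost_update_used_flags(struct vhost_virtqueue *vq)
{
	if (__put_user(vq->used_flags, &vq->used->flags) < 0)
		return -EFAULT;
	if (unlikely(vq->log_used)) {
		/* Make sure used_flags reaches guest memory before the log. */
		smp_wmb();
		/* used->flags sits at offset 0 of the used ring. */
		log_write(vq->log_base,
			  vq->log_addr + offsetof(struct vring_used, flags),
			  sizeof vq->used->flags);
		if (vq->log_ctx)
			eventfd_signal(vq->log_ctx, 1);
	}
	return 0;
}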
This patch moves the used ring initialization to after the backend is set. This makes it
possible to disable the backend, tweak the used ring and then restart. It is
also useful for log setting, as the used ring has been checked by then.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/net.c
We need to set the log when updating used flags and the avail event. Otherwise the guest may
see stale values after migration and then either not exit or exit unexpectedly.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost/vhost.c | 61 +++--
1 files
packets.
- addressing the comments on the virtio-net driver
- performance tuning
Please review and comment, thanks.
---
Jason Wang (5):
tuntap: move socket/sock related structures to tun_file
tuntap: categorize ioctl
tuntap: introduce multiqueue related flags
tuntap
to be attached to a single tap device.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/tun.c | 349 +++--
1 files changed, 180 insertions(+), 169 deletions(-)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 71f3d1a..2739887
As we've moved the socket related structures to
file->private_data, we can separate the system calls that only
touch tfile from the others, as they don't need to hold the rtnl lock.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/tun.c | 52 ++--
1 files
Signed-off-by: Jason Wang jasow...@redhat.com
---
include/linux/if_tun.h |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/include/linux/if_tun.h b/include/linux/if_tun.h
index 06b1829..c92a291 100644
--- a/include/linux/if_tun.h
+++ b/include/linux/if_tun.h
@@ -34,6 +34,7
for a
multiqueue tap device. RCU is used for
synchronization between packet handling and system calls
such as removing queues.
Currently, multiqueue support is limited to tap, but it's
easy to also enable it for tun if we find it helpful there as well.
Signed-off-by: Jason Wang jasow
, and this file could be re-attached to the
tap device as a queue again.
After those ioctls are added, userspace can create a
multiqueue tap device by opening /dev/net/tap and calling
TUNSETIFF, and can then easily control the number of queues
through TUNATTACHQUEUE and TUNDETACHQUEUE.
Signed-off-by: Jason Wang
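A userspace sketch of the flow above. TUNATTACHQUEUE/TUNDETACHQUEUE are the ioctl names from this RFC and are not assumed to exist in mainline headers; only the first-queue setup via the standard TUNSETIFF call is shown as code, and the device node path is left to the caller:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Create the tap device and its first queue; additional fds would then be
 * attached/detached as extra queues with the RFC's TUNATTACHQUEUE and
 * TUNDETACHQUEUE ioctls. */
static int open_first_queue(const char *dev_node, const char *ifname)
{
	struct ifreq ifr;
	int fd = open(dev_node, O_RDWR);	/* e.g. the tun/tap clone device */

	if (fd < 0)
		return -1;
	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		close(fd);
		return -1;
	}
	return fd;	/* this fd is queue 0 */
}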
From: Krishna Kumar krkum...@in.ibm.com
Move queue_index from virtio_pci_vq_info to virtqueue. This
allows callback handlers to figure out the queue number for
the vq that needs attention.
Signed-off-by: Krishna Kumar krkum...@in.ibm.com
---
drivers/virtio/virtio_pci.c | 10 +++---
From: Krishna Kumar krkum...@in.ibm.com
Implement mq virtio-net driver.
Though struct virtio_net_config changes, it works with the old
qemu since the last element is not accessed unless qemu sets
VIRTIO_NET_F_MULTIQUEUE.
Signed-off-by: Krishna Kumar krkum...@in.ibm.com
Signed-off-by: Jason Wang
Jason Wang writes:
As multi-queue NICs are commonly used in high-end servers,
the current single-queue based tap cannot satisfy the
requirement of scaling guest network performance as the
number of vCPUs increases. So the following series
implements multiple queue support in tun/tap
- Original Message -
On Fri, 2011-08-12 at 09:55 +0800, Jason Wang wrote:
From: Krishna Kumar krkum...@in.ibm.com
Implement mq virtio-net driver.
Though struct virtio_net_config changes, it works with the old
qemu since the last element is not accessed unless qemu sets
- Original Message -
On Friday 12 August 2011 at 09:55 +0800, Jason Wang wrote:
+ rxq = skb_get_rxhash(skb);
+ if (rxq) {
+ tfile = rcu_dereference(tun->tfiles[rxq % numqueues]);
+ if (tfile)
+ goto out;
+ }
You can avoid an expensive divide with the following trick:
u32 idx
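The snippet is cut off here; the usual form of this trick (and presumably what is being suggested, though that is an assumption) is a multiply-shift that maps the 32-bit hash onto [0, numqueues) without a division:

	u32 idx = ((u64)rxq * numqueues) >> 32;	/* rxq scaled into [0, numqueues) */

	tfile = rcu_dereference(tun->tfiles[idx]);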
- Original Message -
On Fri, Aug 12, 2011 at 09:55:20AM +0800, Jason Wang wrote:
With the abstraction that each socket is the backend of a
queue for userspace, this patch adds multiqueue support to the
tap device by allowing multiple sockets to be attached to a
tap device. Then we
- Original Message -
On Fri, 2011-08-12 at 09:54 +0800, Jason Wang wrote:
As multi-queue NICs are commonly used in high-end servers,
the current single-queue based tap cannot satisfy the
requirement of scaling guest network performance as the
number of vCPUs increases. So
On 11/15/2011 12:44 PM, Krishna Kumar2 wrote:
Sasha Levin levinsasha...@gmail.com wrote on 11/14/2011 03:45:40 PM:
Why are both the bandwidth and latency dropping so
dramatically with multiple VQs?
It looks like there's no hash sync between host and guest, which makes
the RX VQ
On 11/16/2011 05:09 PM, Krishna Kumar2 wrote:
jason wang jasow...@redhat.com wrote on 11/16/2011 11:40:45 AM:
Hi Jason,
Have any thought in mind to solve the issue of flow handling?
So far nothing concrete.
Maybe some performance numbers first would be better; they would let us know
where we
On 11/25/2011 12:14 AM, Michael S. Tsirkin wrote:
On Thu, Nov 24, 2011 at 08:56:45PM +0800, jasowang wrote:
On 11/24/2011 06:34 PM, Michael S. Tsirkin wrote:
On Thu, Nov 24, 2011 at 06:13:41PM +0800, jasowang wrote:
On 11/24/2011 05:59 PM, Michael S. Tsirkin wrote:
On Thu, Nov 24,
On 11/25/2011 10:58 AM, Krishna Kumar2 wrote:
jasowang jasow...@redhat.com wrote on 11/24/2011 06:30:52 PM:
On Thu, Nov 24, 2011 at 01:47:14PM +0530, Krishna Kumar wrote:
It was reported that the macvtap device selects a
different vhost (when used with multiqueue feature)
for incoming packets
On 11/25/2011 11:07 AM, Krishna Kumar2 wrote:
Michael S. Tsirkin m...@redhat.com wrote on 11/24/2011 09:44:31 PM:
As far as I can see, ixgbe binds queues to physical CPUs, so let's
consider:
vhost thread transmits packets of flow A on processor M
during packet transmission, ixgbe driver
On 11/25/2011 12:09 PM, Krishna Kumar2 wrote:
Jason Wang jasow...@redhat.com wrote on 11/25/2011 08:51:57 AM:
My description is not clear again :(
I mean the same vhost thread:
vhost thread #0 transmits packets of flow A on processor M
...
vhost thread #0 moves to another processor N and starts to
On 11/28/2011 01:23 AM, Michael S. Tsirkin wrote:
On Fri, Nov 25, 2011 at 01:35:52AM -0500, David Miller wrote:
From: Krishna Kumar2 krkum...@in.ibm.com
Date: Fri, 25 Nov 2011 09:39:11 +0530
Jason Wang jasow...@redhat.com wrote on 11/25/2011 08:51:57 AM:
My description is not clear again :(
I
:
- An alternative idea instead of shared page is ctrl vq, the reason
that a shared table is preferable is the delay of ctrl vq itself.
- Optimization on irq affinity and tx queue selection
Comments are welcome, thanks!
---
Jason Wang (5):
virtio_net: passing rxhash through vnet_hdr
tuntap
This patch enables passing the rxhash value to the guest
through vnet_hdr. This is useful when the guest wants to cooperate
with the virtual device to steer a flow to a dedicated guest CPU.
This feature is negotiated through VIRTIO_NET_F_GUEST_RXHASH.
Signed-off-by: Jason Wang jasow
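A purely hypothetical sketch of what carrying the hash in the vnet header could look like (the RFC's actual layout is not shown in this excerpt, and the feature never landed in this form):

/* Hypothetical extension of the virtio-net header: when
 * VIRTIO_NET_F_GUEST_RXHASH is negotiated, the host appends the flow hash
 * it computed so the guest can use it for its own steering decisions. */
struct virtio_net_hdr_rxhash {
	struct virtio_net_hdr hdr;
	__u32 rxhash;		/* flow hash computed by the host */
};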
set through a new kind of ioctl - TUNSETFD - and
is pinned until device exit or until another new page is specified.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/tun.c | 63
include/linux/if_tun.h | 10
2 files
Device specific irq configuration may be needed in order to do some
optimization, so a new configuration method is needed to get the irq of a
virtqueue.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/lguest/lguest_device.c |8
drivers/s390/kvm/kvm_virtio.c |6 ++
drivers
use the guest scheduler to
balance the TX load and reduce lock contention on the egress path,
so processor_id() is used for tx queue selection.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/virtio_net.c | 165 +++-
include/linux
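A minimal sketch of that selection policy (the helper name is illustrative):

/* Pick the tx queue owned by the CPU currently transmitting, so a vCPU
 * keeps hitting "its" queue and rarely contends on a queue lock whose
 * holder may have been scheduled out or exited to the host.  Safe here
 * because the xmit path runs with preemption disabled. */
static unsigned int virtnet_pick_txq(unsigned int num_txqs)
{
	return smp_processor_id() % num_txqs;
}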
On 12/05/2011 06:55 PM, Stefan Hajnoczi wrote:
On Mon, Dec 5, 2011 at 8:59 AM, Jason Wang jasow...@redhat.com wrote:
+static int virtnet_set_fd(struct net_device *dev, u32 pfn)
+{
+ struct virtnet_info *vi = netdev_priv(dev);
+ struct virtio_device *vdev = vi->vdev;
+
+ if
On 12/06/2011 04:09 AM, Ben Hutchings wrote:
On Mon, 2011-12-05 at 16:58 +0800, Jason Wang wrote:
This patch adds a simple flow director to tun/tap device. It is just a
page that contains the hash to queue mapping which could be changed by
user-space. The backend (tap/macvtap) would query
On 12/06/2011 04:42 AM, Ben Hutchings wrote:
On Mon, 2011-12-05 at 16:59 +0800, Jason Wang wrote:
In order to let the packets of a flow be passed to the desired
guest CPU, we can cooperate with devices by programming the flow
director, which is just a hash-to-queue table.
This kinds
On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 6:33 AM, Jason Wang jasow...@redhat.com wrote:
On 12/05/2011 06:55 PM, Stefan Hajnoczi wrote:
On Mon, Dec 5, 2011 at 8:59 AM, Jason Wang jasow...@redhat.com wrote:
+static int virtnet_set_fd(struct net_device *dev, u32
On 12/06/2011 09:15 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang jasow...@redhat.com wrote:
On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 6:33 AM, Jason Wang jasow...@redhat.com wrote:
On 12/05/2011 06:55 PM, Stefan Hajnoczi wrote:
On Mon,
On 12/06/2011 11:42 PM, Sridhar Samudrala wrote:
On 12/6/2011 5:15 AM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang jasow...@redhat.com wrote:
On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 6:33 AM, Jason Wang jasow...@redhat.com
wrote:
On
On 12/07/2011 07:10 AM, Sridhar Samudrala wrote:
On 12/6/2011 8:14 AM, Michael S. Tsirkin wrote:
On Tue, Dec 06, 2011 at 07:42:54AM -0800, Sridhar Samudrala wrote:
On 12/6/2011 5:15 AM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang jasow...@redhat.com
wrote:
On
On 12/07/2011 03:30 PM, Rusty Russell wrote:
On Mon, 05 Dec 2011 16:58:37 +0800, Jason Wang jasow...@redhat.com wrote:
multiple queue virtio-net: flow steering through host/guest cooperation
Hello all:
This is a rough series that adds guest/host cooperation for flow
steering support based on
On 12/07/2011 05:08 PM, Stefan Hajnoczi wrote:
[...]
Considering the complexity of host NICs, each with their own steering
features, this series makes a first step with minimal effort to try to let
the guest driver and host tap/macvtap cooperate like a physical NIC does.
There may be
On 12/08/2011 12:10 AM, Michael S. Tsirkin wrote:
On Fri, Nov 25, 2011 at 01:35:52AM -0500, David Miller wrote:
From: Krishna Kumar2 krkum...@in.ibm.com
Date: Fri, 25 Nov 2011 09:39:11 +0530
Jason Wang jasow...@redhat.com wrote on 11/25/2011 08:51:57 AM:
My description is not clear again :(
I
On 12/08/2011 01:02 AM, Ben Hutchings wrote:
On Wed, 2011-12-07 at 19:31 +0800, Jason Wang wrote:
On 12/07/2011 03:30 PM, Rusty Russell wrote:
On Mon, 05 Dec 2011 16:58:37 +0800, Jason Wang jasow...@redhat.com wrote:
multiple queue virtio-net: flow steering through host/guest cooperation
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/virtio/virtio_ring.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 79e1b29..78428a8 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers
Use virtio_mb() to make sure the available index is exposed before
checking the avail event. Otherwise we may read a stale avail event
value in the guest and never kick the host afterwards.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/virtio/virtio_ring.c |6 +++---
1 files
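A sketch of the ordering in question, using the vring event-index helpers (simplified from the kick path of that era; virtio_mb() is the strong barrier the patch refers to):

/* Guest kick path: the new avail index must be visible to the host before
 * we read the host's avail_event; otherwise we may compare against a stale
 * event value, decide no kick is needed, and stall forever. */
static bool need_kick(struct vring_virtqueue *vq, u16 old, u16 new)
{
	u16 event;

	vq->vring.avail->idx = new;	/* expose the new available index */
	virtio_mb();			/* order the store above against the load below */
	event = vring_avail_event(&vq->vring);

	return vring_need_event(event, new, old);
}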
between unregister_dev() and workqueue
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/virtio_net.c | 27 ++-
include/linux/virtio_net.h |2 ++
2 files changed, 28 insertions(+), 1 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net
On 03/18/2012 08:22 PM, Michael S. Tsirkin wrote:
On Fri, Mar 16, 2012 at 11:20:26PM +0800, Jason Wang wrote:
This patch splits the device status field of virtio-net into ro and rw
bytes. This would simplify the implementation of both host and guest
and make the layout cleaner
On 03/19/2012 04:44 PM, Michael S. Tsirkin wrote:
On Mon, Mar 19, 2012 at 12:46:29PM +1030, Rusty Russell wrote:
On Tue, 13 Mar 2012 16:33:31 +0200, Michael S. Tsirkin m...@redhat.com
wrote:
diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
index 970d5a2..44a38d6 100644
---
and
also simplify the implementation.
Signed-off-by: Jason Wang jasow...@redhat.com
---
virtio-0.9.4.lyx | 23 +--
1 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/virtio-0.9.4.lyx b/virtio-0.9.4.lyx
index 6c7bab1..614ab55 100644
--- a/virtio-0.9.4.lyx
+++ b
On 03/22/2012 12:30 PM, Rusty Russell wrote:
On Wed, 21 Mar 2012 08:37:46 +0200, Michael S. Tsirkin m...@redhat.com
wrote:
Ah. Right, we need to trap for host to clear the bit.
OK, so let's make the bit RO, and add
VIRTIO_NET_CTRL_ANNOUNCED to acknowledge that we've
seen VIRTIO_NET_S_ANNOUNCE
:
- Send the gratuitous packets, or mark them as pending, before sending the
VIRTIO_NET_CTRL_ANNOUNCE_ACK command.
Signed-off-by: Jason Wang jasow...@redhat.com
---
virtio-0.9.4.lyx | 76 +-
1 files changed, 69 insertions(+), 7 deletions(-)
diff --git
On 03/28/2012 10:31 AM, David Miller wrote:
From: Jason Wang jasow...@redhat.com
Date: Fri, 16 Mar 2012 17:01:01 +0800
As the hypervisor does not have knowledge of the guest network configuration, it's
better to ask the guest to send gratuitous packets when needed.
Guest tests
to bit 8 to separate rw bits from ro bits
Changes from v3:
- cancel the workqueue during freeze
Changes from v2:
- fix the race between unregister_dev() and workqueue
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/virtio_net.c | 32 +++-
include
On 04/04/2012 03:49 PM, Michael S. Tsirkin wrote:
On Wed, Mar 28, 2012 at 01:44:28PM +0800, Jason Wang wrote:
As the hypervisor does not have knowledge of the guest network configuration, it's
better to ask the guest to send gratuitous packets when needed.
The guest tests the VIRTIO_NET_S_ANNOUNCE bit during
On 04/04/2012 05:32 PM, Michael S. Tsirkin wrote:
On Thu, Dec 29, 2011 at 09:12:38PM +1030, Rusty Russell wrote:
Michael S. Tsirkin noticed that we could run the refill work after
ndo_close, which can re-enable napi - we don't disable it until
virtnet_remove. This is clearly wrong, so
specific tracepoints?
---
Jason Wang (2):
vhost: basic tracepoints
tools: virtio: add a top-like utility for displaying vhost statistics
drivers/vhost/trace.h | 153
drivers/vhost/vhost.c | 17 ++
tools/virtio/vhost_stat | 360
To help with performance optimization and debugging, this patch adds tracepoints
for vhost. Note that the tracepoints are only for vhost; the net code is
not touched.
Two kinds of activities are traced: virtio and vhost work.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/vhost
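A minimal sketch of one such tracepoint, following the standard TRACE_EVENT() pattern (the event names and fields used by the actual series may differ):

/* drivers/vhost/trace.h (sketch) */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM vhost

#if !defined(_TRACE_VHOST_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_VHOST_H

#include <linux/tracepoint.h>
#include "vhost.h"

TRACE_EVENT(vhost_work_queue,
	TP_PROTO(struct vhost_dev *dev, struct vhost_work *work),
	TP_ARGS(dev, work),

	TP_STRUCT__entry(
		__field(void *, dev)
		__field(void *, work)
	),

	TP_fast_assign(
		__entry->dev  = dev;
		__entry->work = work;
	),

	TP_printk("dev %p work %p", __entry->dev, __entry->work)
);

#endif /* _TRACE_VHOST_H */

/* This part must be outside the guard above. */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace
#include <trace/define_trace.h>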
4 0
Signed-off-by: Jason Wang jasow...@redhat.com
---
tools/virtio/vhost_stat | 360 +++
1 files changed, 360 insertions(+), 0 deletions(-)
create mode 100755 tools/virtio/vhost_stat
diff --git a/tools/virtio/vhost_stat b/tools/virtio
through handling the whole config
change interrupt in a non-reentrant workqueue.
Signed-off-by: Jason Wang jasow...@redhat.com
---
Changes from v6:
- move the whole event processing to system_nrt_wq
- introduce the config_enable and config_lock to synchronize with dev removing
and pm
- protect
On 06/05/2012 06:10 PM, Michael S. Tsirkin wrote:
On Tue, Jun 05, 2012 at 04:38:41PM +0800, Jason Wang wrote:
Statistics counters are useful for debugging and performance optimization, so this
patch lets the virtio_net driver collect the following counters and export them to
userspace through ethtool -S
Currently, we store the statistics in independent fields of virtnet_stats,
which is not scalable when we want to add more counters. As suggested by Michael,
this patch converts them to an array and uses an enum as the index to access them.
Signed-off-by: Jason Wang jasow...@redhat.com
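A sketch of the enum-indexed layout described above (counter names and identifiers are illustrative, not necessarily those used in the patch):

#include <linux/ethtool.h>
#include <linux/types.h>

/* Keep the counter names and the storage in the same order, so the ethtool
 * strings and the values can both be emitted with one loop over the enum. */
enum virtnet_stat_idx {
	VIRTNET_TX_BYTES,
	VIRTNET_TX_PACKETS,
	VIRTNET_RX_BYTES,
	VIRTNET_RX_PACKETS,
	VIRTNET_NUM_STATS,	/* keep last */
};

static const char virtnet_stat_strings[VIRTNET_NUM_STATS][ETH_GSTRING_LEN] = {
	[VIRTNET_TX_BYTES]	= "tx_bytes",
	[VIRTNET_TX_PACKETS]	= "tx_packets",
	[VIRTNET_RX_BYTES]	= "rx_bytes",
	[VIRTNET_RX_PACKETS]	= "rx_packets",
};

struct virtnet_stats {
	u64 data[VIRTNET_NUM_STATS];
};

/* e.g. on the receive path: */
static inline void virtnet_stat_rx(struct virtnet_stats *s, unsigned int len)
{
	s->data[VIRTNET_RX_BYTES] += len;
	s->data[VIRTNET_RX_PACKETS]++;
}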
: 399537
rx_kicks: 7
rx_callbacks: 19794
TODO:
- more statistics
- calculate the pending bytes/pkts
Signed-off-by: Jason Wang jasow...@redhat.com
---
Changes from v1:
- style typo fixes
- convert the statistics fields to array
- use unlikely()
---
drivers/net/virtio_net.c | 115
On 06/06/2012 04:27 PM, Michael S. Tsirkin wrote:
On Wed, Jun 06, 2012 at 03:52:17PM +0800, Jason Wang wrote:
Statistics counters are useful for debugging and performance optimization, so this
patch lets the virtio_net driver collect the following counters and export them to
userspace through ethtool -S
On 06/06/2012 04:45 PM, Eric Dumazet wrote:
On Wed, 2012-06-06 at 10:35 +0200, Eric Dumazet wrote:
From: Eric Dumazet eduma...@google.com
commit 3fa2a1df909 (virtio-net: per cpu 64 bit stats (v2)) added a race
on 32bit arches.
We must use separate syncp for rx and tx path as they can be run at
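For reference, a sketch of the separate-syncp per-cpu stats layout being described (field names follow the shape of the eventual mainline fix and are illustrative here):

#include <linux/u64_stats_sync.h>
#include <linux/types.h>

struct virtnet_stats {
	struct u64_stats_sync tx_syncp;	/* protects tx_* on 32-bit */
	struct u64_stats_sync rx_syncp;	/* protects rx_* on 32-bit */
	u64 tx_bytes;
	u64 tx_packets;
	u64 rx_bytes;
	u64 rx_packets;
};

/* rx (softirq) and tx (process context) can hit the same per-cpu slot
 * concurrently, so each direction gets its own seqcount. */
static void virtnet_stats_rx(struct virtnet_stats *stats, unsigned int len)
{
	u64_stats_update_begin(&stats->rx_syncp);
	stats->rx_bytes += len;
	stats->rx_packets++;
	u64_stats_update_end(&stats->rx_syncp);
}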
On 06/08/2012 04:56 AM, Ben Hutchings wrote:
On Thu, 2012-06-07 at 13:39 -0700, Rick Jones wrote:
On 06/07/2012 01:24 PM, Ben Hutchings wrote:
On Thu, 2012-06-07 at 13:05 -0700, David Miller wrote:
From: Ben Hutchings bhutchi...@solarflare.com
Date: Thu, 7 Jun 2012 18:15:06 +0100
I would
On 06/08/2012 06:19 AM, Michael S. Tsirkin wrote:
On Wed, Jun 06, 2012 at 03:52:17PM +0800, Jason Wang wrote:
Statistics counters are useful for debugging and performance optimization, so
this
patch lets the virtio_net driver collect the following counters and export them to userspace
through ethtool -S
9374.67 138% 214.50 160.25 74%
Changes from V3:
- Rebase to the net-next
- Let queue 2 be the control virtqueue to obey the spec
- Provides irq affinity
- Choose txq based on processor id
References:
- V3: http://lwn.net/Articles/467283/
---
Jason Wang (3):
virtio_ring: move
From: Krishna Kumar krkum...@in.ibm.com
Introduce VIRTIO_NET_F_MULTIQUEUE.
Signed-off-by: Krishna Kumar krkum...@in.ibm.com
Signed-off-by: Jason Wang jasow...@redhat.com
---
include/linux/virtio_net.h |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/include/linux
Instead of storing the queue index in the transport-specific virtio info structures, this patch moves it to
vring_virtqueue and introduces helpers to set and get the value. This would
simplify management and tracing.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/virtio/virtio_mmio.c |5 +
drivers
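A sketch of the helpers described above, assuming a queue_index field has been added to vring_virtqueue (the transport-independent ring state); the transports set it at setup time and anyone holding a struct virtqueue can read it back:

static void virtqueue_set_queue_index(struct virtqueue *_vq, unsigned int index)
{
	struct vring_virtqueue *vq = container_of(_vq, struct vring_virtqueue, vq);

	vq->queue_index = index;
}

static unsigned int virtqueue_get_queue_index(struct virtqueue *_vq)
{
	struct vring_virtqueue *vq = container_of(_vq, struct vring_virtqueue, vq);

	return vq->queue_index;
}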
.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/lguest/lguest_device.c |8
drivers/s390/kvm/kvm_virtio.c |6 ++
drivers/virtio/virtio_mmio.c |8
drivers/virtio/virtio_pci.c| 12
include/linux/virtio_config.h |4
5 files
:
- Txq selection is based on the processor id in order to avoid contending on a lock
whose owner may have exited to the host.
- Since the txq/rxq were per-cpu, affinity hints were set to the cpu that owns
the queue pairs.
Signed-off-by: Krishna Kumar krkum...@in.ibm.com
Signed-off-by: Jason Wang jasow
review and comments.
---
Jason Wang (4):
option: introduce qemu_get_opt_all()
tap: multiqueue support
net: multiqueue support
virtio-net: add multiqueue support
hw/dp8393x.c |2
hw/mcf_fec.c |2
hw/qdev-properties.c | 33 +++-
hw/qdev.h
-by: Jason Wang jasow...@redhat.com
---
qemu-option.c | 19 +++
qemu-option.h |2 ++
2 files changed, 21 insertions(+), 0 deletions(-)
diff --git a/qemu-option.c b/qemu-option.c
index bb3886c..9263125 100644
--- a/qemu-option.c
+++ b/qemu-option.c
@@ -545,6 +545,25 @@ static QemuOpt
and detach a
file. Platform-specific helpers are called, and only the Linux helper has real
content, as multiqueue tap is only supported on Linux.
Signed-off-by: Jason Wang jasow...@redhat.com
---
net.c |4 +
net/tap-aix.c | 13 +++-
net/tap-bsd.c | 13 +++-
net/tap-haiku.c | 13
from or sent
out. Virtio-net would be the first user.
Signed-off-by: Jason Wang jasow...@redhat.com
---
hw/dp8393x.c |2 +-
hw/mcf_fec.c |2 +-
hw/qdev-properties.c | 33 +++-
hw/qdev.h|3 ++-
net.c| 58
be used without
changes in the vhost code. In the past, each vhost_net structure was used to track a single
VLANClientState and two virtqueues. As multiple VLANClientStates are
stored in the NICState, we can easily infer the corresponding VLANClientState from this
and queue_index.
Signed-off-by: Jason Wang
On 06/25/2012 06:14 PM, Michael S. Tsirkin wrote:
On Mon, Jun 25, 2012 at 05:41:17PM +0800, Jason Wang wrote:
Device specific irq optimizations such as irq affinity may be used by virtio
drivers. So this patch introduces a new method to get the irq of a specific
virtqueue.
After this patch
On 06/26/2012 01:49 AM, Sridhar Samudrala wrote:
On 6/25/2012 2:16 AM, Jason Wang wrote:
Hello All:
This series is an updated version of the multiqueue virtio-net driver
based on
Krishna Kumar's work to let virtio-net use multiple rx/tx queues to
do
packet reception and transmission. Please
On 06/26/2012 02:01 AM, Shirley Ma wrote:
Hello Jason,
Good work. Do you have local guest to guest results?
Thanks
Shirley
Hi Shirley:
I will run tests to measure the performance and post the results here.
Thanks
This patch introduces multiqueue capabilities to virtio-net devices. The
number of tx/rx queue pairs available in the device is exposed through config
space, and the driver can negotiate the number of pairs it wishes to use through the
ctrl vq.
Signed-off-by: Jason Wang jasow...@redhat.com
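A sketch of the negotiation described above. The structure and constants follow the control-queue interface that the multiqueue feature eventually standardized on; the RFC posted here may have used different names and values:

/* Control-queue payload for selecting the number of active queue pairs. */
struct virtio_net_ctrl_mq {
	__u16 virtqueue_pairs;
};

#define VIRTIO_NET_CTRL_MQ			4
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET		0

/* Driver-side flow, roughly:
 *   1. read the maximum number of queue pairs from config space;
 *   2. pick n <= max and post a VIRTIO_NET_CTRL_MQ /
 *      VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command carrying the struct above
 *      on the control virtqueue;
 *   3. only use rx/tx queues 0..n-1 afterwards. */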
On 07/01/2012 05:43 PM, Michael S. Tsirkin wrote:
On Mon, Jun 25, 2012 at 06:04:49PM +0800, Jason Wang wrote:
This patch lets virtio-net transmit and receive packets through multiple
VLANClientStates and abstracts them as multiple virtqueues to the guest. A new
parameter 'queues' was
Instead of storing the queue index in the transport-specific virtio info structures, this patch moves it to
vring_virtqueue and introduces helpers to set and get the value. This would
simplify management and tracing.
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/virtio/virtio_mmio.c |5 +
drivers
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/virtio/virtio_pci.c | 46 +
include/linux/virtio_config.h | 21 ++
2 files changed, 67 insertions(+), 0 deletions(-)
diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio
pairs.
Signed-off-by: Krishna Kumar krkum...@in.ibm.com
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/virtio_net.c | 645 ++-
include/linux/virtio_net.h |2 +
2 files changed, 452 insertions(+), 195 deletions(-)
diff --git a/drivers/net
index).
Signed-off-by: Jason Wang jasow...@redhat.com
---
drivers/net/virtio_net.c | 171 ++-
include/linux/virtio_net.h |7 ++
2 files changed, 142 insertions(+), 36 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
On 07/05/2012 07:40 PM, Sasha Levin wrote:
On Thu, 2012-07-05 at 18:29 +0800, Jason Wang wrote:
Instead of storing the queue index in virtio infos, this patch moves it to
vring_virtqueue and introduces helpers to set and get the value. This would
simplify the management and tracing.
Signed
On 07/05/2012 08:51 PM, Sasha Levin wrote:
On Thu, 2012-07-05 at 18:29 +0800, Jason Wang wrote:
@@ -1387,6 +1404,10 @@ static int virtnet_probe(struct virtio_device *vdev)
if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
vi->has_cvq = true;
+ /* Use single tx
On 07/06/2012 01:45 AM, Rick Jones wrote:
On 07/05/2012 03:29 AM, Jason Wang wrote:
Test result:
1) 1 vm 2 vcpu 1q vs 2q, 1 - 1q, 2 - 2q, no pinning
- Guest to External Host TCP STREAM
sessions size throughput1 throughput2 norm1 norm2
1 64 650.55 655.61 100% 24.88 24.86 99%
2 64 1446.81
On 07/06/2012 04:02 AM, Amos Kong wrote:
On 07/05/2012 06:29 PM, Jason Wang wrote:
This patch converts virtio_net to a multi-queue device. After negotiating the
VIRTIO_NET_F_MULTIQUEUE feature, the virtio device has many tx/rx queue pairs,
and the driver can read the number from config space
On 07/06/2012 04:07 AM, Amos Kong wrote:
On 07/05/2012 08:51 PM, Sasha Levin wrote:
On Thu, 2012-07-05 at 18:29 +0800, Jason Wang wrote:
@@ -1387,6 +1404,10 @@ static int virtnet_probe(struct virtio_device *vdev)
if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
vi
On 07/06/2012 02:38 PM, Stephen Hemminger wrote:
On Fri, 06 Jul 2012 11:20:06 +0800
Jason Wang jasow...@redhat.com wrote:
On 07/05/2012 08:51 PM, Sasha Levin wrote:
On Thu, 2012-07-05 at 18:29 +0800, Jason Wang wrote:
@@ -1387,6 +1404,10 @@ static int virtnet_probe(struct virtio_device *vdev