Call for Workshop Proposals - WorldCIST'19, La Toja Island, Spain

2018-08-19 Thread Maria Lemos
CALL FOR WORKSHOP PROPOSALS
WorldCIST'19 - 7th World Conference on Information Systems and Technologies
16-19 April 2019, La Toja Island, Galicia, Spain
http://www.worldcist.org/

---


The Information Systems and Technologies research and industrial community is 
invited to submit proposals for the organization of Workshops at WorldCIST'19 - 
7th World Conference on Information Systems and Technologies, to be held on La 
Toja Island, Galicia, Spain, 16-19 April 2019. WorldCIST is a global forum 
for researchers and practitioners to present and discuss the most recent 
innovations, trends, results, experiences and concerns across the many 
perspectives of Information Systems and Technologies.


###
WORKSHOP FORMAT
###

Workshops should focus on a specific scientific subject within the scope of 
WorldCIST'19 but not directly covered by the main conference areas. Each 
workshop will be coordinated by an Organizing Committee composed of at least 
two researchers in the field, preferably from different institutions and 
different countries. The organizers should assemble an international Program 
Committee for the Workshop, made up of recognized researchers in the 
Workshop's scientific area. Each workshop must attract at least ten submissions 
and five accepted papers in order to be held at WorldCIST'19.

Workshops will be selected by the WorldCIST'19 Conference/Workshop Chairs. 
Workshop full and short papers will be published in the main conference 
proceedings, in dedicated Workshop chapters, by Springer in a book of the AISC 
series. The proceedings will be submitted for indexing by ISI Thomson, SCOPUS, 
DBLP and EI-Compendex, among several other scientific databases. Extended 
versions of the best selected papers will be published in journals indexed by 
ISI/SCI, SCOPUS and DBLP. Detailed and up-to-date information is available on 
the WorldCIST'19 website: 
http://www.worldcist.org/


###
WORKSHOP ORGANIZATION
###

The Organizing Committee of each Workshop will be responsible for:

- Producing and distributing the Workshop Call for Papers (CFP);
- Coordinating the review and selection process for the papers submitted to the 
Workshop, acting as Workshop chairs (using the paper submission system to be 
provided);
- Delivering the final versions of the papers accepted for the Workshop in 
accordance with the guidelines and deadlines defined by the WorldCIST'19 
organizers;
- Coordinating and chairing the Workshop sessions at the conference.

The WorldCIST'19 organizers reserve the right to cancel any Workshop if 
deadlines are missed or if the number of registered attendees is too low to 
support the costs associated with the Workshop.



###
PROPOSAL CONTENT
###


Workshop proposals should contain the following information:

- Workshop title;
- Brief description of the specific scientific scope of the Workshop;
- List of topics of interest (max 15 topics);
- Reasons why the Workshop should be held within WorldCIST'19;
- Name, postal address, phone and email of all the members of the Workshop 
Organizing Committee;
- Preliminary proposal for the Workshop Program Committee (Names and 
affiliations).

Proposals should be submitted in PDF (in English) at 
https://easychair.org/conferences/?conf=worldcist-workshops2019 
by September 10, 2018.


###
IMPORTANT DATES
###

- Deadline for Workshop proposals: September 10, 2018
- Notification of Workshop acceptance: September 20, 2018
- Workshop Final Information and Program Committee: October 10, 2018
- Deadline for paper submission: November 30, 2018
- Notification of paper acceptance: January 6, 2019
- Deadline for final versions and conference registration: January 20, 2019
- Conference dates: April 16-19, 2019


###
CHAIR
###

Luis Paulo Reis, AISTI, IEEE & University of Porto, Portugal


WorldCIST'19 Website: http://www.worldcist.org/ 


[PATCH net-next v8 7/7] net: vhost: make busyloop_intr more accurate

2018-08-19 Thread xiangxia.m.yue
From: Tonghao Zhang 

This patch uses vhost_has_work_pending() to check whether
the specified handler is scheduled, because in most cases
vhost_has_work() returns true even when only the other side's
handler has been added to the worker list. So use
vhost_has_work_pending() instead of vhost_has_work().
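
For illustration, a compilable userspace sketch of the difference
(has_work()/has_work_pending() are hypothetical stand-ins for the kernel
helpers; the real definitions are introduced in patch 5 of this series):

#include <stdbool.h>
#include <stdio.h>

#define VQ_RX 0
#define VQ_TX 1

static unsigned long work_pending;   /* one bit per poll id */
static int work_queued;              /* number of queued work items */

static void queue_work(int poll_id)
{
        work_pending |= 1UL << poll_id;
        work_queued++;
}

/* Coarse check (old behaviour): any work queued on the device at all. */
static bool has_work(void)
{
        return work_queued > 0;
}

/* Precise check (new behaviour): work queued for this specific poll id. */
static bool has_work_pending(int poll_id)
{
        return work_queued > 0 && (work_pending & (1UL << poll_id));
}

int main(void)
{
        queue_work(VQ_RX);  /* only the rx handler gets scheduled */

        /* The tx busy loop used to treat this as an interruption... */
        printf("has_work()           = %d\n", has_work());
        /* ...even though no tx work is pending at all. */
        printf("has_work_pending(TX) = %d\n", has_work_pending(VQ_TX));
        return 0;
}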

Topology:
[Host] -> linux bridge -> tap vhost-net -> [Guest]

TCP_STREAM (netperf):
* Without the patch:  38035.39 Mbps, 3.37 us mean latency
* With the patch: 38409.44 Mbps, 3.34 us mean latency

Signed-off-by: Tonghao Zhang 
---
 drivers/vhost/net.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index db63ae2..b6939ef 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -487,10 +487,8 @@ static void vhost_net_busy_poll(struct vhost_net *net,
        endtime = busy_clock() + busyloop_timeout;
 
        while (vhost_can_busy_poll(endtime)) {
-               if (vhost_has_work(&net->dev)) {
-                       *busyloop_intr = true;
+               if (vhost_has_work(&net->dev))
                        break;
-               }
 
                if ((sock_has_rx_data(sock) &&
                     !vhost_vq_avail_empty(&net->dev, rvq)) ||
@@ -513,6 +511,11 @@ static void vhost_net_busy_poll(struct vhost_net *net,
            !vhost_has_work_pending(&net->dev, VHOST_NET_VQ_RX))
                vhost_net_enable_vq(net, rvq);
 
+       if (vhost_has_work_pending(&net->dev,
+                                  poll_rx ?
+                                  VHOST_NET_VQ_RX: VHOST_NET_VQ_TX))
+               *busyloop_intr = true;
+
        mutex_unlock(&vq->mutex);
 }
 
-- 
1.8.3.1



[PATCH net-next v8 6/7] net: vhost: disable rx wakeup during tx busypoll

2018-08-19 Thread xiangxia.m.yue
From: Tonghao Zhang 

In handle_tx, the busy poll now calls vhost_net_disable_vq()/
vhost_net_enable_vq() around the loop, because we poll the sock
ourselves. This can improve performance.

This was suggested by Toshiaki Makita and Jason Wang.

If the rx handler is already scheduled, we do not re-enable the vq,
because it is not necessary. We deliberately do not do this in the
last 'else' branch: if we received data but could not queue the rx
handler (the rx vring is full), we still enable the vq. Otherwise the
guest could drain the vring and be ready for more data while the vq
stayed disabled, so the rx vq could never be woken up to receive it.
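
A rough userspace sketch of that gating (the flags are hypothetical
stand-ins for vhost_net_disable_vq()/vhost_net_enable_vq() and
vhost_has_work_pending(); not the kernel code):

#include <stdbool.h>

static bool rx_wakeups_enabled = true;  /* models the rx vq poll state */
static bool rx_handler_scheduled;       /* models vhost_has_work_pending(RX) */

static void tx_busy_poll(void)
{
        /* We are about to poll the sock ourselves, so rx wakeups
         * would only cause spurious work. */
        rx_wakeups_enabled = false;

        /* ... bounded busy loop over the sock and the tx vring ... */

        /* Re-enable wakeups unless the rx handler is already queued:
         * if it is, it will consume the data and re-enable the vq
         * itself; if it is not (e.g. the rx vring was full), we must
         * re-enable here so the guest can be woken for new data. */
        if (!rx_handler_scheduled)
                rx_wakeups_enabled = true;
}

int main(void)
{
        tx_busy_poll();
        return rx_wakeups_enabled ? 0 : 1;
}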

Topology:
[Host] -> linux bridge -> tap vhost-net -> [Guest]

TCP_STREAM (netperf):
* Without the patch:  37598.20 Mbps, 3.43 us mean latency
* With the patch: 38035.39 Mbps, 3.37 us mean latency

Signed-off-by: Tonghao Zhang 
---
 drivers/vhost/net.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 23d7ffc..db63ae2 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -480,6 +480,9 @@ static void vhost_net_busy_poll(struct vhost_net *net,
        busyloop_timeout = poll_rx ? rvq->busyloop_timeout:
                                     tvq->busyloop_timeout;
 
+       if (!poll_rx)
+               vhost_net_disable_vq(net, rvq);
+
        preempt_disable();
        endtime = busy_clock() + busyloop_timeout;
 
@@ -506,6 +509,10 @@ static void vhost_net_busy_poll(struct vhost_net *net,
        else /* On tx here, sock has no rx data. */
                vhost_enable_notify(&net->dev, rvq);
 
+       if (!poll_rx &&
+           !vhost_has_work_pending(&net->dev, VHOST_NET_VQ_RX))
+               vhost_net_enable_vq(net, rvq);
+
        mutex_unlock(&vq->mutex);
 }
 
-- 
1.8.3.1



[PATCH net-next v8 5/7] net: vhost: introduce bitmap for vhost_poll

2018-08-19 Thread xiangxia.m.yue
From: Tonghao Zhang 

The bitmap added to vhost_dev lets us check whether a
specific poll is scheduled. It will be used by the
next two patches.
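
A compilable userspace model of how the bitmap is maintained
(poll_queue()/worker_pass() are stand-ins mirroring the diff below,
not the kernel code):

#include <limits.h>

#define MAX_VQ        128
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define MAP_WORDS     (MAX_VQ / BITS_PER_LONG)

static unsigned long work_pending[MAP_WORDS];

/* vhost_poll_queue() side: mark this poll before queueing its work. */
static void poll_queue(int poll_id)
{
        work_pending[poll_id / BITS_PER_LONG] |=
                1UL << (poll_id % BITS_PER_LONG);
        /* ...then push the work item onto the device work list... */
}

/* vhost_worker() side: clear the whole map before running handlers,
 * mirroring bitmap_zero() in the diff, so bits set afterwards always
 * describe new, not-yet-handled work. */
static void worker_pass(void)
{
        for (unsigned int i = 0; i < MAP_WORDS; i++)
                work_pending[i] = 0;
        /* ...drain and run the queued work items... */
}

int main(void)
{
        poll_queue(1);          /* e.g. the tx poll id */
        worker_pass();          /* map is clear again afterwards */
        return 0;
}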

Signed-off-by: Tonghao Zhang 
---
 drivers/vhost/net.c   | 11 +--
 drivers/vhost/vhost.c | 17 +++--
 drivers/vhost/vhost.h |  7 ++-
 3 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 1eff72d..23d7ffc 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1135,8 +1135,15 @@ static int vhost_net_open(struct inode *inode, struct file *f)
        }
        vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX);
 
-       vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev);
-       vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, EPOLLIN, dev);
+       vhost_poll_init(n->poll + VHOST_NET_VQ_TX,
+                       handle_tx_net,
+                       VHOST_NET_VQ_TX,
+                       EPOLLOUT, dev);
+
+       vhost_poll_init(n->poll + VHOST_NET_VQ_RX,
+                       handle_rx_net,
+                       VHOST_NET_VQ_RX,
+                       EPOLLIN, dev);
 
        f->private_data = n;
 
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index a1c06e7..dc88a60 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -186,7 +186,7 @@ void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn)
 
 /* Init poll structure */
 void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
-                    __poll_t mask, struct vhost_dev *dev)
+                    __u8 poll_id, __poll_t mask, struct vhost_dev *dev)
 {
        init_waitqueue_func_entry(&poll->wait, vhost_poll_wakeup);
        init_poll_funcptr(&poll->table, vhost_poll_func);
@@ -194,6 +194,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
        poll->dev = dev;
        poll->wqh = NULL;
 
+       poll->poll_id = poll_id;
        vhost_work_init(&poll->work, fn);
 }
 EXPORT_SYMBOL_GPL(vhost_poll_init);
@@ -276,8 +277,16 @@ bool vhost_has_work(struct vhost_dev *dev)
 }
 EXPORT_SYMBOL_GPL(vhost_has_work);
 
+bool vhost_has_work_pending(struct vhost_dev *dev, int poll_id)
+{
+       return !llist_empty(&dev->work_list) &&
+               test_bit(poll_id, dev->work_pending);
+}
+EXPORT_SYMBOL_GPL(vhost_has_work_pending);
+
 void vhost_poll_queue(struct vhost_poll *poll)
 {
+       set_bit(poll->poll_id, poll->dev->work_pending);
        vhost_work_queue(poll->dev, &poll->work);
 }
 EXPORT_SYMBOL_GPL(vhost_poll_queue);
@@ -354,6 +363,7 @@ static int vhost_worker(void *data)
                if (!node)
                        schedule();
 
+               bitmap_zero(dev->work_pending, VHOST_DEV_MAX_VQ);
                node = llist_reverse_order(node);
                /* make sure flag is seen after deletion */
                smp_wmb();
@@ -420,6 +430,8 @@ void vhost_dev_init(struct vhost_dev *dev,
        struct vhost_virtqueue *vq;
        int i;
 
+       BUG_ON(nvqs > VHOST_DEV_MAX_VQ);
+
        dev->vqs = vqs;
        dev->nvqs = nvqs;
        mutex_init(&dev->mutex);
@@ -428,6 +440,7 @@ void vhost_dev_init(struct vhost_dev *dev,
        dev->iotlb = NULL;
        dev->mm = NULL;
        dev->worker = NULL;
+       bitmap_zero(dev->work_pending, VHOST_DEV_MAX_VQ);
        init_llist_head(&dev->work_list);
        init_waitqueue_head(&dev->wait);
        INIT_LIST_HEAD(&dev->read_list);
@@ -445,7 +458,7 @@ void vhost_dev_init(struct vhost_dev *dev,
                vhost_vq_reset(dev, vq);
                if (vq->handle_kick)
                        vhost_poll_init(&vq->poll, vq->handle_kick,
-                                       EPOLLIN, dev);
+                                       i, EPOLLIN, dev);
        }
 }
 EXPORT_SYMBOL_GPL(vhost_dev_init);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 6c844b9..60b6f6d 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -30,6 +30,7 @@ struct vhost_poll {
        wait_queue_head_t        *wqh;
        wait_queue_entry_t        wait;
        struct vhost_work         work;
+       __u8                      poll_id;
        __poll_t                  mask;
        struct vhost_dev         *dev;
 };
@@ -37,9 +38,10 @@ struct vhost_poll {
 void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn);
 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work);
 bool vhost_has_work(struct vhost_dev *dev);
+bool vhost_has_work_pending(struct vhost_dev *dev, int poll_id);
 
 void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
-                    __poll_t mask, struct vhost_dev *dev);
+                    __u8 id, __poll_t mask, struct vhost_dev *dev);
 int vhost_poll_start(struct vhost_poll *poll, struct file *file);
 void vhost_poll_stop(struct vhost_poll *poll);
 void vhost_poll_flush(struct vhost_poll *poll);
@@ -152,6 +154,8 @@ struct vhost_msg_node {
        struct list_head node;
 };
 
+#define VHOST_DEV_MAX_VQ       128
+
 struct vhost_dev {
        struct 
[PATCH net-next v8 4/7] net: vhost: add rx busy polling in tx path

2018-08-19 Thread xiangxia.m.yue
From: Tonghao Zhang 

This patch improves guest receive performance.
On the handle_tx side, we poll the sock receive queue at the
same time; handle_rx already does the same.

We set poll-us=100us and use netperf to test throughput
and mean latency. While the tests run, the vhost-net kthread
of that VM is always at 100% CPU. The command is shown below.

Rx performance is greatly improved by this patch. There is no
notable tx performance change with this series, though. This
patch is useful for bidirectional traffic.

netperf -H IP -t TCP_STREAM -l 20 -- -O "THROUGHPUT, THROUGHPUT_UNITS, MEAN_LATENCY"

Topology:
[Host] -> linux bridge -> tap vhost-net -> [Guest]

TCP_STREAM:
* Without the patch:  19842.95 Mbps, 6.50 us mean latency
* With the patch: 37598.20 Mbps, 3.43 us mean latency

Signed-off-by: Tonghao Zhang 
---
 drivers/vhost/net.c | 33 +
 1 file changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 453c061..1eff72d 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -510,31 +510,24 @@ static void vhost_net_busy_poll(struct vhost_net *net,
 }
 
 static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
-                                   struct vhost_net_virtqueue *nvq,
+                                   struct vhost_net_virtqueue *tnvq,
                                    unsigned int *out_num, unsigned int *in_num,
                                    bool *busyloop_intr)
 {
-       struct vhost_virtqueue *vq = &nvq->vq;
-       unsigned long uninitialized_var(endtime);
-       int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+       struct vhost_net_virtqueue *rnvq = &net->vqs[VHOST_NET_VQ_RX];
+       struct vhost_virtqueue *rvq = &rnvq->vq;
+       struct vhost_virtqueue *tvq = &tnvq->vq;
+
+       int r = vhost_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov),
                                  out_num, in_num, NULL, NULL);
 
-       if (r == vq->num && vq->busyloop_timeout) {
-               if (!vhost_sock_zcopy(vq->private_data))
-                       vhost_net_signal_used(nvq);
-               preempt_disable();
-               endtime = busy_clock() + vq->busyloop_timeout;
-               while (vhost_can_busy_poll(endtime)) {
-                       if (vhost_has_work(vq->dev)) {
-                               *busyloop_intr = true;
-                               break;
-                       }
-                       if (!vhost_vq_avail_empty(vq->dev, vq))
-                               break;
-                       cpu_relax();
-               }
-               preempt_enable();
-               r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+       if (r == tvq->num && tvq->busyloop_timeout) {
+               if (!vhost_sock_zcopy(tvq->private_data))
+                       vhost_net_signal_used(tnvq);
+
+               vhost_net_busy_poll(net, rvq, tvq, busyloop_intr, false);
+
+               r = vhost_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov),
                                      out_num, in_num, NULL, NULL);
        }
 
 
-- 
1.8.3.1



[PATCH net-next v8 3/7] net: vhost: factor out busy polling logic to vhost_net_busy_poll()

2018-08-19 Thread xiangxia.m.yue
From: Tonghao Zhang 

Factor out the generic busy polling logic; it will be used
in the tx path in the next patch. With this patch, qemu can
also set the busyloop_timeout differently for the rx queue.

To avoid duplicated code, introduce two helper functions (a
userspace sketch of the bounded busy-poll pattern follows the list):
* sock_has_rx_data (renamed from sk_has_rx_data)
* vhost_net_busy_poll_try_queue
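
For readers outside the kernel tree, a compilable userspace sketch of
that bounded busy-poll pattern (busy_clock_us()/can_busy_poll() are
hypothetical stand-ins for busy_clock()/vhost_can_busy_poll()):

#include <stdbool.h>
#include <time.h>

static unsigned long busy_clock_us(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000UL + (unsigned long)ts.tv_nsec / 1000;
}

static bool can_busy_poll(unsigned long endtime)
{
        return busy_clock_us() < endtime;
}

/* Spin until data arrives, other work shows up, or the time budget
 * expires; returns true if interrupted by other work, which is what
 * *busyloop_intr reports in the kernel code. */
static bool busy_poll(unsigned long timeout_us,
                      bool (*has_data)(void), bool (*has_work)(void))
{
        unsigned long endtime = busy_clock_us() + timeout_us;

        while (can_busy_poll(endtime)) {
                if (has_work())
                        return true;
                if (has_data())
                        break;
                __asm__ __volatile__("" ::: "memory"); /* cpu_relax() stand-in */
        }
        return false;
}

static bool no_data(void) { return false; }
static bool no_work(void) { return false; }

int main(void)
{
        /* Spins for ~100us, then gives up without interruption. */
        return busy_poll(100, no_data, no_work) ? 1 : 0;
}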

Signed-off-by: Tonghao Zhang 
---
 drivers/vhost/net.c | 111 +---
 1 file changed, 71 insertions(+), 40 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 32c1b52..453c061 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -440,6 +440,75 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
        nvq->done_idx = 0;
 }
 
+static int sock_has_rx_data(struct socket *sock)
+{
+       if (unlikely(!sock))
+               return 0;
+
+       if (sock->ops->peek_len)
+               return sock->ops->peek_len(sock);
+
+       return skb_queue_empty(&sock->sk->sk_receive_queue);
+}
+
+static void vhost_net_busy_poll_try_queue(struct vhost_net *net,
+                                         struct vhost_virtqueue *vq)
+{
+       if (!vhost_vq_avail_empty(&net->dev, vq)) {
+               vhost_poll_queue(&vq->poll);
+       } else if (unlikely(vhost_enable_notify(&net->dev, vq))) {
+               vhost_disable_notify(&net->dev, vq);
+               vhost_poll_queue(&vq->poll);
+       }
+}
+
+static void vhost_net_busy_poll(struct vhost_net *net,
+                               struct vhost_virtqueue *rvq,
+                               struct vhost_virtqueue *tvq,
+                               bool *busyloop_intr,
+                               bool poll_rx)
+{
+       unsigned long busyloop_timeout;
+       unsigned long endtime;
+       struct socket *sock;
+       struct vhost_virtqueue *vq = poll_rx ? tvq : rvq;
+
+       mutex_lock_nested(&vq->mutex, poll_rx ? VHOST_NET_VQ_TX: VHOST_NET_VQ_RX);
+       vhost_disable_notify(&net->dev, vq);
+       sock = rvq->private_data;
+
+       busyloop_timeout = poll_rx ? rvq->busyloop_timeout:
+                                    tvq->busyloop_timeout;
+
+       preempt_disable();
+       endtime = busy_clock() + busyloop_timeout;
+
+       while (vhost_can_busy_poll(endtime)) {
+               if (vhost_has_work(&net->dev)) {
+                       *busyloop_intr = true;
+                       break;
+               }
+
+               if ((sock_has_rx_data(sock) &&
+                    !vhost_vq_avail_empty(&net->dev, rvq)) ||
+                   !vhost_vq_avail_empty(&net->dev, tvq))
+                       break;
+
+               cpu_relax();
+       }
+
+       preempt_enable();
+
+       if (poll_rx)
+               vhost_net_busy_poll_try_queue(net, tvq);
+       else if (sock_has_rx_data(sock))
+               vhost_net_busy_poll_try_queue(net, rvq);
+       else /* On tx here, sock has no rx data. */
+               vhost_enable_notify(&net->dev, rvq);
+
+       mutex_unlock(&vq->mutex);
+}
+
 static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
                                    struct vhost_net_virtqueue *nvq,
                                    unsigned int *out_num, unsigned int *in_num,
@@ -753,16 +822,6 @@ static int peek_head_len(struct vhost_net_virtqueue *rvq, struct sock *sk)
        return len;
 }
 
-static int sk_has_rx_data(struct sock *sk)
-{
-       struct socket *sock = sk->sk_socket;
-
-       if (sock->ops->peek_len)
-               return sock->ops->peek_len(sock);
-
-       return skb_queue_empty(&sk->sk_receive_queue);
-}
-
 static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
                                      bool *busyloop_intr)
 {
@@ -770,41 +829,13 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
        struct vhost_net_virtqueue *tnvq = &net->vqs[VHOST_NET_VQ_TX];
        struct vhost_virtqueue *rvq = &rnvq->vq;
        struct vhost_virtqueue *tvq = &tnvq->vq;
-       unsigned long uninitialized_var(endtime);
        int len = peek_head_len(rnvq, sk);
 
-       if (!len && tvq->busyloop_timeout) {
+       if (!len && rvq->busyloop_timeout) {
                /* Flush batched heads first */
                vhost_net_signal_used(rnvq);
                /* Both tx vq and rx socket were polled here */
-               mutex_lock_nested(&tvq->mutex, VHOST_NET_VQ_TX);
-               vhost_disable_notify(&net->dev, tvq);
-
-               preempt_disable();
-               endtime = busy_clock() + tvq->busyloop_timeout;
-
-               while (vhost_can_busy_poll(endtime)) {
-                       if (vhost_has_work(&net->dev)) {
-                               *busyloop_intr = true;
-                               break;
-                       }
-                       if ((sk_has_rx_data(sk) &&
-                            !vhost_vq_avail_empty(&net->dev, rvq)) ||
-                           !vhost_vq_avail_empty(&net->dev, tvq))
-                               break;
-                       cpu_relax();
-               }
-
-               

[PATCH net-next v8 2/7] net: vhost: replace magic number of lock annotation

2018-08-19 Thread xiangxia.m.yue
From: Tonghao Zhang 

Use VHOST_NET_VQ_XXX as the subclass for mutex_lock_nested().

Signed-off-by: Tonghao Zhang 
Acked-by: Jason Wang 
---
 drivers/vhost/net.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 367d802..32c1b52 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -712,7 +712,7 @@ static void handle_tx(struct vhost_net *net)
        struct vhost_virtqueue *vq = &nvq->vq;
        struct socket *sock;
 
-       mutex_lock(&vq->mutex);
+       mutex_lock_nested(&vq->mutex, VHOST_NET_VQ_TX);
        sock = vq->private_data;
        if (!sock)
                goto out;
@@ -777,7 +777,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
        /* Flush batched heads first */
        vhost_net_signal_used(rnvq);
        /* Both tx vq and rx socket were polled here */
-       mutex_lock_nested(&tvq->mutex, 1);
+       mutex_lock_nested(&tvq->mutex, VHOST_NET_VQ_TX);
        vhost_disable_notify(&net->dev, tvq);
 
        preempt_disable();
@@ -919,7 +919,7 @@ static void handle_rx(struct vhost_net *net)
        __virtio16 num_buffers;
        int recv_pkts = 0;
 
-       mutex_lock_nested(&vq->mutex, 0);
+       mutex_lock_nested(&vq->mutex, VHOST_NET_VQ_RX);
        sock = vq->private_data;
        if (!sock)
                goto out;
-- 
1.8.3.1



[PATCH net-next v8 1/7] net: vhost: lock the vqs one by one

2018-08-19 Thread xiangxia.m.yue
From: Tonghao Zhang 

This patch changes the locking so that, instead of taking
all the vq mutexes at the same time, we take them one by one.
This will be used by a later patch to avoid a deadlock.
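
A compilable userspace sketch of the change (pthread mutexes stand in
for the vq mutexes; reset_vq_meta() is a hypothetical stand-in for
__vhost_vq_meta_reset()):

#include <pthread.h>

#define NVQS 2

static pthread_mutex_t vq_mutex[NVQS] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

static void reset_vq_meta(int i)
{
        (void)i;        /* stand-in for __vhost_vq_meta_reset(d->vqs[i]) */
}

/* Old pattern: take every vq mutex up front, release them all at the end. */
static void reset_locking_all_at_once(void)
{
        for (int i = 0; i < NVQS; i++)
                pthread_mutex_lock(&vq_mutex[i]);
        for (int i = 0; i < NVQS; i++)
                reset_vq_meta(i);
        for (int i = 0; i < NVQS; i++)
                pthread_mutex_unlock(&vq_mutex[i]);
}

/* New pattern: hold each vq mutex only around the work that needs it,
 * so no path ever nests all the vq mutexes at once. */
static void reset_locking_one_by_one(void)
{
        for (int i = 0; i < NVQS; i++) {
                pthread_mutex_lock(&vq_mutex[i]);
                reset_vq_meta(i);
                pthread_mutex_unlock(&vq_mutex[i]);
        }
}

int main(void)
{
        reset_locking_all_at_once();
        reset_locking_one_by_one();
        return 0;
}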

Signed-off-by: Tonghao Zhang 
Acked-by: Jason Wang 
Signed-off-by: Jason Wang 
---
 drivers/vhost/vhost.c | 24 +++-
 1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index a502f1a..a1c06e7 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -294,8 +294,11 @@ static void vhost_vq_meta_reset(struct vhost_dev *d)
 {
        int i;
 
-       for (i = 0; i < d->nvqs; ++i)
+       for (i = 0; i < d->nvqs; ++i) {
+               mutex_lock(&d->vqs[i]->mutex);
                __vhost_vq_meta_reset(d->vqs[i]);
+               mutex_unlock(&d->vqs[i]->mutex);
+       }
 }
 
 static void vhost_vq_reset(struct vhost_dev *dev,
@@ -890,20 +893,6 @@ static inline void __user *__vhost_get_user(struct vhost_virtqueue *vq,
 #define vhost_get_used(vq, x, ptr) \
        vhost_get_user(vq, x, ptr, VHOST_ADDR_USED)
 
-static void vhost_dev_lock_vqs(struct vhost_dev *d)
-{
-       int i = 0;
-       for (i = 0; i < d->nvqs; ++i)
-               mutex_lock_nested(&d->vqs[i]->mutex, i);
-}
-
-static void vhost_dev_unlock_vqs(struct vhost_dev *d)
-{
-       int i = 0;
-       for (i = 0; i < d->nvqs; ++i)
-               mutex_unlock(&d->vqs[i]->mutex);
-}
-
 static int vhost_new_umem_range(struct vhost_umem *umem,
                                u64 start, u64 size, u64 end,
                                u64 userspace_addr, int perm)
@@ -953,7 +942,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d,
                if (msg->iova <= vq_msg->iova &&
                    msg->iova + msg->size - 1 > vq_msg->iova &&
                    vq_msg->type == VHOST_IOTLB_MISS) {
+                       mutex_lock(&node->vq->mutex);
                        vhost_poll_queue(&node->vq->poll);
+                       mutex_unlock(&node->vq->mutex);
+
                        list_del(&node->node);
                        kfree(node);
                }
@@ -985,7 +977,6 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,
        int ret = 0;
 
        mutex_lock(&dev->mutex);
-       vhost_dev_lock_vqs(dev);
        switch (msg->type) {
        case VHOST_IOTLB_UPDATE:
                if (!dev->iotlb) {
@@ -1019,7 +1010,6 @@ static int vhost_process_iotlb_msg(struct vhost_dev *dev,
                break;
        }
 
-       vhost_dev_unlock_vqs(dev);
        mutex_unlock(&dev->mutex);
 
        return ret;
-- 
1.8.3.1



[PATCH net-next v8 0/7] net: vhost: improve performance when enable busyloop

2018-08-19 Thread xiangxia.m.yue
From: Tonghao Zhang 

This patch series improves guest receive performance.
On the handle_tx side, we poll the sock receive queue
at the same time; handle_rx already does the same.

For more performance reports, see patches 4, 6 and 7.

Tonghao Zhang (7):
  net: vhost: lock the vqs one by one
  net: vhost: replace magic number of lock annotation
  net: vhost: factor out busy polling logic to vhost_net_busy_poll()
  net: vhost: add rx busy polling in tx path
  net: vhost: introduce bitmap for vhost_poll
  net: vhost: disable rx wakeup during tx busypoll
  net: vhost: make busyloop_intr more accurate

 drivers/vhost/net.c   | 169 +++---
 drivers/vhost/vhost.c |  41 ++--
 drivers/vhost/vhost.h |   7 ++-
 3 files changed, 133 insertions(+), 84 deletions(-)

-- 
1.8.3.1
