Re: [PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

2019-05-28 Thread Jason Wang


On 2019/5/28 6:56 PM, Stefano Garzarella wrote:

We flush all pending works before calling vdev->config->reset(vdev),
but other works can be queued before vdev->config->del_vqs(vdev),
so we add another flush after it, to avoid a use after free.

Suggested-by: Michael S. Tsirkin 
Signed-off-by: Stefano Garzarella 
---
  net/vmw_vsock/virtio_transport.c | 23 +--
  1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index e694df10ab61..ad093ce96693 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
return ret;
  }
  
+static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
+{
+   flush_work(&vsock->loopback_work);
+   flush_work(&vsock->rx_work);
+   flush_work(&vsock->tx_work);
+   flush_work(&vsock->event_work);
+   flush_work(&vsock->send_pkt_work);
+}
+
  static void virtio_vsock_remove(struct virtio_device *vdev)
  {
struct virtio_vsock *vsock = vdev->priv;
@@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
	mutex_lock(&the_virtio_vsock_mutex);
the_virtio_vsock = NULL;
  
-	flush_work(&vsock->loopback_work);
-	flush_work(&vsock->rx_work);
-	flush_work(&vsock->tx_work);
-	flush_work(&vsock->event_work);
-	flush_work(&vsock->send_pkt_work);
-
/* Reset all connected sockets when the device disappear */
vsock_for_each_connected_socket(virtio_vsock_reset_sock);
  
@@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)

vsock->event_run = false;
	mutex_unlock(&vsock->event_lock);
  
+	/* Flush all pending works */
+	virtio_vsock_flush_works(vsock);
+
/* Flush all device writes and interrupts, device will not use any
 * more buffers.
 */
@@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
/* Delete virtqueues and flush outstanding callbacks if any */
vdev->config->del_vqs(vdev);
  
+	/* Other works can be queued before 'config->del_vqs()', so we flush
+	 * all works before freeing the vsock object to avoid use after free.
+	 */
+	virtio_vsock_flush_works(vsock);



Some questions after a quick glance:

1) It looks to me that the work could be queued from the path of
vsock_transport_cancel_pkt(). Is that synchronized here?


2) If we decide to flush after del_vqs(), are tx_run/rx_run/event_run
still needed? It looks to me we've already covered everything, except
that we need to flush rx_work last, since send_pkt_work can requeue
rx_work.
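
Something like this ordering is what I mean (just a sketch to illustrate,
not tested):

static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
{
	flush_work(&vsock->loopback_work);
	flush_work(&vsock->tx_work);
	flush_work(&vsock->event_work);
	flush_work(&vsock->send_pkt_work);
	/* flushed last: send_pkt_work above may have requeued it */
	flush_work(&vsock->rx_work);
}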


Thanks



+
kfree(vsock);
	mutex_unlock(&the_virtio_vsock_mutex);
  }


Re: [PATCH v2 1/8] vsock/virtio: limit the memory used per-socket

2019-05-28 Thread Jason Wang


On 2019/5/29 12:45 AM, Stefano Garzarella wrote:

On Wed, May 15, 2019 at 10:48:44AM +0800, Jason Wang wrote:

On 2019/5/15 12:35 AM, Stefano Garzarella wrote:

On Tue, May 14, 2019 at 11:25:34AM +0800, Jason Wang wrote:

On 2019/5/14 1:23 AM, Stefano Garzarella wrote:

On Mon, May 13, 2019 at 05:58:53PM +0800, Jason Wang wrote:

On 2019/5/10 8:58 PM, Stefano Garzarella wrote:

+static struct virtio_vsock_buf *
+virtio_transport_alloc_buf(struct virtio_vsock_pkt *pkt, bool zero_copy)
+{
+   struct virtio_vsock_buf *buf;
+
+   if (pkt->len == 0)
+   return NULL;
+
+   buf = kzalloc(sizeof(*buf), GFP_KERNEL);
+   if (!buf)
+   return NULL;
+
+   /* If the buffer in the virtio_vsock_pkt is full, we can move it to
+    * the new virtio_vsock_buf, avoiding the copy, because we are sure
+    * that we are not using more memory than what is counted by the
+    * credit mechanism.
+    */
+   if (zero_copy && pkt->len == pkt->buf_len) {
+   buf->addr = pkt->buf;
+   pkt->buf = NULL;
+   } else {

Is the copy still needed if we're just a few bytes less? We met a similar
issue for virtio-net, and virtio-net solves this by always copying the
first 128 bytes for big packets.

See receive_big()

I see, it is more sophisticated.
IIUC, virtio-net allocates an sk_buff with 128 bytes of buffer, then copies
the first 128 bytes, then adds the buffer used to receive the packet as a
frag to the skb.

Yes, and the point is that if the packet is smaller than 128 bytes the
pages will be recycled.



So it avoids the overhead of allocating a large buffer. I got it.

Just out of curiosity, why is the threshold 128 bytes?


From its name (GOOD_COPY_LEN), I think it's just a value that won't lose
much performance, e.g. the size of two cachelines.
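
Roughly, the pattern could be sketched like this (an illustrative sketch
only, not the actual receive_big() code; the function name is made up):

#include <linux/skbuff.h>

#define GOOD_COPY_LEN	128	/* ~two cachelines */

/* Copy up to GOOD_COPY_LEN bytes into a freshly allocated skb; only big
 * packets keep the receive page attached as a fragment, so small packets
 * never pin a whole page.
 */
static struct sk_buff *rx_copy_or_frag(struct page *page, unsigned int len)
{
	unsigned int copy = min_t(unsigned int, len, GOOD_COPY_LEN);
	struct sk_buff *skb = alloc_skb(GOOD_COPY_LEN, GFP_ATOMIC);

	if (!skb)
		return NULL;

	skb_put_data(skb, page_address(page), copy);

	if (len > copy)		/* big packet: hand the page over as a frag */
		skb_add_rx_frag(skb, 0, page, copy, len - copy, PAGE_SIZE);
	else			/* small packet: the page can be recycled */
		put_page(page);

	return skb;
}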


Jason, Stefan,
since I'm removing the patches that increase the buffers to 64 KiB, and I'm
adding a threshold for small packets, I would simplify this patch by
removing the new buffer allocation and copying small packets into the
buffers already queued (if there is space).
In this way, I should solve the issue of 1-byte packets.

Do you think that could be better?
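
A rough sketch of that idea (hypothetical code, not a real patch; the
rx_queue list and the helper name are made up for illustration):

/* If the last queued packet has room left, append the small payload
 * there instead of queueing a new buffer.
 */
static void rx_enqueue(struct virtio_vsock_sock *vvs,
		       struct virtio_vsock_pkt *pkt)
{
	if (!list_empty(&vvs->rx_queue)) {
		struct virtio_vsock_pkt *last;

		last = list_last_entry(&vvs->rx_queue,
				       struct virtio_vsock_pkt, list);
		if (last->buf_len - last->len >= pkt->len) {
			memcpy(last->buf + last->len, pkt->buf, pkt->len);
			last->len += pkt->len;
			virtio_transport_free_pkt(pkt);
			return;
		}
	}
	list_add_tail(&pkt->list, &vvs->rx_queue);
}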



I think so.

Thanks




Thanks,
Stefano


Re: [PATCH v2 1/8] vsock/virtio: limit the memory used per-socket

2019-05-28 Thread Stefano Garzarella
On Wed, May 15, 2019 at 10:48:44AM +0800, Jason Wang wrote:
> 
> > On 2019/5/15 12:35 AM, Stefano Garzarella wrote:
> > > On Tue, May 14, 2019 at 11:25:34AM +0800, Jason Wang wrote:
> > > > On 2019/5/14 1:23 AM, Stefano Garzarella wrote:
> > > > > On Mon, May 13, 2019 at 05:58:53PM +0800, Jason Wang wrote:
> > > > > > On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > > > > > +static struct virtio_vsock_buf *
> > > > > > +virtio_transport_alloc_buf(struct virtio_vsock_pkt *pkt, bool zero_copy)
> > > > > > +{
> > > > > > +   struct virtio_vsock_buf *buf;
> > > > > > +
> > > > > > +   if (pkt->len == 0)
> > > > > > +   return NULL;
> > > > > > +
> > > > > > +   buf = kzalloc(sizeof(*buf), GFP_KERNEL);
> > > > > > +   if (!buf)
> > > > > > +   return NULL;
> > > > > > +
> > > > > > +   /* If the buffer in the virtio_vsock_pkt is full, we can move it to
> > > > > > +    * the new virtio_vsock_buf, avoiding the copy, because we are sure
> > > > > > +    * that we are not using more memory than what is counted by the
> > > > > > +    * credit mechanism.
> > > > > > +    */
> > > > > > +   if (zero_copy && pkt->len == pkt->buf_len) {
> > > > > > +   buf->addr = pkt->buf;
> > > > > > +   pkt->buf = NULL;
> > > > > > +   } else {
> > > > > Is the copy still needed if we're just a few bytes less? We met a
> > > > > similar issue for virtio-net, and virtio-net solves this by always
> > > > > copying the first 128 bytes for big packets.
> > > > > 
> > > > > See receive_big()
> > > > I see, it is more sophisticated.
> > > > IIUC, virtio-net allocates an sk_buff with 128 bytes of buffer, then
> > > > copies the first 128 bytes, then adds the buffer used to receive the
> > > > packet as a frag to the skb.
> > > 
> > > Yes, and the point is that if the packet is smaller than 128 bytes
> > > the pages will be recycled.
> > > 
> > > 
> > So it avoids the overhead of allocating a large buffer. I got it.
> > 
> > Just out of curiosity, why is the threshold 128 bytes?
> 
> 
> From its name (GOOD_COPY_LEN), I think it's just a value that won't lose
> much performance, e.g. the size of two cachelines.
> 

Jason, Stefan,
since I'm removing the patches that increase the buffers to 64 KiB, and I'm
adding a threshold for small packets, I would simplify this patch by
removing the new buffer allocation and copying small packets into the
buffers already queued (if there is space).
In this way, I should solve the issue of 1-byte packets.

Do you think that could be better?

Thanks,
Stefano

Re: [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA

2019-05-28 Thread Michael Mueller




On 28.05.19 16:33, Halil Pasic wrote:

On Mon, 27 May 2019 14:00:18 +0200
Cornelia Huck  wrote:


On Thu, 23 May 2019 18:22:09 +0200
Michael Mueller  wrote:


From: Halil Pasic 

Hypervisor needs to interact with the summary indicators, so these
need to be DMA memory as well (at least for protected virtualization
guests).

Signed-off-by: Halil Pasic 
---
  drivers/s390/virtio/virtio_ccw.c | 22 +++---
  1 file changed, 15 insertions(+), 7 deletions(-)


(...)


@@ -1501,6 +1508,7 @@ static int __init virtio_ccw_init(void)
  {
/* parse no_auto string before we do anything further */
no_auto_parse();
+   summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);


What happens if this fails?


Bad things could happen!

How about adding

if (!summary_indicators)
virtio_ccw_use_airq = 0; /* fall back to classic */

?

Since it ain't very likely to happen, we could also just fail
virtio_ccw_init() with -ENOMEM.


That is what I'm currently doing in v3.



Regards,
Halil





	return ccw_driver_register(&virtio_ccw_driver);
  }
  device_initcall(virtio_ccw_init);






Michael



Re: [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA

2019-05-28 Thread Cornelia Huck
On Tue, 28 May 2019 16:33:42 +0200
Halil Pasic  wrote:

> On Mon, 27 May 2019 14:00:18 +0200
> Cornelia Huck  wrote:
> 
> > On Thu, 23 May 2019 18:22:09 +0200
> > Michael Mueller  wrote:
> >   
> > > From: Halil Pasic 
> > > 
> > > Hypervisor needs to interact with the summary indicators, so these
> > > need to be DMA memory as well (at least for protected virtualization
> > > guests).
> > > 
> > > Signed-off-by: Halil Pasic 
> > > ---
> > >  drivers/s390/virtio/virtio_ccw.c | 22 +++---
> > >  1 file changed, 15 insertions(+), 7 deletions(-)  
> > 
> > (...)
> >   
> > > @@ -1501,6 +1508,7 @@ static int __init virtio_ccw_init(void)
> > >  {
> > >   /* parse no_auto string before we do anything further */
> > >   no_auto_parse();
> > > + summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);  
> > 
> > What happens if this fails?  
> 
> Bad things could happen!
> 
> How about adding
> 
> if (!summary_indicators)
>   virtio_ccw_use_airq = 0; /* fall back to classic */
> 
> ?
> 
> Since it ain't very likely to happen, we could also just fail
> virtio_ccw_init() with -ENOMEM.

How high are the chances of things working if we fail to allocate here?
Returning with -ENOMEM is probably the more reasonable approach here.
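
For reference, the -ENOMEM variant could look like this (just a sketch;
the actual v3 may differ):

static int __init virtio_ccw_init(void)
{
	/* parse no_auto string before we do anything further */
	no_auto_parse();
	summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);
	if (!summary_indicators)
		return -ENOMEM;
	return ccw_driver_register(&virtio_ccw_driver);
}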


Re: [PATCH v2 8/8] virtio/s390: make airq summary indicators DMA

2019-05-28 Thread Halil Pasic
On Mon, 27 May 2019 14:00:18 +0200
Cornelia Huck  wrote:

> On Thu, 23 May 2019 18:22:09 +0200
> Michael Mueller  wrote:
> 
> > From: Halil Pasic 
> > 
> > Hypervisor needs to interact with the summary indicators, so these
> > need to be DMA memory as well (at least for protected virtualization
> > guests).
> > 
> > Signed-off-by: Halil Pasic 
> > ---
> >  drivers/s390/virtio/virtio_ccw.c | 22 +++---
> >  1 file changed, 15 insertions(+), 7 deletions(-)
> 
> (...)
> 
> > @@ -1501,6 +1508,7 @@ static int __init virtio_ccw_init(void)
> >  {
> > /* parse no_auto string before we do anything further */
> > no_auto_parse();
> > +   summary_indicators = cio_dma_zalloc(MAX_AIRQ_AREAS);
> 
> What happens if this fails?

Bad things could happen!

How about adding

if (!summary_indicators)
virtio_ccw_use_airq = 0; /* fall back to classic */

?

Since it ain't very likely to happen, we could also just fail
virtio_ccw_init() with -ENOMEM.

Regards,
Halil


> 
> > 	return ccw_driver_register(&virtio_ccw_driver);
> >  }
> >  device_initcall(virtio_ccw_init);
> 



Re: [PATCH] virtio_console: remove vq buf while unplugging port

2019-05-28 Thread Amit Shah
On Fri, 2019-05-24 at 20:51 +0200, Greg KH wrote:
> On Sun, Apr 28, 2019 at 09:50:04AM +0800, zhenwei pi wrote:
> > A bug can be easily reproduced:
> > Host# cat guest-agent.xml
> > [channel XML stripped by the archive: a virtio guest-agent channel
> >  definition with state="connected"]
> > Host# virsh attach-device instance guest-agent.xml
> > Host# virsh detach-device instance guest-agent.xml
> > Host# virsh attach-device instance guest-agent.xml
> > 
> > and the guest reports: virtio-ports vport0p1: Error allocating inbufs
> > 
> > The reason is that the port was unplugged but the vq bufs still
> > remained.
> > So, this patch fixes two cases:
> > 1. a memory leak with attach-device/detach-device;
> > 2. a logic bug with attach-device/detach-device/attach-device.

The "leak" happens because the host side of the connection is still
connected.  This is by design -- if a guest has written data before
being unplugged, the port isn't released until the host connection goes
down, to ensure a host process reads all the data out of the port.

Can you try something similar, but also disconnect the host side, and
see if that fixes things?

> Amit, any ideas if this is valid or not and if this should be
> applied?

This had indeed been missed, thanks!


Amit



[PATCH 4/4] vsock/virtio: free used buffers during the .remove()

2019-05-28 Thread Stefano Garzarella
Before this patch, we only freed unused buffers, but there may
still be used buffers to be freed.

Signed-off-by: Stefano Garzarella 
---
 net/vmw_vsock/virtio_transport.c | 18 ++
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index ad093ce96693..6a2afb989562 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -669,6 +669,18 @@ static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
	flush_work(&vsock->send_pkt_work);
 }
 
+static void virtio_vsock_free_buf(struct virtqueue *vq)
+{
+   struct virtio_vsock_pkt *pkt;
+   unsigned int len;
+
+   while ((pkt = virtqueue_detach_unused_buf(vq)))
+   virtio_transport_free_pkt(pkt);
+
+   while ((pkt = virtqueue_get_buf(vq, &len)))
+   virtio_transport_free_pkt(pkt);
+}
+
 static void virtio_vsock_remove(struct virtio_device *vdev)
 {
struct virtio_vsock *vsock = vdev->priv;
@@ -702,13 +714,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
vdev->config->reset(vdev);
 
	mutex_lock(&vsock->rx_lock);
-   while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_RX])))
-   virtio_transport_free_pkt(pkt);
+   virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_RX]);
	mutex_unlock(&vsock->rx_lock);
 
	mutex_lock(&vsock->tx_lock);
-   while ((pkt = virtqueue_detach_unused_buf(vsock->vqs[VSOCK_VQ_TX])))
-   virtio_transport_free_pkt(pkt);
+   virtio_vsock_free_buf(vsock->vqs[VSOCK_VQ_TX]);
	mutex_unlock(&vsock->tx_lock);
 
	spin_lock_bh(&vsock->send_pkt_list_lock);
-- 
2.20.1



[PATCH 3/4] vsock/virtio: fix flush of works during the .remove()

2019-05-28 Thread Stefano Garzarella
We flush all pending works before calling vdev->config->reset(vdev),
but other works can be queued before vdev->config->del_vqs(vdev),
so we add another flush after it, to avoid a use after free.

Suggested-by: Michael S. Tsirkin 
Signed-off-by: Stefano Garzarella 
---
 net/vmw_vsock/virtio_transport.c | 23 +--
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index e694df10ab61..ad093ce96693 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -660,6 +660,15 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
return ret;
 }
 
+static void virtio_vsock_flush_works(struct virtio_vsock *vsock)
+{
+   flush_work(&vsock->loopback_work);
+   flush_work(&vsock->rx_work);
+   flush_work(&vsock->tx_work);
+   flush_work(&vsock->event_work);
+   flush_work(&vsock->send_pkt_work);
+}
+
 static void virtio_vsock_remove(struct virtio_device *vdev)
 {
struct virtio_vsock *vsock = vdev->priv;
@@ -668,12 +677,6 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
	mutex_lock(&the_virtio_vsock_mutex);
the_virtio_vsock = NULL;
 
-   flush_work(&vsock->loopback_work);
-   flush_work(&vsock->rx_work);
-   flush_work(&vsock->tx_work);
-   flush_work(&vsock->event_work);
-   flush_work(&vsock->send_pkt_work);
-
/* Reset all connected sockets when the device disappear */
vsock_for_each_connected_socket(virtio_vsock_reset_sock);
 
@@ -690,6 +693,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
vsock->event_run = false;
	mutex_unlock(&vsock->event_lock);
 
+   /* Flush all pending works */
+   virtio_vsock_flush_works(vsock);
+
/* Flush all device writes and interrupts, device will not use any
 * more buffers.
 */
@@ -726,6 +732,11 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
/* Delete virtqueues and flush outstanding callbacks if any */
vdev->config->del_vqs(vdev);
 
+   /* Other works can be queued before 'config->del_vqs()', so we flush
+    * all works before freeing the vsock object to avoid use after free.
+    */
+   virtio_vsock_flush_works(vsock);
+
kfree(vsock);
	mutex_unlock(&the_virtio_vsock_mutex);
 }
-- 
2.20.1



[PATCH 2/4] vsock/virtio: stop workers during the .remove()

2019-05-28 Thread Stefano Garzarella
Before calling vdev->config->reset(vdev) we need to be sure that
no one is accessing the device; for this reason, we add new variables
to the struct virtio_vsock to stop the workers during the .remove().

This patch also adds a few comments before vdev->config->reset(vdev)
and vdev->config->del_vqs(vdev).

Suggested-by: Stefan Hajnoczi 
Suggested-by: Michael S. Tsirkin 
Signed-off-by: Stefano Garzarella 
---
 net/vmw_vsock/virtio_transport.c | 49 +++-
 1 file changed, 48 insertions(+), 1 deletion(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index d3ba7747aa73..e694df10ab61 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -39,6 +39,7 @@ struct virtio_vsock {
 * must be accessed with tx_lock held.
 */
struct mutex tx_lock;
+   bool tx_run;
 
struct work_struct send_pkt_work;
spinlock_t send_pkt_list_lock;
@@ -54,6 +55,7 @@ struct virtio_vsock {
 * must be accessed with rx_lock held.
 */
struct mutex rx_lock;
+   bool rx_run;
int rx_buf_nr;
int rx_buf_max_nr;
 
@@ -61,6 +63,7 @@ struct virtio_vsock {
 * vqs[VSOCK_VQ_EVENT] must be accessed with event_lock held.
 */
struct mutex event_lock;
+   bool event_run;
struct virtio_vsock_event event_list[8];
 
u32 guest_cid;
@@ -98,6 +101,10 @@ static void virtio_transport_loopback_work(struct work_struct *work)
	spin_unlock_bh(&vsock->loopback_list_lock);
 
	mutex_lock(&vsock->rx_lock);
+
+   if (!vsock->rx_run)
+   goto out;
+
	while (!list_empty(&pkts)) {
struct virtio_vsock_pkt *pkt;
 
@@ -106,6 +113,7 @@ static void virtio_transport_loopback_work(struct work_struct *work)
 
	virtio_transport_recv_pkt(pkt);
	}
+out:
	mutex_unlock(&vsock->rx_lock);
 }
 
@@ -134,6 +142,9 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 
	mutex_lock(&vsock->tx_lock);
 
+   if (!vsock->tx_run)
+   goto out;
+
vq = vsock->vqs[VSOCK_VQ_TX];
 
for (;;) {
@@ -192,6 +203,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
if (added)
virtqueue_kick(vq);
 
+out:
	mutex_unlock(&vsock->tx_lock);
 
if (restart_rx)
@@ -314,6 +326,10 @@ static void virtio_transport_tx_work(struct work_struct *work)
 
	vq = vsock->vqs[VSOCK_VQ_TX];
	mutex_lock(&vsock->tx_lock);
+
+   if (!vsock->tx_run)
+   goto out;
+
do {
struct virtio_vsock_pkt *pkt;
unsigned int len;
@@ -324,6 +340,8 @@ static void virtio_transport_tx_work(struct work_struct *work)
added = true;
}
} while (!virtqueue_enable_cb(vq));
+
+out:
	mutex_unlock(&vsock->tx_lock);
 
if (added)
@@ -352,6 +370,9 @@ static void virtio_transport_rx_work(struct work_struct *work)
 
	mutex_lock(&vsock->rx_lock);
 
+   if (!vsock->rx_run)
+   goto out;
+
do {
virtqueue_disable_cb(vq);
for (;;) {
@@ -461,6 +482,9 @@ static void virtio_transport_event_work(struct work_struct *work)
 
	mutex_lock(&vsock->event_lock);
 
+   if (!vsock->event_run)
+   goto out;
+
do {
struct virtio_vsock_event *event;
unsigned int len;
@@ -475,7 +499,7 @@ static void virtio_transport_event_work(struct work_struct *work)
} while (!virtqueue_enable_cb(vq));
 
virtqueue_kick(vsock->vqs[VSOCK_VQ_EVENT]);
-
+out:
	mutex_unlock(&vsock->event_lock);
 }
 
@@ -611,12 +635,18 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
	INIT_WORK(&vsock->send_pkt_work, virtio_transport_send_pkt_work);
	INIT_WORK(&vsock->loopback_work, virtio_transport_loopback_work);
 
+   mutex_lock(&vsock->tx_lock);
+   vsock->tx_run = true;
+   mutex_unlock(&vsock->tx_lock);
+
	mutex_lock(&vsock->rx_lock);
	virtio_vsock_rx_fill(vsock);
+   vsock->rx_run = true;
	mutex_unlock(&vsock->rx_lock);
 
	mutex_lock(&vsock->event_lock);
	virtio_vsock_event_fill(vsock);
+   vsock->event_run = true;
	mutex_unlock(&vsock->event_lock);
 
the_virtio_vsock = vsock;
@@ -647,6 +677,22 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
/* Reset all connected sockets when the device disappear */
vsock_for_each_connected_socket(virtio_vsock_reset_sock);
 
+   /* Stop all work handlers to make sure no one is accessing the device */
+   mutex_lock(&vsock->rx_lock);
+   vsock->rx_run = false;
+   mutex_unlock(&vsock->rx_lock);
+
+   mutex_lock(&vsock->tx_lock);
+   vsock->tx_run = false;
+   mutex_unlock(&vsock->tx_lock);
+
+   mutex_lock(&vsock->event_lock);
+   vsock->event_run = false;
+   mutex_unlock(&vsock->event_lock);
+
+   /* Flush all device writes and interrupts, device will not use any
+    * more buffers.
+    */

[PATCH 1/4] vsock/virtio: fix locking around 'the_virtio_vsock'

2019-05-28 Thread Stefano Garzarella
This patch protects the reading of 'the_virtio_vsock' by taking the
mutex used when it is set.
We also move the 'the_virtio_vsock' assignment to the end of the
.probe(), when we have finished all the initialization, and to the
beginning of the .remove(), before releasing resources, holding the
lock until the end of the function.

Signed-off-by: Stefano Garzarella 
---
 net/vmw_vsock/virtio_transport.c | 19 +--
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 96ab344f17bb..d3ba7747aa73 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -68,7 +68,13 @@ struct virtio_vsock {
 
 static struct virtio_vsock *virtio_vsock_get(void)
 {
-   return the_virtio_vsock;
+   struct virtio_vsock *vsock;
+
+   mutex_lock(&the_virtio_vsock_mutex);
+   vsock = the_virtio_vsock;
+   mutex_unlock(&the_virtio_vsock_mutex);
+
+   return vsock;
 }
 
 static u32 virtio_transport_get_local_cid(void)
@@ -592,7 +598,6 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
	atomic_set(&vsock->queued_replies, 0);
 
vdev->priv = vsock;
-   the_virtio_vsock = vsock;
	mutex_init(&vsock->tx_lock);
	mutex_init(&vsock->rx_lock);
	mutex_init(&vsock->event_lock);
@@ -614,6 +619,8 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
virtio_vsock_event_fill(vsock);
	mutex_unlock(&vsock->event_lock);
 
+   the_virtio_vsock = vsock;
+
	mutex_unlock(&the_virtio_vsock_mutex);
return 0;
 
@@ -628,6 +635,9 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
struct virtio_vsock *vsock = vdev->priv;
struct virtio_vsock_pkt *pkt;
 
+   mutex_lock(&the_virtio_vsock_mutex);
+   the_virtio_vsock = NULL;
+
	flush_work(&vsock->loopback_work);
	flush_work(&vsock->rx_work);
	flush_work(&vsock->tx_work);
@@ -667,13 +677,10 @@ static void virtio_vsock_remove(struct virtio_device *vdev)
	}
	spin_unlock_bh(&vsock->loopback_list_lock);
 
-   mutex_lock(&the_virtio_vsock_mutex);
-   the_virtio_vsock = NULL;
-   mutex_unlock(&the_virtio_vsock_mutex);
-
vdev->config->del_vqs(vdev);
 
kfree(vsock);
+   mutex_unlock(&the_virtio_vsock_mutex);
 }
 
 static struct virtio_device_id id_table[] = {
-- 
2.20.1



[PATCH 0/4] vsock/virtio: several fixes in the .probe() and .remove()

2019-05-28 Thread Stefano Garzarella
During the review of "[PATCH] vsock/virtio: Initialize core virtio vsock
before registering the driver", Stefan pointed out some possible issues
in the .probe() and .remove() callbacks of the virtio-vsock driver.

This series tries to solve these issues:
- Patch 1 postpones the 'the_virtio_vsock' assignment to the end of the
  .probe(), to avoid sockets queueing works while the initialization is
  not finished.
- Patches 2 and 3 stop the workers before calling vdev->config->reset(vdev),
  to be sure that no one is accessing the device, and add another flush at
  the end of the .remove() to avoid use after free.
- Patch 4 also frees used buffers in the virtqueues during the .remove().

Stefano Garzarella (4):
  vsock/virtio: fix locking around 'the_virtio_vsock'
  vsock/virtio: stop workers during the .remove()
  vsock/virtio: fix flush of works during the .remove()
  vsock/virtio: free used buffers during the .remove()

 net/vmw_vsock/virtio_transport.c | 105 ++-
 1 file changed, 90 insertions(+), 15 deletions(-)

-- 
2.20.1



Re: [PATCH v7 0/7] Add virtio-iommu driver

2019-05-28 Thread Jean-Philippe Brucker
On 27/05/2019 16:15, Michael S. Tsirkin wrote:
> On Mon, May 27, 2019 at 11:26:04AM +0200, Joerg Roedel wrote:
>> On Sun, May 12, 2019 at 12:31:59PM -0400, Michael S. Tsirkin wrote:
>>> OK this has been in next for a while.
>>>
>>> Last time IOMMU maintainers objected. Are objections
>>> still in force?
>>>
>>> If not could we get acks please?
>>
>> No objections against the code, I only hesitated because the Spec was
>> not yet official.
>>
>> So for the code:
>>
>>  Acked-by: Joerg Roedel 
> 
> The last spec patch had a bunch of comments not yet addressed.
> But I do not remember whether the comments are just about wording
> or about the host/guest interface as well.
> Jean-Philippe, could you remind me please?

It's mostly wording, but there is a small change in the config space
layout and two new feature bits. I'll send a new version of the driver
when possible.

Thanks,
Jean