Re: [PATCH v7 3/3] vhost: add an RPMsg API

2020-09-21 Thread Guennadi Liakhovetski
Hi Mathieu,

On Fri, Sep 18, 2020 at 09:52:49AM -0600, Mathieu Poirier wrote:
> Good morning,
> 
> On Fri, Sep 18, 2020 at 11:02:29AM +0200, Guennadi Liakhovetski wrote:
> > Hi Mathieu,
> > 
> > On Thu, Sep 17, 2020 at 04:01:38PM -0600, Mathieu Poirier wrote:
> > > On Thu, Sep 10, 2020 at 01:13:51PM +0200, Guennadi Liakhovetski wrote:
> > > > Linux supports running the RPMsg protocol over the VirtIO transport
> > > > protocol, but currently there is only support for VirtIO clients and
> > > > no support for VirtIO servers. This patch adds a vhost-based RPMsg
> > > > server implementation, which makes it possible to use RPMsg over
> > > > VirtIO between guest VMs and the host.
> > > 
> > > I now get the client/server concept you are describing above but that
> > > happened only after a lot of mental gymnastics.  If you drop the whole
> > > client/server concept and concentrate on what this patch does, things
> > > will go better.  I would personally go with what you have in the
> > > Kconfig: 
> > > 
> > > > + Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > > > + drivers on guest VMs, using the RPMsg over VirtIO protocol.
> > > 
> > > It is concise but describes exactly what this patch provides.
> > 
> > Ok, thanks, will try to improve.
> > 
> > > > Signed-off-by: Guennadi Liakhovetski 
> > > > 
> > > > ---
> > > >  drivers/vhost/Kconfig   |   7 +
> > > >  drivers/vhost/Makefile  |   3 +
> > > >  drivers/vhost/rpmsg.c   | 370 
> > > >  drivers/vhost/vhost_rpmsg.h |  74 
> > > >  4 files changed, 454 insertions(+)
> > > >  create mode 100644 drivers/vhost/rpmsg.c
> > > >  create mode 100644 drivers/vhost/vhost_rpmsg.h

[snip]

> > > > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > > > new file mode 100644
> > > > index ..0ddee5b5f017
> > > > --- /dev/null
> > > > +++ b/drivers/vhost/rpmsg.c
> > > > @@ -0,0 +1,370 @@

[snip]

> > > > +/*
> > > > + * Return false to terminate the external loop only if we fail to obtain either
> > > > + * a request or a response buffer
> > > > + */
> > > > +static bool handle_rpmsg_req_single(struct vhost_rpmsg *vr,
> > > > +   struct vhost_virtqueue *vq)
> > > > +{
> > > > +   struct vhost_rpmsg_iter iter;
> > > > +   int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_REQUEST, -EINVAL);
> > > > +   if (!ret)
> > > > +   ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > > > +   if (ret < 0) {
> > > > +   if (ret != -EAGAIN)
> > > > +   vq_err(vq, "%s(): RPMSG processing failed %d\n",
> > > > +  __func__, ret);
> > > > +   return false;
> > > > +   }
> > > > +
> > > > +   if (!iter.ept->write)
> > > > +   return true;
> > > > +
> > > > +   ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, -EINVAL);
> > > > +   if (!ret)
> > > > +   ret = vhost_rpmsg_finish_unlock(vr, &iter);
> > > > +   if (ret < 0) {
> > > > +   vq_err(vq, "%s(): RPMSG finalising failed %d\n", __func__, ret);
> > > > +   return false;
> > > > +   }
> > > 
> > > As I said before dealing with the "response" queue here seems to be introducing
> > > coupling with vhost_rpmsg_start_lock()...  Endpoints should be doing that.
> > 
> > Sorry, could you elaborate a bit, what do you mean by coupling?
> 
> In function vhost_rpmsg_start_lock() the rpmsg header is prepared for a
> response at the end of the processing associated with the reception of a
> VIRTIO_RPMSG_REQUEST.  I assumed (perhaps wrongly) that such a response was
> sent here.  In that case preparing the response and sending the response
> should be done at the same place.

This will change in the next version: in it I'll remove response preparation
from request handling.

> But my assumption may be completely wrong... A better question should probably
> be why is the VIRTIO_RPMSG_RESPONSE probed in handle_rpmsg_req_single()?
> Shouldn't this be solely concerned with handling requests from the guest?  If
> I'm wondering what is going on I expect other people will also do the same,
> something that could be alleviated with more comments.

My RPMsg implementation supports two modes for sending data from the host (in
VM terms) to guests: as responses to their requests and as asynchronous
messages. If there isn't a strict request-response pattern on a certain
endpoint, you leave the .write callback NULL and then you send your messages
as you please, independent of requests. But you can also specify a .write
pointer, in which case it will be called after each request to generate a
response.

In principle this response handling could be removed, but then drivers that do
need to respond to requests would have to schedule an asynchronous action in
their .read callbacks to be triggered later.
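The two modes described above can be sketched in plain user-space C; the struct layout and names below are illustrative stand-ins for the vhost_rpmsg API, not the actual kernel definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative model of an RPMsg endpoint: .read handles a request,
 * .write (optional) generates a response to it. */
struct ept_model {
	int addr;
	int (*read)(void *buf, size_t len);
	int (*write)(void *buf, size_t len);	/* NULL: async-only endpoint */
};

static char response_buf[32];

static int echo_read(void *buf, size_t len)
{
	(void)buf; (void)len;
	return 0;
}

static int echo_write(void *buf, size_t len)
{
	strncpy(buf, "pong", len);
	return 4;
}

/* One request cycle: a response is generated only for endpoints that
 * registered a .write callback. */
static int handle_request(const struct ept_model *e, void *req, size_t len)
{
	int ret = e->read(req, len);

	if (ret < 0 || !e->write)
		return ret;
	return e->write(response_buf, sizeof(response_buf));
}
```

An endpoint with .write == NULL still handles its requests; it simply sends host-to-guest messages on its own schedule instead of one per request.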

Re: [PATCH v7 3/3] vhost: add an RPMsg API

2020-09-18 Thread Guennadi Liakhovetski
Hi Mathieu,

On Thu, Sep 17, 2020 at 04:01:38PM -0600, Mathieu Poirier wrote:
> On Thu, Sep 10, 2020 at 01:13:51PM +0200, Guennadi Liakhovetski wrote:
> > Linux supports running the RPMsg protocol over the VirtIO transport
> > protocol, but currently there is only support for VirtIO clients and
> > no support for VirtIO servers. This patch adds a vhost-based RPMsg
> > server implementation, which makes it possible to use RPMsg over
> > VirtIO between guest VMs and the host.
> 
> I now get the client/server concept you are describing above but that happened
> only after a lot of mental gymnastics.  If you drop the whole client/server
> concept and concentrate on what this patch does, things will go better.  I
> would personally go with what you have in the Kconfig: 
> 
> > + Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > + drivers on guest VMs, using the RPMsg over VirtIO protocol.
> 
> It is concise but describes exactly what this patch provides.

Ok, thanks, will try to improve.

> > Signed-off-by: Guennadi Liakhovetski 
> > ---
> >  drivers/vhost/Kconfig   |   7 +
> >  drivers/vhost/Makefile  |   3 +
> >  drivers/vhost/rpmsg.c   | 370 
> >  drivers/vhost/vhost_rpmsg.h |  74 
> >  4 files changed, 454 insertions(+)
> >  create mode 100644 drivers/vhost/rpmsg.c
> >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > 
> > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > index 587fbae06182..ee1a19b7ab3d 100644
> > --- a/drivers/vhost/Kconfig
> > +++ b/drivers/vhost/Kconfig
> > @@ -38,6 +38,13 @@ config VHOST_NET
> >   To compile this driver as a module, choose M here: the module will
> >   be called vhost_net.
> >  
> > +config VHOST_RPMSG
> > +   tristate
> > +   select VHOST
> > +   help
> > + Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > + drivers on guest VMs, using the RPMsg over VirtIO protocol.
> > +
> 
> I suppose you intend this to be selectable from another config option?

yes.
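For illustration, a client driver's own config option (the option name below is made up) would pull the API in like this:

```
config VHOST_RPMSG_EXAMPLE_CLIENT
	tristate "Hypothetical vhost RPMsg client"
	select VHOST_RPMSG
	help
	  Example only: a vhost driver built on the vhost RPMsg API
	  selects VHOST_RPMSG, which in turn selects VHOST.
```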

> >  config VHOST_SCSI
> > tristate "VHOST_SCSI TCM fabric driver"
> > depends on TARGET_CORE && EVENTFD
> > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > index f3e1897cce85..9cf459d59f97 100644
> > --- a/drivers/vhost/Makefile
> > +++ b/drivers/vhost/Makefile
> > @@ -2,6 +2,9 @@
> >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> >  vhost_net-y := net.o
> >  
> > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > +vhost_rpmsg-y := rpmsg.o
> > +
> >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> >  vhost_scsi-y := scsi.o
> >  
> > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > new file mode 100644
> > index ..0ddee5b5f017
> > --- /dev/null
> > +++ b/drivers/vhost/rpmsg.c
> > @@ -0,0 +1,370 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > + *
> > + * Author: Guennadi Liakhovetski 
> > + *
> > + * Vhost RPMsg VirtIO interface provides a set of functions to be used on the
> > + * host side as a counterpart to the guest side RPMsg VirtIO API, provided by
> > + * drivers/rpmsg/virtio_rpmsg_bus.c. This API can be used by any vhost driver to
> > + * handle RPMsg specific virtqueue processing.
> > + * Vhost drivers using this API will use their own VirtIO device IDs, which
> > + * should then also be added to the ID table in virtio_rpmsg_bus.c
> > + */
> > +
> > +#include 
> > +#include 
> > +#include 
> 
> As far as I can tell the above two are not needed.

Looks like a left-over, will remove.

> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +
> > +#include "vhost.h"
> > +#include "vhost_rpmsg.h"
> > +
> > +/*
> > + * All virtio-rpmsg virtual queue kicks always come with just one buffer -
> > + * either input or output, but we can also handle split messages
> > + */
> > +static int vhost_rpmsg_get_msg(struct vhost_virtqueue *vq, unsigned int *cnt)
> > +{
> > +   struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > +   unsigned int out, in;
> > +   int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in,
> > +NULL, NULL);
> > +   if (head < 0) {
> > +   vq_err(vq, "%s(): error %d getting buffer\n",
> > +  __func__, head);
> > +   return head;
> > +   }
> > +
> > +   /* Nothing new? */
> > +   if (head == vq->num)
> > +   return head;
> > +
> > +   if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE]) {
> > +   if (out) {
> > +   vq_err(vq, "%s(): invalid %d output in response queue\n",
> > +  __func__, out);
> > +   goto return_buf;
> > +   }
> > +
> > +   *cnt = in;
> > +   }
> > +
> > +   if (vq == &vr->vq[VIRTIO_RPMSG_REQUEST]) {
> > +   if (in) {
> > +   vq_err(vq, "%s(): invalid %d input in request queue\n",
> > +  __func__, in);

Re: [PATCH v7 3/3] vhost: add an RPMsg API

2020-09-18 Thread Guennadi Liakhovetski
Hi Vincent,

On Thu, Sep 17, 2020 at 10:55:59AM +0200, Vincent Whitchurch wrote:
> On Thu, Sep 10, 2020 at 01:13:51PM +0200, Guennadi Liakhovetski wrote:
> > +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> > +  unsigned int qid, ssize_t len)
> > +   __acquires(vq->mutex)
> > +{
> > +   struct vhost_virtqueue *vq = vr->vq + qid;
> > +   unsigned int cnt;
> > +   ssize_t ret;
> > +   size_t tmp;
> > +
> > +   if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
> > +   return -EINVAL;
> > +
> > +   iter->vq = vq;
> > +
> > +   mutex_lock(&vq->mutex);
> > +   vhost_disable_notify(&vr->dev, vq);
> > +
> > +   iter->head = vhost_rpmsg_get_msg(vq, &cnt);
> > +   if (iter->head == vq->num)
> > +   iter->head = -EAGAIN;
> > +
> > +   if (iter->head < 0) {
> > +   ret = iter->head;
> > +   goto unlock;
> > +   }
> > +
> [...]
> > +
> > +return_buf:
> > +   vhost_add_used(vq, iter->head, 0);
> > +unlock:
> > +   vhost_enable_notify(&vr->dev, vq);
> > +   mutex_unlock(&vq->mutex);
> > +
> > +   return ret;
> > +}
> 
> There is a race condition here.  New buffers could have been added while
> notifications were disabled (between vhost_disable_notify() and
> vhost_enable_notify()), so the other vhost drivers check the return
> value of vhost_enable_notify() and rerun their work loops if it returns
> true.  This driver doesn't do that so it stops processing requests if
> that condition hits.

You're right, thanks for spotting this!

> Something like the below seems to fix it but the correct fix could maybe
> involve changing this API to account for this case so that it looks more
> like the code in other vhost drivers.

I'll try to use your proposed code, we'll see if it turns out incorrect.

> diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> index 7c753258d42..673dd4ec865 100644
> --- a/drivers/vhost/rpmsg.c
> +++ b/drivers/vhost/rpmsg.c
> @@ -302,8 +302,14 @@ static void handle_rpmsg_req_kick(struct vhost_work *work)
>   struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
> poll.work);
>   struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> + struct vhost_virtqueue *reqvq = vr->vq + VIRTIO_RPMSG_REQUEST;

This is only called on the request queue, so we can just use vq here.

>  
> - while (handle_rpmsg_req_single(vr, vq))
> + /*
> +  * The !vhost_vq_avail_empty() check is needed since the vhost_rpmsg*
> +  * APIs don't check the return value of vhost_enable_notify() and retry
> +  * if there were buffers added while notifications were disabled.
> +  */
> + while (handle_rpmsg_req_single(vr, vq) || !vhost_vq_avail_empty(reqvq->dev, reqvq))
>   ;
>  }

Thanks
Guennadi
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v7 3/3] vhost: add an RPMsg API

2020-09-17 Thread Vincent Whitchurch
On Thu, Sep 10, 2020 at 01:13:51PM +0200, Guennadi Liakhovetski wrote:
> +int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
> +unsigned int qid, ssize_t len)
> + __acquires(vq->mutex)
> +{
> + struct vhost_virtqueue *vq = vr->vq + qid;
> + unsigned int cnt;
> + ssize_t ret;
> + size_t tmp;
> +
> + if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
> + return -EINVAL;
> +
> + iter->vq = vq;
> +
> + mutex_lock(&vq->mutex);
> + vhost_disable_notify(&vr->dev, vq);
> +
> + iter->head = vhost_rpmsg_get_msg(vq, &cnt);
> + if (iter->head == vq->num)
> + iter->head = -EAGAIN;
> +
> + if (iter->head < 0) {
> + ret = iter->head;
> + goto unlock;
> + }
> +
[...]
> +
> +return_buf:
> + vhost_add_used(vq, iter->head, 0);
> +unlock:
> + vhost_enable_notify(&vr->dev, vq);
> + mutex_unlock(&vq->mutex);
> +
> + return ret;
> +}

There is a race condition here.  New buffers could have been added while
notifications were disabled (between vhost_disable_notify() and
vhost_enable_notify()), so the other vhost drivers check the return
value of vhost_enable_notify() and rerun their work loops if it returns
true.  This driver doesn't do that so it stops processing requests if
that condition hits.

Something like the below seems to fix it but the correct fix could maybe
involve changing this API to account for this case so that it looks more
like the code in other vhost drivers.

diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
index 7c753258d42..673dd4ec865 100644
--- a/drivers/vhost/rpmsg.c
+++ b/drivers/vhost/rpmsg.c
@@ -302,8 +302,14 @@ static void handle_rpmsg_req_kick(struct vhost_work *work)
struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
  poll.work);
struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
+   struct vhost_virtqueue *reqvq = vr->vq + VIRTIO_RPMSG_REQUEST;
 
-   while (handle_rpmsg_req_single(vr, vq))
+   /*
+* The !vhost_vq_avail_empty() check is needed since the vhost_rpmsg*
+* APIs don't check the return value of vhost_enable_notify() and retry
+* if there were buffers added while notifications were disabled.
+*/
+   while (handle_rpmsg_req_single(vr, vq) || !vhost_vq_avail_empty(reqvq->dev, reqvq))
;
 }
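The race and the effect of the extra check can be modelled in user-space C; `avail` below stands in for the guest-visible available ring, and all names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* User-space model of the race described above (illustrative only):
 * 'avail' counts buffers the guest has made available. */
static int avail;
static int processed;
static bool racy_add_pending;	/* buffer added while notifications are off */

static bool vq_avail_empty(void)
{
	return avail == 0;
}

/* Process one buffer; returns false when the queue looked empty. */
static bool handle_single(void)
{
	/* notifications are disabled for the duration of this call */
	if (avail == 0) {
		/* a buffer can still arrive right here, before
		 * vhost_enable_notify() takes effect again */
		if (racy_add_pending) {
			racy_add_pending = false;
			avail++;
		}
		return false;	/* missed it: no kick will follow */
	}
	avail--;
	processed++;
	return true;
}

/* The fixed kick handler re-checks the queue, as in the diff above. */
static void kick_handler_fixed(void)
{
	while (handle_single() || !vq_avail_empty())
		;
}
```

Without the final `!vq_avail_empty()` recheck, the buffer added in the race window would sit unprocessed until the next guest kick, which may never come.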
 


[PATCH v7 3/3] vhost: add an RPMsg API

2020-09-10 Thread Guennadi Liakhovetski
Linux supports running the RPMsg protocol over the VirtIO transport
protocol, but currently there is only support for VirtIO clients and
no support for VirtIO servers. This patch adds a vhost-based RPMsg
server implementation, which makes it possible to use RPMsg over
VirtIO between guest VMs and the host.

Signed-off-by: Guennadi Liakhovetski 
---
 drivers/vhost/Kconfig   |   7 +
 drivers/vhost/Makefile  |   3 +
 drivers/vhost/rpmsg.c   | 370 
 drivers/vhost/vhost_rpmsg.h |  74 
 4 files changed, 454 insertions(+)
 create mode 100644 drivers/vhost/rpmsg.c
 create mode 100644 drivers/vhost/vhost_rpmsg.h

diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
index 587fbae06182..ee1a19b7ab3d 100644
--- a/drivers/vhost/Kconfig
+++ b/drivers/vhost/Kconfig
@@ -38,6 +38,13 @@ config VHOST_NET
  To compile this driver as a module, choose M here: the module will
  be called vhost_net.
 
+config VHOST_RPMSG
+   tristate
+   select VHOST
+   help
+ Vhost RPMsg API allows vhost drivers to communicate with VirtIO
+ drivers on guest VMs, using the RPMsg over VirtIO protocol.
+
 config VHOST_SCSI
tristate "VHOST_SCSI TCM fabric driver"
depends on TARGET_CORE && EVENTFD
diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
index f3e1897cce85..9cf459d59f97 100644
--- a/drivers/vhost/Makefile
+++ b/drivers/vhost/Makefile
@@ -2,6 +2,9 @@
 obj-$(CONFIG_VHOST_NET) += vhost_net.o
 vhost_net-y := net.o
 
+obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
+vhost_rpmsg-y := rpmsg.o
+
 obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
 vhost_scsi-y := scsi.o
 
diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
new file mode 100644
index ..0ddee5b5f017
--- /dev/null
+++ b/drivers/vhost/rpmsg.c
@@ -0,0 +1,370 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright(c) 2020 Intel Corporation. All rights reserved.
+ *
+ * Author: Guennadi Liakhovetski 
+ *
+ * Vhost RPMsg VirtIO interface provides a set of functions to be used on the
+ * host side as a counterpart to the guest side RPMsg VirtIO API, provided by
+ * drivers/rpmsg/virtio_rpmsg_bus.c. This API can be used by any vhost driver to
+ * handle RPMsg specific virtqueue processing.
+ * Vhost drivers using this API will use their own VirtIO device IDs, which
+ * should then also be added to the ID table in virtio_rpmsg_bus.c
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "vhost.h"
+#include "vhost_rpmsg.h"
+
+/*
+ * All virtio-rpmsg virtual queue kicks always come with just one buffer -
+ * either input or output, but we can also handle split messages
+ */
+static int vhost_rpmsg_get_msg(struct vhost_virtqueue *vq, unsigned int *cnt)
+{
+   struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
+   unsigned int out, in;
+   int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in,
+NULL, NULL);
+   if (head < 0) {
+   vq_err(vq, "%s(): error %d getting buffer\n",
+  __func__, head);
+   return head;
+   }
+
+   /* Nothing new? */
+   if (head == vq->num)
+   return head;
+
+   if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE]) {
+   if (out) {
+   vq_err(vq, "%s(): invalid %d output in response queue\n",
+  __func__, out);
+   goto return_buf;
+   }
+
+   *cnt = in;
+   }
+
+   if (vq == &vr->vq[VIRTIO_RPMSG_REQUEST]) {
+   if (in) {
+   vq_err(vq, "%s(): invalid %d input in request queue\n",
+  __func__, in);
+   goto return_buf;
+   }
+
+   *cnt = out;
+   }
+
+   return head;
+
+return_buf:
+   vhost_add_used(vq, head, 0);
+
+   return -EINVAL;
+}
+
+static const struct vhost_rpmsg_ept *vhost_rpmsg_ept_find(struct vhost_rpmsg *vr, int addr)
+{
+   unsigned int i;
+
+   for (i = 0; i < vr->n_epts; i++)
+   if (vr->ept[i].addr == addr)
+   return vr->ept + i;
+
+   return NULL;
+}
+
+/*
+ * if len < 0, then for reading a request, the complete virtual queue buffer
+ * size is prepared, for sending a response, the length in the iterator is used
+ */
+int vhost_rpmsg_start_lock(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter,
+  unsigned int qid, ssize_t len)
+   __acquires(vq->mutex)
+{
+   struct vhost_virtqueue *vq = vr->vq + qid;
+   unsigned int cnt;
+   ssize_t ret;
+   size_t tmp;
+
+   if (qid >= VIRTIO_RPMSG_NUM_OF_VQS)
+   return -EINVAL;
+
+   iter->vq = vq;
+
+   mutex_lock(&vq->mutex);
+   vhost_disable_notify(&vr->dev, vq);
+
+   iter->head = vhost_rpmsg_get_msg(vq, &cnt);
+   if (iter->head == vq->num)
+   iter->head = -EAGAIN;