Re: [PATCH v4 4/4] vhost: add an RPMsg API

2020-08-25 Thread Michael S. Tsirkin
On Tue, Aug 25, 2020 at 03:41:10PM +0200, Guennadi Liakhovetski wrote:
> Hi Michael,
> 
> ...back from holidays and still unsure what your preferred solution 
> for the message layout would be:
> 
> On Wed, Aug 12, 2020 at 02:32:43PM +0200, Guennadi Liakhovetski wrote:
> > Hi Michael,
> > 
> > Thanks for a review.
> > 
> > On Mon, Aug 10, 2020 at 09:44:15AM -0400, Michael S. Tsirkin wrote:
> > > On Tue, Aug 04, 2020 at 05:19:17PM +0200, Guennadi Liakhovetski wrote:
> 
> [snip]
> 
> > > > > > +static int vhost_rpmsg_get_single(struct vhost_virtqueue *vq)
> > > > > > +{
> > > > > > +   struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > > > > > +   unsigned int out, in;
> > > > > > +   int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
> > > > > > +&out, &in, NULL, NULL);
> > > > > > +   if (head < 0) {
> > > > > > +   vq_err(vq, "%s(): error %d getting buffer\n",
> > > > > > +  __func__, head);
> > > > > > +   return head;
> > > > > > +   }
> > > > > > +
> > > > > > +   /* Nothing new? */
> > > > > > +   if (head == vq->num)
> > > > > > +   return head;
> > > > > > +
> > > > > > +   if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE] && (out || in != 1)) {
> > > > > 
> > > > > This in != 1 looks like a dependency on a specific message layout.
> > > > > virtio spec says to avoid these. Using iov iters it's not too hard to 
> > > > > do
> > > > > ...
> > > > 
> > > > This is an RPMsg VirtIO implementation, and it has to match the 
> > > > virtio_rpmsg_bus.c driver, and that one has specific VirtIO queue 
> > > > and message usage patterns.
> > > 
> > > That could be fine for legacy virtio, but now you are claiming support
> > > for virtio 1, so you need to fix these assumptions in the device.
> > 
> > I can just drop these checks without changing anything else; that would 
> > still work. I could also make this work with "any" layout - either 
> > ignoring any left-over buffers or maybe even getting them one by one. 
> > But I wouldn't even be able to test those modes without modifying / 
> > breaking the current virtio-rpmsg driver. What's the preferred solution?
> 
> Could you elaborate a bit please?
> 
> Thanks
> Guennadi

I'd prefer that we make it work with any layout. For testing, there was 
a hack for the virtio ring which split up descriptors at a random 
boundary. I'll try to locate it and post it; it sounds like something we 
might want upstream for debugging.
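[Editor's note: "work with any layout" here means the device must not assume a
message arrives in exactly one in-descriptor. A minimal userspace sketch of the
idea, with a plain iovec walk standing in for the kernel's copy_from_iter();
the gather() helper is illustrative, not actual vhost code:]

```c
#include <assert.h>
#include <string.h>
#include <sys/uio.h>

/* Gather up to "len" payload bytes from a scattered iovec array, however
 * the guest happened to split the message across descriptors.  A device
 * written this way keeps working even if a debugging hack splits
 * descriptors at random boundaries. */
static size_t gather(void *dst, size_t len, const struct iovec *iov, int iovcnt)
{
	size_t copied = 0;
	int i;

	for (i = 0; i < iovcnt && copied < len; i++) {
		size_t n = iov[i].iov_len;

		if (n > len - copied)
			n = len - copied;
		memcpy((char *)dst + copied, iov[i].iov_base, n);
		copied += n;
	}
	return copied;
}
```

[The same 11-byte message is reassembled identically whether it arrives in
one descriptor or three.]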

-- 
MST

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v4 4/4] vhost: add an RPMsg API

2020-08-25 Thread Guennadi Liakhovetski
Hi Michael,

...back from holidays and still unsure what your preferred solution 
for the message layout would be:

On Wed, Aug 12, 2020 at 02:32:43PM +0200, Guennadi Liakhovetski wrote:
> Hi Michael,
> 
> Thanks for a review.
> 
> On Mon, Aug 10, 2020 at 09:44:15AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Aug 04, 2020 at 05:19:17PM +0200, Guennadi Liakhovetski wrote:

[snip]

> > > > > +static int vhost_rpmsg_get_single(struct vhost_virtqueue *vq)
> > > > > +{
> > > > > + struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > > > > + unsigned int out, in;
> > > > > + int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
> > > > > +  &out, &in, NULL, NULL);
> > > > > + if (head < 0) {
> > > > > + vq_err(vq, "%s(): error %d getting buffer\n",
> > > > > +__func__, head);
> > > > > + return head;
> > > > > + }
> > > > > +
> > > > > + /* Nothing new? */
> > > > > + if (head == vq->num)
> > > > > + return head;
> > > > > +
> > > > > + if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE] && (out || in != 1)) {
> > > > 
> > > > This in != 1 looks like a dependency on a specific message layout.
> > > > virtio spec says to avoid these. Using iov iters it's not too hard to do
> > > > ...
> > > 
> > > This is an RPMsg VirtIO implementation, and it has to match the 
> > > virtio_rpmsg_bus.c driver, and that one has specific VirtIO queue and 
> > > message usage patterns.
> > 
> > That could be fine for legacy virtio, but now you are claiming support
> > for virtio 1, so you need to fix these assumptions in the device.
> 
> I can just drop these checks without changing anything else; that would 
> still work. I could also make this work with "any" layout - either 
> ignoring any left-over buffers or maybe even getting them one by one. 
> But I wouldn't even be able to test those modes without modifying / 
> breaking the current virtio-rpmsg driver. What's the preferred solution?

Could you elaborate a bit please?

Thanks
Guennadi


[PATCH] vhost-iotlb: fix vhost_iotlb_itree_next() documentation

2020-08-25 Thread Stefano Garzarella
This patch contains trivial changes to the vhost_iotlb_itree_next()
documentation, fixing the function name and the description of the
first argument (@map).

Signed-off-by: Stefano Garzarella 
---
 drivers/vhost/iotlb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/iotlb.c b/drivers/vhost/iotlb.c
index 1f0ca6e44410..34aec4ba331e 100644
--- a/drivers/vhost/iotlb.c
+++ b/drivers/vhost/iotlb.c
@@ -159,8 +159,8 @@ vhost_iotlb_itree_first(struct vhost_iotlb *iotlb, u64 
start, u64 last)
 EXPORT_SYMBOL_GPL(vhost_iotlb_itree_first);
 
 /**
- * vhost_iotlb_itree_first - return the next overlapped range
- * @iotlb: the IOTLB
+ * vhost_iotlb_itree_next - return the next overlapped range
+ * @map: the starting map node
  * @start: start of IOVA range
  * @end: end of IOVA range
  */
-- 
2.26.2
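[Editor's note: for context, the calling convention the fixed kernel-doc
describes - first() finds the first mapping overlapping a range, next()
continues from a given map - modeled here as a toy array walk. The real code
in drivers/vhost/iotlb.c uses an interval tree; the struct and function names
below are illustrative only:]

```c
#include <assert.h>
#include <stddef.h>

struct toy_map {
	unsigned long long start, last;	/* inclusive IOVA range */
};

/* Return the next mapping after "cur" that overlaps [start, last];
 * pass cur == NULL to get the first one (the vhost_iotlb_itree_first()
 * role).  Maps are assumed sorted by start. */
static const struct toy_map *toy_itree_next(const struct toy_map *maps,
					    size_t n, const struct toy_map *cur,
					    unsigned long long start,
					    unsigned long long last)
{
	const struct toy_map *m = cur ? cur + 1 : maps;

	for (; m < maps + n; m++)
		if (m->start <= last && m->last >= start)
			return m;
	return NULL;
}
```

[Kernel callers iterate the same way: get the first overlapping map, then keep
calling the next() variant with the previously returned map until it returns
NULL.]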



Re: [PATCH v6 02/76] KVM: SVM: Add GHCB definitions

2020-08-25 Thread Borislav Petkov
On Tue, Aug 25, 2020 at 11:22:24AM +0200, Joerg Roedel wrote:
> I don't think so. If I look at the history of these checks, their whole
> purpose seems to be to alert the developer/maintainer when the size
> changes and the structures might no longer fit on the stack. But that is
> taken care of in patch 1.

Why? What's wrong with:

BUILD_BUG_ON(sizeof(struct vmcb_save_area) != VMCB_SAVE_AREA_SIZE);
BUILD_BUG_ON(sizeof(struct vmcb_control_area) != VMCB_CONTROL_AREA_SIZE);
BUILD_BUG_ON(sizeof(struct ghcb) != PAGE_SIZE);

?
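[Editor's note: the pattern being asked for, as a standalone userspace sketch.
The struct and the DEMO_* names are made up for illustration; in the kernel,
BUILD_BUG_ON plays the role of C11 _Static_assert used here:]

```c
#include <assert.h>

/* A named define documents the ABI-mandated size; the compile-time check
 * fires as soon as someone adds a field and changes the layout.  Same
 * idea as
 *   BUILD_BUG_ON(sizeof(struct vmcb_control_area) != VMCB_CONTROL_AREA_SIZE);
 * just expressed with _Static_assert for a userspace build. */
#define DEMO_AREA_SIZE 16

struct demo_area {
	unsigned long long rax;	/* 8 bytes */
	unsigned int flags;	/* 4 bytes */
	unsigned int pad;	/* explicit padding, 4 bytes */
};

_Static_assert(sizeof(struct demo_area) == DEMO_AREA_SIZE,
	       "struct demo_area layout changed");
```

[The named constant also gives the magic number a searchable home, which a
bare 1032 or 4096 in a BUILD_BUG_ON does not.]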

-- 
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette


Re: [PATCH v6 02/76] KVM: SVM: Add GHCB definitions

2020-08-25 Thread Joerg Roedel
On Mon, Aug 24, 2020 at 12:44:51PM +0200, Borislav Petkov wrote:
> On Mon, Aug 24, 2020 at 10:53:57AM +0200, Joerg Roedel wrote:
> >  static inline void __unused_size_checks(void)
> >  {
> > -   BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 0x298);
> > +   BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 1032);
> > BUILD_BUG_ON(sizeof(struct vmcb_control_area) != 256);
> > +   BUILD_BUG_ON(sizeof(struct ghcb) != 4096);
> 
> Could those naked numbers be proper, meaningfully named defines?

I don't think so. If I look at the history of these checks, their whole
purpose seems to be to alert the developer/maintainer when the size
changes and the structures might no longer fit on the stack. But that is
taken care of in patch 1.

Regards,

Joerg


Re: [PATCH v6 00/76] x86: SEV-ES Guest Support

2020-08-25 Thread Joerg Roedel
Hi Mike,

On Tue, Aug 25, 2020 at 12:21:03AM +, Mike Stunes wrote:
> Thanks for the new update! I still see the same FSGSBASE behavior on our 
> platform.
> 
> That is, APs come up offline; masking out either FSGSBASE or RDPID from the
> guest's CPUID results in all CPUs online.
> 
> Is that still expected with this patch set? (As you mentioned in an earlier 
> reply,
> I’m testing on a Rome system.)

The RDPID fix (removing RDPID usage from paranoid_entry) is probably not
yet merged into the base you have been using. But removing RDPID from
CPUID should make things work until the fix is merged.

Regards,

Joerg