Re: [PATCH v2] drm/virtio: add plane check

2019-08-26 Thread Gerd Hoffmann
On Mon, Aug 26, 2019 at 03:34:56PM -0700, Chia-I Wu wrote:
> On Thu, Aug 22, 2019 at 2:47 AM Gerd Hoffmann  wrote:
> >
> > Use drm_atomic_helper_check_plane_state()
> > to sanity check the plane state.
> >
> > Signed-off-by: Gerd Hoffmann 
> > ---
> >  drivers/gpu/drm/virtio/virtgpu_plane.c | 17 ++++++++++++++++-
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> > index a492ac3f4a7e..fe5efb2de90d 100644
> > --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> > +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> > @@ -84,7 +84,22 @@ static const struct drm_plane_funcs virtio_gpu_plane_funcs = {
> >  static int virtio_gpu_plane_atomic_check(struct drm_plane *plane,
> >  struct drm_plane_state *state)
> >  {
> > -   return 0;
> > +   bool is_cursor = plane->type == DRM_PLANE_TYPE_CURSOR;
> > +   struct drm_crtc_state *crtc_state;
> > +   int ret;
> > +
> > +   if (!state->fb || !state->crtc)
> > +   return 0;
> > +
> > +   crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc);
> > +   if (IS_ERR(crtc_state))
> > +   return PTR_ERR(crtc_state);
> Is drm_atomic_get_new_crtc_state better here?

We don't have to worry about old/new state here.  The drm_plane_state we
get passed is the state we should check in this callback (and I think
this always is the new state).

cheers,
  Gerd

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v2] drm/virtio: add plane check

2019-08-26 Thread Chia-I Wu
On Thu, Aug 22, 2019 at 2:47 AM Gerd Hoffmann  wrote:
>
> Use drm_atomic_helper_check_plane_state()
> to sanity check the plane state.
>
> Signed-off-by: Gerd Hoffmann 
> ---
>  drivers/gpu/drm/virtio/virtgpu_plane.c | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index a492ac3f4a7e..fe5efb2de90d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -84,7 +84,22 @@ static const struct drm_plane_funcs virtio_gpu_plane_funcs = {
>  static int virtio_gpu_plane_atomic_check(struct drm_plane *plane,
>  struct drm_plane_state *state)
>  {
> -   return 0;
> +   bool is_cursor = plane->type == DRM_PLANE_TYPE_CURSOR;
> +   struct drm_crtc_state *crtc_state;
> +   int ret;
> +
> +   if (!state->fb || !state->crtc)
> +   return 0;
> +
> +   crtc_state = drm_atomic_get_crtc_state(state->state, state->crtc);
> +   if (IS_ERR(crtc_state))
> +   return PTR_ERR(crtc_state);
Is drm_atomic_get_new_crtc_state better here?

> +
> +   ret = drm_atomic_helper_check_plane_state(state, crtc_state,
> + DRM_PLANE_HELPER_NO_SCALING,
> + DRM_PLANE_HELPER_NO_SCALING,
> + is_cursor, true);
> +   return ret;
>  }
>
>  static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
> --
> 2.18.1
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH v2] drm/virtio: make resource id workaround runtime switchable.

2019-08-26 Thread Chia-I Wu
On Thu, Aug 22, 2019 at 3:26 AM Gerd Hoffmann  wrote:
>
> Also update the comment with a reference to the virglrenderer fix.
>
> Signed-off-by: Gerd Hoffmann 
Reviewed-by: Chia-I Wu 
> ---
>  drivers/gpu/drm/virtio/virtgpu_object.c | 44 ++++++++++++++++++++++++--------------------
>  1 file changed, 24 insertions(+), 20 deletions(-)
>
> diff --git a/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
> index b2da31310d24..e98aaa00578d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -27,34 +27,38 @@
>
>  #include "virtgpu_drv.h"
>
> +static int virtio_gpu_virglrenderer_workaround = 1;
> +module_param_named(virglhack, virtio_gpu_virglrenderer_workaround, int, 0400);
> +
>  static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
>uint32_t *resid)
>  {
> -#if 0
> -   int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
> -
> -   if (handle < 0)
> -   return handle;
> -#else
> -   static int handle;
> -
> -   /*
> -* FIXME: dirty hack to avoid re-using IDs, virglrenderer
> -* can't deal with that.  Needs fixing in virglrenderer, also
> -* should figure a better way to handle that in the guest.
> -*/
> -   handle++;
> -#endif
> -
> -   *resid = handle + 1;
> +   if (virtio_gpu_virglrenderer_workaround) {
> +   /*
> +* Hack to avoid re-using resource IDs.
> +*
> +* virglrenderer versions up to (and including) 0.7.0
> +* can't deal with that.  virglrenderer commit
> +* "f91a9dd35715 Fix unlinking resources from hash
> +* table." (Feb 2019) fixes the bug.
> +*/
> +   static int handle;
> +   handle++;
> +   *resid = handle + 1;
> +   } else {
> +   int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
> +   if (handle < 0)
> +   return handle;
> +   *resid = handle + 1;
> +   }
> return 0;
>  }
>
>  static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
>  {
> -#if 0
> -   ida_free(&vgdev->resource_ida, id - 1);
> -#endif
> +   if (!virtio_gpu_virglrenderer_workaround) {
> +   ida_free(&vgdev->resource_ida, id - 1);
> +   }
>  }
>
>  static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
> --
> 2.18.1
>


RE: [RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted

2019-08-26 Thread Ram Pai
On Tue, Aug 13, 2019 at 08:45:37AM -0700, Ram Pai wrote:
> On Wed, Aug 14, 2019 at 12:24:39AM +1000, David Gibson wrote:
> > On Tue, Aug 13, 2019 at 03:26:17PM +0200, Christoph Hellwig wrote:
> > > On Mon, Aug 12, 2019 at 07:51:56PM +1000, David Gibson wrote:
> > > > AFAICT we already kind of abuse this for the VIRTIO_F_IOMMU_PLATFORM,
> > > > because to handle for cases where it *is* a device limitation, we
> > > > assume that if the hypervisor presents VIRTIO_F_IOMMU_PLATFORM then
> > > > the guest *must* select it.
> > > > 
> > > > What we actually need here is for the hypervisor to present
> > > > VIRTIO_F_IOMMU_PLATFORM as available, but not required.  Then we need
> > > > a way for the platform core code to communicate to the virtio driver
> > > > that *it* requires the IOMMU to be used, so that the driver can select
> > > > or not the feature bit on that basis.
> > > 
> > > I agree with the above, but that just brings us back to the original
> > > issue - the whole bypass of the DMA OPS should be an option that the
> > > device can offer, not the other way around.  And we really need to
> > > fix that root cause instead of doctoring around it.
> > 
> > I'm not exactly sure what you mean by "device" in this context.  Do
> > you mean the hypervisor (qemu) side implementation?
> > 
> > You're right that this was the wrong way around to begin with, but as
> > well as being hard to change now, I don't see how it really addresses
> > the current problem.  The device could default to IOMMU and allow
> > bypass, but the driver would still need to get information from the
> > platform to know that it *can't* accept that option in the case of a
> > secure VM.  Reversed sense, but the same basic problem.
> > 
> > The hypervisor is not, and cannot be, aware of the secure VM
> > restrictions - only the guest side platform code knows that.
> 
> This statement is almost entirely right. I will rephrase it to make it
> entirely right.   
> 
> The hypervisor is not, and cannot be, aware of the secure VM
> requirement that it needs to do some special processing that has nothing
> to do with DMA address translation - only the guest side platform code
> knows that.
> 
> BTW: I do not consider 'bounce buffering' to be 'DMA address translation'.
> DMA address translation translates a CPU address to a DMA address.  Bounce
> buffering moves the data from one buffer at a given CPU address to
> another buffer at a different CPU address.  Unfortunately the current
> DMA ops conflate the two.  The need to do 'DMA address translation'
> is something the device can enforce.  But the need to do bounce
> buffering is something the device should not be aware of; it should be
> entirely a decision made locally by the kernel/driver in the secure VM.


Christoph,

Since we have not heard back from you, I am not sure where you
stand on this issue now.  One of three things is possible:

(a) our explanation above did not make sense, and hence
you decided to ignore it.
(b) our explanation above made some sense, and you need more time
to think and respond.
(c) you totally forgot about this.


I hope it is (b). We want a solution that works for everyone, and your
input is important to us.


Thanks,
RP
