Re: [PATCH v4 11/13] drm/vboxvideo: Use drm_gem_vram_vmap_local() in cursor update

2021-01-11 Thread Thomas Zimmermann

Hi

On 11.01.21 at 18:06, Daniel Vetter wrote:

On Fri, Jan 08, 2021 at 10:43:38AM +0100, Thomas Zimmermann wrote:

Cursor updates in vboxvideo require a short-term mapping of the
source BO. Use drm_gem_vram_vmap_local() and avoid the pinning
operations.

Signed-off-by: Thomas Zimmermann 


All these drivers patches break the dma_resv_lock vs
dma_fence_begin/end_signalling nesting rules, so this doesn't work.

Generally this is what the prepare/cleanup_fb hooks are for, that's where
mappings (including vmaps) are meant to be set up, permanently.

I'm kinda not clear on why we need all these changes, I thought the
locking problem is just in the fb helper paths, because it's outside of
the atomic path and could conflict with an atomic update at the same time?
So only that one should get the vmap_local treatment, everything else
should keep the normal vmap treatment.


Kind of responding to all your comments on the driver changes:

These drivers only require short-term mappings, so using vmap_local 
would be the natural choice. For SHMEM helpers, it's mostly a cosmetic 
thing. For VRAM helpers, I was hoping to remove the vmap/vunmap helpers 
entirely. One cannot really map the BOs for the long-term, so not having 
the helpers at all would make sense.


But reading all your comments on the driver patches, I'd rather not 
update the drivers here but later convert them to use 
prepare_fb/cleanup_fb in the correct way.
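
Roughly along these lines (untested sketch; the plane-state field that
stores the mapping is made up for illustration):

static int vbox_cursor_prepare_fb(struct drm_plane *plane,
				  struct drm_plane_state *new_state)
{
	/* hypothetical subclassed plane state that carries the vmap */
	struct vbox_cursor_plane_state *vbox_state =
		to_vbox_cursor_plane_state(new_state);
	struct drm_gem_vram_object *gbo;

	if (!new_state->fb)
		return 0;

	gbo = drm_gem_vram_of_gem(new_state->fb->obj[0]);
	/* pin + long-term vmap, kept for the duration of the commit */
	return drm_gem_vram_vmap(gbo, &vbox_state->cursor_map);
}

static void vbox_cursor_cleanup_fb(struct drm_plane *plane,
				   struct drm_plane_state *old_state)
{
	struct vbox_cursor_plane_state *vbox_state =
		to_vbox_cursor_plane_state(old_state);
	struct drm_gem_vram_object *gbo;

	if (!old_state->fb)
		return;

	gbo = drm_gem_vram_of_gem(old_state->fb->obj[0]);
	drm_gem_vram_vunmap(gbo, &vbox_state->cursor_map);
}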


Best regards
Thomas


-Daniel

---
  drivers/gpu/drm/vboxvideo/vbox_mode.c | 15 +--
  1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c 
b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index dbc0dd53c69e..215b37c78c10 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -381,7 +381,8 @@ static void vbox_cursor_atomic_update(struct drm_plane 
*plane,
container_of(plane->dev, struct vbox_private, ddev);
struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc);
struct drm_framebuffer *fb = plane->state->fb;
-   struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]);
+   struct drm_gem_object *obj = fb->obj[0];
+   struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
u32 width = plane->state->crtc_w;
u32 height = plane->state->crtc_h;
size_t data_size, mask_size;
@@ -401,11 +402,12 @@ static void vbox_cursor_atomic_update(struct drm_plane 
*plane,
  
  	vbox_crtc->cursor_enabled = true;
  
-	ret = drm_gem_vram_vmap(gbo, &map);

+   ret = dma_resv_lock(obj->resv, NULL);
+   if (ret)
+   return;
+   ret = drm_gem_vram_vmap_local(gbo, &map);
if (ret) {
-   /*
-* BUG: we should have pinned the BO in prepare_fb().
-*/
+   dma_resv_unlock(obj->resv);
mutex_unlock(&vbox->hw_mutex);
DRM_WARN("Could not map cursor bo, skipping update\n");
return;
@@ -421,7 +423,8 @@ static void vbox_cursor_atomic_update(struct drm_plane 
*plane,
data_size = width * height * 4 + mask_size;
  
  	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);

-   drm_gem_vram_vunmap(gbo, &map);
+   drm_gem_vram_vunmap_local(gbo, &map);
+   dma_resv_unlock(obj->resv);
  
  	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |

VBOX_MOUSE_POINTER_ALPHA;
--
2.29.2





--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




Re: [PATCH v3] vhost_vdpa: fix the problem in vhost_vdpa_set_config_call

2021-01-11 Thread Greg KH
On Tue, Jan 12, 2021 at 01:36:29PM +0800, Cindy Lu wrote:
> In vhost_vdpa_set_config_call, the cb.private should be vhost_vdpa.
> This cb.private will finally be used in vhost_vdpa_config_cb as
> vhost_vdpa. Fix this issue.
> 
> Fixes: 776f395004d82 ("vhost_vdpa: Support config interrupt in vdpa")
> Acked-by: Jason Wang 
> Signed-off-by: Cindy Lu 
> ---
>  drivers/vhost/vdpa.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)



This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.




Re: [PATCH v2] vhost_vdpa: fix the problem in vhost_vdpa_set_config_call

2021-01-11 Thread Greg KH
On Tue, Jan 12, 2021 at 01:32:27PM +0800, Cindy Lu wrote:
> In vhost_vdpa_set_config_call, the cb.private should be vhost_vdpa.
> This cb.private will finally be used in vhost_vdpa_config_cb as
> vhost_vdpa. Fix this issue.
> 
> Signed-off-by: Cindy Lu 
> ---
>  drivers/vhost/vdpa.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)



This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.




Re: [PATCH v4 01/13] dma-buf: Add vmap_local and vunmap_local operations

2021-01-11 Thread Thomas Zimmermann

Hi

On 08.01.21 at 17:12, Ruhl, Michael J wrote:

-Original Message-
From: dri-devel  On Behalf Of
Thomas Zimmermann
Sent: Friday, January 8, 2021 4:43 AM
To: sumit.sem...@linaro.org; christian.koe...@amd.com;
airl...@redhat.com; dan...@ffwll.ch; maarten.lankho...@linux.intel.com;
mrip...@kernel.org; kra...@redhat.com; hdego...@redhat.com;
s...@poorly.run; e...@anholt.net; s...@ravnborg.org
Cc: Daniel Vetter ; dri-de...@lists.freedesktop.org;
virtualization@lists.linux-foundation.org; linaro-mm-...@lists.linaro.org;
Thomas Zimmermann ; linux-
me...@vger.kernel.org
Subject: [PATCH v4 01/13] dma-buf: Add vmap_local and vunmap_local
operations

The existing dma-buf calls dma_buf_vmap() and dma_buf_vunmap() are
allowed to pin the buffer or acquire the buffer's reservation object
lock.

This is a problem for callers that only require a short-term mapping
of the buffer without the pinning, or callers that have special locking
requirements. These may suffer from unnecessary overhead or interfere
with regular pin operations.

The new interfaces dma_buf_vmap_local(), dma_buf_vunmap_local(), and
their respective callbacks in struct dma_buf_ops provide an alternative without
pinning or reservation locking. Callers are responsible for these
operations.

v4:
* update documentation (Daniel)

Signed-off-by: Thomas Zimmermann 
Reviewed-by: Daniel Vetter 
Suggested-by: Daniel Vetter 
---
drivers/dma-buf/dma-buf.c | 81
+++
include/linux/dma-buf.h   | 34 
2 files changed, 115 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index b8465243eca2..01f9c74d97fa 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1295,6 +1295,87 @@ void dma_buf_vunmap(struct dma_buf *dmabuf,
struct dma_buf_map *map)
}
EXPORT_SYMBOL_GPL(dma_buf_vunmap);

+/**
+ * dma_buf_vmap_local - Create virtual mapping for the buffer object into
kernel
+ * address space.
+ * @dmabuf:[in]buffer to vmap
+ * @map:   [out]   returns the vmap pointer
+ *
+ * Unlike dma_buf_vmap() this is a short term mapping and will not pin
+ * the buffer. The struct dma_resv for the @dmabuf must be locked until
+ * dma_buf_vunmap_local() is called.
+ *
+ * Returns:
+ * 0 on success, or a negative errno code otherwise.
+ */
+int dma_buf_vmap_local(struct dma_buf *dmabuf, struct dma_buf_map
*map)
+{
+   struct dma_buf_map ptr;
+   int ret = 0;
+
+   dma_buf_map_clear(map);
+
+   if (WARN_ON(!dmabuf))
+   return -EINVAL;
+
+   dma_resv_assert_held(dmabuf->resv);
+
+   if (!dmabuf->ops->vmap_local)
+   return -EINVAL;


You are clearing the map, and then doing the above checks.

Is it ok to change the map info and then exit on error?


In vmap_local, the map argument returns the mapping's address. Callers are 
expected to check the return code. But I would expect a careless caller 
to not check, or to check whether map has been cleared. Clearing it here first 
is the safe thing to do.
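
The expected calling sequence is roughly (sketch only, not from the patch):

	struct dma_buf_map map;
	int ret;

	ret = dma_resv_lock(dmabuf->resv, NULL);
	if (ret)
		return ret;

	ret = dma_buf_vmap_local(dmabuf, &map);
	if (ret) {
		/* map has been cleared above; nothing to unmap */
		dma_resv_unlock(dmabuf->resv);
		return ret;
	}

	/* ... access the buffer through map.vaddr ... */

	dma_buf_vunmap_local(dmabuf, &map);
	dma_resv_unlock(dmabuf->resv);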


Best regards
Thomas



Mike


+   mutex_lock(&dmabuf->lock);
+   if (dmabuf->vmapping_counter) {
+   dmabuf->vmapping_counter++;
+   BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
+   *map = dmabuf->vmap_ptr;
+   goto out_unlock;
+   }
+
+   BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr));
+
+   ret = dmabuf->ops->vmap_local(dmabuf, &ptr);
+   if (WARN_ON_ONCE(ret))
+   goto out_unlock;
+
+   dmabuf->vmap_ptr = ptr;
+   dmabuf->vmapping_counter = 1;
+
+   *map = dmabuf->vmap_ptr;
+
+out_unlock:
+   mutex_unlock(&dmabuf->lock);
+   return ret;
+}
+EXPORT_SYMBOL_GPL(dma_buf_vmap_local);
+
+/**
+ * dma_buf_vunmap_local - Unmap a vmap obtained by
dma_buf_vmap_local.
+ * @dmabuf:[in]buffer to vunmap
+ * @map:   [in]vmap pointer to vunmap
+ *
+ * Release a mapping established with dma_buf_vmap_local().
+ */
+void dma_buf_vunmap_local(struct dma_buf *dmabuf, struct
dma_buf_map *map)
+{
+   if (WARN_ON(!dmabuf))
+   return;
+
+   dma_resv_assert_held(dmabuf->resv);
+
+   BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr));
+   BUG_ON(dmabuf->vmapping_counter == 0);
+   BUG_ON(!dma_buf_map_is_equal(&dmabuf->vmap_ptr, map));
+
+   mutex_lock(&dmabuf->lock);
+   if (--dmabuf->vmapping_counter == 0) {
+   if (dmabuf->ops->vunmap_local)
+   dmabuf->ops->vunmap_local(dmabuf, map);
+   dma_buf_map_clear(&dmabuf->vmap_ptr);
+   }
+   mutex_unlock(&dmabuf->lock);
+}
+EXPORT_SYMBOL_GPL(dma_buf_vunmap_local);
+
#ifdef CONFIG_DEBUG_FS
static int dma_buf_debug_show(struct seq_file *s, void *unused)
{
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 628681bf6c99..aeed754b5467 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -264,6 +264,38 @@ struct dma_buf_ops {

int (*vmap)(struct dma_buf *dmabuf, struct dma_bu

RE: [PATCH linux-next v3 6/6] vdpa_sim_net: Add support for user supported devices

2021-01-11 Thread Parav Pandit
Hi Michael,

> From: Virtualization  On
> Behalf Of Parav Pandit
> > >
> > > When we add mac address attribute in add command, at that point also
> > remove the module parameter macaddr.
> >
> > Will that be mandatory? I'm not too happy with a UAPI we intend to
> > break straight away ...
> No. Specifying mac address shouldn't be mandatory. UAPI won't be broken.

Shall we please proceed with this patchset?
I would like to complete the iproute2 part and convert the remaining two drivers 
to follow the mgmt tool subsequent to this series.



Re: [PATCH v1] vhost_vdpa: fix the problem in vhost_vdpa_set_config_call

2021-01-11 Thread Jason Wang


On 2021/1/12 10:46 AM, Cindy Lu wrote:

in vhost_vdpa_set_config_call, the cb.private should be vhost_vdpa.



Should be "In"



this cb.private will finally use in vhost_vdpa_config_cb as
vhost_vdpa.Fix this issue



A space is needed before "Fix" and a full stop after "issue".

Fixes: 776f395004d82 ("vhost_vdpa: Support config interrupt in vdpa")

Acked-by: Jason Wang 

Please post a V2 with the above fixed and cc stable.
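
For reference, the config callback dereferences cb.private as a struct 
vhost_vdpa pointer, roughly like this (paraphrased sketch, not the literal 
driver code):

	static irqreturn_t vhost_vdpa_config_cb(void *private)
	{
		struct vhost_vdpa *v = private;	/* hence cb.private must be v */

		if (v->config_ctx)
			eventfd_signal(v->config_ctx, 1);

		return IRQ_HANDLED;
	}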

Thanks




Signed-off-by: Cindy Lu 
---
  drivers/vhost/vdpa.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index ef688c8c0e0e..3fbb9c1f49da 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -319,7 +319,7 @@ static long vhost_vdpa_set_config_call(struct vhost_vdpa 
*v, u32 __user *argp)
struct eventfd_ctx *ctx;
  
  	cb.callback = vhost_vdpa_config_cb;

-   cb.private = v->vdpa;
+   cb.private = v;
if (copy_from_user(&fd, argp, sizeof(fd)))
return  -EFAULT;
  



Re: [PATCH 21/21] vdpasim: control virtqueue support

2021-01-11 Thread Jason Wang


On 2021/1/11 8:26 PM, Eli Cohen wrote:

On Wed, Dec 16, 2020 at 02:48:18PM +0800, Jason Wang wrote:

This patch introduces the control virtqueue support for vDPA
simulator. This is a requirement for supporting advanced features like
multiqueue.

A requirement for the control virtqueue is to isolate its memory access
from the rx/tx virtqueues. This is because when using a vDPA device
for a VM, the control virtqueue is not directly assigned to the VM. Userspace
(Qemu) will present a shadow control virtqueue in order to record
the device states.

The isolation is done via the virtqueue groups and ASID support in
vDPA through vhost-vdpa. The simulator is extended to have:

1) three virtqueues: RXVQ, TXVQ and CVQ (control virtqueue)
2) two virtqueue groups: group 0 contains RXVQ and TXVQ; group 1
contains CVQ
3) two address spaces and the simulator simply implements the address
spaces by mapping it 1:1 to IOTLB.

For the VM use cases, userspace (Qemu) may set AS 0 to group 0 and AS 1
to group 1. So we have:

1) The IOTLB for virtqueue group 0 contains the mappings of guest, so
RX and TX can be assigned to guest directly.
2) The IOTLB for virtqueue group 1 contains the mappings of the CVQ, which
are the buffers allocated and managed by the VMM only. So the CVQ of
vhost-vdpa is visible to the VMM only, and the guest cannot access the CVQ
of vhost-vdpa.

For the other use cases, since AS 0 is associated to all virtqueue
groups by default. All virtqueues share the same mapping by default.

To demonstrate the function, VIRTIO_NET_F_CTRL_MACADDR is
implemented in the simulator for the driver to set mac address.


Hi Jason,

is there any version of qemu/libvirt available that I can see the
control virtqueue working in action?



Not yet, the qemu part depends on the shadow virtqueue work of Eugenio. 
But it will work as:


1) qemu will use a separate address space for the (shadow) control virtqueue 
exposed through vhost-vDPA
2) the commands sent through the control virtqueue by the guest driver will be 
intercepted by qemu

3) Qemu will send those commands to the shadow control virtqueue
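
And on the simulator side, the vq-to-group mapping described above boils 
down to something like this (rough sketch, assuming a get_vq_group-style op 
and that the CVQ is virtqueue index 2):

	static u32 vdpasim_get_vq_group(struct vdpa_device *vdpa, u16 idx)
	{
		/* RXVQ and TXVQ share group 0, the CVQ gets group 1 */
		if (idx == 2)
			return 1;
		return 0;
	}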

Eugenio, any ETA for the new version of shadow virtqueue support in Qemu?

Thanks







Re: [PATCH] mlx5: vdpa: fix possible uninitialized var

2021-01-11 Thread Jason Wang


On 2021/1/10 2:39 PM, Eli Cohen wrote:

On Fri, Jan 08, 2021 at 04:24:43PM +0800, Jason Wang wrote:

Upstream: posted

When compiling with -Werror=maybe-uninitialized, gcc may complain about a
possibly uninitialized umem. Fix that.

Signed-off-by: Jason Wang 
---
  drivers/vdpa/mlx5/net/mlx5_vnet.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c 
b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index f1d54814db97..a6ad83d8d8e2 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -706,6 +706,9 @@ static void umem_destroy(struct mlx5_vdpa_net *ndev, struct 
mlx5_vdpa_virtqueue
case 3:
umem = &mvq->umem3;
break;
+   default:
+   WARN(1, "unsupported umem num %d\n", num);
+   return;
}
  
  	MLX5_SET(destroy_umem_in, in, opcode, MLX5_CMD_OP_DESTROY_UMEM);

Since the "default" case will never be executed, maybe it's better to
just change "case 3:" to "default:" and avoid the WARN().



Fine with me. Will do that in V3.
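
I.e. roughly (sketch; the umem1/umem2 cases are assumed unchanged from the 
existing code):

	switch (num) {
	case 1:
		umem = &mvq->umem1;
		break;
	case 2:
		umem = &mvq->umem2;
		break;
	default:
		umem = &mvq->umem3;
		break;
	}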

Thanks





--
2.25.1




Re: [PATCH] VMCI: Enforce queuepair max size for IOCTL_VMCI_QUEUEPAIR_ALLOC

2021-01-11 Thread Jorgen Hansen
On 11 Jan 2021, at 13:46, Greg KH  wrote:
> 
> On Mon, Jan 11, 2021 at 04:18:53AM -0800, Jorgen Hansen wrote:
>> When creating the VMCI queue pair tracking data structures on the host
>> side, the IOCTL for creating the VMCI queue pair didn't validate
>> the queue pair size parameters. This change adds checks for this.
>> 
>> This avoids a memory allocation issue in qp_host_alloc_queue, as
>> reported by nslusa...@gmx.net. The check in qp_host_alloc_queue
>> has also been updated to enforce the maximum queue pair size
>> as defined by VMCI_MAX_GUEST_QP_MEMORY.
>> 
>> The fix has been verified using sample code supplied by
>> nslusa...@gmx.net.
>> 
>> Reported-by: nslusa...@gmx.net
>> Reviewed-by: Vishnu Dasa 
>> Signed-off-by: Jorgen Hansen 
>> ---
>> drivers/misc/vmw_vmci/vmci_queue_pair.c | 12 
>> include/linux/vmw_vmci_defs.h   |  4 ++--
>> 2 files changed, 10 insertions(+), 6 deletions(-)
> 
> Hi,
> 
> This is the friendly patch-bot of Greg Kroah-Hartman.  You have sent him
> a patch that has triggered this response.  He used to manually respond
> to these common problems, but in order to save his sanity (he kept
> writing the same thing over and over, yet to different people), I was
> created.  Hopefully you will not take offence and will fix the problem
> in your patch and resubmit it so that it can be accepted into the Linux
> kernel tree.
> 
> You are receiving this message because of the following common error(s)
> as indicated below:
> 
> - You sent multiple patches, yet no indication of which ones should be
>  applied in which order.  Greg could just guess, but if you are
>  receiving this email, he guessed wrong and the patches didn't apply.
>  Please read the section entitled "The canonical patch format" in the
>  kernel file, Documentation/SubmittingPatches for a description of how
>  to do this so that Greg has a chance to apply these correctly.
> 
> 
> If you wish to discuss this problem further, or you have questions about
> how to resolve this issue, please feel free to respond to this email and
> Greg will reply once he has dug out from the pending patches received
> from other developers.
> 
> thanks,
> 
> greg k-h's patch email bot

Hi,

The patches are independent and should be able to apply in any order;
I didn’t see any problems when applying them in different orders locally.

I’m hesitant to provide the patches as a series of patches, since one of
them:
 VMCI: Use set_page_dirty_lock() when unregistering guest memory
is marked as fixing an original checkin, and should be considered for
backporting, whereas the others are either not important to backport
or rely on other recent changes. However, if formatting them as a
series of misc fixes is preferred, I’ll do that.

Thanks,
Jorgen

Re: [PATCH v4 11/13] drm/vboxvideo: Use drm_gem_vram_vmap_local() in cursor update

2021-01-11 Thread Daniel Vetter
On Fri, Jan 08, 2021 at 10:43:38AM +0100, Thomas Zimmermann wrote:
> Cursor updates in vboxvideo require a short-term mapping of the
> source BO. Use drm_gem_vram_vmap_local() and avoid the pinning
> operations.
> 
> Signed-off-by: Thomas Zimmermann 

All these drivers patches break the dma_resv_lock vs
dma_fence_begin/end_signalling nesting rules, so this doesn't work.

Generally this is what the prepare/cleanup_fb hooks are for, that's where
mappings (including vmaps) are meant to be set up, permanently.

I'm kinda not clear on why we need all these changes, I thought the
locking problem is just in the fb helper paths, because it's outside of
the atomic path and could conflict with an atomic update at the same time?
So only that one should get the vmap_local treatment, everything else
should keep the normal vmap treatment.
-Daniel
> ---
>  drivers/gpu/drm/vboxvideo/vbox_mode.c | 15 +--
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c 
> b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index dbc0dd53c69e..215b37c78c10 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -381,7 +381,8 @@ static void vbox_cursor_atomic_update(struct drm_plane 
> *plane,
>   container_of(plane->dev, struct vbox_private, ddev);
>   struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc);
>   struct drm_framebuffer *fb = plane->state->fb;
> - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]);
> + struct drm_gem_object *obj = fb->obj[0];
> + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);
>   u32 width = plane->state->crtc_w;
>   u32 height = plane->state->crtc_h;
>   size_t data_size, mask_size;
> @@ -401,11 +402,12 @@ static void vbox_cursor_atomic_update(struct drm_plane 
> *plane,
>  
>   vbox_crtc->cursor_enabled = true;
>  
> - ret = drm_gem_vram_vmap(gbo, &map);
> + ret = dma_resv_lock(obj->resv, NULL);
> + if (ret)
> + return;
> + ret = drm_gem_vram_vmap_local(gbo, &map);
>   if (ret) {
> - /*
> -  * BUG: we should have pinned the BO in prepare_fb().
> -  */
> + dma_resv_unlock(obj->resv);
>   mutex_unlock(&vbox->hw_mutex);
>   DRM_WARN("Could not map cursor bo, skipping update\n");
>   return;
> @@ -421,7 +423,8 @@ static void vbox_cursor_atomic_update(struct drm_plane 
> *plane,
>   data_size = width * height * 4 + mask_size;
>  
>   copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> - drm_gem_vram_vunmap(gbo, &map);
> + drm_gem_vram_vunmap_local(gbo, &map);
> + dma_resv_unlock(obj->resv);
>  
>   flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
>   VBOX_MOUSE_POINTER_ALPHA;
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v4 06/13] drm/cirrus: Use drm_gem_shmem_vmap_local() in damage handling

2021-01-11 Thread Daniel Vetter
On Mon, Jan 11, 2021 at 06:00:42PM +0100, Daniel Vetter wrote:
> On Fri, Jan 08, 2021 at 10:43:33AM +0100, Thomas Zimmermann wrote:
> > Damage handling in cirrus requires a short-term mapping of the source
> > BO. Use drm_gem_shmem_vmap_local().
> > 
> > Signed-off-by: Thomas Zimmermann 
> 
> Hm more possible errors that we don't report to userspace ... Why don't we
> vmap/vunmap these in prepare/cleanup_fb? Generally we'd want a long-term
> vmap here to make sure this all works nicely.
> 
> Since it's nothing new, on this patch:
> 
> Reviewed-by: Daniel Vetter 

Ok, also strike this r-b here. This is called from the atomic commit
paths, and we cannot call dma_resv_lock here. This should splat with
lockdep enabled against the dma-fence annotations I've merged, I'm kinda
surprised it doesn't?
-Daniel

> 
> > ---
> >  drivers/gpu/drm/tiny/cirrus.c | 10 --
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> > index a043e602199e..21cd7056d45f 100644
> > --- a/drivers/gpu/drm/tiny/cirrus.c
> > +++ b/drivers/gpu/drm/tiny/cirrus.c
> > @@ -315,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer 
> > *fb,
> >struct drm_rect *rect)
> >  {
> > struct cirrus_device *cirrus = to_cirrus(fb->dev);
> > +   struct drm_gem_object *obj = fb->obj[0];
> > struct dma_buf_map map;
> > void *vmap;
> > int idx, ret;
> > @@ -323,9 +324,12 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer 
> > *fb,
> > if (!drm_dev_enter(&cirrus->dev, &idx))
> > goto out;
> >  
> > -   ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> > +   ret = dma_resv_lock(obj->resv, NULL);
> > if (ret)
> > goto out_dev_exit;
> > +   ret = drm_gem_shmem_vmap_local(fb->obj[0], &map);
> > +   if (ret)
> > +   goto out_dma_resv_unlock;
> > vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
> >  
> > if (cirrus->cpp == fb->format->cpp[0])
> > @@ -345,9 +349,11 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer 
> > *fb,
> > else
> > WARN_ON_ONCE("cpp mismatch");
> >  
> > -   drm_gem_shmem_vunmap(fb->obj[0], &map);
> > ret = 0;
> >  
> > +   drm_gem_shmem_vunmap_local(obj, &map);
> > +out_dma_resv_unlock:
> > +   dma_resv_unlock(obj->resv);
> >  out_dev_exit:
> > drm_dev_exit(idx);
> >  out:
> > -- 
> > 2.29.2
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v4 07/13] drm/gm12u320: Use drm_gem_shmem_vmap_local() in damage handling

2021-01-11 Thread Daniel Vetter
On Fri, Jan 08, 2021 at 10:43:34AM +0100, Thomas Zimmermann wrote:
> Damage handling in gm12u320 requires a short-term mapping of the source
> BO. Use drm_gem_shmem_vmap_local().
> 
> Signed-off-by: Thomas Zimmermann 
> ---
>  drivers/gpu/drm/tiny/gm12u320.c | 14 +++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index 33f65f4626e5..b0c6e350f2b3 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -265,11 +265,16 @@ static void gm12u320_copy_fb_to_blocks(struct 
> gm12u320_device *gm12u320)
>   y1 = gm12u320->fb_update.rect.y1;
>   y2 = gm12u320->fb_update.rect.y2;
>  
> - ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + ret = dma_resv_lock(fb->obj[0]->resv, NULL);
>   if (ret) {
> - GM12U320_ERR("failed to vmap fb: %d\n", ret);
> + GM12U320_ERR("failed to reserve fb: %d\n", ret);
>   goto put_fb;
>   }
> + ret = drm_gem_shmem_vmap_local(fb->obj[0], &map);
> + if (ret) {
> + GM12U320_ERR("failed to vmap fb: %d\n", ret);
> + goto unlock_resv;
> + }
>   vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>  
>   if (fb->obj[0]->import_attach) {
> @@ -321,8 +326,11 @@ static void gm12u320_copy_fb_to_blocks(struct 
> gm12u320_device *gm12u320)
>   if (ret)
>   GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
>   }
> +
> +unlock_resv:
> + dma_resv_unlock(fb->obj[0]->resv);

Unlock before vunmap.
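
I.e. keep the reservation locked until after the vunmap, roughly (sketch):

vunmap:
	drm_gem_shmem_vunmap_local(fb->obj[0], &map);
unlock_resv:
	dma_resv_unlock(fb->obj[0]->resv);
put_fb:
	drm_framebuffer_put(fb);
	gm12u320->fb_update.fb = NULL;

with the error path of a failed drm_gem_shmem_vmap_local() jumping to 
unlock_resv instead.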
-Daniel

>  vunmap:
> - drm_gem_shmem_vunmap(fb->obj[0], &map);
> + drm_gem_shmem_vunmap_local(fb->obj[0], &map);
>  put_fb:
>   drm_framebuffer_put(fb);
>   gm12u320->fb_update.fb = NULL;
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v4 06/13] drm/cirrus: Use drm_gem_shmem_vmap_local() in damage handling

2021-01-11 Thread Daniel Vetter
On Fri, Jan 08, 2021 at 10:43:33AM +0100, Thomas Zimmermann wrote:
> Damage handling in cirrus requires a short-term mapping of the source
> BO. Use drm_gem_shmem_vmap_local().
> 
> Signed-off-by: Thomas Zimmermann 

Hm more possible errors that we don't report to userspace ... Why don't we
vmap/vunmap these in prepare/cleanup_fb? Generally we'd want a long-term
vmap here to make sure this all works nicely.

Since it's nothing new, on this patch:

Reviewed-by: Daniel Vetter 

> ---
>  drivers/gpu/drm/tiny/cirrus.c | 10 --
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index a043e602199e..21cd7056d45f 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -315,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>  struct drm_rect *rect)
>  {
>   struct cirrus_device *cirrus = to_cirrus(fb->dev);
> + struct drm_gem_object *obj = fb->obj[0];
>   struct dma_buf_map map;
>   void *vmap;
>   int idx, ret;
> @@ -323,9 +324,12 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer 
> *fb,
>   if (!drm_dev_enter(&cirrus->dev, &idx))
>   goto out;
>  
> - ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + ret = dma_resv_lock(obj->resv, NULL);
>   if (ret)
>   goto out_dev_exit;
> + ret = drm_gem_shmem_vmap_local(fb->obj[0], &map);
> + if (ret)
> + goto out_dma_resv_unlock;
>   vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>  
>   if (cirrus->cpp == fb->format->cpp[0])
> @@ -345,9 +349,11 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer 
> *fb,
>   else
>   WARN_ON_ONCE("cpp mismatch");
>  
> - drm_gem_shmem_vunmap(fb->obj[0], &map);
>   ret = 0;
>  
> + drm_gem_shmem_vunmap_local(obj, &map);
> +out_dma_resv_unlock:
> + dma_resv_unlock(obj->resv);
>  out_dev_exit:
>   drm_dev_exit(idx);
>  out:
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v4 05/13] drm/mgag200: Use drm_gem_shmem_vmap_local() in damage handling

2021-01-11 Thread Daniel Vetter
On Mon, Jan 11, 2021 at 05:53:41PM +0100, Daniel Vetter wrote:
> On Fri, Jan 08, 2021 at 10:43:32AM +0100, Thomas Zimmermann wrote:
> > Damage handling in mgag200 requires a short-term mapping of the source
> > BO. Use drm_gem_shmem_vmap_local().
> > 
> > Signed-off-by: Thomas Zimmermann 
> 
> Reviewed-by: Daniel Vetter 

On second thought, strike that r-b, I have a confused question.
> 
> > ---
> >  drivers/gpu/drm/mgag200/mgag200_mode.c | 16 +---
> >  1 file changed, 13 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c 
> > b/drivers/gpu/drm/mgag200/mgag200_mode.c
> > index 1dfc42170059..a33e28d4c5e9 100644
> > --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> > +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> > @@ -1552,22 +1552,32 @@ mgag200_handle_damage(struct mga_device *mdev, 
> > struct drm_framebuffer *fb,
> >   struct drm_rect *clip)
> >  {
> > struct drm_device *dev = &mdev->base;
> > +   struct drm_gem_object *obj = fb->obj[0];
> > struct dma_buf_map map;
> > void *vmap;
> > int ret;
> >  
> > -   ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> > +   ret = dma_resv_lock(obj->resv, NULL);
> > if (drm_WARN_ON(dev, ret))
> > -   return; /* BUG: SHMEM BO should always be vmapped */
> > +   return;
> > +   ret = drm_gem_shmem_vmap_local(obj, &map);
> > +   if (drm_WARN_ON(dev, ret))
> > +   goto err_dma_resv_unlock; /* BUG: SHMEM BO should always be 
> > vmapped */

Why is this guaranteed? I tried to hunt for a vmap in the mgag200 code, and
didn't find any. I'd have expected something in prepare/finish_fb.

Also since this is not a vram-helper using driver, why convert it over to
vmap_local? I guess that should also be explained in the commit message a
bit better.
-Daniel

> > vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
> >  
> > drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
> >  
> > -   drm_gem_shmem_vunmap(fb->obj[0], &map);
> > +   drm_gem_shmem_vunmap_local(obj, &map);
> > +   dma_resv_unlock(obj->resv);
> >  
> > /* Always scanout image at VRAM offset 0 */
> > mgag200_set_startadd(mdev, (u32)0);
> > mgag200_set_offset(mdev, fb);
> > +
> > +   return;
> > +
> > +err_dma_resv_unlock:
> > +   dma_resv_unlock(obj->resv);
> >  }
> >  
> >  static void
> > -- 
> > 2.29.2
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v4 05/13] drm/mgag200: Use drm_gem_shmem_vmap_local() in damage handling

2021-01-11 Thread Daniel Vetter
On Fri, Jan 08, 2021 at 10:43:32AM +0100, Thomas Zimmermann wrote:
> Damage handling in mgag200 requires a short-term mapping of the source
> BO. Use drm_gem_shmem_vmap_local().
> 
> Signed-off-by: Thomas Zimmermann 

Reviewed-by: Daniel Vetter 

> ---
>  drivers/gpu/drm/mgag200/mgag200_mode.c | 16 +---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c 
> b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 1dfc42170059..a33e28d4c5e9 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -1552,22 +1552,32 @@ mgag200_handle_damage(struct mga_device *mdev, struct 
> drm_framebuffer *fb,
> struct drm_rect *clip)
>  {
>   struct drm_device *dev = &mdev->base;
> + struct drm_gem_object *obj = fb->obj[0];
>   struct dma_buf_map map;
>   void *vmap;
>   int ret;
>  
> - ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> + ret = dma_resv_lock(obj->resv, NULL);
>   if (drm_WARN_ON(dev, ret))
> - return; /* BUG: SHMEM BO should always be vmapped */
> + return;
> + ret = drm_gem_shmem_vmap_local(obj, &map);
> + if (drm_WARN_ON(dev, ret))
> + goto err_dma_resv_unlock; /* BUG: SHMEM BO should always be 
> vmapped */
>   vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>  
>   drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
>  
> - drm_gem_shmem_vunmap(fb->obj[0], &map);
> + drm_gem_shmem_vunmap_local(obj, &map);
> + dma_resv_unlock(obj->resv);
>  
>   /* Always scanout image at VRAM offset 0 */
>   mgag200_set_startadd(mdev, (u32)0);
>   mgag200_set_offset(mdev, fb);
> +
> + return;
> +
> +err_dma_resv_unlock:
> + dma_resv_unlock(obj->resv);
>  }
>  
>  static void
> -- 
> 2.29.2
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v4 13/13] drm/vram-helper: Remove unused drm_gem_vram_{vmap,vunmap}()

2021-01-11 Thread Daniel Vetter
On Fri, Jan 08, 2021 at 10:43:40AM +0100, Thomas Zimmermann wrote:
> VRAM-helper BO's cannot be exported, so calls for vmap and vunmap
> can only come from the BO's drivers or a kernel client. These are
> supposed use vmap_local functionality.

  ^to

> 
> The vmap and vunmap operations in VRAM helpers are therefore unused
> and can be removed.
> 
> Signed-off-by: Thomas Zimmermann 

Reviewed-by: Daniel Vetter 

> ---
>  drivers/gpu/drm/drm_gem_vram_helper.c | 98 ---
>  include/drm/drm_gem_vram_helper.h |  2 -
>  2 files changed, 100 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c 
> b/drivers/gpu/drm/drm_gem_vram_helper.c
> index c7fba3a0758e..c7d166cd16df 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -379,72 +379,6 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
>  }
>  EXPORT_SYMBOL(drm_gem_vram_unpin);
>  
> -/**
> - * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
> - *   space
> - * @gbo: The GEM VRAM object to map
> - * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> - *   store.
> - *
> - * The vmap function pins a GEM VRAM object to its current location, either
> - * system or video memory, and maps its buffer into kernel address space.
> - * As pinned object cannot be relocated, you should avoid pinning objects
> - * permanently. Call drm_gem_vram_vunmap() with the returned address to
> - * unmap and unpin the GEM VRAM object.
> - *
> - * Returns:
> - * 0 on success, or a negative error code otherwise.
> - */
> -int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map 
> *map)
> -{
> - int ret;
> -
> - ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
> - if (ret)
> - return ret;
> -
> - ret = drm_gem_vram_pin_locked(gbo, 0);
> - if (ret)
> - goto err_ttm_bo_unreserve;
> - ret = drm_gem_vram_vmap_local(gbo, map);
> - if (ret)
> - goto err_drm_gem_vram_unpin_locked;
> -
> - ttm_bo_unreserve(&gbo->bo);
> -
> - return 0;
> -
> -err_drm_gem_vram_unpin_locked:
> - drm_gem_vram_unpin_locked(gbo);
> -err_ttm_bo_unreserve:
> - ttm_bo_unreserve(&gbo->bo);
> - return ret;
> -}
> -EXPORT_SYMBOL(drm_gem_vram_vmap);
> -
> -/**
> - * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo: The GEM VRAM object to unmap
> - * @map: Kernel virtual address where the VRAM GEM object was mapped
> - *
> - * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
> - * the documentation for drm_gem_vram_vmap() for more information.
> - */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map 
> *map)
> -{
> - int ret;
> -
> - ret = ttm_bo_reserve(&gbo->bo, false, false, NULL);
> - if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
> - return;
> -
> - drm_gem_vram_vunmap_local(gbo, map);
> - drm_gem_vram_unpin_locked(gbo);
> -
> - ttm_bo_unreserve(&gbo->bo);
> -}
> -EXPORT_SYMBOL(drm_gem_vram_vunmap);
> -
>  /**
>   * drm_gem_vram_vmap_local() - Maps a GEM VRAM object into kernel address 
> space
>   * @gbo: The GEM VRAM object to map
> @@ -870,36 +804,6 @@ static void drm_gem_vram_object_unpin(struct 
> drm_gem_object *gem)
>   drm_gem_vram_unpin(gbo);
>  }
>  
> -/**
> - * drm_gem_vram_object_vmap() -
> - *   Implements &struct drm_gem_object_funcs.vmap
> - * @gem: The GEM object to map
> - * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> - *   store.
> - *
> - * Returns:
> - * 0 on success, or a negative error code otherwise.
> - */
> -static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct 
> dma_buf_map *map)
> -{
> - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> -
> - return drm_gem_vram_vmap(gbo, map);
> -}
> -
> -/**
> - * drm_gem_vram_object_vunmap() -
> - *   Implements &struct drm_gem_object_funcs.vunmap
> - * @gem: The GEM object to unmap
> - * @map: Kernel virtual address where the VRAM GEM object was mapped
> - */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct 
> dma_buf_map *map)
> -{
> - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> -
> - drm_gem_vram_vunmap(gbo, map);
> -}
> -
>  static int drm_gem_vram_object_vmap_local(struct drm_gem_object *gem, struct 
> dma_buf_map *map)
>  {
>   struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> @@ -922,8 +826,6 @@ static const struct drm_gem_object_funcs 
> drm_gem_vram_object_funcs = {
>   .free = drm_gem_vram_object_free,
>   .pin = drm_gem_vram_object_pin,
>   .unpin = drm_gem_vram_object_unpin,
> - .vmap = drm_gem_vram_object_vmap,
> - .vunmap = drm_gem_vram_object_vunmap,
>   .vmap_local = drm_gem_vram_object_vmap_local,
>   .vunmap_local = drm_gem_vram_object_vunmap_local,
>   .

Re: [PATCH v4 04/13] drm/shmem-helper: Provide a vmap function for short-term mappings

2021-01-11 Thread Daniel Vetter
On Fri, Jan 08, 2021 at 10:43:31AM +0100, Thomas Zimmermann wrote:
> Implementations of the vmap/vunmap GEM callbacks may perform pinning
> of the BO and may acquire the associated reservation object's lock.
> Callers that only require a mapping of the contained memory can thus
> interfere with other tasks that require exact pinning, such as scanout.
> This is less of an issue with private SHMEM buffers, but may happen
> with imported ones.
> 
> Therefore provide the new interfaces drm_gem_shmem_vmap_local() and
> drm_gem_shmem_vunmap_local(), which only perform the vmap/vunmap
> operations. Callers have to hold the reservation lock while the mapping
> persists.
> 
> This patch also connects GEM SHMEM helpers to GEM object functions with
> equivalent functionality.
> 
> v4:
>   * call dma_buf_{vmap,vunmap}_local() where necessary (Daniel)
>   * move driver changes into separate patches (Daniel)
> 
> Signed-off-by: Thomas Zimmermann 
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 90 +++---
>  include/drm/drm_gem_shmem_helper.h |  2 +
>  2 files changed, 84 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
> b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 9825c378dfa6..298832b2b43b 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -32,6 +32,8 @@ static const struct drm_gem_object_funcs 
> drm_gem_shmem_funcs = {
>   .get_sg_table = drm_gem_shmem_get_sg_table,
>   .vmap = drm_gem_shmem_vmap,
>   .vunmap = drm_gem_shmem_vunmap,
> + .vmap_local = drm_gem_shmem_vmap_local,
> + .vunmap_local = drm_gem_shmem_vunmap_local,
>   .mmap = drm_gem_shmem_mmap,
>  };
>  
> @@ -261,7 +263,8 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_unpin);
>  
> -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, 
> struct dma_buf_map *map)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, 
> struct dma_buf_map *map,
> +  bool local)

This is a bit spaghetti and also has the problem that we're now changing
shmem->vmap_use_count under different locks, depending upon which path
we're taking.

I think the cleanest would be if we pull the if (import_attach) case out
of the _locked() version completely, for all cases, and also outside of
the shmem->vmap_lock. This means no caching of vmaps in the shmem layer
anymore for imported buffers, but this is no longer a problem: We cache
them in the exporters instead (I think at least, if not maybe need to fix
that where it's expensive).

Other option would be to only pull it out for the _vmap_local case, but
that's a bit ugly because no longer symmetrical in the various paths.
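
Roughly, a sketch of what I mean (hand-waving the details, untested):

	int drm_gem_shmem_vmap_local(struct drm_gem_object *obj, struct dma_buf_map *map)
	{
		struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
		int ret;

		dma_resv_assert_held(obj->resv);

		/* imported buffers: no caching in the shmem layer at all */
		if (obj->import_attach)
			return dma_buf_vmap_local(obj->import_attach->dmabuf, map);

		ret = mutex_lock_interruptible(&shmem->vmap_lock);
		if (ret)
			return ret;
		/* _locked() would then only handle natively allocated objects */
		ret = drm_gem_shmem_vmap_locked(shmem, map);
		mutex_unlock(&shmem->vmap_lock);

		return ret;
	}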

>  {
>   struct drm_gem_object *obj = &shmem->base;
>   int ret = 0;
> @@ -272,7 +275,10 @@ static int drm_gem_shmem_vmap_locked(struct 
> drm_gem_shmem_object *shmem, struct
>   }
>  
>   if (obj->import_attach) {
> - ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> + if (local)
> + ret = dma_buf_vmap_local(obj->import_attach->dmabuf, 
> map);
> + else
> + ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
>   if (!ret) {
>   if (WARN_ON(map->is_iomem)) {
>   ret = -EIO;
> @@ -313,7 +319,7 @@ static int drm_gem_shmem_vmap_locked(struct 
> drm_gem_shmem_object *shmem, struct
>   return ret;
>  }
>  
> -/*
> +/**
>   * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
>   * @shmem: shmem GEM object
>   * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> @@ -339,15 +345,53 @@ int drm_gem_shmem_vmap(struct drm_gem_object *obj, 
> struct dma_buf_map *map)
>   ret = mutex_lock_interruptible(&shmem->vmap_lock);
>   if (ret)
>   return ret;
> - ret = drm_gem_shmem_vmap_locked(shmem, map);
> + ret = drm_gem_shmem_vmap_locked(shmem, map, false);
>   mutex_unlock(&shmem->vmap_lock);
>  
>   return ret;
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_vmap);
>  
> +/**
> + * drm_gem_shmem_vmap_local - Create a virtual mapping for a shmem GEM object
> + * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + *   store.
> + *
> + * This function makes sure that a contiguous kernel virtual address mapping
> + * exists for the buffer backing the shmem GEM object.
> + *
> + * The function is called with the BO's reservation object locked. Callers 
> must
> + * hold the lock until after unmapping the buffer.
> + *
> + * This function can be used to implement &drm_gem_object_funcs.vmap_local. 
> But
> + * it can also be called by drivers directly, in which case it will hide the
> + * differences between dma-buf imported and natively allocated objects.

So for the other callbacks I

Re: [PATCH] VMCI: Enforce queuepair max size for IOCTL_VMCI_QUEUEPAIR_ALLOC

2021-01-11 Thread kernel test robot
Hi Jorgen,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linux/master]
[also build test WARNING on char-misc/char-misc-testing linus/master v5.11-rc3 
next-20210111]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Jorgen-Hansen/VMCI-Enforce-queuepair-max-size-for-IOCTL_VMCI_QUEUEPAIR_ALLOC/20210111-204259
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
09162bc32c880a791c6c0668ce0745cf7958f576
config: i386-randconfig-s001-20210111 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.3-208-g46a52ca4-dirty
# 
https://github.com/0day-ci/linux/commit/0923aeac7af9635dd6bf0141e8188f4815e573d2
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review 
Jorgen-Hansen/VMCI-Enforce-queuepair-max-size-for-IOCTL_VMCI_QUEUEPAIR_ALLOC/20210111-204259
git checkout 0923aeac7af9635dd6bf0141e8188f4815e573d2
# save the attached .config to linux build tree
make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 


"sparse warnings: (new ones prefixed by >>)"
>> drivers/misc/vmw_vmci/vmci_queue_pair.c:533:20: sparse: sparse: incompatible 
>> types in comparison expression (different type sizes):
>> drivers/misc/vmw_vmci/vmci_queue_pair.c:533:20: sparse:unsigned int *
>> drivers/misc/vmw_vmci/vmci_queue_pair.c:533:20: sparse:unsigned long *

vim +533 drivers/misc/vmw_vmci/vmci_queue_pair.c

   519  
   520  /*
   521   * Allocates kernel VA space of specified size plus space for the queue
   522   * and kernel interface.  This is different from the guest queue 
allocator,
   523   * because we do not allocate our own queue header/data pages here but
   524   * share those of the guest.
   525   */
   526  static struct vmci_queue *qp_host_alloc_queue(u64 size)
   527  {
   528  struct vmci_queue *queue;
   529  size_t queue_page_size;
   530  u64 num_pages;
   531  const size_t queue_size = sizeof(*queue) + 
sizeof(*(queue->kernel_if));
   532  
 > 533  if (size > min(VMCI_MAX_GUEST_QP_MEMORY, SIZE_MAX - PAGE_SIZE))
   534  return NULL;
   535  num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
   536  if (num_pages > (SIZE_MAX - queue_size) /
   537   sizeof(*queue->kernel_if->u.h.page))
   538  return NULL;
   539  
   540  queue_page_size = num_pages * 
sizeof(*queue->kernel_if->u.h.page);
   541  
   542  queue = kzalloc(queue_size + queue_page_size, GFP_KERNEL);
   543  if (queue) {
   544  queue->q_header = NULL;
   545  queue->saved_header = NULL;
   546  queue->kernel_if = (struct vmci_queue_kern_if *)(queue 
+ 1);
   547  queue->kernel_if->host = true;
   548  queue->kernel_if->mutex = NULL;
   549  queue->kernel_if->num_pages = num_pages;
   550  queue->kernel_if->u.h.header_page =
   551  (struct page **)((u8 *)queue + queue_size);
   552  queue->kernel_if->u.h.page =
   553  &queue->kernel_if->u.h.header_page[1];
   554  }
   555  
   556  return queue;
   557  }
   558  
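
A possible way to address the warning (untested sketch, not part of the
report) is to force a common type in the comparison so both operands of
min() match, e.g.:

	if (size > min_t(u64, VMCI_MAX_GUEST_QP_MEMORY, SIZE_MAX - PAGE_SIZE))
		return NULL;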

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org



Re: [PATCH] VMCI: Enforce queuepair max size for IOCTL_VMCI_QUEUEPAIR_ALLOC

2021-01-11 Thread Greg KH
On Mon, Jan 11, 2021 at 02:05:56PM +, Jorgen Hansen wrote:
> On 11 Jan 2021, at 13:46, Greg KH  wrote:
> > 
> > On Mon, Jan 11, 2021 at 04:18:53AM -0800, Jorgen Hansen wrote:
> >> When creating the VMCI queue pair tracking data structures on the host
> >> side, the IOCTL for creating the VMCI queue pair didn't validate
> >> the queue pair size parameters. This change adds checks for this.
> >> 
> >> This avoids a memory allocation issue in qp_host_alloc_queue, as
> >> reported by nslusa...@gmx.net. The check in qp_host_alloc_queue
> >> has also been updated to enforce the maximum queue pair size
> >> as defined by VMCI_MAX_GUEST_QP_MEMORY.
> >> 
> >> The fix has been verified using sample code supplied by
> >> nslusa...@gmx.net.
> >> 
> >> Reported-by: nslusa...@gmx.net
> >> Reviewed-by: Vishnu Dasa 
> >> Signed-off-by: Jorgen Hansen 
> >> ---
> >> drivers/misc/vmw_vmci/vmci_queue_pair.c | 12 
> >> include/linux/vmw_vmci_defs.h   |  4 ++--
> >> 2 files changed, 10 insertions(+), 6 deletions(-)
> > 
> > Hi,
> > 
> > This is the friendly patch-bot of Greg Kroah-Hartman.  You have sent him
> > a patch that has triggered this response.  He used to manually respond
> > to these common problems, but in order to save his sanity (he kept
> > writing the same thing over and over, yet to different people), I was
> > created.  Hopefully you will not take offence and will fix the problem
> > in your patch and resubmit it so that it can be accepted into the Linux
> > kernel tree.
> > 
> > You are receiving this message because of the following common error(s)
> > as indicated below:
> > 
> > - You sent multiple patches, yet no indication of which ones should be
> >  applied in which order.  Greg could just guess, but if you are
> >  receiving this email, he guessed wrong and the patches didn't apply.
> >  Please read the section entitled "The canonical patch format" in the
> >  kernel file, Documentation/SubmittingPatches for a description of how
> >  to do this so that Greg has a chance to apply these correctly.
> > 
> > 
> > If you wish to discuss this problem further, or you have questions about
> > how to resolve this issue, please feel free to respond to this email and
> > Greg will reply once he has dug out from the pending patches received
> > from other developers.
> > 
> > thanks,
> > 
> > greg k-h's patch email bot
> 
> Hi,
> 
> The patches are independent and should be able to apply in any order;
> I didn’t see any problems when applying them in different orders locally.
> 
> I’m hesitant to provide the patches as a series of patches, since one of
> them:
>  VMCI: Use set_page_dirty_lock() when unregistering guest memory
> is marked as fixing an original checkin, and should be considered for
> backporting, whereas the others are either not important to backport
> or rely on other recent changes. However, if formatting them as a
> series of misc fixes is preferred, I’ll do that.

If one patch is wanting to be merged now, for 5.11-final, great, but don't
send it in the middle of a series of other patches that are not; how am I
supposed to know any of this?

Please send them in the proper order, and as individual series for
different releases, if relevant, again, otherwise how am I supposed to
know what to do with them?

thanks,

greg k-h

Re: [PATCH] VMCI: Enforce queuepair max size for IOCTL_VMCI_QUEUEPAIR_ALLOC

2021-01-11 Thread Greg KH
On Mon, Jan 11, 2021 at 04:18:53AM -0800, Jorgen Hansen wrote:
> When creating the VMCI queue pair tracking data structures on the host
> side, the IOCTL for creating the VMCI queue pair didn't validate
> the queue pair size parameters. This change adds checks for this.
> 
> This avoids a memory allocation issue in qp_host_alloc_queue, as
> reported by nslusa...@gmx.net. The check in qp_host_alloc_queue
> has also been updated to enforce the maximum queue pair size
> as defined by VMCI_MAX_GUEST_QP_MEMORY.
> 
> The fix has been verified using sample code supplied by
> nslusa...@gmx.net.
> 
> Reported-by: nslusa...@gmx.net
> Reviewed-by: Vishnu Dasa 
> Signed-off-by: Jorgen Hansen 
> ---
>  drivers/misc/vmw_vmci/vmci_queue_pair.c | 12 
>  include/linux/vmw_vmci_defs.h   |  4 ++--
>  2 files changed, 10 insertions(+), 6 deletions(-)

Hi,

This is the friendly patch-bot of Greg Kroah-Hartman.  You have sent him
a patch that has triggered this response.  He used to manually respond
to these common problems, but in order to save his sanity (he kept
writing the same thing over and over, yet to different people), I was
created.  Hopefully you will not take offence and will fix the problem
in your patch and resubmit it so that it can be accepted into the Linux
kernel tree.

You are receiving this message because of the following common error(s)
as indicated below:

- You sent multiple patches, yet no indication of which ones should be
  applied in which order.  Greg could just guess, but if you are
  receiving this email, he guessed wrong and the patches didn't apply.
  Please read the section entitled "The canonical patch format" in the
  kernel file, Documentation/SubmittingPatches for a description of how
  to do this so that Greg has a chance to apply these correctly.


If you wish to discuss this problem further, or you have questions about
how to resolve this issue, please feel free to respond to this email and
Greg will reply once he has dug out from the pending patches received
from other developers.

thanks,

greg k-h's patch email bot


[PATCH] VMCI: Use set_page_dirty_lock() when unregistering guest memory

2021-01-11 Thread Jorgen Hansen
When the VMCI host support releases guest memory in the case where
the VM was killed, the pinned guest pages aren't locked. Use
set_page_dirty_lock() instead of set_page_dirty().

Testing done: Killed VM while having an active VMCI based vSocket
connection and observed warning from ext4. With this fix, no
warning was observed. Ran various vSocket tests without issues.

Fixes: 06164d2b72aa ("VMCI: queue pairs implementation.")
Reviewed-by: Vishnu Dasa 
Signed-off-by: Jorgen Hansen 
---
 drivers/misc/vmw_vmci/vmci_queue_pair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c 
b/drivers/misc/vmw_vmci/vmci_queue_pair.c
index a3691c1..525ef96 100644
--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -630,7 +630,7 @@ static void qp_release_pages(struct page **pages,
 
for (i = 0; i < num_pages; i++) {
if (dirty)
-   set_page_dirty(pages[i]);
+   set_page_dirty_lock(pages[i]);
 
put_page(pages[i]);
pages[i] = NULL;
-- 
2.6.2



[PATCH] VMCI: Stop log spew when qp allocation isn't possible

2021-01-11 Thread Jorgen Hansen
VMCI queue pair allocation is disabled if a VM is in FT mode. In
these cases, VMware Tools may still once in a while attempt to
create a vSocket stream connection, resulting in multiple
warnings in the kernel logs. Therefore downgrade the error log to
a debug log.

Reviewed-by: Vishnu Dasa 
Signed-off-by: Jorgen Hansen 
---
 drivers/misc/vmw_vmci/vmci_queue_pair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c 
b/drivers/misc/vmw_vmci/vmci_queue_pair.c
index c490658..a3691c1 100644
--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -1207,7 +1207,7 @@ static int qp_alloc_guest_work(struct vmci_handle *handle,
} else {
result = qp_alloc_hypercall(queue_pair_entry);
if (result < VMCI_SUCCESS) {
-   pr_warn("qp_alloc_hypercall result = %d\n", result);
+   pr_devel("qp_alloc_hypercall result = %d\n", result);
goto error;
}
}
-- 
2.6.2



[PATCH] VMCI: Enforce queuepair max size for IOCTL_VMCI_QUEUEPAIR_ALLOC

2021-01-11 Thread Jorgen Hansen
When creating the VMCI queue pair tracking data structures on the host
side, the IOCTL for creating the VMCI queue pair didn't validate
the queue pair size parameters. This change adds checks for this.

This avoids a memory allocation issue in qp_host_alloc_queue, as
reported by nslusa...@gmx.net. The check in qp_host_alloc_queue
has also been updated to enforce the maximum queue pair size
as defined by VMCI_MAX_GUEST_QP_MEMORY.

The fix has been verified using sample code supplied by
nslusa...@gmx.net.

Reported-by: nslusa...@gmx.net
Reviewed-by: Vishnu Dasa 
Signed-off-by: Jorgen Hansen 
---
 drivers/misc/vmw_vmci/vmci_queue_pair.c | 12 
 include/linux/vmw_vmci_defs.h   |  4 ++--
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c 
b/drivers/misc/vmw_vmci/vmci_queue_pair.c
index 525ef96..39d2f191 100644
--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -237,7 +237,9 @@ static struct qp_list qp_guest_endpoints = {
 #define QPE_NUM_PAGES(_QPE) ((u32) \
 (DIV_ROUND_UP(_QPE.produce_size, PAGE_SIZE) + \
  DIV_ROUND_UP(_QPE.consume_size, PAGE_SIZE) + 2))
-
+#define QP_SIZES_ARE_VALID(_prod_qsize, _cons_qsize) \
+   ((_prod_qsize) + (_cons_qsize) >= max(_prod_qsize, _cons_qsize) && \
+(_prod_qsize) + (_cons_qsize) <= VMCI_MAX_GUEST_QP_MEMORY)
 
 /*
  * Frees kernel VA space for a given queue and its queue header, and
@@ -528,7 +530,7 @@ static struct vmci_queue *qp_host_alloc_queue(u64 size)
u64 num_pages;
const size_t queue_size = sizeof(*queue) + sizeof(*(queue->kernel_if));
 
-   if (size > SIZE_MAX - PAGE_SIZE)
+   if (size > min(VMCI_MAX_GUEST_QP_MEMORY, SIZE_MAX - PAGE_SIZE))
return NULL;
num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
if (num_pages > (SIZE_MAX - queue_size) /
@@ -1929,6 +1931,9 @@ int vmci_qp_broker_alloc(struct vmci_handle handle,
 struct vmci_qp_page_store *page_store,
 struct vmci_ctx *context)
 {
+   if (!QP_SIZES_ARE_VALID(produce_size, consume_size))
+   return VMCI_ERROR_NO_RESOURCES;
+
return qp_broker_alloc(handle, peer, flags, priv_flags,
   produce_size, consume_size,
   page_store, context, NULL, NULL, NULL, NULL);
@@ -2685,8 +2690,7 @@ int vmci_qpair_alloc(struct vmci_qp **qpair,
 * used by the device is NO_RESOURCES, so use that here too.
 */
 
-   if (produce_qsize + consume_qsize < max(produce_qsize, consume_qsize) ||
-   produce_qsize + consume_qsize > VMCI_MAX_GUEST_QP_MEMORY)
+   if (!QP_SIZES_ARE_VALID(produce_qsize, consume_qsize))
return VMCI_ERROR_NO_RESOURCES;
 
retval = vmci_route(&src, &dst, false, &route);
diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
index be0afe6..e36cb11 100644
--- a/include/linux/vmw_vmci_defs.h
+++ b/include/linux/vmw_vmci_defs.h
@@ -66,7 +66,7 @@ enum {
  * consists of at least two pages, the memory limit also dictates the
  * number of queue pairs a guest can create.
  */
-#define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024)
+#define VMCI_MAX_GUEST_QP_MEMORY ((size_t)(128 * 1024 * 1024))
 #define VMCI_MAX_GUEST_QP_COUNT  (VMCI_MAX_GUEST_QP_MEMORY / PAGE_SIZE / 2)
 
 /*
@@ -80,7 +80,7 @@ enum {
  * too much kernel memory (especially on vmkernel).  We limit a queuepair to
  * 32 KB, or 16 KB per queue for symmetrical pairs.
  */
-#define VMCI_MAX_PINNED_QP_MEMORY (32 * 1024)
+#define VMCI_MAX_PINNED_QP_MEMORY ((size_t)(32 * 1024))
 
 /*
  * We have a fixed set of resource IDs available in the VMX.
-- 
2.6.2

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [patch V3 13/37] mips/mm/highmem: Switch to generic kmap atomic

2021-01-11 Thread Sebastian Andrzej Siewior
On 2021-01-09 01:33:52 [+0100], Thomas Bogendoerfer wrote:
> On Sat, Jan 09, 2021 at 12:58:05AM +0100, Thomas Bogendoerfer wrote:
> > On Fri, Jan 08, 2021 at 08:20:43PM +, Paul Cercueil wrote:
> > > Hi Thomas,
> > > 
> > > 5.11 does not boot anymore on Ingenic SoCs, I bisected it to this commit.
> > > 
> > > Any idea what could be happening?
> > 
> > not yet, kernel crash log of a Malta QEMU is below.
> 
> update:
> 
> This dirty hack lets the Malta QEMU boot again:
> 
> diff --git a/mm/highmem.c b/mm/highmem.c
> index c3a9ea7875ef..190cdda1149d 100644
> --- a/mm/highmem.c
> +++ b/mm/highmem.c
> @@ -515,7 +515,7 @@ void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t 
> prot)
>   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
>   BUG_ON(!pte_none(*(kmap_pte - idx)));
>   pteval = pfn_pte(pfn, prot);
> - set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval);
> + set_pte(kmap_pte - idx, pteval);
>   arch_kmap_local_post_map(vaddr, pteval);
>   current->kmap_ctrl.pteval[kmap_local_idx()] = pteval;
>   preempt_enable();
> 
> set_pte_at() tries to update cache and could do a kmap_atomic() there.
So the old implementation used set_pte() while the new one uses
set_pte_at().

> Not sure, if this is allowed at this point.
The problem is the recursion
  kmap_atomic() -> __update_cache() -> kmap_atomic()

and kmap_local_idx_push() runs out of index space before stack space.

I'm not sure if __update_cache() worked for highmem. It has been
added for that in commit
   f4281bba81810 ("MIPS: Handle highmem pages in __update_cache")

but it assumes that the address returned by kmap_atomic() is the same or
related enough for flush_data_cache_page() to work.

> Thomas.
> 

Sebastian