[PATCH v2 2/2] drm/shmem-helper: Remove drm_gem_shmem_create_object_cached()

2020-11-06 Thread Thomas Zimmermann
Cached page mappings are now the default for SHMEM GEM objects. Remove
the obsolete create function for cached mappings.
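
With cached mappings as the default, a driver that wants them no longer
needs a dedicated create callback at all. A minimal sketch of what such a
driver declaration looks like (illustrative only; the foo_* names are
placeholders, not code from this series):

#include <drm/drm_drv.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

DEFINE_DRM_GEM_FOPS(foo_driver_fops);

static struct drm_driver foo_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
	/* no .gem_create_object needed just to get cached mappings */
	.fops = &foo_driver_fops,
	DRM_GEM_SHMEM_DRIVER_OPS,
};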

Signed-off-by: Thomas Zimmermann 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 26 --
 drivers/gpu/drm/mgag200/mgag200_drv.c  |  1 -
 drivers/gpu/drm/udl/udl_drv.c  |  2 --
 drivers/gpu/drm/vkms/vkms_drv.c|  1 -
 include/drm/drm_gem_shmem_helper.h |  3 ---
 5 files changed, 33 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index ddec0e190f29..5b4240725e64 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -468,32 +468,6 @@ bool drm_gem_shmem_purge(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_purge);
 
-/**
- * drm_gem_shmem_create_object_cached - Create a shmem buffer object with
- *  cached mappings
- * @dev: DRM device
- * @size: Size of the object to allocate
- *
- * By default, shmem buffer objects use writecombine mappings. This
- * function implements struct drm_driver.gem_create_object for shmem
- * buffer objects with cached mappings.
- *
- * Returns:
- * A struct drm_gem_shmem_object * on success or NULL negative on failure.
- */
-struct drm_gem_object *
-drm_gem_shmem_create_object_cached(struct drm_device *dev, size_t size)
-{
-   struct drm_gem_shmem_object *shmem;
-
-   shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
-   if (!shmem)
-   return NULL;
-
-   return &shmem->base;
-}
-EXPORT_SYMBOL(drm_gem_shmem_create_object_cached);
-
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
  * @file: DRM file structure to create the dumb buffer for
diff --git a/drivers/gpu/drm/mgag200/mgag200_drv.c b/drivers/gpu/drm/mgag200/mgag200_drv.c
index 771b26aeee19..aaf50b6a7372 100644
--- a/drivers/gpu/drm/mgag200/mgag200_drv.c
+++ b/drivers/gpu/drm/mgag200/mgag200_drv.c
@@ -37,7 +37,6 @@ static struct drm_driver mgag200_driver = {
.major = DRIVER_MAJOR,
.minor = DRIVER_MINOR,
.patchlevel = DRIVER_PATCHLEVEL,
-   .gem_create_object = drm_gem_shmem_create_object_cached,
DRM_GEM_SHMEM_DRIVER_OPS,
 };
 
diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
index 96d4317a2c1b..a29147799575 100644
--- a/drivers/gpu/drm/udl/udl_drv.c
+++ b/drivers/gpu/drm/udl/udl_drv.c
@@ -38,8 +38,6 @@ static struct drm_driver driver = {
.driver_features = DRIVER_ATOMIC | DRIVER_GEM | DRIVER_MODESET,
 
/* GEM hooks */
-   .gem_create_object = drm_gem_shmem_create_object_cached,
-
	.fops = &udl_driver_fops,
DRM_GEM_SHMEM_DRIVER_OPS,
 
diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
index 25faba5aac08..b6cd7f2d941c 100644
--- a/drivers/gpu/drm/vkms/vkms_drv.c
+++ b/drivers/gpu/drm/vkms/vkms_drv.c
@@ -82,7 +82,6 @@ static struct drm_driver vkms_driver = {
.driver_features= DRIVER_MODESET | DRIVER_ATOMIC | DRIVER_GEM,
.release= vkms_release,
	.fops			= &vkms_driver_fops,
-   .gem_create_object = drm_gem_shmem_create_object_cached,
DRM_GEM_SHMEM_DRIVER_OPS,
 
.name   = DRIVER_NAME,
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 268d0284ae02..6bcd2c4d6a51 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -133,9 +133,6 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
 struct drm_device *dev, size_t size,
 uint32_t *handle);
 
-struct drm_gem_object *
-drm_gem_shmem_create_object_cached(struct drm_device *dev, size_t size);
-
 int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
  struct drm_mode_create_dumb *args);
 
-- 
2.29.0



[PATCH v2 1/2] drm/shmem-helper: Use cached mappings by default

2020-11-06 Thread Thomas Zimmermann
SHMEM-buffer backing storage is allocated from system memory, which is
typically cacheable. The default mapping mode for SHMEM objects is
writecombine, though.

Unify SHMEM semantics by defaulting to cached mappings. The exception
is pages imported via dma-buf, since DMA memory is usually not cached.

DRM drivers that require write-combined mappings set the map_wc flag
in struct drm_gem_shmem_object to true. This currently affects lima,
panfrost and v3d.
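
For reference, a minimal sketch of such an opt-in from a driver's
gem_create_object callback (illustrative only; the foo_* names are
placeholders, not code from this series):

#include <drm/drm_gem_shmem_helper.h>
#include <linux/slab.h>

/* Hypothetical drm_driver.gem_create_object implementation for a
 * driver that still needs write-combined mappings.
 */
static struct drm_gem_object *
foo_gem_create_object(struct drm_device *dev, size_t size)
{
	struct drm_gem_shmem_object *shmem;

	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
	if (!shmem)
		return NULL;

	shmem->map_wc = true;	/* opt out of the cached default */

	return &shmem->base;
}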

The drivers mgag200, udl, virtio and vkms continue to use default
shmem mappings.

The drivers cirrus and gm12u320 change their caching flags. Both used
writecombine and now switch over to the shmem defaults. Both drivers use
SHMEM objects as shadow buffers for internal video memory, so cached
mappings will not affect them negatively.

v2:
* recreate patch on top of latest SHMEM helpers
* update lima, panfrost, v3d to select writecombine (Daniel, Rob)

Signed-off-by: Thomas Zimmermann 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 11 ++-
 drivers/gpu/drm/lima/lima_gem.c |  2 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c |  2 +-
 drivers/gpu/drm/v3d/v3d_bo.c|  2 +-
 drivers/gpu/drm/virtio/virtgpu_object.c |  1 -
 include/drm/drm_gem_shmem_helper.h  |  4 ++--
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8233bda4692f..ddec0e190f29 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -54,10 +54,12 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
if (!obj->funcs)
		obj->funcs = &drm_gem_shmem_funcs;
 
-   if (private)
+   if (private) {
drm_gem_private_object_init(dev, obj, size);
-   else
+   shmem->map_wc = false; /* dma-buf mappings use always writecombine */
+   } else {
ret = drm_gem_object_init(dev, obj, size);
+   }
if (ret)
goto err_free;
 
@@ -278,7 +280,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
if (ret)
goto err_zero_use;
 
-   if (!shmem->map_cached)
+   if (shmem->map_wc)
prot = pgprot_writecombine(prot);
shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
VM_MAP, prot);
@@ -487,7 +489,6 @@ drm_gem_shmem_create_object_cached(struct drm_device *dev, size_t size)
shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
if (!shmem)
return NULL;
-   shmem->map_cached = true;
 
	return &shmem->base;
 }
@@ -616,7 +617,7 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 
vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
-   if (!shmem->map_cached)
+   if (shmem->map_wc)
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
	vma->vm_ops = &drm_gem_shmem_vm_ops;
 
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..bbab1413eb0c 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -225,7 +225,7 @@ struct drm_gem_object *lima_gem_create_object(struct drm_device *dev, size_t siz
 
	mutex_init(&bo->lock);
	INIT_LIST_HEAD(&bo->va);
-
+   bo->base.map_wc = true;
	bo->base.base.funcs = &lima_gem_funcs;
 
	return &bo->base.base;
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index fb9f7334ce18..f77b72d995f9 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -230,7 +230,7 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
	INIT_LIST_HEAD(&obj->mappings.list);
	mutex_init(&obj->mappings.lock);
	obj->base.base.funcs = &panfrost_gem_funcs;
-   obj->base.map_cached = pfdev->coherent;
+   obj->base.map_wc = !pfdev->coherent;
 
	return &obj->base.base;
 }
diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index 8b52cb25877c..6a8731ab9d7d 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -78,7 +78,7 @@ struct drm_gem_object *v3d_create_object(struct drm_device *dev, size_t size)
	obj = &bo->base.base;
 
	obj->funcs = &v3d_gem_funcs;
-
+   bo->base.map_wc = true;
	INIT_LIST_HEAD(&bo->unref_head);
 
	return &bo->base.base;
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 2d3aa7baffe4..47e3b69c3927 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -144,7 +144,6 @@ struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
 
	dshmem = &shmem->base.base;
	dshmem->base.funcs = &virtio_gpu_shmem_funcs;
-   

[PATCH v2 0/2] Default to cachable mappings for GEM SHMEM

2020-11-06 Thread Thomas Zimmermann
Shmem-allocated memory uses the system's default caching flags, which
usually means cached mappings.

By default, the SHMEM GEM helpers map pages using writecombine. Only a
few drivers require this setting; others revert it to the default
mapping flags, and some could benefit from caching but don't care.

Unify the behaviour by switching the SHMEM GEM code to cached mappings
(i.e., PAGE_KERNEL), just like regular shmem memory. The three drivers
that require write combining now select it explicitly during GEM object
creation.

The exception is dma-buf imported pages, which are always mapped
using writecombine mode.

v2:
* recreate patches on top of latest SHMEM helpers
* update lima, panfrost, v3d (Daniel, Rob)
* udl was updated separately beforehand.

Thomas Zimmermann (2):
  drm/shmem-helper: Use cached mappings by default
  drm/shmem-helper: Remove drm_gem_shmem_create_object_cached()

 drivers/gpu/drm/drm_gem_shmem_helper.c  | 37 -
 drivers/gpu/drm/lima/lima_gem.c |  2 +-
 drivers/gpu/drm/mgag200/mgag200_drv.c   |  1 -
 drivers/gpu/drm/panfrost/panfrost_gem.c |  2 +-
 drivers/gpu/drm/udl/udl_drv.c   |  2 --
 drivers/gpu/drm/v3d/v3d_bo.c|  2 +-
 drivers/gpu/drm/virtio/virtgpu_object.c |  1 -
 drivers/gpu/drm/vkms/vkms_drv.c |  1 -
 include/drm/drm_gem_shmem_helper.h  |  7 ++---
 9 files changed, 11 insertions(+), 44 deletions(-)

--
2.29.0



Re: [PATCH] drm/virtio: Fix a double free in virtio_gpu_cmd_map()

2020-11-06 Thread Gerd Hoffmann
On Fri, Oct 30, 2020 at 02:48:08PM +0300, Dan Carpenter wrote:
> This is freed both here and in the caller (virtio_gpu_vram_map()), so
> it's a double free.  The correct place to free it is only in the caller.
> 
> Fixes: 16845c5d5409 ("drm/virtio: implement blob resources: implement vram object")
> Signed-off-by: Dan Carpenter 
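
For context, the bug class in question looks roughly like this (a generic
sketch only, not the actual virtio-gpu code; all names are placeholders):

#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/types.h>

struct foo_obj {
	int payload;
};

static int foo_helper(struct foo_obj *obj, bool fail)
{
	if (fail) {
		kfree(obj);	/* BUG: the caller frees obj as well */
		return -EINVAL;
	}
	return 0;
}

static int foo_caller(bool fail)
{
	struct foo_obj *obj;
	int ret;

	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj)
		return -ENOMEM;

	ret = foo_helper(obj, fail);
	if (ret) {
		kfree(obj);	/* the error path should free only here */
		return ret;
	}

	kfree(obj);
	return 0;
}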

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH] drm/qxl: replace idr_init() by idr_init_base()

2020-11-06 Thread Gerd Hoffmann
On Fri, Nov 06, 2020 at 12:20:16AM +0530, Deepak R Varma wrote:
> idr_init() uses base 0 which is an invalid identifier for this driver.
> The idr_alloc for this driver uses 1 as start value for ID range. The
> new function idr_init_base allows IDR to set the ID lookup from base 1.
> This avoids all lookups that otherwise starts from 0 since 0 is always
> unused / available.
> 
> References: commit 6ce711f27500 ("idr: Make 1-based IDRs more efficient")
> 
> Signed-off-by: Deepak R Varma 
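
For context, a minimal sketch of the difference described above
(illustrative only, not the qxl code; all names are placeholders):

#include <linux/idr.h>
#include <linux/gfp.h>

static struct idr foo_idr;

static void foo_idr_setup(void)
{
	/* Before: idr_init(&foo_idr); -- lookups would also scan slot 0. */
	idr_init_base(&foo_idr, 1);	/* IDs start at 1; 0 is never used */
}

static int foo_alloc_id(void *ptr)
{
	/* Matches the base above: allocate IDs from 1 upwards. */
	return idr_alloc(&foo_idr, ptr, 1, 0, GFP_KERNEL);
}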

Pushed to drm-misc-next.

thanks,
  Gerd
