Re: [PATCH 20/20] drm: Remove obsolete GEM and PRIME callbacks from struct drm_driver

2020-08-13 Thread Thomas Zimmermann
Hi

On 13.08.20 at 12:16, Sam Ravnborg wrote:
> Hi Thomas.
> 
> On Thu, Aug 13, 2020 at 10:36:44AM +0200, Thomas Zimmermann wrote:
>> Several GEM and PRIME callbacks have been deprecated in favor of
>> per-instance GEM object functions. Remove the callbacks as they are
>> now unused. The only exception is .gem_prime_mmap, which is still
>> in use by several drivers.
>>
>> What is also gone is gem_vm_ops in struct drm_driver. All drivers now
>> use struct drm_gem_object_funcs.vm_ops instead.
>>
>> While at it, the patch also improves error handling around calls
>> to .free and .get_sg_table callbacks.
>>
>> Signed-off-by: Thomas Zimmermann 
> 
> After this, is the following entry in todo.rst done?
> 
> "
> struct drm_gem_object_funcs
> ---
> 
> GEM objects can now have a function table instead of having the callbacks on the
> DRM driver struct. This is now the preferred way and drivers can be moved over.

After this patch, the entry should only say that .gem_prime_mmap() is still
left in struct drm_driver and needs to be converted.
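
For reference, the end state looks roughly like this. A minimal sketch with
made-up foo_* identifiers (not taken from any real driver): the per-object
callbacks live in struct drm_gem_object_funcs, and .gem_prime_mmap is the
only GEM-related callback still set in struct drm_driver.

#include <linux/slab.h>

#include <drm/drm_drv.h>
#include <drm/drm_gem.h>
#include <drm/drm_prime.h>

/* Frees the GEM object; replaces drm_driver.gem_free_object_unlocked. */
static void foo_gem_object_free(struct drm_gem_object *obj)
{
	drm_gem_object_release(obj);
	kfree(obj); /* assumes the object itself was kmalloc()'ed */
}

static const struct vm_operations_struct foo_gem_vm_ops = {
	.open  = drm_gem_vm_open,
	.close = drm_gem_vm_close,
};

/* Per-instance function table; replaces the callbacks in struct drm_driver. */
static const struct drm_gem_object_funcs foo_gem_object_funcs = {
	.free   = foo_gem_object_free,
	.vm_ops = &foo_gem_vm_ops, /* was drm_driver.gem_vm_ops */
};

static int foo_gem_object_init(struct drm_device *dev,
			       struct drm_gem_object *obj, size_t size)
{
	obj->funcs = &foo_gem_object_funcs; /* per-instance function table */
	return drm_gem_object_init(dev, obj, size);
}

static struct drm_driver foo_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET,
	/* The only PRIME callback that remains in struct drm_driver. */
	.gem_prime_mmap  = drm_gem_prime_mmap,
};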

> 
> We also need a 2nd version of the CMA define that doesn't require the
> vmapping to be present (different hook for prime importing). Plus this needs to
> be rolled out to all drivers using their own implementations, too.
> "

This sounds like it was addressed by the recent CMA clean-up series: there are
now CMA initializer macros for drivers with and without a vmap. I have to take
a closer look, but I think this paragraph can be removed as well.
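
If it helps, this is roughly how a CMA-based driver would pick one of the two
initializers. A minimal sketch; the macro names DRM_GEM_CMA_DRIVER_OPS and
DRM_GEM_CMA_DRIVER_OPS_VMAP are quoted from memory and should be checked
against the clean-up series, and the foo_* identifiers are made up.

#include <drm/drm_drv.h>
#include <drm/drm_gem_cma_helper.h>

static struct drm_driver foo_cma_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
	/*
	 * Does not vmap imported PRIME buffers. Drivers that need kernel
	 * access to the buffer memory would use DRM_GEM_CMA_DRIVER_OPS_VMAP
	 * instead, which sets up the virtual mapping at import time.
	 */
	DRM_GEM_CMA_DRIVER_OPS,
	.name = "foo-cma",
	.desc = "hypothetical CMA-based driver",
};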

Thanks for bringing this up.

Best regards
Thomas

> 
> If yes, then delete it too.
> 
>   Sam
> 
>> ---
>>  drivers/gpu/drm/drm_gem.c| 35 +++-
>>  drivers/gpu/drm/drm_gem_cma_helper.c |  6 +-
>>  drivers/gpu/drm/drm_prime.c  | 17 +++---
>>  include/drm/drm_drv.h| 85 ++--
>>  4 files changed, 23 insertions(+), 120 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>> index 19d73868490e..96945bed8291 100644
>> --- a/drivers/gpu/drm/drm_gem.c
>> +++ b/drivers/gpu/drm/drm_gem.c
>> @@ -247,12 +247,9 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
>>  {
>>  struct drm_file *file_priv = data;
>>  struct drm_gem_object *obj = ptr;
>> -struct drm_device *dev = obj->dev;
>>  
>>  if (obj->funcs && obj->funcs->close)
>>  obj->funcs->close(obj, file_priv);
>> -else if (dev->driver->gem_close_object)
>> -dev->driver->gem_close_object(obj, file_priv);
>>  
>>  drm_gem_remove_prime_handles(obj, file_priv);
>>  drm_vma_node_revoke(&obj->vma_node, file_priv);
>> @@ -407,10 +404,6 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
>>  ret = obj->funcs->open(obj, file_priv);
>>  if (ret)
>>  goto err_revoke;
>> -} else if (dev->driver->gem_open_object) {
>> -ret = dev->driver->gem_open_object(obj, file_priv);
>> -if (ret)
>> -goto err_revoke;
>>  }
>>  
>>  *handlep = handle;
>> @@ -982,12 +975,11 @@ drm_gem_object_free(struct kref *kref)
>>  {
>>  struct drm_gem_object *obj =
>>  container_of(kref, struct drm_gem_object, refcount);
>> -struct drm_device *dev = obj->dev;
>>  
>> -if (obj->funcs)
>> -obj->funcs->free(obj);
>> -else if (dev->driver->gem_free_object_unlocked)
>> -dev->driver->gem_free_object_unlocked(obj);
>> +if (drm_WARN_ON_ONCE(obj->dev, !obj->funcs || !obj->funcs->free))
>> +return;
>> +
>> +obj->funcs->free(obj);
>>  }
>>  EXPORT_SYMBOL(drm_gem_object_free);
>>  
>> @@ -1049,9 +1041,9 @@ EXPORT_SYMBOL(drm_gem_vm_close);
>>   * @obj_size: the object size to be mapped, in bytes
>>   * @vma: VMA for the area to be mapped
>>   *
>> - * Set up the VMA to prepare mapping of the GEM object using the gem_vm_ops
>> - * provided by the driver. Depending on their requirements, drivers can either
>> - * provide a fault handler in their gem_vm_ops (in which case any accesses to
>> + * Set up the VMA to prepare mapping of the GEM object using the GEM object's
>> + * vm_ops. Depending on their requirements, GEM objects can either
>> + * provide a fault handler in their vm_ops (in which case any accesses to
>>   * the object will be trapped, to perform migration, GTT binding, surface
>>   * register allocation, or performance monitoring), or mmap the buffer memory
>>   * synchronously after calling drm_gem_mmap_obj.
>> @@ -1065,12 +1057,11 @@ EXPORT_SYMBOL(drm_gem_vm_close);
>>   * callers must verify access restrictions before calling this helper.
>>   *
>>   * Return 0 or success or -EINVAL if the object size is smaller than the VMA
>> - * size, or if no gem_vm_ops are provided.
>> + * size, or if no vm_ops are provided.
>>   */
>>  int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
>>   struct vm_area_struct *vma)
>>  {
>> -struct drm_device *dev = obj->dev;
>>  int ret;
>>  

Re: [PATCH 20/20] drm: Remove obsolete GEM and PRIME callbacks from struct drm_driver

2020-08-13 Thread Sam Ravnborg
Hi Thomas.

On Thu, Aug 13, 2020 at 10:36:44AM +0200, Thomas Zimmermann wrote:
> Several GEM and PRIME callbacks have been deprecated in favor of
> per-instance GEM object functions. Remove the callbacks as they are
> now unused. The only exception is .gem_prime_mmap, which is still
> in use by several drivers.
> 
> What is also gone is gem_vm_ops in struct drm_driver. All drivers now
> use struct drm_gem_object_funcs.vm_ops instead.
> 
> While at it, the patch also improves error handling around calls
> to .free and .get_sg_table callbacks.
> 
> Signed-off-by: Thomas Zimmermann 

After this, is the following entry in todo.rst done?

"
struct drm_gem_object_funcs
---

GEM objects can now have a function table instead of having the callbacks on the
DRM driver struct. This is now the preferred way and drivers can be moved over.

We also need a 2nd version of the CMA define that doesn't require the
vmapping to be present (different hook for prime importing). Plus this needs to
be rolled out to all drivers using their own implementations, too.
"

If yes, then delete it too.

Sam

> ---
>  drivers/gpu/drm/drm_gem.c| 35 +++-
>  drivers/gpu/drm/drm_gem_cma_helper.c |  6 +-
>  drivers/gpu/drm/drm_prime.c  | 17 +++---
>  include/drm/drm_drv.h| 85 ++--
>  4 files changed, 23 insertions(+), 120 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 19d73868490e..96945bed8291 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -247,12 +247,9 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
>  {
>   struct drm_file *file_priv = data;
>   struct drm_gem_object *obj = ptr;
> - struct drm_device *dev = obj->dev;
>  
>   if (obj->funcs && obj->funcs->close)
>   obj->funcs->close(obj, file_priv);
> - else if (dev->driver->gem_close_object)
> - dev->driver->gem_close_object(obj, file_priv);
>  
>   drm_gem_remove_prime_handles(obj, file_priv);
>   drm_vma_node_revoke(&obj->vma_node, file_priv);
> @@ -407,10 +404,6 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
>   ret = obj->funcs->open(obj, file_priv);
>   if (ret)
>   goto err_revoke;
> - } else if (dev->driver->gem_open_object) {
> - ret = dev->driver->gem_open_object(obj, file_priv);
> - if (ret)
> - goto err_revoke;
>   }
>  
>   *handlep = handle;
> @@ -982,12 +975,11 @@ drm_gem_object_free(struct kref *kref)
>  {
>   struct drm_gem_object *obj =
>   container_of(kref, struct drm_gem_object, refcount);
> - struct drm_device *dev = obj->dev;
>  
> - if (obj->funcs)
> - obj->funcs->free(obj);
> - else if (dev->driver->gem_free_object_unlocked)
> - dev->driver->gem_free_object_unlocked(obj);
> + if (drm_WARN_ON_ONCE(obj->dev, !obj->funcs || !obj->funcs->free))
> + return;
> +
> + obj->funcs->free(obj);
>  }
>  EXPORT_SYMBOL(drm_gem_object_free);
>  
> @@ -1049,9 +1041,9 @@ EXPORT_SYMBOL(drm_gem_vm_close);
>   * @obj_size: the object size to be mapped, in bytes
>   * @vma: VMA for the area to be mapped
>   *
> - * Set up the VMA to prepare mapping of the GEM object using the gem_vm_ops
> - * provided by the driver. Depending on their requirements, drivers can either
> - * provide a fault handler in their gem_vm_ops (in which case any accesses to
> + * Set up the VMA to prepare mapping of the GEM object using the GEM object's
> + * vm_ops. Depending on their requirements, GEM objects can either
> + * provide a fault handler in their vm_ops (in which case any accesses to
>   * the object will be trapped, to perform migration, GTT binding, surface
>   * register allocation, or performance monitoring), or mmap the buffer memory
>   * synchronously after calling drm_gem_mmap_obj.
> @@ -1065,12 +1057,11 @@ EXPORT_SYMBOL(drm_gem_vm_close);
>   * callers must verify access restrictions before calling this helper.
>   *
>   * Return 0 or success or -EINVAL if the object size is smaller than the VMA
> - * size, or if no gem_vm_ops are provided.
> + * size, or if no vm_ops are provided.
>   */
>  int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
>struct vm_area_struct *vma)
>  {
> - struct drm_device *dev = obj->dev;
>   int ret;
>  
>   /* Check for valid size. */
> @@ -1095,8 +1086,6 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
>   } else {
>   if (obj->funcs && obj->funcs->vm_ops)
>   vma->vm_ops = obj->funcs->vm_ops;
> - else if (dev->driver->gem_vm_ops)
> - vma->vm_ops = dev->driver->gem_vm_ops;
>   else {
>   drm_gem_object_put(obj);
>   return -EINVAL;
> @@ -1206,8 

[PATCH 20/20] drm: Remove obsolete GEM and PRIME callbacks from struct drm_driver

2020-08-13 Thread Thomas Zimmermann
Several GEM and PRIME callbacks have been deprecated in favor of
per-instance GEM object functions. Remove the callbacks as they are
now unused. The only exception is .gem_prime_mmap, which is still
in use by several drivers.

What is also gone is gem_vm_ops in struct drm_driver. All drivers now
use struct drm_gem_object_funcs.vm_ops instead.

While at it, the patch also improves error handling around calls
to .free and .get_sg_table callbacks.

Signed-off-by: Thomas Zimmermann 
---
 drivers/gpu/drm/drm_gem.c| 35 +++-
 drivers/gpu/drm/drm_gem_cma_helper.c |  6 +-
 drivers/gpu/drm/drm_prime.c  | 17 +++---
 include/drm/drm_drv.h| 85 ++--
 4 files changed, 23 insertions(+), 120 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 19d73868490e..96945bed8291 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -247,12 +247,9 @@ drm_gem_object_release_handle(int id, void *ptr, void *data)
 {
struct drm_file *file_priv = data;
struct drm_gem_object *obj = ptr;
-   struct drm_device *dev = obj->dev;
 
if (obj->funcs && obj->funcs->close)
obj->funcs->close(obj, file_priv);
-   else if (dev->driver->gem_close_object)
-   dev->driver->gem_close_object(obj, file_priv);
 
drm_gem_remove_prime_handles(obj, file_priv);
	drm_vma_node_revoke(&obj->vma_node, file_priv);
@@ -407,10 +404,6 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
ret = obj->funcs->open(obj, file_priv);
if (ret)
goto err_revoke;
-   } else if (dev->driver->gem_open_object) {
-   ret = dev->driver->gem_open_object(obj, file_priv);
-   if (ret)
-   goto err_revoke;
}
 
*handlep = handle;
@@ -982,12 +975,11 @@ drm_gem_object_free(struct kref *kref)
 {
struct drm_gem_object *obj =
container_of(kref, struct drm_gem_object, refcount);
-   struct drm_device *dev = obj->dev;
 
-   if (obj->funcs)
-   obj->funcs->free(obj);
-   else if (dev->driver->gem_free_object_unlocked)
-   dev->driver->gem_free_object_unlocked(obj);
+   if (drm_WARN_ON_ONCE(obj->dev, !obj->funcs || !obj->funcs->free))
+   return;
+
+   obj->funcs->free(obj);
 }
 EXPORT_SYMBOL(drm_gem_object_free);
 
@@ -1049,9 +1041,9 @@ EXPORT_SYMBOL(drm_gem_vm_close);
  * @obj_size: the object size to be mapped, in bytes
  * @vma: VMA for the area to be mapped
  *
- * Set up the VMA to prepare mapping of the GEM object using the gem_vm_ops
- * provided by the driver. Depending on their requirements, drivers can either
- * provide a fault handler in their gem_vm_ops (in which case any accesses to
+ * Set up the VMA to prepare mapping of the GEM object using the GEM object's
+ * vm_ops. Depending on their requirements, GEM objects can either
+ * provide a fault handler in their vm_ops (in which case any accesses to
  * the object will be trapped, to perform migration, GTT binding, surface
  * register allocation, or performance monitoring), or mmap the buffer memory
  * synchronously after calling drm_gem_mmap_obj.
@@ -1065,12 +1057,11 @@ EXPORT_SYMBOL(drm_gem_vm_close);
  * callers must verify access restrictions before calling this helper.
  *
  * Return 0 or success or -EINVAL if the object size is smaller than the VMA
- * size, or if no gem_vm_ops are provided.
+ * size, or if no vm_ops are provided.
  */
 int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
 struct vm_area_struct *vma)
 {
-   struct drm_device *dev = obj->dev;
int ret;
 
/* Check for valid size. */
@@ -1095,8 +1086,6 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
} else {
if (obj->funcs && obj->funcs->vm_ops)
vma->vm_ops = obj->funcs->vm_ops;
-   else if (dev->driver->gem_vm_ops)
-   vma->vm_ops = dev->driver->gem_vm_ops;
else {
drm_gem_object_put(obj);
return -EINVAL;
@@ -1206,8 +1195,6 @@ int drm_gem_pin(struct drm_gem_object *obj)
 {
if (obj->funcs && obj->funcs->pin)
return obj->funcs->pin(obj);
-   else if (obj->dev->driver->gem_prime_pin)
-   return obj->dev->driver->gem_prime_pin(obj);
else
return 0;
 }
@@ -1216,8 +1203,6 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 {
if (obj->funcs && obj->funcs->unpin)
obj->funcs->unpin(obj);
-   else if (obj->dev->driver->gem_prime_unpin)
-   obj->dev->driver->gem_prime_unpin(obj);
 }
 
 void *drm_gem_vmap(struct drm_gem_object *obj)
@@ -1226,8 +1211,6 @@ void *drm_gem_vmap(struct drm_gem_object *obj)
 
if (obj->funcs && obj->funcs->vmap)