All drivers will be moved to get/put pages explicitly, and some drivers
will then invoke the final put_pages() at gem_free() time.
We can't take the reservation lock when a GEM object is freed because
that will cause a spurious lockdep warning once shrinker support is
added. Lockdep doesn't know that fs_reclaim can't be active for a freed
object, and thus no deadlock is possible. Release the pages directly,
without taking the reservation lock, if the GEM object is freed and its
refcount is zero.

Signed-off-by: Dmitry Osipenko <dmitry.osipe...@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f5ed64f78648..c7357110ca76 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -242,6 +242,22 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
        if (refcount_dec_not_one(&shmem->pages_use_count))
                return;
 
+       /*
+        * Destroying the object is a special case because acquiring
+        * the obj lock can cause a locking order inversion between
+        * reservation_ww_class_mutex and fs_reclaim.
+        *
+        * This deadlock is not actually possible, because no one should
+        * already be holding the lock when the GEM object is released.
+        * Unfortunately lockdep is not aware of this detail, so when the
+        * object refcount drops to zero, we release the pages without
+        * taking the lock.
+        */
+       if (!kref_read(&shmem->base.refcount)) {
+               if (refcount_dec_and_test(&shmem->pages_use_count))
+                       drm_gem_shmem_free_pages(shmem);
+               return;
+       }
+
        dma_resv_lock(shmem->base.resv, NULL);
        drm_gem_shmem_put_pages_locked(shmem);
        dma_resv_unlock(shmem->base.resv);
-- 
2.43.0
