From: Nicolai Hähnle <nicolai.haeh...@amd.com>

[Backport of upstream commit f6ff4f67cdf8455d0a4226eeeaf5af17c37d05eb, with
 an additional NULL pointer guard that is required for kernels 3.17 and older.

 To be precise, any kernel that does *not* have commit 954605ca3 "drm/radeon:
 use common fence implementation for fences, v4" requires this additional
 NULL pointer guard.]
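
For reference, a sketch of what radeon_fence_ref() looks like on those
older kernels (reconstructed from memory, so check your tree): it
dereferences its argument unconditionally, which is why a NULL entry in
fences[] must be skipped before taking the reference.

	/* pre-954605ca3 sketch: no NULL check, so radeon_fence_ref(NULL)
	 * would oops on the kref_get() below */
	struct radeon_fence *radeon_fence_ref(struct radeon_fence *fence)
	{
		kref_get(&fence->kref);
		return fence;
	}

radeon_fence_unref() already tolerates a NULL fence internally, so the
matching unref loop in the patch needs no such guard.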

An arbitrary amount of time can pass between spin_unlock and
radeon_fence_wait_any, so we need to ensure that nobody frees the
fences from under us.

Based on the analogous fix for amdgpu.
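
To illustrate the window being closed (annotated excerpt of the code
below; the comments are mine):

	spin_unlock(&sa_manager->wq.lock);
	/* The lock is dropped here.  Another context can now signal the
	 * fences and drop the last reference, freeing them before or
	 * during the wait, unless we hold references of our own. */
	r = radeon_fence_wait_any(rdev, fences, false);
	spin_lock(&sa_manager->wq.lock);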

Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com> (v1 + fix)
Tested-by: Lutz Euler <lutz.euler@freenet.de>
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/radeon/radeon_sa.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/radeon/radeon_sa.c b/drivers/gpu/drm/radeon/radeon_sa.c
index c507896..7d11901 100644
--- a/drivers/gpu/drm/radeon/radeon_sa.c
+++ b/drivers/gpu/drm/radeon/radeon_sa.c
@@ -349,8 +349,15 @@ int radeon_sa_bo_new(struct radeon_device *rdev,
                        /* see if we can skip over some allocations */
                } while (radeon_sa_bo_next_hole(sa_manager, fences, tries));

+               for (i = 0; i < RADEON_NUM_RINGS; ++i) {
+                       if (fences[i])
+                               radeon_fence_ref(fences[i]);
+               }
+
                spin_unlock(&sa_manager->wq.lock);
                r = radeon_fence_wait_any(rdev, fences, false);
+               for (i = 0; i < RADEON_NUM_RINGS; ++i)
+                       radeon_fence_unref(&fences[i]);
                spin_lock(&sa_manager->wq.lock);
                /* if we have nothing to wait for block */
                if (r == -ENOENT) {
-- 
2.5.0
