When checking whether to skip certain buffers because they're protected
by dmem.low, we're checking the effective protection of the evictee's
cgroup, but depending on how the evictor's cgroup relates to the
evictee's, the semantics of effective protection values change.

When testing against cgroups from different subtrees, page_counter's
recursive protection propagates memory protection afforded to a parent
down to the child cgroups, even if the children were not explicitly
protected. This prevents cgroups whose parents were afforded no
protection from stealing memory from cgroups whose parents were afforded
more protection, without users having to explicitly propagate this
protection.

However, if we always calculate protection from the root cgroup, this
breaks prioritization of sibling cgroups: If one cgroup was explicitly
protected and its siblings were not, the protected cgroup should get
higher priority, i.e. the protected cgroup should be able to steal from
unprotected siblings. This only works if we restrict the protection
calculation to the subtree shared by evictor and evictee.

Signed-off-by: Natalie Vock <[email protected]>
---
 drivers/gpu/drm/ttm/ttm_bo.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index d20ff41411c08cd97b4467f603751f483d1c7ff4..47dd5600c1a7d59dcccfec0d998b87c2d470df40 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -512,15 +512,34 @@ struct ttm_bo_evict_walk {
        bool try_low;
        /** @hit_low: If we cannot evict a bo when @try_low is false (first pass) */
        bool hit_low;
+       /** @only_evict_unprotected: If eviction should be restricted to unprotected BOs */
+       bool only_evict_unprotected;
 };
 
 static s64 ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)
 {
+       struct dmem_cgroup_pool_state *limit_pool;
        struct ttm_bo_evict_walk *evict_walk =
                container_of(walk, typeof(*evict_walk), walk);
        s64 lret;
 
-       if (!dmem_cgroup_state_evict_valuable(evict_walk->limit_pool, bo->resource->css,
+       /*
+        * If only_evict_unprotected is set, then we're trying to evict unprotected
+        * buffers in favor of a protected allocation for charge_pool. Explicitly skip
+        * buffers belonging to the same cgroup here - that cgroup is definitely protected,
+        * even though dmem_cgroup_state_evict_valuable would allow the eviction because a
+        * cgroup is always allowed to evict from itself even if it is protected.
+        */
+       if (evict_walk->only_evict_unprotected &&
+                       bo->resource->css == evict_walk->charge_pool)
+               return 0;
+
+       limit_pool = evict_walk->limit_pool;
+       if (!limit_pool)
+               limit_pool = dmem_cgroup_common_ancestor(bo->resource->css,
+                                                        evict_walk->charge_pool);
+
+       if (!dmem_cgroup_state_evict_valuable(limit_pool, bo->resource->css,
                                              evict_walk->try_low, &evict_walk->hit_low))
                return 0;
 
@@ -580,6 +599,7 @@ static int ttm_bo_evict_alloc(struct ttm_device *bdev,
                .res = res,
                .charge_pool = charge_pool,
                .limit_pool = limit_pool,
+               .only_evict_unprotected = only_evict_unprotected,
        };
        s64 lret;
 

-- 
2.51.0
