From: Vincent Donnefort <[email protected]>

The local-variable version of sub_positive(), lsub_positive(), saves an
explicit load-store and is sufficient for the cpu_util_next() usage,
where util is a local variable.
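
For reference, a rough userspace sketch of the two flavours (not the
kernel's exact macro bodies, which live in kernel/sched/fair.c and use
READ_ONCE()/WRITE_ONCE() for the shared-variable case):

#include <stdio.h>

/*
 * Shared-variable flavour: read once, clamp on underflow, write once,
 * so a concurrent lockless reader never observes a wrapped-around
 * (huge unsigned) intermediate value. Plain loads/stores stand in here
 * for the kernel's READ_ONCE()/WRITE_ONCE().
 */
#define sub_positive(_ptr, _val) do {			\
	unsigned long __var = *(_ptr);			\
	unsigned long __res = __var - (_val);		\
	if (__res > __var)				\
		__res = 0;				\
	*(_ptr) = __res;				\
} while (0)

/*
 * Local-variable flavour: the value lives on the caller's stack, nobody
 * else can observe it, so the explicit load-store is unnecessary.
 * Equivalent to *ptr -= min(*ptr, val).
 */
#define lsub_positive(_ptr, _val) do {				\
	unsigned long __v = (_val);				\
	*(_ptr) -= (*(_ptr) < __v) ? *(_ptr) : __v;		\
} while (0)

int main(void)
{
	unsigned long util = 100;

	lsub_positive(&util, 30);	/* 100 - 30 = 70 */
	printf("util = %lu\n", util);

	lsub_positive(&util, 150);	/* clamps to 0 instead of wrapping */
	printf("util = %lu\n", util);

	return 0;
}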

Signed-off-by: Vincent Donnefort <[email protected]>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 146ac9fec4b6..1364f8b95214 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6525,7 +6525,7 @@ static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
         * util_avg should already be correct.
         */
        if (task_cpu(p) == cpu && dst_cpu != cpu)
-               sub_positive(&util, task_util(p));
+               lsub_positive(&util, task_util(p));
        else if (task_cpu(p) != cpu && dst_cpu == cpu)
                util += task_util(p);
 
-- 
2.25.1
