On 03/28/2017 10:05 AM, Peter Zijlstra wrote:
On Tue, Mar 28, 2017 at 07:35:40AM +0100, Dietmar Eggemann wrote:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04d4f81b96ae..d1dcb19f5b55 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2940,6 +2940,8 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,

        if (cfs_rq)
                trace_sched_load_cfs_rq(cfs_rq);
+       else
+               trace_sched_load_se(container_of(sa, struct sched_entity, avg));

        return decayed;
 }
@@ -3162,6 +3164,7 @@ static inline int propagate_entity_load_avg(struct sched_entity *se)
        update_tg_cfs_load(cfs_rq, se);

        trace_sched_load_cfs_rq(cfs_rq);
+       trace_sched_load_se(se);

        return 1;
 }

Having back-to-back tracepoints is disgusting.


Yeah, avoiding placing them back-to-back like this is hard, since update_tg_cfs_util() refreshes util and update_tg_cfs_load() refreshes load/runnable_load, each for both the cfs_rq and the se.
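
One way to keep each call site to a single line (a sketch only, not what the posted patch does; trace_sched_load() is a hypothetical name) would be a small wrapper that emits both events:

static inline void trace_sched_load(struct cfs_rq *cfs_rq,
				    struct sched_entity *se)
{
	/* Hypothetical wrapper, not from the posted patch. */
	if (cfs_rq)
		trace_sched_load_cfs_rq(cfs_rq);
	if (se)
		trace_sched_load_se(se);
}

propagate_entity_load_avg() could then call trace_sched_load(cfs_rq, se) once instead of two adjacent tracepoints.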

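For reference, the trace_sched_load_se() tracepoint used above would be declared with TRACE_EVENT() in include/trace/events/sched.h roughly along these lines; the traced fields here are an assumption, not the definition from the actual series:

TRACE_EVENT(sched_load_se,

	TP_PROTO(struct sched_entity *se),

	TP_ARGS(se),

	TP_STRUCT__entry(
		__field(unsigned long,	load_avg)
		__field(unsigned long,	util_avg)
	),

	TP_fast_assign(
		/* PELT signals tracked in se->avg (struct sched_avg). */
		__entry->load_avg	= se->avg.load_avg;
		__entry->util_avg	= se->avg.util_avg;
	),

	TP_printk("load_avg=%lu util_avg=%lu",
		  __entry->load_avg, __entry->util_avg)
);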