From: "leilei.lin" <linxiu...@gmail.com>

A performance issue is caused by an insufficiently strict check during
task sched-in for tasks that were once attached to a per-task perf_event.

A task allocates task->perf_event_ctxp[ctxn] when it is targeted by
perf_event_open(), and task->perf_event_ctxp[ctxn] is never freed back
to NULL afterwards, even after all of its events have been closed.

__perf_event_task_sched_in()
        if (task->perf_event_ctxp[ctxn]) // always true here
                perf_event_context_sched_in() // operates on the PMU
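
For reference, ctx->nr_events is maintained while holding ctx->lock on
the install path (a rough call-chain sketch of kernel/events/core.c,
not verbatim code):

perf_install_in_context()
        __perf_install_in_context()     // runs with ctx->lock held
                add_event_to_ctx()
                        list_add_event()
                                ctx->nr_events++

This is why the check added below reads ctx->nr_events only after
perf_ctx_lock().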

Up to 50% performance overhead was observed under an extreme test
case. Therefore, add a stricter check on ctx->nr_events: when
ctx->nr_events == 0, there is no need to continue.
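
As an illustration of how a task ends up in this state, a hypothetical
userspace reproducer might look like the sketch below (not part of this
patch; assumes Linux with perf_event_open(2) available):

#include <linux/perf_event.h>
#include <sched.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;

        /* Attach a per-task event to ourselves (pid 0, any CPU). */
        int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd >= 0)
                close(fd); /* ctx->nr_events drops back to 0 ... */

        /*
         * ... but task->perf_event_ctxp[ctxn] stays allocated, so every
         * context switch of this task still takes the sched-in path.
         */
        for (;;)
                sched_yield();
}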

Signed-off-by: leilei.lin <leilei....@alibaba-inc.com>
---
 kernel/events/core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 426c2ff..3d86695 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3180,6 +3180,13 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
                return;
 
        perf_ctx_lock(cpuctx, ctx);
+       /*
+        * We must check ctx->nr_events while holding ctx->lock, such
+        * that we serialize against perf_install_in_context().
+        */
+       if (!ctx->nr_events)
+               goto unlock;
+
        perf_pmu_disable(ctx->pmu);
        /*
         * We want to keep the following priority order:
@@ -3193,6 +3200,8 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
                cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
        perf_event_sched_in(cpuctx, ctx, task);
        perf_pmu_enable(ctx->pmu);
+
+unlock:
        perf_ctx_unlock(cpuctx, ctx);
 }
 
-- 
2.8.4.31.g9ed660f
