On Fri, Aug 04, 2017 at 04:50:54PM +0800, 石祤 wrote:
> From: "leilei.lin" <leilei....@alibaba-inc.com>
> 
> A performance issue is caused by an insufficiently strict check at
> task sched time for tasks that were once attached to a per-task
> perf_event.
> 
> A task allocates task->perf_event_ctxp[ctxn] when it is targeted by
> perf_event_open(), and task->perf_event_ctxp[ctxn] is never freed
> back to NULL afterwards.
> 
> __perf_event_task_sched_in()
>       if (task->perf_event_ctxp[ctxn]) //  here is always true
>               perf_event_context_sched_in() // operate pmu
> 
> Up to 50% performance overhead was observed in some extreme test
> cases. Therefore, add a stricter check on ctx->nr_events: when
> ctx->nr_events == 0, there is no need to continue.
> 
> Signed-off-by: leilei.lin <leilei....@alibaba-inc.com>
> ---
>  kernel/events/core.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 426c2ff..f071013 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3179,6 +3179,9 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
>       if (cpuctx->task_ctx == ctx)
>               return;
>  
> +     if (!cpuctx->task_ctx && !ctx->nr_events)
> +             return;
> +
>       perf_ctx_lock(cpuctx, ctx);
>       perf_pmu_disable(ctx->pmu);
>       /*

I _think_ we must do this check after acquiring ctx->lock, because of
how perf_install_in_context() works. See commit:

  63cae12bce98 ("perf/core: Fix sys_perf_event_open() vs. hotplug")

That is a giant bag of tricky, but the gist of it is that if
perf_event_ctxp[] is !NULL (as is the case in your scenario) then we
must acquire ctx->lock.
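
To illustrate the pattern being relied on (only deciding to skip based on
ctx->nr_events while holding ctx->lock, so the check serializes against an
installer that increments the count under the same lock), here is a minimal
userspace sketch. It is not kernel code and does not reproduce the actual
hotplug race from 63cae12bce98; the names (fake_ctx, install_event,
sched_in) are purely illustrative stand-ins for the real symbols.

/*
 * Minimal userspace sketch of "check under the lock": the scheduler-side
 * path may only skip work after taking ctx->lock, because the installer
 * bumps nr_events under that same lock.
 */
#include <pthread.h>
#include <stdio.h>

struct fake_ctx {
	pthread_mutex_t lock;
	int nr_events;
	int scheduled_in;
};

/* Installer side: stand-in for perf_install_in_context(). */
static void install_event(struct fake_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	ctx->nr_events++;
	pthread_mutex_unlock(&ctx->lock);
}

/* Scheduler side: stand-in for perf_event_context_sched_in(). */
static void sched_in(struct fake_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	if (!ctx->nr_events) {
		/* Safe to skip: the installer serializes on the same lock. */
		pthread_mutex_unlock(&ctx->lock);
		return;
	}
	ctx->scheduled_in = 1;	/* stand-in for perf_event_sched_in() */
	pthread_mutex_unlock(&ctx->lock);
}

int main(void)
{
	struct fake_ctx ctx = { .lock = PTHREAD_MUTEX_INITIALIZER };

	sched_in(&ctx);		/* skips: no events installed yet */
	install_event(&ctx);
	sched_in(&ctx);		/* now schedules the context in */
	printf("scheduled_in = %d\n", ctx.scheduled_in);
	return 0;
}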

So I think we want something like the below.


---
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 426c2ffba16d..bf40b7633cb0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3180,6 +3180,13 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
                return;
 
        perf_ctx_lock(cpuctx, ctx);
+       /*
+        * We must check ctx->nr_events while holding ctx->lock, such
+        * that we serialize against perf_install_in_context().
+        */
+       if (!ctx->nr_events)
+               goto unlock;
+
        perf_pmu_disable(ctx->pmu);
        /*
         * We want to keep the following priority order:
@@ -3193,6 +3200,7 @@ static void perf_event_context_sched_in(struct perf_event_context *ctx,
                cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
        perf_event_sched_in(cpuctx, ctx, task);
        perf_pmu_enable(ctx->pmu);
+unlock:
        perf_ctx_unlock(cpuctx, ctx);
 }
 
