On Thu, Aug 03, 2017 at 11:30:09PM +0300, Alexey Budankov wrote:
> On 03.08.2017 16:00, Peter Zijlstra wrote:
> > On Wed, Aug 02, 2017 at 11:13:54AM +0300, Alexey Budankov wrote:
> >> @@ -2759,13 +2932,13 @@ static void ctx_sched_out(struct
> >> perf_event_context *ctx,
> >>
> >> 	perf_pmu_disable(ctx->pmu);
> >> 	if (is_active & EVENT_PINNED) {
> >> -		list_for_each_entry(event, &ctx->pinned_groups, group_entry)
> >> -			group_sched_out(event, cpuctx, ctx);
> >> +		perf_event_groups_iterate(&ctx->pinned_groups,
> >> +				group_sched_out_callback, &params);
> >
> > So here I would expect to not iterate events where event->cpu !=
> > smp_processor_id() (and ideally not where event->pmu != ctx->pmu).
>
> We still need to iterate through all groups on thread context switch in
> and out, as well as iterate through the cpu == -1 list (software events)
> in addition to the smp_processor_id() list from the multiplexing timer
> interrupt handler.

Well, just doing the @cpu=-1 and @cpu=this_cpu subtrees is less work
than iterating _everything_, right? The rest will not survive
event_filter_match() anyway, so iterating them is a complete waste of
time, and once we have them in a tree, it's actually easy to find this
subset.