On Mon, Apr 09, 2018 at 10:25:32AM +0300, Alexey Budankov wrote:
> 
> Store the preempting context switch out event in the perf trace as part of 
> the PERF_RECORD_SWITCH[_CPU_WIDE] record.
> 
> The percentage of preempting vs. non-preempting context switches helps in 
> understanding the nature of the workloads (CPU- or IO-bound) running on a 
> machine;
> 
> The event is treated as a preemption when the task->state value of the 
> thread being switched out is TASK_RUNNING. The event type is encoded using 
> the PERF_RECORD_MISC_SWITCH_OUT_PREEMPT bit;
>       
> Signed-off-by: Alexey Budankov <[email protected]>

Acked-by: Peter Zijlstra (Intel) <[email protected]>

Acme, I'm thinking you should route this, since most of the changes are
actually to the tool.

> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index fc1c330c6bd6..872a5aaa77eb 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -7584,6 +7584,10 @@ static void perf_event_switch(struct task_struct *task,
>               },
>       };
>  
> +     if (!sched_in && task->state == TASK_RUNNING)
> +             switch_event.event_id.header.misc |=
> +                             PERF_RECORD_MISC_SWITCH_OUT_PREEMPT;

I typically prefer {} over any multi-line expression.
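
For illustration, the braced form being suggested would look roughly like
this (same logic as the hunk above, only the style differs):

	if (!sched_in && task->state == TASK_RUNNING) {
		switch_event.event_id.header.misc |=
				PERF_RECORD_MISC_SWITCH_OUT_PREEMPT;
	}

The braces just make the boundaries of the multi-line statement explicit.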

>       perf_iterate_sb(perf_event_switch_output,
>                      &switch_event,
>                      NULL);
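
For completeness, on the tool side the new information shows up in the misc
field of the PERF_RECORD_SWITCH[_CPU_WIDE] header. A minimal sketch of how a
consumer might classify switch-out records (account_switch() and the two
counters are made-up names, and the ring-buffer walking around the call is
elided):

	#include <linux/perf_event.h>

	static unsigned long preempt_out, other_out;

	/*
	 * Hypothetical helper: called for each record read from the mmap'ed
	 * ring buffer; counts preempting vs. non-preempting switch-outs.
	 */
	static void account_switch(const struct perf_event_header *hdr)
	{
		if (hdr->type != PERF_RECORD_SWITCH &&
		    hdr->type != PERF_RECORD_SWITCH_CPU_WIDE)
			return;

		if (!(hdr->misc & PERF_RECORD_MISC_SWITCH_OUT))
			return;		/* switch-in, nothing to classify */

		if (hdr->misc & PERF_RECORD_MISC_SWITCH_OUT_PREEMPT)
			preempt_out++;	/* task was still TASK_RUNNING */
		else
			other_out++;	/* task blocked (IO, sleep, ...) */
	}

The ratio preempt_out / (preempt_out + other_out) then gives the preemption
percentage the changelog talks about.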
