> On Apr 2, 2019, at 2:03 PM, Daniel Bristot de Oliveira <[email protected]> 
> wrote:
>
> Note: do not take it too seriously, it is just a proof of concept.
>
> Some time ago, while using perf to check the automaton model, I noticed
> that perf was losing events. The same was reproducible with ftrace.
>
> See: https://www.spinics.net/lists/linux-rt-users/msg19781.html
>
> Steve pointed to a problem in the identification of the context
> execution used by the recursion control.
>
> Currently, the recursion control uses the preempt_counter to
> identify the current context. The NMI/HARDIRQ/SOFTIRQ counters
> are set in the preempt_counter by the irq_enter/exit functions.
>
> In a trace, they are set like this:
> -------------- %< --------------------
> 0)   ==========> |
> 0)               |  do_IRQ() {        /* First C function */
> 0)               |    irq_enter() {
> 0)               |              /* set the IRQ context. */
> 0)   1.081 us    |    }
> 0)               |    handle_irq() {
> 0)               |             /* IRQ handling code */
> 0) + 10.290 us   |    }
> 0)               |    irq_exit() {
> 0)               |              /* unset the IRQ context. */
> 0)   6.657 us    |    }
> 0) + 18.995 us   |  }
> 0)   <========== |
> -------------- >% --------------------
>
> As one can see, functions (and events) that take place before the
> preempt_counter is set and after it is unset are attributed to the
> wrong context, which the recursion protection misinterprets as a
> recursion taking place. When this happens, events are dropped.
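
(For reference, the recursion protection derives the context bit from
the preempt counter roughly like the sketch below. This is simplified
and not the exact code in kernel/trace; it only illustrates why code
running before the counter is set is classified as the interrupted
context.)

-------------- %< --------------------
#include <linux/preempt.h>	/* in_nmi(), in_irq(), in_softirq() */

/*
 * Simplified sketch; the real trace_get_context_bit() in
 * kernel/trace/trace.h differs in detail.  It maps the preempt
 * counter state to one of four recursion context slots.
 */
static __always_inline int sketch_context_bit(void)
{
	if (in_nmi())		/* NMI context */
		return 0;
	if (in_irq())		/* hardirq count raised */
		return 1;
	if (in_softirq())	/* softirq count raised */
		return 2;
	return 3;		/* normal (task) context */
}
-------------- >% --------------------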
>
> To resolve this problem, the IRQ/NMI context needs to be set before
> the first C function executes and unset after it returns. By doing so,
> and by using this method to identify the context in the trace recursion
> protection, no more events are lost.

I would much rather do the opposite: completely remove context
tracking from the asm and, instead, stick it into the C code.  We'd
need to make sure that the C code is totally immune from tracing,
kprobes, etc., but it would be a nice cleanup.  And then you could fix
this bug in C!
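
Something along these lines, purely as a sketch (names are made up, and
the real entry code does much more than this, e.g. RCU and irq time
accounting, so take it as the shape of the idea rather than a patch):

-------------- %< --------------------
#include <linux/kprobes.h>	/* NOKPROBE_SYMBOL() */
#include <linux/preempt.h>	/* preempt_count_add/sub(), HARDIRQ_OFFSET */
#include <linux/ptrace.h>	/* struct pt_regs */

void traced_irq_body(struct pt_regs *regs);	/* hypothetical traceable handler */

/*
 * Hypothetical shape only: a thin C wrapper, kept out of tracing and
 * kprobes, that marks the hardirq context before calling any traceable
 * code and clears it only after the traceable body returns.
 */
notrace void sketch_irq_c_entry(struct pt_regs *regs)
{
	preempt_count_add(HARDIRQ_OFFSET);	/* set context first */

	traced_irq_body(regs);			/* everything traceable lives here */

	preempt_count_sub(HARDIRQ_OFFSET);	/* unset context last */
}
NOKPROBE_SYMBOL(sketch_irq_c_entry);
-------------- >% --------------------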


>
> A possible solution is to use a per-cpu variable that is set and unset
> in the NMI/IRQ entry points, before calling the C handler. This approach
> is presented in the following patches as a proof of concept for x86_64.
> However, other ideas might be better than mine... so...
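
For reference, the read side of such a PoC could look roughly like the
sketch below (names invented here; the asm entry/exit stubs would write
the per-CPU word before the first C function runs and after it returns):

-------------- %< --------------------
#include <linux/percpu.h>

/* Invented names, only to illustrate the read side of the idea. */
enum sketch_early_ctx {
	SKETCH_CTX_NORMAL,
	SKETCH_CTX_SOFTIRQ,
	SKETCH_CTX_IRQ,
	SKETCH_CTX_NMI,
};

/* Updated from the asm entry/exit stubs, around the C handler. */
DECLARE_PER_CPU(int, sketch_early_ctx_state);

/* The trace recursion protection reads this instead of the preempt counter. */
static __always_inline int sketch_early_get_context_bit(void)
{
	return this_cpu_read(sketch_early_ctx_state);
}
-------------- >% --------------------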
>
> Daniel Bristot de Oliveira (7):
>  x86/entry: Add support for early task context tracking
>  trace: Move the trace recursion context enum to trace.h and reuse it
>  trace: Optimize trace_get_context_bit()
>  trace/ring_buffer: Use trace_get_context_bit()
>  trace: Use early task context tracking if available
>  events: Create a trace_get_context_bit()
>  events: Use early task context tracking if available
>
> arch/x86/entry/entry_64.S       |  9 ++++++
> arch/x86/include/asm/irqflags.h | 30 ++++++++++++++++++++
> arch/x86/kernel/cpu/common.c    |  4 +++
> include/linux/irqflags.h        |  4 +++
> kernel/events/internal.h        | 50 +++++++++++++++++++++++++++------
> kernel/softirq.c                |  5 +++-
> kernel/trace/ring_buffer.c      | 28 ++----------------
> kernel/trace/trace.h            | 46 ++++++++++++++++++++++--------
> 8 files changed, 129 insertions(+), 47 deletions(-)
>
> --
> 2.20.1
>
