Re: [PATCH v4 0/2] fix & prevent the missing preemption disabling
On 2021/10/26 10:42 AM, Steven Rostedt wrote:
> On Tue, 26 Oct 2021 10:09:12 +0800
> 王贇 wrote:
>
>> Just a ping, to see if there are any more comments :-P
>
> I guess you missed what went into mainline (and your name found a bug
> in my perl script for importing patches ;-)
>
> https://lore.kernel.org/all/20211019091344.65629...@gandalf.local.home/

Cool~ Missing some Chinese font maybe, that's fine :-)

>
> Which means patch 1 needs to change:
>
>> +	/*
>> +	 * Disable preemption to fulfill the promise.
>> +	 *
>> +	 * Don't worry about the bit 0 cases, they indicate
>> +	 * the disabling behaviour has already been done by
>> +	 * internal call previously.
>> +	 */
>> +	preempt_disable_notrace();
>> +
>>  	return bit + 1;
>>  }
>>
>> +/*
>> + * Preemption will be enabled (if it was previously enabled).
>> + */
>>  static __always_inline void trace_clear_recursion(int bit)
>>  {
>>  	if (!bit)
>>  		return;
>>
>> +	if (bit > 0)
>> +		preempt_enable_notrace();
>> +
>
> Where this won't work anymore.
>
> Need to preempt disable and enable always.

Yup, will rebase on the latest changes~

Regards,
Michael Wang

>
> -- Steve
>
>
>>  	barrier();
>>  	bit--;
>>  	trace_recursion_clear(bit);
>> @@ -209,7 +227,7 @@ static __always_inline void trace_clear_recursion(int bit)
>>  	 * tracing recursed in the same context (normal vs interrupt),
>>  	 *
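For reference, a rough sketch of what the rebase promised above might look like, not the actual patch: on top of Steve's mainline change (which dropped the bit + 1 / bit-- offset, so bit 0 is now a valid success value and can no longer mean "preemption already disabled internally"), the preempt calls simply pair unconditionally. The recursion-bit bookkeeping is elided here.

static __always_inline int trace_test_and_set_recursion(unsigned long ip,
							unsigned long pip,
							int start)
{
	int bit = 0;	/* placeholder: the real code computes the context bit */

	/* ... recursion check elided; returns -1 when recursion is hit ... */

	/*
	 * Disable preemption unconditionally on success; bit 0 is now
	 * a valid slot, not a "disabled by internal call" marker.
	 */
	preempt_disable_notrace();

	return bit;
}

static __always_inline void trace_clear_recursion(int bit)
{
	/* Always pairs with the disable in the trylock path above. */
	preempt_enable_notrace();

	barrier();
	trace_recursion_clear(bit);
}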
Re: [PATCH v4 0/2] fix & prevent the missing preemption disabling
On Tue, 26 Oct 2021 10:09:12 +0800
王贇 wrote:

> Just a ping, to see if there are any more comments :-P

I guess you missed what went into mainline (and your name found a bug
in my perl script for importing patches ;-)

https://lore.kernel.org/all/20211019091344.65629...@gandalf.local.home/

Which means patch 1 needs to change:

> +	/*
> +	 * Disable preemption to fulfill the promise.
> +	 *
> +	 * Don't worry about the bit 0 cases, they indicate
> +	 * the disabling behaviour has already been done by
> +	 * internal call previously.
> +	 */
> +	preempt_disable_notrace();
> +
>  	return bit + 1;
>  }
>
> +/*
> + * Preemption will be enabled (if it was previously enabled).
> + */
>  static __always_inline void trace_clear_recursion(int bit)
>  {
>  	if (!bit)
>  		return;
>
> +	if (bit > 0)
> +		preempt_enable_notrace();
> +

Where this won't work anymore.

Need to preempt disable and enable always.

-- Steve

>  	barrier();
>  	bit--;
>  	trace_recursion_clear(bit);
> @@ -209,7 +227,7 @@ static __always_inline void trace_clear_recursion(int bit)
>  	 * tracing recursed in the same context (normal vs interrupt),
>  	 *
Re: [PATCH v4 0/2] fix & prevent the missing preemption disabling
Just a ping, to see if there are any more comments :-P

Regards,
Michael Wang

On 2021/10/18 11:38 AM, 王贇 wrote:
> Testing shows that perf_ftrace_function_call() uses smp_processor_id()
> with preemption enabled, so all of its CPU-based checking could be
> wrong after a preemption.
>
> As Peter pointed out, the section between the
> ftrace_test_recursion_trylock/unlock() pair requires preemption to be
> disabled, as 'Documentation/trace/ftrace-uses.rst' explains, but
> currently that work is done outside of the helpers.
>
> And since the internal uses of trace_test_and_set_recursion() and
> trace_clear_recursion() also require preemption to be disabled, we can
> just merge the logic together.
>
> Patch 1/2 makes sure preemption is disabled when the recursion lock
> succeeds, and patch 2/2 does the smp_processor_id() checking after
> trylock() to address the issue.
>
> v1:
> https://lore.kernel.org/all/8c7de46d-9869-aa5e-2bb9-5dbc2eda3...@linux.alibaba.com/
> v2:
> https://lore.kernel.org/all/b1d7fe43-ce84-0ed7-32f7-ea1d12d0b...@linux.alibaba.com/
> v3:
> https://lore.kernel.org/all/609b565a-ed6e-a1da-f025-166691b5d...@linux.alibaba.com/
>
> Michael Wang (2):
>   ftrace: disable preemption when recursion locked
>   ftrace: do CPU checking after preemption disabled
>
>  arch/csky/kernel/probes/ftrace.c     |  2 --
>  arch/parisc/kernel/ftrace.c          |  2 --
>  arch/powerpc/kernel/kprobes-ftrace.c |  2 --
>  arch/riscv/kernel/probes/ftrace.c    |  2 --
>  arch/x86/kernel/kprobes/ftrace.c     |  2 --
>  include/linux/trace_recursion.h      | 20 ++++++++++++++------
>  kernel/livepatch/patch.c             | 13 ++++++-------
>  kernel/trace/ftrace.c                | 15 +++++++--------
>  kernel/trace/trace_event_perf.c      |  6 +++---
>  kernel/trace/trace_functions.c       |  5 ++++-
>  10 files changed, 34 insertions(+), 35 deletions(-)
>
[PATCH v4 0/2] fix & prevent the missing preemption disabling
Testing shows that perf_ftrace_function_call() uses smp_processor_id()
with preemption enabled, so all of its CPU-based checking could be
wrong after a preemption.

As Peter pointed out, the section between the
ftrace_test_recursion_trylock/unlock() pair requires preemption to be
disabled, as 'Documentation/trace/ftrace-uses.rst' explains, but
currently that work is done outside of the helpers.

And since the internal uses of trace_test_and_set_recursion() and
trace_clear_recursion() also require preemption to be disabled, we can
just merge the logic together.

Patch 1/2 makes sure preemption is disabled when the recursion lock
succeeds, and patch 2/2 does the smp_processor_id() checking after
trylock() to address the issue.

v1:
https://lore.kernel.org/all/8c7de46d-9869-aa5e-2bb9-5dbc2eda3...@linux.alibaba.com/
v2:
https://lore.kernel.org/all/b1d7fe43-ce84-0ed7-32f7-ea1d12d0b...@linux.alibaba.com/
v3:
https://lore.kernel.org/all/609b565a-ed6e-a1da-f025-166691b5d...@linux.alibaba.com/

Michael Wang (2):
  ftrace: disable preemption when recursion locked
  ftrace: do CPU checking after preemption disabled

 arch/csky/kernel/probes/ftrace.c     |  2 --
 arch/parisc/kernel/ftrace.c          |  2 --
 arch/powerpc/kernel/kprobes-ftrace.c |  2 --
 arch/riscv/kernel/probes/ftrace.c    |  2 --
 arch/x86/kernel/kprobes/ftrace.c     |  2 --
 include/linux/trace_recursion.h      | 20 ++++++++++++++------
 kernel/livepatch/patch.c             | 13 ++++++-------
 kernel/trace/ftrace.c                | 15 +++++++--------
 kernel/trace/trace_event_perf.c      |  6 +++---
 kernel/trace/trace_functions.c       |  5 ++++-
 10 files changed, 34 insertions(+), 35 deletions(-)

--
1.8.3.1
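To illustrate the calling convention the series enforces, here is a minimal hypothetical callback (my_callback and my_counter are made-up names, not part of the series) that reads smp_processor_id() only inside the trylock/unlock section, where patch 1/2 guarantees preemption stays disabled:

#include <linux/ftrace.h>
#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(unsigned long, my_counter);

static void my_callback(unsigned long ip, unsigned long parent_ip,
			struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	int bit;

	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0)
		return;

	/*
	 * Safe only here: the trylock has disabled preemption, so the
	 * CPU cannot change under us until the matching unlock below.
	 */
	per_cpu(my_counter, smp_processor_id())++;

	ftrace_test_recursion_unlock(bit);
}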