Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT

On Fri, May 26, 2017 at 08:10:25AM +0200, Ingo Molnar wrote:
>
> * Paul E. McKenney wrote:
>
> > How about the following (untested) patch?
>
> Looks good to me!
>
> Acked-by: Ingo Molnar

Very good, I have applied Masami's Reviewed-by and your Acked-by,
thank you both!

							Thanx, Paul
Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT

* Paul E. McKenney wrote:

> How about the following (untested) patch?

Looks good to me!

Acked-by: Ingo Molnar

Thanks,

	Ingo
Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT

On Thu, 25 May 2017 08:14:01 -0700 "Paul E. McKenney" wrote:
> On Thu, May 25, 2017 at 08:15:55AM +0200, Ingo Molnar wrote:
> >
> > * Masami Hiramatsu wrote:
> >
> > > --- a/kernel/kprobes.c
> > > +++ b/kernel/kprobes.c
> > > @@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
> > >  static bool kprobes_allow_optimization;
> > >
> > >  /*
> > > + * Synchronizing wait on trampoline code for interrupted tasks/threads.
> > > + * Since the threads running on dynamically allocated trampoline code
> > > + * can be interrupted, kprobes has to wait for those tasks to get back
> > > + * on track and be scheduled. If the kernel is preemptive, the thread
> > > + * can be preempted by other tasks on the trampoline too. For such a
> > > + * case, this calls synchronize_rcu_tasks() to wait for those tasks.
> > > + */
> > > +static void synchronize_on_trampoline(void)
> > > +{
> > > +#ifdef CONFIG_PREEMPT
> > > +	synchronize_rcu_tasks();
> > > +#else
> > > +	synchronize_sched();
> > > +#endif
> > > +}
> >
> > So that's really unacceptably ugly.
> >
> > Paul, I still question the need to have tasks-RCU as a Kconfig distinction,
> > _especially_ if its API usage results in such ugly secondary #ifdefs...
> >
> > Why isn't there a single synchronize_rcu_tasks() API function, which does
> > what is expected, where the _RCU_ code figures out how to implement it?
> >
> > I.e.:
> >
> >  - There should be no user-configurable TASKS_RCU Kconfig setting - at most a
> >    helper Kconfig that is automatically selected by the RCU code itself.
> >
> >  - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.
> >
> > Thanks,
>
> How about the following (untested) patch?

This patch is good to me, for kprobes at least.

Reviewed-by: Masami Hiramatsu

Steve, how about the ftrace usage?

Thank you,

> This is against -rcu's rcu/dev branch, FWIW.
>
> And I am also queuing patches with other cleanups, including do_exit(),
> enabled by this approach.
>
> 							Thanx, Paul
>
> commit 9ab47fba45cea06e223e524d392621b64c174720
> Author: Paul E. McKenney
> Date:   Thu May 25 08:05:00 2017 -0700
>
>     rcu: Drive TASKS_RCU directly off of PREEMPT
>
>     The actual use of TASKS_RCU is only when PREEMPT, otherwise RCU-sched
>     is used instead.  This commit therefore makes synchronize_rcu_tasks()
>     and call_rcu_tasks() available always, but mapped to synchronize_sched()
>     and call_rcu_sched(), respectively, when !PREEMPT.  This approach also
>     allows some #ifdefs to be removed from rcutorture.
>
>     Reported-by: Ingo Molnar
>     Signed-off-by: Paul E. McKenney
>
> ---
>
>  include/linux/rcupdate.h |  6 ++++--
>  kernel/rcu/Kconfig       |  3 +--
>  kernel/rcu/rcutorture.c  | 17 +----------------
>  3 files changed, 6 insertions(+), 20 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index f816fc72b51e..c3f380befdd7 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -58,8 +58,6 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func);
>  void call_rcu_bh(struct rcu_head *head, rcu_callback_t func);
>  void call_rcu_sched(struct rcu_head *head, rcu_callback_t func);
>  void synchronize_sched(void);
> -void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
> -void synchronize_rcu_tasks(void);
>  void rcu_barrier_tasks(void);
>
>  #ifdef CONFIG_PREEMPT_RCU
> @@ -176,10 +174,14 @@ extern struct srcu_struct tasks_rcu_exit_srcu;
>  		rcu_all_qs(); \
>  		rcu_note_voluntary_context_switch_lite(t); \
>  	} while (0)
> +void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
> +void synchronize_rcu_tasks(void);
>  #else /* #ifdef CONFIG_TASKS_RCU */
>  #define TASKS_RCU(x) do { } while (0)
>  #define rcu_note_voluntary_context_switch_lite(t)	do { } while (0)
>  #define rcu_note_voluntary_context_switch(t)	rcu_all_qs()
> +#define call_rcu_tasks call_rcu_sched
> +#define synchronize_rcu_tasks synchronize_sched
>  #endif /* #else #ifdef CONFIG_TASKS_RCU */
>
>  /**
> diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
> index be90c945063f..9210379c0353 100644
> --- a/kernel/rcu/Kconfig
> +++ b/kernel/rcu/Kconfig
> @@ -69,8 +69,7 @@ config TREE_SRCU
>  	  This option selects the full-fledged version of SRCU.
>
>  config TASKS_RCU
> -	bool
> -	default n
> +	def_bool PREEMPT
>  	select SRCU
>  	help
>  	  This option enables a task-based RCU implementation that uses
> diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
> index aedc8f2ad955..273032dc8f2d 100644
> --- a/kernel/rcu/rcutorture.c
> +++ b/kernel/rcu/rcutorture.c
> @@ -659,8 +659,6 @@ static struct rcu_torture_ops sched_ops = {
>  	.name		= "sched"
>  };
>
> -#ifdef
Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT

On Thu, May 25, 2017 at 08:15:55AM +0200, Ingo Molnar wrote:
>
> * Masami Hiramatsu wrote:
>
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
> >  static bool kprobes_allow_optimization;
> >
> >  /*
> > + * Synchronizing wait on trampoline code for interrupted tasks/threads.
> > + * Since the threads running on dynamically allocated trampoline code
> > + * can be interrupted, kprobes has to wait for those tasks to get back
> > + * on track and be scheduled. If the kernel is preemptive, the thread
> > + * can be preempted by other tasks on the trampoline too. For such a
> > + * case, this calls synchronize_rcu_tasks() to wait for those tasks.
> > + */
> > +static void synchronize_on_trampoline(void)
> > +{
> > +#ifdef CONFIG_PREEMPT
> > +	synchronize_rcu_tasks();
> > +#else
> > +	synchronize_sched();
> > +#endif
> > +}
>
> So that's really unacceptably ugly.
>
> Paul, I still question the need to have tasks-RCU as a Kconfig distinction,
> _especially_ if its API usage results in such ugly secondary #ifdefs...
>
> Why isn't there a single synchronize_rcu_tasks() API function, which does
> what is expected, where the _RCU_ code figures out how to implement it?
>
> I.e.:
>
>  - There should be no user-configurable TASKS_RCU Kconfig setting - at most a
>    helper Kconfig that is automatically selected by the RCU code itself.
>
>  - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.
>
> Thanks,

How about the following (untested) patch?  This is against -rcu's
rcu/dev branch, FWIW.

And I am also queuing patches with other cleanups, including do_exit(),
enabled by this approach.

							Thanx, Paul

commit 9ab47fba45cea06e223e524d392621b64c174720
Author: Paul E. McKenney
Date:   Thu May 25 08:05:00 2017 -0700

    rcu: Drive TASKS_RCU directly off of PREEMPT

    The actual use of TASKS_RCU is only when PREEMPT, otherwise RCU-sched
    is used instead.  This commit therefore makes synchronize_rcu_tasks()
    and call_rcu_tasks() available always, but mapped to synchronize_sched()
    and call_rcu_sched(), respectively, when !PREEMPT.  This approach also
    allows some #ifdefs to be removed from rcutorture.

    Reported-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney

---

 include/linux/rcupdate.h |  6 ++++--
 kernel/rcu/Kconfig       |  3 +--
 kernel/rcu/rcutorture.c  | 17 +----------------
 3 files changed, 6 insertions(+), 20 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index f816fc72b51e..c3f380befdd7 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -58,8 +58,6 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func);
 void call_rcu_bh(struct rcu_head *head, rcu_callback_t func);
 void call_rcu_sched(struct rcu_head *head, rcu_callback_t func);
 void synchronize_sched(void);
-void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
-void synchronize_rcu_tasks(void);
 void rcu_barrier_tasks(void);

 #ifdef CONFIG_PREEMPT_RCU
@@ -176,10 +174,14 @@ extern struct srcu_struct tasks_rcu_exit_srcu;
 		rcu_all_qs(); \
 		rcu_note_voluntary_context_switch_lite(t); \
 	} while (0)
+void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
+void synchronize_rcu_tasks(void);
 #else /* #ifdef CONFIG_TASKS_RCU */
 #define TASKS_RCU(x) do { } while (0)
 #define rcu_note_voluntary_context_switch_lite(t)	do { } while (0)
 #define rcu_note_voluntary_context_switch(t)	rcu_all_qs()
+#define call_rcu_tasks call_rcu_sched
+#define synchronize_rcu_tasks synchronize_sched
 #endif /* #else #ifdef CONFIG_TASKS_RCU */

 /**
diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index be90c945063f..9210379c0353 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -69,8 +69,7 @@ config TREE_SRCU
 	  This option selects the full-fledged version of SRCU.

 config TASKS_RCU
-	bool
-	default n
+	def_bool PREEMPT
 	select SRCU
 	help
 	  This option enables a task-based RCU implementation that uses
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index aedc8f2ad955..273032dc8f2d 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -659,8 +659,6 @@ static struct rcu_torture_ops sched_ops = {
 	.name		= "sched"
 };

-#ifdef CONFIG_TASKS_RCU
-
 /*
  * Definitions for RCU-tasks torture testing.
  */
@@ -698,24 +696,11 @@ static struct rcu_torture_ops tasks_ops = {
 	.name		= "tasks"
 };

-#define RCUTORTURE_TASKS_OPS &tasks_ops,
-
 static bool __maybe_unused torturing_tasks(void)
 {
 	return cur_ops == &tasks_ops;
 }

-#else /* #ifdef CONFIG_TASKS_RCU */
-
-#define RCUTORTURE_TASKS_OPS
-
-static bool __maybe_unused
Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT

On Thu, 25 May 2017 08:15:55 +0200 Ingo Molnar wrote:
>
> * Masami Hiramatsu wrote:
>
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
> >  static bool kprobes_allow_optimization;
> >
> >  /*
> > + * Synchronizing wait on trampoline code for interrupted tasks/threads.
> > + * Since the threads running on dynamically allocated trampoline code
> > + * can be interrupted, kprobes has to wait for those tasks to get back
> > + * on track and be scheduled. If the kernel is preemptive, the thread
> > + * can be preempted by other tasks on the trampoline too. For such a
> > + * case, this calls synchronize_rcu_tasks() to wait for those tasks.
> > + */
> > +static void synchronize_on_trampoline(void)
> > +{
> > +#ifdef CONFIG_PREEMPT
> > +	synchronize_rcu_tasks();
> > +#else
> > +	synchronize_sched();
> > +#endif
> > +}
>
> So that's really unacceptably ugly.
>
> Paul, I still question the need to have tasks-RCU as a Kconfig distinction,
> _especially_ if its API usage results in such ugly secondary #ifdefs...
>
> Why isn't there a single synchronize_rcu_tasks() API function, which does
> what is expected, where the _RCU_ code figures out how to implement it?

Hmm, if there are only 3 users - kprobes/ftrace/kpatch - and those all use it
for the same purpose (waiting for tasks which were preempted or interrupted),
maybe we can switch the implementation of synchronize_rcu_tasks() at the RCU
level.

>
> I.e.:
>
>  - There should be no user-configurable TASKS_RCU Kconfig setting - at most a
>    helper Kconfig that is automatically selected by the RCU code itself.

The TASKS_RCU kconfig is already a hidden setting. For kprobes, it is
selected if CONFIG_PREEMPT=y && CONFIG_KPROBES=y && HAVE_OPTPROBES=y.

Thank you,

>
>  - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.

-- 
Masami Hiramatsu
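The two wirings of the hidden option discussed here, both taken from the patches quoted in this thread, look like this as abridged Kconfig fragments (not a complete Kconfig file):

```kconfig
# From Masami's optprobe patch: each user selects the hidden option.
config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select TASKS_RCU if PREEMPT

# From Paul's patch: RCU derives the hidden option directly from PREEMPT.
config TASKS_RCU
	def_bool PREEMPT
	select SRCU
```

Either way the option has no prompt string, so it never appears in menuconfig; the difference is only whether each user or the RCU code itself decides when it is enabled.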
Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT

* Masami Hiramatsu wrote:

> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
>  static bool kprobes_allow_optimization;
>
>  /*
> + * Synchronizing wait on trampoline code for interrupted tasks/threads.
> + * Since the threads running on dynamically allocated trampoline code
> + * can be interrupted, kprobes has to wait for those tasks to get back
> + * on track and be scheduled. If the kernel is preemptive, the thread
> + * can be preempted by other tasks on the trampoline too. For such a
> + * case, this calls synchronize_rcu_tasks() to wait for those tasks.
> + */
> +static void synchronize_on_trampoline(void)
> +{
> +#ifdef CONFIG_PREEMPT
> +	synchronize_rcu_tasks();
> +#else
> +	synchronize_sched();
> +#endif
> +}

So that's really unacceptably ugly.

Paul, I still question the need to have tasks-RCU as a Kconfig distinction,
_especially_ if its API usage results in such ugly secondary #ifdefs...

Why isn't there a single synchronize_rcu_tasks() API function, which does
what is expected, where the _RCU_ code figures out how to implement it?

I.e.:

 - There should be no user-configurable TASKS_RCU Kconfig setting - at most a
   helper Kconfig that is automatically selected by the RCU code itself.

 - Both ftrace and kprobes would use a simple synchronize_rcu_tasks() call.

Thanks,

	Ingo
Re: [RFC PATCH tip/master] kprobes: Use synchronize_rcu_tasks() for optprobe with CONFIG_PREEMPT

On Wed, May 24, 2017 at 09:00:03PM +0900, Masami Hiramatsu wrote:
> To enable the jump optimized probe with CONFIG_PREEMPT, use
> synchronize_rcu_tasks() to wait for all tasks preempted
> on the trampoline code to get back on track.
>
> Since jump optimized kprobes can replace multiple
> instructions, there can be tasks which are preempted
> on the 2nd (or 3rd) instruction. If the kprobe
> replaces those instructions with a jump instruction,
> when those tasks return to the preempted place, it is
> in the middle of the jump instruction, which causes a
> kernel panic.
> To avoid such tragedies in advance, the kprobe optimizer
> prepares a detour route using a normal kprobe (e.g.
> an int3 breakpoint on x86), and waits for the tasks which
> were interrupted in such a place via synchronize_sched()
> when CONFIG_PREEMPT=n.
> If CONFIG_PREEMPT=y, things become more complicated, because
> such an interrupted thread can be preempted (another thread
> can be scheduled in the interrupt handler.) So the kprobes
> optimizer has to wait for those tasks to be scheduled normally.
> In this case we can use synchronize_rcu_tasks(), which
> ensures that all preempted tasks get back on track and
> are scheduled.
>
> Signed-off-by: Masami Hiramatsu

Acked-by: Paul E. McKenney

> ---
>  arch/Kconfig     |  2 +-
>  kernel/kprobes.c | 23 ++++++++++++++++++++++-
>  2 files changed, 23 insertions(+), 2 deletions(-)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 6c00e5b..2abb8de 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -90,7 +90,7 @@ config STATIC_KEYS_SELFTEST
>  config OPTPROBES
>  	def_bool y
>  	depends on KPROBES && HAVE_OPTPROBES
> -	depends on !PREEMPT
> +	select TASKS_RCU if PREEMPT
>
>  config KPROBES_ON_FTRACE
>  	def_bool y
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 9f60567..6d69074 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -377,6 +377,23 @@ static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
>  static bool kprobes_allow_optimization;
>
>  /*
> + * Synchronizing wait on trampoline code for interrupted tasks/threads.
> + * Since the threads running on dynamically allocated trampoline code
> + * can be interrupted, kprobes has to wait for those tasks to get back
> + * on track and be scheduled. If the kernel is preemptive, the thread
> + * can be preempted by other tasks on the trampoline too. For such a
> + * case, this calls synchronize_rcu_tasks() to wait for those tasks.
> + */
> +static void synchronize_on_trampoline(void)
> +{
> +#ifdef CONFIG_PREEMPT
> +	synchronize_rcu_tasks();
> +#else
> +	synchronize_sched();
> +#endif
> +}
> +
> +/*
>   * Call all pre_handler on the list, but ignores its return value.
>   * This must be called from arch-dep optimized caller.
>   */
> @@ -578,8 +595,12 @@ static void kprobe_optimizer(struct work_struct *work)
>  	 * there is a chance that Nth instruction is interrupted. In that
>  	 * case, running interrupt can return to 2nd-Nth byte of jump
>  	 * instruction. This wait is for avoiding it.
> +	 * With CONFIG_PREEMPT, the interrupts can lead to preemption. To
> +	 * wait for such a thread, we will use synchronize_rcu_tasks(),
> +	 * which ensures all preempted tasks are scheduled normally. So we
> +	 * can ensure there are no threads running there.
>  	 */
> -	synchronize_sched();
> +	synchronize_on_trampoline();
>
>  	/* Step 3: Optimize kprobes after quiesence period */
>  	do_optimize_kprobes();
>