On 14/09/20 21:42, Thomas Gleixner wrote:
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Cleanup the leftovers before doing so.
>
> Signed-off-by: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Juri Lelli <[email protected]>
> Cc: Vincent Guittot <[email protected]>
> Cc: Dietmar Eggemann <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: Ben Segall <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Daniel Bristot de Oliveira <[email protected]>
Small nit below;

Reviewed-by: Valentin Schneider <[email protected]>

> ---
>  kernel/sched/core.c | 6 +-----
>  lib/Kconfig.debug   | 1 -
>  2 files changed, 1 insertion(+), 6 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3706,8 +3706,7 @@ asmlinkage __visible void schedule_tail(
>   * finish_task_switch() for details.
>   *
>   * finish_task_switch() will drop rq->lock() and lower preempt_count
> - * and the preempt_enable() will end up enabling preemption (on
> - * PREEMPT_COUNT kernels).

I suppose this wanted to be s/PREEMPT_COUNT/PREEMPT/ in the first place,
and that version ought to still be relevant, so the parenthesis could stay.

> + * and the preempt_enable() will end up enabling preemption.
>   */
>
>  	rq = finish_task_switch(prev);
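To spell out the counting that comment describes: a new task comes out of
the context switch with preempt_count == FORK_PREEMPT_COUNT (2, per
kernel/sched/core.c), finish_task_switch() drops rq->lock which takes it
to 1, and the trailing preempt_enable() in schedule_tail() takes it to 0,
re-enabling preemption. A minimal userspace sketch of that lifecycle
(a toy model only, not kernel code; the names merely mirror the kernel
helpers):

	#include <assert.h>
	#include <stdio.h>

	/*
	 * Toy model of the preempt_count lifecycle of a newly forked
	 * task. Standalone illustration, not the kernel implementation.
	 */
	#define FORK_PREEMPT_COUNT	2	/* matches kernel/sched/core.c */

	static int preempt_count;

	static void preempt_enable(void)
	{
		assert(preempt_count > 0);
		if (--preempt_count == 0)
			printf("count hit 0: preemption enabled, may reschedule\n");
	}

	/* Dropping rq->lock lowers preempt_count by one. */
	static void finish_task_switch(void)
	{
		preempt_enable();
	}

	int main(void)
	{
		/* A new task starts out with FORK_PREEMPT_COUNT. */
		preempt_count = FORK_PREEMPT_COUNT;

		/* schedule_tail(): */
		finish_task_switch();	/* 2 -> 1, rq->lock dropped  */
		preempt_enable();	/* 1 -> 0, preemption back on */

		return 0;
	}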

