On Thu, Oct 09, 2014 at 06:57:13PM +0200, Oleg Nesterov wrote:
> > Your earlier proposal would penalize every
> > !x86 arch by adding extra code to the scheduler core while they already
> > automagically preserve their thread_info::preempt_count.
>
> Sure, and it can't be even compiled on !x86.
>
> But this is simple, just we need a new helper, preempt_count_restore(),
> defined as nop in asm-generic/preempt.h. Well, perhaps another helper
> makes sense, preempt_count_raw() which simply reads the counter, but
> this is minor.
>
> After the patch below we can remove ->saved_preempt_count. Including
> init_task_preempt_count(), it is no longer needed after the change in
> schedule_tail().

Ah, right, this makes more sense.
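Something like this, I suppose; a quick sketch of the helpers you describe
(untested, and the x86 body is just my guess at the intent):

/* asm-generic/preempt.h: the count lives in thread_info and follows
 * the task across switch_to(), so there is nothing to restore. */
static __always_inline int preempt_count_raw(void)
{
	return current_thread_info()->preempt_count;
}

static __always_inline void preempt_count_restore(int pc)
{
}

/* arch/x86/include/asm/preempt.h: the count is per-CPU and does not
 * follow the task, so the incoming task's value has to be written
 * back after the stack switch. */
static __always_inline void preempt_count_restore(int pc)
{
	preempt_count_set(pc);
}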
> @@ -2333,10 +2336,12 @@ context_switch(struct rq *rq, struct task_struct *prev,
>  #endif
>
>  	context_tracking_task_switch(prev, next);
> +
> +	pc = preempt_count();

The only problem here is that you can lose PREEMPT_NEED_RESCHED; I
haven't thought about whether that is a problem here or not.

>  	/* Here we just switch the register state and the stack. */
>  	switch_to(prev, next, prev);
> -
>  	barrier();
> +	preempt_count_restore(pc);
>  	/*
>  	 * this_rq must be evaluated again because prev may have moved
>  	 * CPUs since it called schedule(), thus the 'rq' on its stack
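
To spell out the PREEMPT_NEED_RESCHED worry: on x86 the need-resched bit
is folded into the per-CPU count (inverted, so that a decrement hitting
zero means we can and should reschedule), and preempt_count() masks it
out. From memory, roughly:

/* arch/x86/include/asm/preempt.h */
#define PREEMPT_NEED_RESCHED	0x80000000

static __always_inline int preempt_count(void)
{
	/* the folded need-resched bit is not part of the reported count */
	return raw_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
}

static __always_inline void preempt_count_set(int pc)
{
	/* a raw write clobbers whatever the need-resched bit was */
	raw_cpu_write_4(__preempt_count, pc);
}

So if preempt_count_restore() is built on preempt_count_set(), a masked
read before switch_to() followed by a raw write afterwards drops whatever
happened to the folded bit in between; an unmasked preempt_count_raw()
would at least carry the bit as it was at read time, though it could
still go stale before the write-back.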

