----- On Oct 7, 2020, at 11:07 AM, Peter Zijlstra pet...@infradead.org wrote:
> On Thu, Sep 24, 2020 at 01:25:07PM -0400, Mathieu Desnoyers wrote:
>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 2d95dc3f4644..bab6f4f2809f 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -3736,6 +3736,8 @@ context_switch(struct rq *rq, struct task_struct *prev,
>>  	 */
>>  	arch_start_context_switch(prev);
>>
>> +	membarrier_switch_mm(rq, prev->mm, next->mm);
>> +
>>  	/*
>>  	 * kernel -> kernel   lazy + transfer active
>>  	 *   user -> kernel   lazy + mmgrab() active
>> @@ -3752,7 +3754,6 @@ context_switch(struct rq *rq, struct task_struct *prev,
>>  		else
>>  			prev->active_mm = NULL;
>>  	} else {                                        // to user
>> -		membarrier_switch_mm(rq, prev->active_mm, next->mm);
>>  		/*
>>  		 * sys_membarrier() requires an smp_mb() between setting
>>  		 * rq->curr / membarrier_switch_mm() and returning to userspace.
>
> I was thinking... do we need the above, when:
>
>> diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
>> index 8bc8b8a888b7..e5246580201b 100644
>> --- a/kernel/sched/membarrier.c
>> +++ b/kernel/sched/membarrier.c
>> @@ -112,13 +112,9 @@ static int membarrier_global_expedited(void)
>>  			   MEMBARRIER_STATE_GLOBAL_EXPEDITED))
>>  			continue;
>>
>> -		/*
>> -		 * Skip the CPU if it runs a kernel thread. The scheduler
>> -		 * leaves the prior task mm in place as an optimization when
>> -		 * scheduling a kthread.
>> -		 */
>> +		/* Skip the CPU if it runs the idle thread. */
>>  		p = rcu_dereference(cpu_rq(cpu)->curr);
>> -		if (p->flags & PF_KTHREAD)
>
> We retain this in the form:
>
> 	if ((p->flags & PF_KTHREAD) && !p->mm)
> 		continue;
>
>> +		if (is_idle_task(p))
>> 			continue;
>>
>>  		__cpumask_set_cpu(cpu, tmpmask);
>
> Specifically, we only care about kthreads when they're between
> kthread_use_mm() / kthread_unuse_mm(), and in that case they will have
> updated state already.
>
> It's too late in the day to be sure about the memory ordering though;
> but if we see !->mm, they'll do/have-done switch_mm(), which implies
> sufficient barriers.
>
> Hmm?

Interesting. There are two things we want to ensure here:

1) That we issue an IPI, or have the kthread issue the proper barriers
   itself, when a kthread is using/unusing a mm.

2) That we don't issue an IPI to kthreads with a NULL mm, so we don't
   disturb them.

Moving membarrier_switch_mm() so it also covers the kthread cases was
meant to ensure (2), but if we add a NULL p->mm check to the global
expedited iteration instead, I think we would be OK leaving the
runqueue's membarrier state stale while in lazy tlb mode.

As far as (1) is concerned, I think your idea would work, because, as
you point out, kthread use/unuse mm already provides the proper
barriers.

I just wonder whether keeping the membarrier state stale for lazy tlb
is warranted performance-wise, given the complexity it adds: the rq
membarrier state would then simply not be relevant while we are in
lazy tlb mode.

Thoughts?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
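
[For readers following the thread: below is a minimal sketch of the
membarrier_global_expedited() iteration with Peter's suggested check
folded in, combining the quoted diff context with his retained
PF_KTHREAD test. It illustrates the proposal under discussion, not
necessarily what was eventually merged; the cpumask allocation and the
current-CPU skip that precede this loop in kernel/sched/membarrier.c
are elided.]

	for_each_online_cpu(cpu) {
		struct task_struct *p;

		if (!(READ_ONCE(cpu_rq(cpu)->membarrier_state) &
		    MEMBARRIER_STATE_GLOBAL_EXPEDITED))
			continue;

		p = rcu_dereference(cpu_rq(cpu)->curr);

		/*
		 * Skip kthreads only when they carry no mm: a kthread
		 * between kthread_use_mm() and kthread_unuse_mm() can
		 * perform user-visible accesses and must get the IPI.
		 */
		if ((p->flags & PF_KTHREAD) && !p->mm)
			continue;

		/* The idle thread never touches user mappings; skip it. */
		if (is_idle_task(p))
			continue;

		__cpumask_set_cpu(cpu, tmpmask);
	}

Together, the two checks restrict the IPI to CPUs whose current task
can actually execute user-visible memory accesses, which is exactly the
set sys_membarrier() needs to fence.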
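[Peter's ordering argument rests on the shape of kthread_use_mm(): the
->mm store and the switch_mm() that orders it happen together under
task_lock(), before the kthread can issue any accesses on behalf of
that mm. A simplified sketch follows, loosely based on kernel/kthread.c
of that era; irq disabling around the mm switch and the set_fs()
handling are elided, so treat this as an approximation rather than the
exact upstream code.]

	void kthread_use_mm(struct mm_struct *mm)
	{
		struct mm_struct *active_mm;
		struct task_struct *tsk = current;

		WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD));
		WARN_ON_ONCE(tsk->mm);

		task_lock(tsk);
		active_mm = tsk->active_mm;
		if (active_mm != mm) {
			mmgrab(mm);
			tsk->active_mm = mm;
		}
		/* Publish the mm: the expedited loop tests p->mm. */
		tsk->mm = mm;
		/*
		 * Switch page tables. Peter's point: if the expedited
		 * iteration sees !p->mm, the kthread will do (or has
		 * done) a switch_mm() here or in kthread_unuse_mm(),
		 * whose implied barriers provide the ordering that
		 * sys_membarrier() relies on.
		 */
		switch_mm(active_mm, mm, tsk);
		task_unlock(tsk);

		if (active_mm != mm)
			mmdrop(active_mm);
	}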