On 5/27/2017 9:56 AM, Andy Lutomirski wrote:
On Sat, May 27, 2017 at 9:00 AM, Andy Lutomirski <l...@kernel.org> wrote:
On Sat, May 27, 2017 at 6:31 AM, kernel test robot
<xiaolong...@intel.com> wrote:

FYI, we noticed the following commit:

commit: e2a7dcce31f10bd7471b4245a6d1f2de344e7adf ("x86/mm: Rework lazy TLB to track the actual loaded mm")
https://git.kernel.org/cgit/linux/kernel/git/luto/linux.git x86/tlbflush_cleanup

Ugh, there's an unpleasant interaction between this patch and
intel_idle.  I suspect that the intel_idle code in question is either
wrong or pointless, but I want to investigate further.  Ingo, can you
hold off on applying this patch?

I think this is what's going on: intel_idle has an optimization that
sometimes calls leave_mm().  This is a rather expensive way of working
around x86 Linux's fairly weak lazy mm handling.  It also abuses the
whole switch_mm state machine.  In particular, there's no guarantee
that the mm is actually lazy at the time.  The old code didn't care,
but the new code can oops.

The short-term fix is to just reorder the code in leave_mm() to avoid the OOPS.
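
Very roughly, the idea is something like this (a sketch only, not the
actual patch; cpu_tlbstate.loaded_mm is the field added by the rework,
and the exact ordering and checks here are my reconstruction):

void leave_mm(int cpu)
{
	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);

	/* Already on the kernel-only init_mm: nothing to unload. */
	if (loaded_mm == &init_mm)
		return;

	/*
	 * Callers like intel_idle don't guarantee the mm is actually
	 * lazy here, so warn instead of assuming TLBSTATE_LAZY.
	 */
	WARN_ON_ONCE(this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK);

	switch_mm(NULL, &init_mm, NULL);
}

i.e. do the cheap "nothing to do" check before anything that relies on
the old state machine's invariants.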

fwiw, the reason the code is in intel_idle is to avoid TLB flush IPIs to
idle CPUs once the CPU goes into a deep enough idle state.  In the
current Linux code, that is done by no longer keeping the old mm's TLB
entries live on the CPU: the CPU switches to the neutral, kernel-only
page tables.
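
For context, the hook in intel_idle looks roughly like this (paraphrased
from memory, details approximate):

	/*
	 * States deep enough to flush the TLB anyway are marked
	 * CPUIDLE_FLAG_TLB_FLUSHED, so switching to init_mm costs
	 * nothing extra and takes this CPU out of the mm's cpumask;
	 * later TLB shootdown IPIs then skip it instead of waking it.
	 */
	if (state->flags & CPUIDLE_FLAG_TLB_FLUSHED)
		leave_mm(smp_processor_id());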

If your proposed changes do that (avoid the IPI/wakeup), great!
(if not, there should be a way to do that)
