On Mon, Dec 4, 2017 at 6:07 AM, Thomas Gleixner <t...@linutronix.de> wrote:
> From: Dave Hansen <dave.han...@linux.intel.com>
>
> When the page tables are changed in such a way that an invalidation of all
> contexts (aka PCIDs / ASIDs) is required, they can be actively invalidated
> by one of:
>
>  1. INVPCID for each PCID (works for single pages too).
>
>  2. Load CR3 with each PCID without the NOFLUSH bit set.
>
>  3. Load CR3 with the NOFLUSH bit set for each and do INVLPG for each address.
>
> But none of these is really feasible since there are ~6 ASIDs (12 with
> KERNEL_PAGE_TABLE_ISOLATION) in use at the time that invalidation is required.
> Instead of actively invalidating them, invalidate the *current* context and
> also mark the cpu_tlbstate _quickly_ to indicate that a future invalidation
> is required.
>
> At the next context-switch, look for this indicator
> ('invalidate_other' being set) and invalidate all of the
> cpu_tlbstate.ctxs[] entries.
>
> This ensures that any future context switches will do a full flush
> of the TLB, picking up the previous changes.
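
Side note for anyone reading along: the tlb.c half of the patch is not
quoted in this mail, but per the description above it presumably boils
down to something like the sketch below at context-switch time.  The
helper name is illustrative, not necessarily what the patch uses:

        static void clear_asid_other(void)
        {
                u16 asid;

                for (asid = 0; asid < TLB_NR_DYN_ASIDS; asid++) {
                        /* The loaded ASID was kept up to date; skip it. */
                        if (asid == this_cpu_read(cpu_tlbstate.loaded_mm_asid))
                                continue;
                        /*
                         * ctx_id 0 never matches a live mm, so the next
                         * switch to this slot misses the ASID cache and
                         * does a full flush.
                         */
                        this_cpu_write(cpu_tlbstate.ctxs[asid].ctx_id, 0);
                }
                this_cpu_write(cpu_tlbstate.invalidate_other, false);
        }
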
>
> Signed-off-by: Dave Hansen <dave.han...@linux.intel.com>
> Signed-off-by: Ingo Molnar <mi...@kernel.org>
> Signed-off-by: Thomas Gleixner <t...@linutronix.de>
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Cc: Rik van Riel <r...@redhat.com>
> Cc: Denys Vlasenko <dvlas...@redhat.com>
> Cc: Andy Lutomirski <l...@kernel.org>
> Cc: michael.schw...@iaik.tugraz.at
> Cc: daniel.gr...@iaik.tugraz.at
> Cc: Brian Gerst <brge...@gmail.com>
> Cc: Josh Poimboeuf <jpoim...@redhat.com>
> Cc: hu...@google.com
> Cc: Borislav Petkov <b...@alien8.de>
> Cc: moritz.l...@iaik.tugraz.at
> Cc: keesc...@google.com
> Cc: Linus Torvalds <torva...@linux-foundation.org>
> Cc: richard.fell...@student.tugraz.at
> Link: https://lkml.kernel.org/r/20171123003507.e8c32...@viggo.jf.intel.com
>
> ---
>  arch/x86/include/asm/tlbflush.h |   42 ++++++++++++++++++++++++++++++----------
>  arch/x86/mm/tlb.c               |   37 +++++++++++++++++++++++++++++++++++
>  2 files changed, 69 insertions(+), 10 deletions(-)
>
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -188,6 +188,17 @@ struct tlb_state {
>         bool is_lazy;
>
>         /*
> +        * If set we changed the page tables in such a way that we
> +        * needed an invalidation of all contexts (aka. PCIDs / ASIDs).
> +        * This tells us to go invalidate all the non-loaded ctxs[]
> +        * on the next context switch.
> +        *
> +        * The current ctx was kept up-to-date as it ran and does not
> +        * need to be invalidated.
> +        */
> +       bool invalidate_other;
> +
> +       /*
>          * Access to this CR4 shadow and to H/W CR4 is protected by
>          * disabling interrupts when modifying either one.
>          */
> @@ -267,6 +278,19 @@ static inline unsigned long cr4_read_sha
>         return this_cpu_read(cpu_tlbstate.cr4);
>  }
>
> +static inline void invalidate_pcid_other(void)
> +{
> +       /*
> +        * With global pages, all of the shared kernel page tables
> +        * are set as _PAGE_GLOBAL.  We have no shared nonglobals
> +        * and nothing to do here.
> +        */
> +       if (!static_cpu_has_bug(X86_BUG_CPU_SECURE_MODE_KPTI))
> +               return;

I think I'd be more comfortable if this check were in the caller, not
here.  Shouldn't a function called invalidate_pcid_other() do what the
name says?
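
That is, something like the sketch below, with the KPTI check hoisted
into the caller (assuming the only caller is the __flush_tlb_one() hunk
at the bottom of this patch):

        static inline void invalidate_pcid_other(void)
        {
                /* Unconditional: does exactly what the name says. */
                this_cpu_write(cpu_tlbstate.invalidate_other, true);
        }

and at the call site:

        /*
         * Without KPTI the shared kernel mappings are all _PAGE_GLOBAL
         * and nothing is cached under other ASIDs, so there is nothing
         * to invalidate.
         */
        if (static_cpu_has_bug(X86_BUG_CPU_SECURE_MODE_KPTI))
                invalidate_pcid_other();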

> +
> +       this_cpu_write(cpu_tlbstate.invalidate_other, true);

Why do we need this extra variable instead of just looping over all
other ASIDs and invalidating them?  It would be something like:

        u16 i;

        for (i = 0; i < TLB_NR_DYN_ASIDS; i++) {
                if (i != this_cpu_read(cpu_tlbstate.loaded_mm_asid))
                        this_cpu_write(cpu_tlbstate.ctxs[i].ctx_id, 0);
        }

modulo epic whitespace damage and possible typos.

> +}
> +
>  /*
>   * Save some of cr4 feature set we're using (e.g.  Pentium 4MB
>   * enable and PPro Global page enable), so that any CPU's that boot
> @@ -341,24 +365,22 @@ static inline void __native_flush_tlb_si
>
>  static inline void __flush_tlb_all(void)
>  {
> -       if (boot_cpu_has(X86_FEATURE_PGE))
> +       if (boot_cpu_has(X86_FEATURE_PGE)) {
>                 __flush_tlb_global();
> -       else
> +       } else {
>                 __flush_tlb();
> -
> -       /*
> -        * Note: if we somehow had PCID but not PGE, then this wouldn't work --
> -        * we'd end up flushing kernel translations for the current ASID but
> -        * we might fail to flush kernel translations for other cached ASIDs.
> -        *
> -        * To avoid this issue, we force PCID off if PGE is off.
> -        */
> +       }
>  }
>
>  static inline void __flush_tlb_one(unsigned long addr)
>  {
>         count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
>         __flush_tlb_single(addr);
> +       /*
> +        * Invalidate other address spaces inaccessible to single-page
> +        * invalidation:
> +        */

Ugh.  If I'm reading this right, __flush_tlb_single() means "flush one
user address" and __flush_tlb_one() means "flush one kernel address".
That's, um, not exactly obvious.  Could this be at least commented
better?
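
Even a comment pair along these lines above the definition would help
(my wording, and assuming the trimmed hunk above ends by calling
invalidate_pcid_other()):

        /*
         * Flush one *kernel* address.  Not to be confused with
         * __flush_tlb_single(), which flushes one *user* address in the
         * current context only.
         */
        static inline void __flush_tlb_one(unsigned long addr)
        {
                count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
                __flush_tlb_single(addr);
                /*
                 * The single-page flush above only touched the current
                 * ASID; kernel mappings may also be cached under other
                 * ASIDs, so mark those for a full flush at the next
                 * context switch:
                 */
                invalidate_pcid_other();
        }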

--Andy
