On Tue, Apr 23, 2019 at 9:56 AM Nadav Amit <[email protected]> wrote:
>
>
> On Apr 23, 2019, at 9:50 AM, Andy Lutomirski <[email protected]> wrote:
> >
> > On Tue, Apr 23, 2019 at 12:12 AM Nadav Amit <[email protected]> wrote:
> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/log/?h=x86/fixes
>
> >> Remove the flush_tlb_info variables from the stack. This allows
> >> flush_tlb_info to be cache-line aligned and avoids potentially
> >> unnecessary cache-line movements. It also gives the variables a fixed
> >> virtual-to-physical translation, which reduces TLB misses.
> >>
> >> Use a per-CPU struct for flush_tlb_mm_range() and
> >> flush_tlb_kernel_range(). Add debug assertions to ensure there are
> >> no nested TLB flushes that might overwrite the per-CPU data. For
> >> arch_tlbbatch_flush(), use a const struct.
> >>
> >> Results when running a microbenchmark that performs 10^6 MADV_DONTNEED
> >> operations and touches a page, in which 3 additional threads run a
> >> busy-wait loop (5 runs):
> >
> > Can you add a memset(..., 0, sizeof(struct flush_tlb_info)) everywhere
> > you grab it? Or, even better, perhaps do something like:
> >
> > static inline struct flush_tlb_info *get_flush_tlb_info(void)
> > {
> >	/*
> >	 * Check reentrancy; make sure that we use smp_processor_id() or
> >	 * otherwise assert that we're bound to a single CPU.
> >	 */
> >	struct flush_tlb_info *ptr = this_cpu_ptr(...);
> >
> >	memset(ptr, 0, sizeof(*ptr));
> >	return ptr;
> > }
> >
> > static inline void put_flush_tlb_info(void)
> > {
> >	/* Finish checking reentrancy. */
> > }
>
> I’ll check if the compiler is smart enough to avoid redundant assignments,
> and if it is not, I’ll just pass all the struct arguments to
> get_flush_tlb_info() instead of using memset(), if you don’t mind.
Sounds good.

> I also want to try parallelizing the remote and local invocations, which
> really annoys me every time I look at the code.

Yes please!
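
For reference, below is a sketch of what the get_flush_tlb_info()/put_flush_tlb_info()
helpers discussed above might look like if the struct fields are passed in as
arguments rather than zeroed with memset() first. The argument list, the field
names, and the CONFIG_DEBUG_VM nesting check are illustrative assumptions based
on this thread, not necessarily the code that was eventually merged; callers are
assumed to run with preemption disabled so the per-CPU pointer stays valid.

/* Sketch, in the spirit of arch/x86/mm/tlb.c; details are assumptions. */

#ifdef CONFIG_DEBUG_VM
/* Per-CPU nesting counter, used only for the reentrancy assertion. */
static DEFINE_PER_CPU(unsigned int, flush_tlb_info_idx);
#endif

/* One cache-line-aligned flush_tlb_info per CPU instead of a stack copy. */
static DEFINE_PER_CPU_SHARED_ALIGNED(struct flush_tlb_info, flush_tlb_info);

static inline struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
			unsigned long start, unsigned long end,
			unsigned int stride_shift, bool freed_tables,
			u64 new_tlb_gen)
{
	struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);

#ifdef CONFIG_DEBUG_VM
	/*
	 * Assert that TLB flushes do not nest: a nested flush, e.g. from an
	 * interrupt, would silently overwrite the per-CPU data.
	 */
	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
#endif

	/*
	 * Assign every field explicitly; this replaces the memset() and lets
	 * the caller hand in all of the values in one place.
	 */
	info->mm		= mm;
	info->start		= start;
	info->end		= end;
	info->stride_shift	= stride_shift;
	info->freed_tables	= freed_tables;
	info->new_tlb_gen	= new_tlb_gen;

	return info;
}

static inline void put_flush_tlb_info(void)
{
#ifdef CONFIG_DEBUG_VM
	/* Pair with the increment above to finish the reentrancy check. */
	this_cpu_dec(flush_tlb_info_idx);
#endif
}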

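On the parallelization point, a very rough sketch of the intended ordering:
start the remote shootdown first, flush the local TLB while the IPIs are being
processed, and only then wait for the remote CPUs, instead of doing the two
steps strictly one after the other. flush_tlb_others_send() and
flush_tlb_others_wait() are hypothetical helpers invented here purely to
illustrate splitting the synchronous flush_tlb_others() into a "send" step and
a "wait" step; flush_tlb_func_local() and TLB_LOCAL_MM_SHOOTDOWN mirror names
that exist in arch/x86/mm/tlb.c.

/*
 * Conceptual sketch only; flush_tlb_others_send()/flush_tlb_others_wait()
 * do not exist in the kernel and stand in for an asynchronous split of
 * flush_tlb_others().
 */
static void flush_tlb_mm_range_overlapped(struct flush_tlb_info *info,
					  struct mm_struct *mm)
{
	int cpu = smp_processor_id();
	bool remote = cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids;

	/* Step 1: kick off the remote shootdown, but do not wait yet. */
	if (remote)
		flush_tlb_others_send(mm_cpumask(mm), info);

	/* Step 2: flush the local TLB while the IPIs are in flight. */
	if (cpumask_test_cpu(cpu, mm_cpumask(mm))) {
		local_irq_disable();
		flush_tlb_func_local(info, TLB_LOCAL_MM_SHOOTDOWN);
		local_irq_enable();
	}

	/* Step 3: only now block until all remote CPUs have flushed. */
	if (remote)
		flush_tlb_others_wait();
}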
