On 9/15/23 03:59, Matteo Rizzo wrote:
> +     spin_lock_irqsave(&slub_kworker_lock, irq_flags);
> +     list_splice_init(&slub_tlbflush_queue, &local_queue);
> +     list_for_each_entry(slab, &local_queue, flush_list_elem) {
> +             unsigned long start = (unsigned long)slab_to_virt(slab);
> +             unsigned long end = start + PAGE_SIZE *
> +                     (1UL << oo_order(slab->oo));
> +
> +             if (start < addr_start)
> +                     addr_start = start;
> +             if (end > addr_end)
> +                     addr_end = end;
> +     }
> +     spin_unlock_irqrestore(&slub_kworker_lock, irq_flags);
> +
> +     if (addr_start < addr_end)
> +             flush_tlb_kernel_range(addr_start, addr_end);

I assume that the TLB flushes in the queue are going to be pretty sparse
on average, so the coalesced addr_start/addr_end range will usually span
a big chunk of the virtual slab region anyway.

At least on x86, flush_tlb_kernel_range() falls back pretty quickly from
individual address invalidation to just doing a full flush.  It might
not even be worth tracking the address ranges; just do a full flush
every time.
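
For reference, the check in arch/x86/mm/tlb.c looks roughly like this
(paraphrased from a recent tree, not a verbatim quote -- details vary
by kernel version):

	void flush_tlb_kernel_range(unsigned long start, unsigned long end)
	{
		/*
		 * Anything spanning more than
		 * tlb_single_page_flush_ceiling pages (33 by default)
		 * becomes a full flush on every CPU instead of
		 * per-page invalidation.
		 */
		if (end == TLB_FLUSH_ALL ||
		    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
			on_each_cpu(do_flush_tlb_all, NULL, 1);
		} else {
			struct flush_tlb_info *info;

			preempt_disable();
			info = get_flush_tlb_info(NULL, start, end, 0, false,
						  TLB_GENERATION_INVALID);
			on_each_cpu(do_kernel_range_flush, info, 1);
			put_flush_tlb_info();
			preempt_enable();
		}
	}

Note that the ceiling applies to the coalesced span, not the number of
slabs, so two slabs that are far apart in the virtual region already
blow past it.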

I'd be really curious to see how often actual ranged flushes are
triggered from this code.  I expect it would be very near zero.
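
Something like the below in the worker would answer that (untested
debug hack, counter names made up):

	static atomic_long_t slub_flush_ranged, slub_flush_full;

	if (addr_start < addr_end) {
		/* 33 pages mirrors x86's default flush ceiling */
		if (addr_end - addr_start <= 33 * PAGE_SIZE)
			atomic_long_inc(&slub_flush_ranged);
		else
			atomic_long_inc(&slub_flush_full);
		flush_tlb_kernel_range(addr_start, addr_end);
	}

Dump the counters from a debugfs file or just printk them once in a
while and see if slub_flush_ranged ever moves.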
