Re: [PATCH v7 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

2023-01-09 Thread Barry Song
On Tue, Jan 10, 2023 at 1:19 AM Catalin Marinas  wrote:
>
> On Sun, Jan 08, 2023 at 06:48:41PM +0800, Barry Song wrote:
> > On Fri, Jan 6, 2023 at 2:15 AM Catalin Marinas  
> > wrote:
> > > On Thu, Nov 17, 2022 at 04:26:48PM +0800, Yicong Yang wrote:
> > > > It was tested on 4-, 8- and 128-CPU platforms and shows benefits on
> > > > large systems but may not bring any improvement on small systems such
> > > > as a 4-CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
> > > > on CONFIG_EXPERT for this stage and keep it disabled on systems
> > > > with fewer than 8 CPUs. Users can adjust this threshold for their own
> > > > platforms via CONFIG_NR_CPUS_FOR_BATCHED_TLB.
> > >
> > > What's the overhead of such batching on systems with 4 or fewer CPUs? If
> > > it isn't noticeable, I'd rather have it always on than some number
> > > chosen on whichever SoC you tested.
> >
> > On the one hand, TLB flush is cheap on a small system, so batching TLB
> > flushes helps only marginally.
>
> Yes, it probably won't help on small systems but I don't like config
> options choosing the threshold, which may be different from system to
> system even if they have the same number of CPUs. A run-time tunable
> would be a better option.
>
> > On the other hand, since we have batched the TLB flush, new PTEs might be
> > invisible to other CPUs before the final broadcast is done and acknowledged.
>
> The new PTEs could indeed be invisible at the TLB level but not at the
> memory (page table) level since this is done under the PTL IIUC.
>
> > Thus, there is a risk that someone else does mprotect or something similar
> > on those deferred pages, which requires a read-modify-write on those
> > deferred PTEs.
>
> And this should be fine, we have things like the PTL in place for the
> actual memory access to the page table.
>
> > In this case, mm will do an explicit flush via flush_tlb_batched_pending(),
> > which is not required if the TLB flush is not deferred.
>
> I don't fully understand why it's needed, or at least why it would be
> needed on arm64. At the end of an mprotect(), we have the final PTEs in
> place and we just need to issue a TLBI for that range.
> change_pte_range() for example has a tlb_flush_pte_range() if the PTE
> was present and that won't be done lazily. If there are other TLBIs
> pending for the same range, they'll be done later though likely
> unnecessarily but still cheaper than issuing a flush_tlb_mm().

Thanks! I'd like to ask for some comments from Nadav and Mel from the x86 side.
Revisiting the code of flush_tlb_batched_pending() shows we can still have
races even under the PTL.

/*
 * Reclaim unmaps pages under the PTL but do not flush the TLB prior to
 * releasing the PTL if TLB flushes are batched. It's possible for a parallel
 * operation such as mprotect or munmap to race between reclaim unmapping
 * the page and flushing the page. If this race occurs, it potentially allows
 * access to data via a stale TLB entry. Tracking all mm's that have TLB
 * batching in flight would be expensive during reclaim so instead track
 * whether TLB batching occurred in the past and if so then do a flush here
 * if required. This will cost one additional flush per reclaim cycle paid
 * by the first operation at risk such as mprotect and munmap.
 *
 * This must be called under the PTL so that an access to tlb_flush_batched
 * that is potentially a "reclaim vs mprotect/munmap/etc" race will synchronise
 * via the PTL.
 */
void flush_tlb_batched_pending(struct mm_struct *mm)
{
        int batch = atomic_read(&mm->tlb_flush_batched);
        int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
        int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

        if (pending != flushed) {
                flush_tlb_mm(mm);
                /*
                 * If the new TLB flushing is pending during flushing, leave
                 * mm->tlb_flush_batched as is, to avoid losing flushing.
                 */
                atomic_cmpxchg(&mm->tlb_flush_batched, batch,
                               pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
        }
}

According to Catalin's comment, it seems over-cautious: we can make sure
others see an up-to-date TLB once mprotect and munmap are done, as they do
their own TLB flush, and we can also make sure mprotect sees the updated
"memory" of the PTEs from reclamation, even though the change is not yet
visible at the TLB level.

Hi Mel, Nadav, would you please help clarify the exact sequence of how this race
is going to happen?
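
To make the question concrete, the interleaving I believe the mm/rmap.c
comment is describing is roughly the following (a hand-written sketch, not
kernel code; the call sites named here are the ones I assume are involved):

/*
 * Suspected "reclaim vs mprotect/munmap" window, assuming reclaim defers
 * the flush via set_tlb_ubc_flush_pending() and only calls
 * try_to_unmap_flush() later:
 *
 *   CPU0 (reclaim, try_to_unmap_one)     CPU1 (same mm, mprotect/munmap)
 *   --------------------------------     -------------------------------
 *   lock PTL
 *   clear PTE
 *   set_tlb_ubc_flush_pending(mm, ...)
 *   unlock PTL
 *                                        lock PTL
 *                                        flush_tlb_batched_pending(mm)
 *                                          -> flush_tlb_mm() if a batch is
 *                                             still pending, so nothing can
 *                                             be accessed via a stale TLB
 *                                             entry for the reclaimed page
 *                                        read-modify-write the PTEs
 *                                        unlock PTL, flush its own range
 *   try_to_unmap_flush()
 *     -> the deferred (batched) flush
 */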

>
> > void flush_tlb_batched_pending(struct mm_struct *mm)
> > {
> >         int batch = atomic_read(&mm->tlb_flush_batched);
> >         int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
> >         int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
> >
> >         if (pending != flushed) {
> >                 flush_tlb_mm(mm);
> >                 /*
> >                  * If the new TLB flushing is pending during flushing, leave
> >                  * mm->tlb_flush_batched as is, to avoid losing flushing.
> >                  */
> >                 atomic_cmpxchg(&mm->tlb_flush_batched, batch,
> >                                pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
> >         }
> > }
>
> I guess this works on x86 better as it avoids the IPIs if this flush
> already happened. But on arm64 we already issued the TLBI, we just
> didn't wait for it to complete via a DSB.
>
> > I believe Anshuman has contributed many points on this in those previous
> > discussions.
>
> Yeah, I should re-read the old threads.
>
> --
> Catalin

Thanks
Barry


Re: [PATCH v7 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

2023-01-09 Thread Catalin Marinas
On Sun, Jan 08, 2023 at 06:48:41PM +0800, Barry Song wrote:
> On Fri, Jan 6, 2023 at 2:15 AM Catalin Marinas  
> wrote:
> > On Thu, Nov 17, 2022 at 04:26:48PM +0800, Yicong Yang wrote:
> > > It was tested on 4-, 8- and 128-CPU platforms and shows benefits on
> > > large systems but may not bring any improvement on small systems such as
> > > a 4-CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
> > > on CONFIG_EXPERT for this stage and keep it disabled on systems
> > > with fewer than 8 CPUs. Users can adjust this threshold for their own
> > > platforms via CONFIG_NR_CPUS_FOR_BATCHED_TLB.
> >
> > What's the overhead of such batching on systems with 4 or fewer CPUs? If
> > it isn't noticeable, I'd rather have it always on than some number
> > chosen on whichever SoC you tested.
> 
> On the one hand, TLB flush is cheap on a small system, so batching TLB
> flushes helps only marginally.

Yes, it probably won't help on small systems but I don't like config
options choosing the threshold, which may be different from system to
system even if they have the same number of CPUs. A run-time tunable
would be a better option.

> On the other hand, since we have batched the TLB flush, new PTEs might be
> invisible to other CPUs before the final broadcast is done and acknowledged.

The new PTEs could indeed be invisible at the TLB level but not at the
memory (page table) level since this is done under the PTL IIUC.

> Thus, there is a risk that someone else does mprotect or something similar
> on those deferred pages, which requires a read-modify-write on those
> deferred PTEs.

And this should be fine, we have things like the PTL in place for the
actual memory access to the page table.

> In this case, mm will do an explicit flush via flush_tlb_batched_pending(),
> which is not required if the TLB flush is not deferred.

I don't fully understand why it's needed, or at least why it would be
needed on arm64. At the end of an mprotect(), we have the final PTEs in
place and we just need to issue a TLBI for that range.
change_pte_range() for example has a tlb_flush_pte_range() if the PTE
was present and that won't be done lazily. If there are other TLBIs
pending for the same range, they'll be done later though likely
unnecessarily but still cheaper than issuing a flush_tlb_mm().

> void flush_tlb_batched_pending(struct mm_struct *mm)
> {
>         int batch = atomic_read(&mm->tlb_flush_batched);
>         int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
>         int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>
>         if (pending != flushed) {
>                 flush_tlb_mm(mm);
>                 /*
>                  * If the new TLB flushing is pending during flushing, leave
>                  * mm->tlb_flush_batched as is, to avoid losing flushing.
>                  */
>                 atomic_cmpxchg(&mm->tlb_flush_batched, batch,
>                                pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
>         }
> }

I guess this works on x86 better as it avoids the IPIs if this flush
already happened. But on arm64 we already issued the TLBI, we just
didn't wait for it to complete via a DSB.
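
Roughly, the split this series makes is the following (a sketch based on the
patch description in this thread, not the exact hunks):

/*
 * Sketch only: the per-page step broadcasts the TLBI without waiting, and
 * the batch flush is just the DSB that ptep_clear_flush() would otherwise
 * issue once per page.
 */
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                             struct mm_struct *mm,
                                             unsigned long uaddr)
{
        __flush_tlb_page_nosync(mm, uaddr);     /* TLBI broadcast, no DSB here */
}

static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
        dsb(ish);                               /* one wait for all pending broadcasts */
}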

> I believe Anshuman has contributed many points on this in those previous
> discussions.

Yeah, I should re-read the old threads.

-- 
Catalin


Re: [PATCH v7 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

2023-01-08 Thread Barry Song
On Fri, Jan 6, 2023 at 2:15 AM Catalin Marinas  wrote:
>
> On Thu, Nov 17, 2022 at 04:26:48PM +0800, Yicong Yang wrote:
> > It was tested on 4-, 8- and 128-CPU platforms and shows benefits on
> > large systems but may not bring any improvement on small systems such as
> > a 4-CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
> > on CONFIG_EXPERT for this stage and keep it disabled on systems
> > with fewer than 8 CPUs. Users can adjust this threshold for their own
> > platforms via CONFIG_NR_CPUS_FOR_BATCHED_TLB.
>
> What's the overhead of such batching on systems with 4 or fewer CPUs? If
> it isn't noticeable, I'd rather have it always on than some number
> chosen on whichever SoC you tested.

On the one hand, TLB flush is cheap on a small system, so batching TLB
flushes helps only marginally.

On the other hand, since we have batched the TLB flush, new PTEs might be
invisible to other CPUs before the final broadcast is done and acknowledged.
Thus, there is a risk that someone else does mprotect or something similar
on those deferred pages, which requires a read-modify-write on those deferred
PTEs. In this case, mm will do an explicit flush via flush_tlb_batched_pending(),
which is not required if the TLB flush is not deferred. The code is in:

static unsigned long change_pte_range(struct mmu_gather *tlb,
                struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
                unsigned long end, pgprot_t newprot, unsigned long cp_flags)
{
        ...

        pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);

        flush_tlb_batched_pending(vma->vm_mm);
        arch_enter_lazy_mmu_mode();
        do {
                oldpte = *pte;
                if (pte_present(oldpte)) {
                        pte_t ptent;
        ...
}

Since we don't have a mechanism to record which pages should be flushed,
flush_tlb_batched_pending() flushes the whole process:

void flush_tlb_batched_pending(struct mm_struct *mm)
{
        int batch = atomic_read(&mm->tlb_flush_batched);
        int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
        int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

        if (pending != flushed) {
                flush_tlb_mm(mm);
                /*
                 * If the new TLB flushing is pending during flushing, leave
                 * mm->tlb_flush_batched as is, to avoid losing flushing.
                 */
                atomic_cmpxchg(&mm->tlb_flush_batched, batch,
                               pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
        }
}

I guess mprotect won't happen that often for a running process, especially
once the system has begun to reclaim its memory; it is probably more common
during the initialization of a process. And x86 has had this feature enabled
for a long time, so this concurrency probably doesn't matter too much.

But it is still case by case. That is why we have decided to be more
conservative about enabling this feature globally and why it also depends on
CONFIG_EXPERT.

I believe Anshuman has contributed many points on this in those previous
discussions.

Thanks
Barry


Re: [PATCH v7 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

2023-01-05 Thread Catalin Marinas
On Thu, Nov 17, 2022 at 04:26:48PM +0800, Yicong Yang wrote:
> It was tested on 4-, 8- and 128-CPU platforms and shows benefits on
> large systems but may not bring any improvement on small systems such as
> a 4-CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
> on CONFIG_EXPERT for this stage and keep it disabled on systems
> with fewer than 8 CPUs. Users can adjust this threshold for their own
> platforms via CONFIG_NR_CPUS_FOR_BATCHED_TLB.

What's the overhead of such batching on systems with 4 or fewer CPUs? If
it isn't noticeable, I'd rather have it always on than some number
chosen on whichever SoC you tested.

Another option would be to make this a sysctl tunable.
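
Purely as an illustration of that suggestion, a runtime threshold could look
something like the sketch below; every name here (batched_tlb_min_cpus, the
"vm" sysctl path) is hypothetical and not part of this series:

/* Illustrative only: a runtime knob instead of CONFIG_NR_CPUS_FOR_BATCHED_TLB. */
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/sysctl.h>

static unsigned int batched_tlb_min_cpus = 8;

static struct ctl_table batched_tlb_sysctls[] = {
        {
                .procname       = "batched_tlb_min_cpus",
                .data           = &batched_tlb_min_cpus,
                .maxlen         = sizeof(unsigned int),
                .mode           = 0644,
                .proc_handler   = proc_douintvec,
        },
        {}
};

static int __init batched_tlb_sysctl_init(void)
{
        register_sysctl("vm", batched_tlb_sysctls);
        return 0;
}
late_initcall(batched_tlb_sysctl_init);

/* arch_tlbbatch_should_defer() would then compare against the knob: */
static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
        return num_online_cpus() >= READ_ONCE(batched_tlb_min_cpus);
}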

>  .../features/vm/TLB/arch-support.txt |  2 +-
>  arch/arm64/Kconfig                   |  6 +++
>  arch/arm64/include/asm/tlbbatch.h    | 12 +
>  arch/arm64/include/asm/tlbflush.h    | 52 ++-
>  arch/x86/include/asm/tlbflush.h      |  5 +-
>  include/linux/mm_types_task.h        |  4 +-
>  mm/rmap.c                            | 10 ++--

Please keep any function prototype changes in a preparatory patch so
that the arm64 one only introduces the arch specific changes. Easier to
review.

> +static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> +{
> +        /*
> +         * TLB batched flush is proven to be beneficial for systems with a
> +         * large number of CPUs, especially systems with more than 8 CPUs.
> +         * TLB shootdown is cheap on small systems which may not need this
> +         * feature, so use a threshold for enabling this to avoid potential
> +         * side effects on these platforms.
> +         */
> +        if (num_online_cpus() < CONFIG_ARM64_NR_CPUS_FOR_BATCHED_TLB)
> +                return false;

The x86 implementation tracks the cpumask of where a task has run. We
don't have such tracking on arm64 and I don't think it matters. As
noticed/described in this series, the bottleneck is the actual DSB
synchronisation (which sends a DVM Sync message to all the other CPUs
and waits for a DVM Complete response). So I think it makes sense not to
bother with an mm_cpumask(). What this patch aims to optimise is
actually the number of DSBs issued on an SMP system by
ptep_clear_flush().
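
For context, this is roughly what every ptep_clear_flush() pays today on
arm64 (a simplified sketch of the existing flush_tlb_page() path; the
comments are mine):

/*
 * The TLBI itself is broadcast by the hardware; the expensive part is the
 * trailing dsb(ish), which waits for every other CPU to acknowledge it
 * (DVM Sync -> DVM Complete).
 */
static inline void flush_tlb_page(struct vm_area_struct *vma,
                                  unsigned long uaddr)
{
        flush_tlb_page_nosync(vma, uaddr);      /* dsb(ishst) + TLBI broadcast */
        dsb(ish);                               /* wait for completion everywhere */
}

What the batching buys is doing only the nosync part per page and a single
dsb(ish) per batch.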

The DVM is not an architected concept (well, it's part of AMBA AXI). I'd
be curious to know how such a patch behaves on Apple's M1/M2 hardware. My
preference would be to have this always on for num_online_cpus() > 1 if
there's no overhead.

-- 
Catalin


Re: [PATCH v7 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

2022-11-23 Thread Anshuman Khandual



On 11/17/22 13:56, Yicong Yang wrote:
> From: Barry Song 
> 
> On x86, batched and deferred TLB shootdown has led to a 90%
> performance increase on TLB shootdown. On arm64, HW can do
> TLB shootdown without software IPIs, but synchronising the TLBI
> is still quite expensive.
> 
> Even running the simplest program which requires swapout can
> prove this is true:
>  #include <sys/mman.h>
>  #include <string.h>
>  #include <stdlib.h>
>  #include <unistd.h>
> 
>  int main()
>  {
>  #define SIZE (1 * 1024 * 1024)
>          volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>                                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
> 
>          memset(p, 0x88, SIZE);
> 
>          for (int k = 0; k < 10000; k++) {
>                  /* swap in */
>                  for (int i = 0; i < SIZE; i += 4096) {
>                          (void)p[i];
>                  }
> 
>                  /* swap out */
>                  madvise(p, SIZE, MADV_PAGEOUT);
>          }
>  }
> 
> Perf result on snapdragon 888 with 8 cores by using zRAM
> as the swap block device.
> 
>  ~ # perf record taskset -c 4 ./a.out
>  [ perf record: Woken up 10 times to write data ]
>  [ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
>  ~ # perf report
>  # To display the perf.data header info, please use --header/--header-only 
> options.
>  # To display the perf.data header info, please use --header/--header-only 
> options.
>  #
>  #
>  # Total Lost Samples: 0
>  #
>  # Samples: 60K of event 'cycles'
>  # Event count (approx.): 35706225414
>  #
>  # Overhead  Command  Shared Object  Symbol
>  # ........  .......  .................  ..............................
>  #
> 21.07%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irq
>  8.23%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
>  6.67%  a.out  [kernel.kallsyms]  [k] filemap_map_pages
>  6.16%  a.out  [kernel.kallsyms]  [k] __zram_bvec_write
>  5.36%  a.out  [kernel.kallsyms]  [k] ptep_clear_flush
>  3.71%  a.out  [kernel.kallsyms]  [k] _raw_spin_lock
>  3.49%  a.out  [kernel.kallsyms]  [k] memset64
>  1.63%  a.out  [kernel.kallsyms]  [k] clear_page
>  1.42%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock
>  1.26%  a.out  [kernel.kallsyms]  [k] mod_zone_state.llvm.8525150236079521930
>  1.23%  a.out  [kernel.kallsyms]  [k] xas_load
>  1.15%  a.out  [kernel.kallsyms]  [k] zram_slot_lock
> 
> ptep_clear_flush() takes 5.36% CPU in the micro-benchmark
> swapping in/out a page mapped by only one process. If the
> page is mapped by multiple processes, typically, like more
> than 100 on a phone, the overhead would be much higher as
> we have to run tlb flush 100 times for one single page.
> Plus, tlb flush overhead will increase with the number
> of CPU cores due to the bad scalability of tlb shootdown
> in HW, so those ARM64 servers should expect much higher
> overhead.
> 
> Further perf annotate shows that 95% of the CPU time of ptep_clear_flush()
> is actually spent in the final dsb() waiting for the completion of the TLB
> flush. This gives us a very good opportunity to leverage the existing
> batched TLB infrastructure in the kernel. The minimal modification is that
> we only issue the asynchronous TLBI in the first stage and issue the DSB
> when we have to synchronize in the second stage.
> 
> With the above simple micro benchmark, the elapsed time to finish the
> program decreases by around 5%.
> 
> Typical elapsed time w/o patch:
>  ~ # time taskset -c 4 ./a.out
>  0.21user 14.34system 0:14.69elapsed
> w/ patch:
>  ~ # time taskset -c 4 ./a.out
>  0.22user 13.45system 0:13.80elapsed
> 
> Also, Yicong Yang added the following observation.
>   Tested with benchmark in the commit on Kunpeng920 arm64 server,
>   observed an improvement around 12.5% with command
>   `time ./swap_bench`.
>           w/o          w/
>   real    0m13.460s    0m11.771s
>   user    0m0.248s     0m0.279s
>   sys     0m12.039s    0m11.458s
> 
>   Originally a 16.99% overhead of ptep_clear_flush() was noticed, which
>   has been eliminated by this patch:
> 
>   [root@localhost yang]# perf record -- ./swap_bench && perf report
>   [...]
>   16.99%  swap_bench  [kernel.kallsyms]  [k] ptep_clear_flush
> 
> It was tested on 4-, 8- and 128-CPU platforms and shows benefits on
> large systems but may not bring any improvement on small systems such as
> a 4-CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
> on CONFIG_EXPERT for this stage and keep it disabled on systems
> with fewer than 8 CPUs. Users can adjust this threshold for their own
> platforms via CONFIG_NR_CPUS_FOR_BATCHED_TLB.
> 
> This patch extends arch_tlbbatch_add_mm() to take an address of the
> target page to support the feature on arm64. Also rename it to
> arch_tlbbatch_add_pending() to better match its function since we
> don't need to handle the mm on arm64 and add_mm is not proper.
> 

[PATCH v7 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

2022-11-17 Thread Yicong Yang
From: Barry Song 

On x86, batched and deferred TLB shootdown has led to a 90%
performance increase on TLB shootdown. On arm64, HW can do
TLB shootdown without software IPIs, but synchronising the TLBI
is still quite expensive.

Even running the simplest program which requires swapout can
prove this is true:
 #include <sys/mman.h>
 #include <string.h>
 #include <stdlib.h>
 #include <unistd.h>

 int main()
 {
 #define SIZE (1 * 1024 * 1024)
         volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);

         memset(p, 0x88, SIZE);

         for (int k = 0; k < 10000; k++) {
                 /* swap in */
                 for (int i = 0; i < SIZE; i += 4096) {
                         (void)p[i];
                 }

                 /* swap out */
                 madvise(p, SIZE, MADV_PAGEOUT);
         }
 }

Perf result on snapdragon 888 with 8 cores by using zRAM
as the swap block device.

 ~ # perf record taskset -c 4 ./a.out
 [ perf record: Woken up 10 times to write data ]
 [ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
 ~ # perf report
 # To display the perf.data header info, please use --header/--header-only 
options.
 # To display the perf.data header info, please use --header/--header-only 
options.
 #
 #
 # Total Lost Samples: 0
 #
 # Samples: 60K of event 'cycles'
 # Event count (approx.): 35706225414
 #
 # Overhead  Command  Shared Object  Symbol
 # ........  .......  .................  ..............................
 #
21.07%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irq
 8.23%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
 6.67%  a.out  [kernel.kallsyms]  [k] filemap_map_pages
 6.16%  a.out  [kernel.kallsyms]  [k] __zram_bvec_write
 5.36%  a.out  [kernel.kallsyms]  [k] ptep_clear_flush
 3.71%  a.out  [kernel.kallsyms]  [k] _raw_spin_lock
 3.49%  a.out  [kernel.kallsyms]  [k] memset64
 1.63%  a.out  [kernel.kallsyms]  [k] clear_page
 1.42%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock
 1.26%  a.out  [kernel.kallsyms]  [k] mod_zone_state.llvm.8525150236079521930
 1.23%  a.out  [kernel.kallsyms]  [k] xas_load
 1.15%  a.out  [kernel.kallsyms]  [k] zram_slot_lock

ptep_clear_flush() takes 5.36% CPU in the micro-benchmark
swapping in/out a page mapped by only one process. If the
page is mapped by multiple processes, typically, like more
than 100 on a phone, the overhead would be much higher as
we have to run tlb flush 100 times for one single page.
Plus, tlb flush overhead will increase with the number
of CPU cores due to the bad scalability of tlb shootdown
in HW, so those ARM64 servers should expect much higher
overhead.

Further perf annotate shows that 95% of the CPU time of ptep_clear_flush()
is actually spent in the final dsb() waiting for the completion of the TLB
flush. This gives us a very good opportunity to leverage the existing
batched TLB infrastructure in the kernel. The minimal modification is that
we only issue the asynchronous TLBI in the first stage and issue the DSB
when we have to synchronize in the second stage.

With the above simple micro benchmark, the elapsed time to finish the
program decreases by around 5%.

Typical elapsed time w/o patch:
 ~ # time taskset -c 4 ./a.out
 0.21user 14.34system 0:14.69elapsed
w/ patch:
 ~ # time taskset -c 4 ./a.out
 0.22user 13.45system 0:13.80elapsed

Also, Yicong Yang added the following observation.
Tested with benchmark in the commit on Kunpeng920 arm64 server,
observed an improvement around 12.5% with command
`time ./swap_bench`.
        w/o          w/
real    0m13.460s    0m11.771s
user    0m0.248s     0m0.279s
sys     0m12.039s    0m11.458s

Originally a 16.99% overhead of ptep_clear_flush() was noticed, which
has been eliminated by this patch:

[root@localhost yang]# perf record -- ./swap_bench && perf report
[...]
16.99%  swap_bench  [kernel.kallsyms]  [k] ptep_clear_flush

It was tested on 4-, 8- and 128-CPU platforms and shows benefits on
large systems but may not bring any improvement on small systems such as
a 4-CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
on CONFIG_EXPERT for this stage and keep it disabled on systems
with fewer than 8 CPUs. Users can adjust this threshold for their own
platforms via CONFIG_NR_CPUS_FOR_BATCHED_TLB.

This patch extends arch_tlbbatch_add_mm() to take the address of the
target page to support the feature on arm64. It also renames it to
arch_tlbbatch_add_pending() to better match its function, since we
don't need to handle the mm on arm64 and add_mm is no longer a fitting
name. add_pending makes sense for both architectures: on x86 we're
pending the TLB flush operations, while on arm64 we're pending the
synchronisation operations.
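
For clarity, the generic callers in mm/rmap.c then end up along these lines
(a simplified sketch: the mm->tlb_flush_batched bookkeeping is omitted and
the exact signatures may differ from this patch):

/* Reclaim side: queue the per-page invalidation instead of flushing now. */
static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable,
                                      unsigned long uaddr)
{
        struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;

        arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
        tlb_ubc->flush_required = true;
        if (writable)
                tlb_ubc->writable = true;
        /* mm->tlb_flush_batched pending/flushed accounting omitted */
}

/* Called once per reclaim batch before the pages are freed. */
void try_to_unmap_flush(void)
{
        struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;

        if (!tlb_ubc->flush_required)
                return;

        arch_tlbbatch_flush(&tlb_ubc->arch);    /* x86: IPIs; arm64: one dsb(ish) */
        tlb_ubc->flush_required = false;
        tlb_ubc->writable = false;
}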

Cc: Anshuman Khandual 
Cc: Jonathan Corbet 
Cc: Nadav Amit 
Cc: Mel Gorman 
Tested-by: Yicong Yang 
Tested-by: Xin Hao