On Mon, Jul 29, 2019 at 09:45:23AM +0200, Michal Hocko wrote:
> On Mon 29-07-19 16:10:37, Minchan Kim wrote:
> > In our testing (camera recording), Miguel and Wei found unmap_page_range
> > easily takes more than 6ms with preemption disabled. When I looked into
> > it, the reason is that it holds the page table spinlock for the entire
> > 512-page operation in a PMD. 6.2ms is never trivial for user experience
> > if an RT task couldn't run in that time, because it could cause frame
> > drops or audio glitches.
>
> Where is the time spent during the tear down? 512 pages doesn't sound
> like a lot to tear down. Is it the TLB flushing?
Miguel confirmed there is no such big latency without mark_page_accessed
in zap_pte_range, so I guess it's contention on the LRU lock as well as
the heavy activate_page overhead, which is not trivial either.

> > This patch adds a preemption point like copy_pte_range.
> >
> > Reported-by: Miguel de Dios <[email protected]>
> > Reported-by: Wei Wang <[email protected]>
> > Cc: Michal Hocko <[email protected]>
> > Cc: Johannes Weiner <[email protected]>
> > Cc: Mel Gorman <[email protected]>
> > Signed-off-by: Minchan Kim <[email protected]>
> > ---
> >  mm/memory.c | 19 ++++++++++++++++---
> >  1 file changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 2e796372927fd..bc3e0c5e4f89b 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1007,6 +1007,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> >  			struct zap_details *details)
> >  {
> >  	struct mm_struct *mm = tlb->mm;
> > +	int progress = 0;
> >  	int force_flush = 0;
> >  	int rss[NR_MM_COUNTERS];
> >  	spinlock_t *ptl;
> > @@ -1022,7 +1023,16 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
> >  	flush_tlb_batched_pending(mm);
> >  	arch_enter_lazy_mmu_mode();
> >  	do {
> > -		pte_t ptent = *pte;
> > +		pte_t ptent;
> > +
> > +		if (progress >= 32) {
> > +			progress = 0;
> > +			if (need_resched())
> > +				break;
> > +		}
> > +		progress += 8;
>
> Why 8?

Just copied from copy_pte_range.
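
For reference, this is roughly the loop in copy_pte_range that the counting
scheme was copied from (paraphrased from mm/memory.c of that era, so the
exact lines may differ): an empty pte only bumps progress by 1, a real copy
bumps it by 8, and the lock-break check fires once progress reaches 32.

	do {
		/*
		 * We are holding two locks at this point - either of them
		 * could generate latencies in another task on another CPU.
		 */
		if (progress >= 32) {
			progress = 0;
			if (need_resched() ||
			    spin_needbreak(src_ptl) || spin_needbreak(dst_ptl))
				break;
		}
		if (pte_none(*src_pte)) {
			progress++;	/* empty entries are cheap */
			continue;
		}
		entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
					 vma, addr, rss);
		if (entry.val)
			break;
		progress += 8;		/* a real copy counts as 8 units of work */
	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);

With the unconditional progress += 8 in this patch, that works out to a
need_resched() check roughly every four ptes.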

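Going back to the question above about where the time goes: the path I mean
is roughly the following (paraphrasing mm/memory.c and mm/swap.c from around
this time, so treat it as a sketch rather than the exact code).
zap_pte_range calls mark_page_accessed() for young, file-backed ptes while
the page table spinlock is held, and activate_page() eventually drains its
per-CPU pagevec under the node's lru_lock:

	/* zap_pte_range(), running under the page table spinlock */
	if (!PageAnon(page)) {
		if (pte_dirty(ptent)) {
			force_flush = 1;
			set_page_dirty(page);
		}
		if (pte_young(ptent) &&
		    likely(!(vma->vm_flags & VM_SEQ_READ)))
			mark_page_accessed(page);
	}

	/*
	 * mark_page_accessed()
	 *   -> activate_page()           adds the page to a per-CPU pagevec
	 *     -> pagevec_lru_move_fn()   when the pagevec fills up
	 *       -> spin_lock_irqsave(&pgdat->lru_lock, ...)
	 */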
