> On Jan 31, 2021, at 2:07 AM, Damian Tometzki <li...@tometzki.de> wrote:
> 
> On Sat, 30. Jan 16:11, Nadav Amit wrote:
>> From: Nadav Amit <na...@vmware.com>
>> 
>> Introduce tlb_start_ptes() and tlb_end_ptes() which would be called
>> before and after PTEs are updated and TLB flushes are deferred. This
>> will later be used for fine-granularity deferred TLB flushing
>> detection.
>> 
>> In the meantime, move flush_tlb_batched_pending() into
>> tlb_start_ptes(). It was not called from mapping_dirty_helpers by
>> wp_pte() and clean_record_pte(), which might be a bug.
>> 
>> No additional functional change is intended.
>> 
>> Signed-off-by: Nadav Amit <na...@vmware.com>
>> Cc: Andrea Arcangeli <aarca...@redhat.com>
>> Cc: Andrew Morton <a...@linux-foundation.org>
>> Cc: Andy Lutomirski <l...@kernel.org>
>> Cc: Dave Hansen <dave.han...@linux.intel.com>
>> Cc: Peter Zijlstra <pet...@infradead.org>
>> Cc: Thomas Gleixner <t...@linutronix.de>
>> Cc: Will Deacon <w...@kernel.org>
>> Cc: Yu Zhao <yuz...@google.com>
>> Cc: Nick Piggin <npig...@gmail.com>
>> Cc: x...@kernel.org
>> ---
>> fs/proc/task_mmu.c         |  2 ++
>> include/asm-generic/tlb.h  | 18 ++++++++++++++++++
>> mm/madvise.c               |  6 ++++--
>> mm/mapping_dirty_helpers.c | 15 +++++++++++++--
>> mm/memory.c                |  2 ++
>> mm/mprotect.c              |  3 ++-
>> 6 files changed, 41 insertions(+), 5 deletions(-)
>> 
>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>> index 4cd048ffa0f6..d0cce961fa5c 100644
>> --- a/fs/proc/task_mmu.c
>> +++ b/fs/proc/task_mmu.c
>> @@ -1168,6 +1168,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
>>              return 0;
>> 
>>      pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>> +    tlb_start_ptes(&cp->tlb);
>>      for (; addr != end; pte++, addr += PAGE_SIZE) {
>>              ptent = *pte;
>> 
>> @@ -1190,6 +1191,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
>>              tlb_flush_pte_range(&cp->tlb, addr, PAGE_SIZE);
>>              ClearPageReferenced(page);
>>      }
>> +    tlb_end_ptes(&cp->tlb);
>>      pte_unmap_unlock(pte - 1, ptl);
>>      cond_resched();
>>      return 0;
>> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
>> index 041be2ef4426..10690763090a 100644
>> --- a/include/asm-generic/tlb.h
>> +++ b/include/asm-generic/tlb.h
>> @@ -58,6 +58,11 @@
>>  *    Defaults to flushing at tlb_end_vma() to reset the range; helps when
>>  *    there's large holes between the VMAs.
>>  *
>> + *  - tlb_start_ptes() / tlb_end_ptes; makr the start / end of PTEs change.
> 
> Hello Nadav,
> 
> a short nit: makr/mark

Thanks! I will fix it.