On 03/11/2025 07:57, Aneesh Kumar K.V wrote:
"Roy, Patrick" <[email protected]> writes:

....

+static int kvm_gmem_folio_zap_direct_map(struct folio *folio)
+{
+        if (kvm_gmem_folio_no_direct_map(folio))
+                return 0;
+
+        int r = set_direct_map_valid_noflush(folio_page(folio, 0),
+                                             folio_nr_pages(folio), false);
+
+        if (!r) {
+                unsigned long addr = (unsigned long)folio_address(folio);
+
+                folio->private = (void *)((u64)folio->private |
+                                          KVM_GMEM_FOLIO_NO_DIRECT_MAP);
+                flush_tlb_kernel_range(addr, addr + folio_size(folio));
+        }
+
+        return r;
+}

These 'noflush' functions actually end up doing a flush_tlb_kernel_range()
on arm64; the caller hierarchy is:

flush_tlb_kernel_range
  ← __change_memory_common
    ← set_memory_valid
      ← set_direct_map_valid_noflush
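
For reference, a condensed look at the arm64 side (simplified from
arch/arm64/mm/pageattr.c; the exact code may differ between kernel
versions):

/* Condensed sketch of arch/arm64/mm/pageattr.c: */
static int __change_memory_common(unsigned long start, unsigned long size,
                                  pgprot_t set_mask, pgprot_t clear_mask)
{
        struct page_change_data data = {
                .set_mask   = set_mask,
                .clear_mask = clear_mask,
        };
        int ret;

        ret = apply_to_page_range(&init_mm, start, size,
                                  change_page_range, &data);

        /* Unconditional flush, even for the *_noflush entry points. */
        flush_tlb_kernel_range(start, start + size);

        return ret;
}

So set_direct_map_valid_noflush() → set_memory_valid() →
__change_memory_common() flushes once already, and the caller above then
flushes a second time.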

Hi Aneesh,

Thanks for pointing that out. I ran internal tests, and the second
flush_tlb_kernel_range() call does add latency comparable to the first
one, even though it should intuitively be a no-op. I have to admit I am
not aware of a safe way to avoid the second flush on arm64 while keeping
the guest_memfd code arch-agnostic; perhaps I should seek Will's counsel
on it. Functionally, though, I don't think there is a concern.
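
To illustrate the kind of arch-agnostic escape hatch I have in mind
(untested sketch only; arch_direct_map_zap_flushes_tlb() is a made-up
name, not an existing helper):

/*
 * Hypothetical: arch_direct_map_zap_flushes_tlb() would let an
 * architecture advertise that set_direct_map_valid_noflush() already
 * flushed the TLB (as arm64 does via __change_memory_common()), so
 * guest_memfd could skip its own flush.
 */
static int kvm_gmem_folio_zap_direct_map(struct folio *folio)
{
        if (kvm_gmem_folio_no_direct_map(folio))
                return 0;

        int r = set_direct_map_valid_noflush(folio_page(folio, 0),
                                             folio_nr_pages(folio), false);

        if (!r) {
                unsigned long addr = (unsigned long)folio_address(folio);

                folio->private = (void *)((u64)folio->private |
                                          KVM_GMEM_FOLIO_NO_DIRECT_MAP);

                /* Skip the flush if the arch already did one for us. */
                if (!arch_direct_map_zap_flushes_tlb())
                        flush_tlb_kernel_range(addr, addr + folio_size(folio));
        }

        return r;
}

Whether something along those lines is acceptable is exactly the part I
would want an arm64 maintainer's opinion on.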

Nikita



