On 3/6/26 16:41, Nikita Kalyazin wrote:
> 
> 
> On 06/03/2026 15:17, David Hildenbrand (Arm) wrote:
>> On 3/6/26 15:48, Nikita Kalyazin wrote:
>>>
>>>
>>>
>>> Yeah, that's unfortunately the status quo as pointed by Aneesh [1]
>>>
>>> [1] https://lore.kernel.org/kvm/[email protected]/
>>>
>>>
>>> Yes, looks like that.  I'll remove the explicit flush and rely on
>>> folio_zap_direct_map().
>>>
>>>
>>> I believe Dave meant to address that with folio_{zap,restore}
>>> _direct_map() [2].
>>>
>>> [2] https://lore.kernel.org/kvm/9409531b-589b-4a54-
>>> [email protected]/
>>>
>>>
>>> I'd be keen to hear what the arch maintainers think, because I don't
>>> have a strong opinion on that.
>>
>> You could also just perform a double flush for now, and let the people
>> who implemented a _noflush() variant optimize away the extra flush later.
> 
> Do you propose to just universalise the one from x86?
> 
> int folio_zap_direct_map(struct folio *folio)
> {
>     unsigned long addr = (unsigned long)folio_address(folio);
>     int ret;
> 
>     ret = set_direct_map_valid_noflush(folio_page(folio, 0),
>                        folio_nr_pages(folio), false);
>     flush_tlb_kernel_range(addr, addr + folio_size(folio));
> 
>     return ret;
> }

Yes, exactly something along those lines!
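
[Editor's note: for symmetry, the restore side mentioned earlier in the
thread (folio_restore_direct_map()) would presumably mirror the zap
sketch above. The following is a hedged sketch, not from the thread;
it assumes the same set_direct_map_valid_noflush() and
flush_tlb_kernel_range() interfaces, and keeps the flush for safety
even though re-validating previously-invalid PTEs may not strictly
require one on all architectures.]

```c
/* Hypothetical counterpart to the zap sketch above; kernel context. */
int folio_restore_direct_map(struct folio *folio)
{
	unsigned long addr = (unsigned long)folio_address(folio);
	int ret;

	/* Re-validate the direct-map entries without flushing... */
	ret = set_direct_map_valid_noflush(folio_page(folio, 0),
					   folio_nr_pages(folio), true);
	/* ...then flush the kernel TLB range covering the folio. */
	flush_tlb_kernel_range(addr, addr + folio_size(folio));

	return ret;
}
```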
-- 
Cheers,

David
