On 18.07.25 14:43, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 10:03:44PM +0200, David Hildenbrand wrote:
On 17.07.25 21:55, Lorenzo Stoakes wrote:
On Thu, Jul 17, 2025 at 08:51:51PM +0100, Lorenzo Stoakes wrote:
@@ -721,37 +772,21 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
return NULL;
}
-
- if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
- if (vma->vm_flags & VM_MIXEDMAP) {
- if (!pfn_valid(pfn))
- return NULL;
- goto out;
- } else {
- unsigned long off;
- off = (addr - vma->vm_start) >> PAGE_SHIFT;
- if (pfn == vma->vm_pgoff + off)
- return NULL;
- if (!is_cow_mapping(vma->vm_flags))
- return NULL;
- }
- }
-
- if (is_huge_zero_pfn(pfn))
- return NULL;
- if (unlikely(pfn > highest_memmap_pfn)) {
- print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
- return NULL;
- }
-
- /*
- * NOTE! We still have PageReserved() pages in the page tables.
- * eg. VDSO mappings can cause them to exist.
- */
-out:
- return pfn_to_page(pfn);
+ return vm_normal_page_pfn(vma, addr, pfn, pmd_val(pmd));
Hmm, this seems broken: you're now making these special on arches with
pte_special(), right? But then you're invoking the not-special function?
Also for non-pte_special() arches you're kind of implying they _maybe_ could be
special.
OK sorry the diff caught me out here, you explicitly handle the pmd_special()
case here, duplicatively (yuck).
Maybe you fix this up in a later patch :)
I had that, but the conditions depend on the level, meaning there would be
unnecessary checks at the pte/pmd/pud levels.
I had a variant where I would pass "bool special" into vm_normal_page_pfn(),
but I didn't like it.
To optimize these out, I would have to provide a "level" argument, and I have
not yet convinced myself that that is a good idea at this point.
Yeah fair enough. That probably isn't worth it or might end up making things
even more ugly.
So, I decided to add the level arguments, but not use them to optimize the
checks, only to forward them to the new print_bad_pte().
So the new helper will be
/**
* __vm_normal_page() - Get the "struct page" associated with a page table entry.
* @vma: The VMA mapping the page table entry.
* @addr: The address where the page table entry is mapped.
* @pfn: The PFN stored in the page table entry.
* @special: Whether the page table entry is marked "special".
* @entry: The page table entry value for error reporting purposes only.
* @level: The page table level for error reporting purposes only.
...
*/
static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
unsigned long addr, unsigned long pfn, bool special,
unsigned long long entry, enum pgtable_level level)
...
And vm_normal_page() will, for example, be
/**
* vm_normal_page() - Get the "struct page" associated with a PTE
* @vma: The VMA mapping the @pte.
* @addr: The address where the @pte is mapped.
* @pte: The PTE.
*
* Get the "struct page" associated with a PTE. See __vm_normal_page()
* for details on "normal" and "special" mappings.
*
* Return: The "struct page" if this is a "normal" mapping, or NULL if this
* is a "special" mapping.
*/
struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
pte_t pte)
{
return __vm_normal_page(vma, addr, pte_pfn(pte), pte_special(pte),
pte_val(pte), PGTABLE_LEVEL_PTE);
}
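To make the "level for error reporting only" idea concrete, here is a small
userspace sketch of the dispatch pattern. All of the names below
(normal_page_pfn(), print_bad_entry(), HIGHEST_MEMMAP_PFN, the recorded error
string) are illustrative stand-ins for the kernel helpers under discussion,
not the real API; the point is only that @level is threaded through solely to
improve the diagnostic, never to branch the checks.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the proposed enum pgtable_level. */
enum pgtable_level {
	PGTABLE_LEVEL_PTE,
	PGTABLE_LEVEL_PMD,
	PGTABLE_LEVEL_PUD,
};

static const char *const level_name[] = {
	[PGTABLE_LEVEL_PTE] = "PTE",
	[PGTABLE_LEVEL_PMD] = "PMD",
	[PGTABLE_LEVEL_PUD] = "PUD",
};

#define HIGHEST_MEMMAP_PFN 0xffffUL	/* arbitrary limit for the sketch */

static char last_error[64];

/* Stand-in for print_bad_page_map(): records which level misbehaved. */
static void print_bad_entry(unsigned long long entry, enum pgtable_level level)
{
	snprintf(last_error, sizeof(last_error),
		 "bad %s entry: %#llx", level_name[level], entry);
}

/*
 * Shared core: @level is forwarded for error reporting only, mirroring
 * the decision not to use it to optimize the per-level checks.
 * Returns 1 for a "normal" mapping, 0 for a "special"/bad one.
 */
static int normal_page_pfn(unsigned long pfn, bool special,
			   unsigned long long entry, enum pgtable_level level)
{
	if (special)
		return 0;	/* "special" mapping: no struct page */
	if (pfn > HIGHEST_MEMMAP_PFN) {
		print_bad_entry(entry, level);
		return 0;
	}
	return 1;
}
```

The per-level wrappers (the PTE/PMD analogues of vm_normal_page()) would then
just pass their own PGTABLE_LEVEL_* constant, so a bad entry is reported with
the correct level without any level-dependent branching in the shared core.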
--
Cheers,
David / dhildenb