Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in
pte_mkwrite()") modified pte_mkwrite_novma() to clear PTE_RDONLY only
when the page is already dirty (PTE_DIRTY set). While this optimization
prevents unnecessary dirty marking in normal memory management paths,
it breaks kexec on some platforms, such as the NXP LS1043.
The issue occurs in the kexec code path:
1. machine_kexec_post_load() calls trans_pgd_create_copy() to create a
writable copy of the linear mapping
2. _copy_pte() calls pte_mkwrite_novma() to make every page in the copy
   writable, so the new kernel image can be copied through it
3. With the new logic, clean pages (without PTE_DIRTY) remain read-only
4. When kexec tries to copy the new kernel image through the linear
mapping, it fails on read-only pages, causing the system to hang
after "Bye!"
The same issue affects hibernation, which uses the same trans_pgd code path.
Fix this by marking pages dirty with pte_mkdirty() in _copy_pte(), which
ensures pte_mkwrite_novma() clears PTE_RDONLY for both kexec and
hibernation, making all pages in the temporary mapping writable regardless
of their dirty state. This preserves the original commit's optimization
for normal memory management while fixing the kexec/hibernation regression.
Using pte_mkdirty() performs a redundant PTE_RDONLY clear when the page
is already writable, but this is acceptable: _copy_pte() is not a hot
path and only runs in kexec/hibernation scenarios.
Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()")
Signed-off-by: Jianpeng Chang <[email protected]>
---
arch/arm64/mm/trans_pgd.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 0f7b484cb2ff..baab46252fc3 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -40,8 +40,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 		 * Resume will overwrite areas that may be marked
 		 * read only (code, rodata). Clear the RDONLY bit from
 		 * the temporary mappings we use during restore.
+		 *
+		 * For both kexec and hibernation, writable accesses are required
+		 * for all pages in the linear map to copy over new kernel image.
+		 * Hence mark these pages dirty first via pte_mkdirty() to ensure
+		 * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing
+		 * required write access for the pages.
 		 */
-		__set_pte(dst_ptep, pte_mkwrite_novma(pte));
+		__set_pte(dst_ptep, pte_mkwrite_novma(pte_mkdirty(pte)));
 	} else if (!pte_none(pte)) {
 		/*
 		 * debug_pagealloc will removed the PTE_VALID bit if
@@ -57,7 +63,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 		 */
 		BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-		__set_pte(dst_ptep, pte_mkpresent(pte_mkwrite_novma(pte)));
+		/*
+		 * For both kexec and hibernation, writable accesses are required
+		 * for all pages in the linear map to copy over new kernel image.
+		 * Hence mark these pages dirty first via pte_mkdirty() to ensure
+		 * pte_mkwrite_novma() subsequently clears PTE_RDONLY - providing
+		 * required write access for the pages.
+		 */
+		__set_pte(dst_ptep, pte_mkpresent(pte_mkwrite_novma(pte_mkdirty(pte))));
 	}
 }
--
2.52.0