Put the zeropage in the read-only data section - nothing should ever change
its contents. Set up a new section .rodata..page_aligned to mirror the
existing .data..page_aligned and .bss..page_aligned sections.

There have been several security bugs where the kernel grabs references to
pages from some userspace-specified source, via GUP or splice, with
read-only semantics; and then later on, the kernel loses track of the
pages' read-only semantics and writes into them.
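
For illustration, the rough shape of such a bug (in a made-up driver, not
any specific upstream instance) looks something like this:

	struct page *page;

	/* pin without FOLL_WRITE, i.e. with read-only semantics */
	if (get_user_pages_fast(user_addr, 1, 0, &page) != 1)
		return -EFAULT;
	[...]
	/* buggy: we only hold the page for reading, but write into it */
	memcpy(page_address(page), data, len);
	put_page(page);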

I have seen such bugs in out-of-tree GPU drivers before, and recently
upstream Linux bugs of this shape have been discovered as well.

One problem with these bugs is that fuzzers and such will have a hard time
noticing them, because the kernel has no mechanism to directly detect that
such a bug has occurred. It would be nice if we had debug infrastructure to
keep track of whether file pages are supposed to be writable, or such; but
for now, the easiest way to make these bugs detectable in at least some
cases is to make sure that the 4K zeropage is mapped as read-only in the
kernel, so that attempting to write into it immediately crashes (unless the
write happens through a vmap mapping or similar).

This patch might increase the size of vmlinux by 4K, since .rodata is
stored in the ELF file while .bss is not; but the compressed kernel image
size shouldn't change much, since a page of zeroes compresses down to
almost nothing.

I have tested that with this patch applied, calling
`get_user_pages_fast(address, 1, 0, &page)` on a freshly-created anonymous
VMA and writing into the page with
`*(volatile char *)page_address(page) = 0` will cause an oops.
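
A rough sketch of that test, run from an ad-hoc debugging hook (not part of
this patch); creating the VMA via vm_mmap() is just one way to get a fresh
anonymous mapping:

	unsigned long address;
	struct page *page;

	/* fresh anonymous VMA that has never been written to, so a
	 * read-only GUP pins the shared zeropage */
	address = vm_mmap(NULL, 0, PAGE_SIZE, PROT_READ,
			  MAP_PRIVATE | MAP_ANONYMOUS, 0);

	if (get_user_pages_fast(address, 1, 0, &page) == 1) {
		/* with this patch applied, this write immediately oopses */
		*(volatile char *)page_address(page) = 0;
		put_page(page);
	}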

Signed-off-by: Jann Horn <[email protected]>
---
 include/asm-generic/vmlinux.lds.h | 1 +
 include/linux/linkage.h           | 1 +
 mm/mm_init.c                      | 2 +-
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 60c8c22fd3e4..e6e96bce506f 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -479,6 +479,7 @@
        . = ALIGN((align));                                             \
        .rodata           : AT(ADDR(.rodata) - LOAD_OFFSET) {           \
                __start_rodata = .;                                     \
+               *(.rodata..page_aligned)                                \
                *(.rodata) *(.rodata.*) *(.data.rel.ro*)                \
                SCHED_DATA                                              \
                RO_AFTER_INIT_DATA      /* Read only after init */      \
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index b11660b706c5..49997b292c01 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -38,6 +38,7 @@
 
 #define __page_aligned_data    __section(".data..page_aligned") __aligned(PAGE_SIZE)
 #define __page_aligned_bss     __section(".bss..page_aligned") __aligned(PAGE_SIZE)
+#define __page_aligned_rodata  __section(".rodata..page_aligned") __aligned(PAGE_SIZE)
 
 /*
  * For assembly routines.
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f9f8e1af921c..67b260acc27e 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -57,7 +57,7 @@ unsigned long zero_page_pfn __ro_after_init;
 EXPORT_SYMBOL(zero_page_pfn);
 
 #ifndef __HAVE_COLOR_ZERO_PAGE
-uint8_t empty_zero_page[PAGE_SIZE] __page_aligned_bss;
+uint8_t empty_zero_page[PAGE_SIZE] __page_aligned_rodata;
 EXPORT_SYMBOL(empty_zero_page);
 
 struct page *__zero_page __ro_after_init;

---
base-commit: 917719c412c48687d4a176965d1fa35320ec457c
change-id: 20260508-ro-zeropage-86fb842965ae

--  
Jann Horn <[email protected]>