On Mon, Jan 15, 2024 at 09:34PM +0100, Marco Elver wrote:
> On Mon, 15 Jan 2024 at 19:44, Alexander Potapenko <[email protected]> wrote:
> >
> > Cc: "Paul E. McKenney" <[email protected]>
> > Cc: Marco Elver <[email protected]>
> > Cc: Dmitry Vyukov <[email protected]>
> > Cc: [email protected]
> > Cc: Ilya Leoshkevich <[email protected]>
> > Cc: Nicholas Miehlbradt <[email protected]>
> >
> > Hi folks,
> >
> > (adding KMSAN reviewers and IBM people who are currently porting KMSAN to
> > other architectures, plus Paul for his opinion on refactoring RCU)
> >
> > this patch broke x86 KMSAN in a subtle way.
> >
> > For every memory access in the code instrumented by KMSAN we call
> > kmsan_get_metadata() to obtain the metadata for the memory being accessed.
> > For virtual memory the metadata pointers are stored in the corresponding
> > `struct page`, therefore we need to call virt_to_page() to get them.
> >
> > According to the comment in arch/x86/include/asm/page.h, virt_to_page(kaddr)
> > returns a valid pointer iff virt_addr_valid(kaddr) is true, so KMSAN needs
> > to call virt_addr_valid() as well.
> >
> > To avoid recursion, kmsan_get_metadata() must not call instrumented code,
> > therefore ./arch/x86/include/asm/kmsan.h forks parts of
> > arch/x86/mm/physaddr.c to check whether a virtual address is valid or not.
> >
> > But the introduction of rcu_read_lock() to pfn_valid() added instrumented
> > RCU API calls to virt_to_page_or_null(), which is called by
> > kmsan_get_metadata(), so there is an infinite recursion now. I do not think
> > it is correct to stop that recursion by doing
> > kmsan_enter_runtime()/kmsan_exit_runtime() in kmsan_get_metadata(): that
> > would prevent instrumented functions called from within the runtime from
> > tracking the shadow values, which might introduce false positives.
> >
> > I am currently looking into inlining __rcu_read_lock()/__rcu_read_unlock()
> > into KMSAN code to prevent it from being instrumented, but that might
> > require factoring out parts of kernel/rcu/tree_plugin.h into a non-private
> > header. Do you think this is feasible?
> 
> __rcu_read_lock/unlock() is only outlined in PREEMPT_RCU. Not sure that helps.
> 
> Otherwise, there is rcu_read_lock_sched_notrace() which does the bare
> minimum and is static inline.
> 
> Does that help?
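
To recap the cycle Alexander describes above, the problem is roughly the
following (heavily simplified sketch, not the actual mm/kmsan code; the only
point is where the instrumented RCU call sneaks back in):

/*
 * Simplified sketch of the KMSAN metadata lookup on x86; the real code
 * lives in mm/kmsan/ and arch/x86/include/asm/kmsan.h.
 */
static void *kmsan_get_metadata_sketch(void *addr)
{
	struct page *page;

	/*
	 * virt_to_page_or_null() needs a virt_addr_valid()-style check,
	 * whose x86 implementation ends up in pfn_valid(). With
	 * rcu_read_lock()/rcu_read_unlock() in pfn_valid(), the
	 * instrumented RCU helpers call back into kmsan_get_metadata()
	 * -> infinite recursion.
	 */
	page = virt_to_page_or_null(addr);
	if (!page)
		return NULL;

	/* ... look up the shadow/origin metadata attached to @page ... */
	return NULL;
}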

Hrm, rcu_read_unlock_sched_notrace() can still call
__preempt_schedule_notrace(), which is again instrumented by KMSAN.
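
For reference, on a preemptible kernel the _notrace variants boil down to
roughly the following (paraphrased from include/linux/preempt.h and
include/linux/rcupdate.h; the exact definitions depend on the preemption
config), so the unlock side can still end up in the instrumented scheduler
path:

#define preempt_enable_notrace() \
do { \
	barrier(); \
	if (unlikely(__preempt_count_dec_and_test())) \
		__preempt_schedule_notrace(); \
} while (0)

static inline void rcu_read_lock_sched_notrace(void)
{
	preempt_disable_notrace();
}

static inline void rcu_read_unlock_sched_notrace(void)
{
	/* may reschedule, i.e. call __preempt_schedule_notrace() */
	preempt_enable_notrace();
}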

This patch gets me a working kernel:

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4ed33b127821..2d62df462d88 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2000,6 +2000,7 @@ static inline int pfn_valid(unsigned long pfn)
 {
        struct mem_section *ms;
        int ret;
+       unsigned long flags;
 
        /*
         * Ensure the upper PAGE_SHIFT bits are clear in the
@@ -2013,9 +2014,9 @@ static inline int pfn_valid(unsigned long pfn)
        if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                return 0;
        ms = __pfn_to_section(pfn);
-       rcu_read_lock();
+       local_irq_save(flags);
        if (!valid_section(ms)) {
-               rcu_read_unlock();
+               local_irq_restore(flags);
                return 0;
        }
        /*
@@ -2023,7 +2024,7 @@ static inline int pfn_valid(unsigned long pfn)
         * the entire section-sized span.
         */
        ret = early_section(ms) || pfn_section_valid(ms, pfn);
-       rcu_read_unlock();
+       local_irq_restore(flags);
 
        return ret;
 }

Disabling interrupts is a little heavy-handed, and it also bakes in an
assumption about the current RCU implementation (namely that an
interrupts-off region counts as an RCU read-side critical section). There is
preempt_enable_no_resched_notrace() as an alternative to
preempt_enable_notrace(), but that might be worse, because it breaks
scheduling guarantees.
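
For context, the no_resched variant just drops the preempt count without
checking for a pending reschedule, roughly (again paraphrased from
include/linux/preempt.h):

#define preempt_enable_no_resched_notrace() \
do { \
	barrier(); \
	__preempt_count_dec(); \
} while (0)

i.e. a preemption point is silently lost, which is why it breaks the
scheduling guarantees mentioned above.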

That being said, whatever we do here should be wrapped in some
rcu_read_lock/unlock_<newvariant>() helper.

Is there an existing helper we can use? If not, we need a variant that
can be used from extremely constrained contexts that can't even call
into the scheduler. And if we want pfn_valid() to switch to it, it also
should be fast.
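
Purely to make the shape of such a <newvariant> concrete, a helper wrapping
the interrupt-disabling workaround above could look like the sketch below;
the _noinstr names are made up here and are not an existing RCU API:

/*
 * Hypothetical sketch only: wraps the local_irq_save() approach from the
 * patch above so that callers like pfn_valid() do not open-code it and the
 * RCU-implementation assumption stays in one place.
 */
static inline unsigned long rcu_read_lock_noinstr(void)
{
	unsigned long flags;

	/* Disabling interrupts implies an RCU-sched read-side section. */
	local_irq_save(flags);
	return flags;
}

static inline void rcu_read_unlock_noinstr(unsigned long flags)
{
	local_irq_restore(flags);
}

pfn_valid() would then enter and leave the section via these helpers instead
of calling local_irq_save()/local_irq_restore() directly.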

Thanks,
-- Marco
