> On May 12, 2026, at 6:16 PM, Boqun Feng <[email protected]> wrote:
> 
> On Tue, May 12, 2026 at 03:22:39PM -0400, Joel Fernandes wrote:
>> 
>> 
>>> On 5/12/2026 12:30 PM, Steven Rostedt wrote:
>>> On Thu,  7 May 2026 21:21:02 -0700
>>> Boqun Feng <[email protected]> wrote:
>>> 
>>>> From: Joel Fernandes <[email protected]>
>>>> 
>>>> Move NMI nesting tracking from the preempt_count bits to a separate
>>>> per-CPU counter (nmi_nesting). This is to free up the NMI bits in the
>>>> preempt_count, allowing those bits to be repurposed for other uses.
>>>> This also has the benefit of tracking more than 16 levels deep if
>>>> there is ever a need.
>>>> 
>>>> Reduce NMI_BITS in preempt_count from 3 to 1, using the remaining bit
>>>> only to detect whether we're in an NMI.
>>>> 
>>>> Suggested-by: Boqun Feng <[email protected]>
>>>> Signed-off-by: Joel Fernandes <[email protected]>
>>>> Signed-off-by: Lyude Paul <[email protected]>
>>>> Signed-off-by: Boqun Feng <[email protected]>
>>>> Link: https://patch.msgid.link/[email protected]
>>>> ---
>>>> include/linux/hardirq.h | 16 ++++++++++++----
>>>> include/linux/preempt.h | 13 +++++++++----
>>>> kernel/softirq.c        |  2 ++
>>>> 3 files changed, 23 insertions(+), 8 deletions(-)
>>>> 
>>>> diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
>>>> index d57cab4d4c06..cc06bda52c3e 100644
>>>> --- a/include/linux/hardirq.h
>>>> +++ b/include/linux/hardirq.h
>>>> @@ -10,6 +10,8 @@
>>>> #include <linux/vtime.h>
>>>> #include <asm/hardirq.h>
>>>> 
>>>> +DECLARE_PER_CPU(unsigned int, nmi_nesting);
>>>> +
>>>> extern void synchronize_irq(unsigned int irq);
>>>> extern bool synchronize_hardirq(unsigned int irq);
>>>> 
>>>> @@ -102,14 +104,16 @@ void irq_exit_rcu(void);
>>>>  */
>>>> 
>>>> /*
>>>> - * nmi_enter() can nest up to 15 times; see NMI_BITS.
>>>> + * nmi_enter() can nest - nesting is tracked in a per-CPU counter.
>>>>  */
>>>> #define __nmi_enter()                        \
>>>>    do {                            \
>>>>        lockdep_off();                    \
>>>>        arch_nmi_enter();                \
>>>> -        BUG_ON(in_nmi() == NMI_MASK);            \
>>>> -        __preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET);    \
>>>> +        BUG_ON(__this_cpu_read(nmi_nesting) == UINT_MAX);    \
>>> 
>>> I think we should keep the max nesting fixed to 15. If this doesn't trigger
>>> until UINT_MAX, it may take a long time to see that, and there's no reason
>>> NMIs should nest more than 15 anyway.
>>> 
>>> Just because the counter allows it doesn't mean the system should allow it.
>> 
>> That's fine with me. Boqun, do you want to make the one-line change to the 
>> patch?
>> 
> 
> Something like this on top of your patch?
> 
> diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
> index cc06bda52c3e..a59a33e0f5ca 100644
> --- a/include/linux/hardirq.h
> +++ b/include/linux/hardirq.h
> @@ -110,7 +110,8 @@ void irq_exit_rcu(void);
>    do {                            \
>        lockdep_off();                    \
>        arch_nmi_enter();                \
> -        BUG_ON(__this_cpu_read(nmi_nesting) == UINT_MAX);    \
> +        /* Maximum NMI nesting is 15 */            \
> +        BUG_ON(__this_cpu_read(nmi_nesting) == 15);    \
>        __this_cpu_inc(nmi_nesting);            \
>        __preempt_count_add(HARDIRQ_OFFSET);        \
>        preempt_count_set(preempt_count() | NMI_MASK);    \
> 
> I will need to adjust this in patch #10 as well, but shouldn't be hard.

Maybe use >= rather than ==, but this sounds good to me. Thanks for adjusting the patches.

Thanks!

> 
> Regards,
> Boqun
> 
>> Thanks.
>> 
