Re: [PATCH] asm-generic/mmiowb: Mark accesses to fix KCSAN warnings

2024-04-19 Thread Will Deacon
On Thu, Apr 04, 2024 at 03:38:53PM +1100, Rohan McLure wrote:
> Prior to this patch, KCSAN detects data races of the following
> forms:
> 
> [1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
> or otherwise outside of a critical section
> [2] Interrupted critical sections, where the interrupt will itself
> acquire a lock
> 
> In case [1], the calling context does not need an mmiowb() call to be
> issued on its behalf; otherwise it would issue one itself. Such calls
> to mmiowb_set_pending() are either idempotent or no-ops.
> 
> In case [2], irrespective of when the interrupt occurs, the interrupt
> will acquire and release its locks prior to returning, so
> nesting_count remains balanced. In the worst case, a critical section
> observes a pending mmiowb during an mmiowb_spin_unlock() call and is
> interrupted immediately afterward, leading to an extraneous call to
> mmiowb(). This data race is clearly innocuous.
> 
> Resolve KCSAN warnings of type [1] by means of READ_ONCE() and
> WRITE_ONCE(). As increments and decrements to nesting_count are
> balanced across interrupt contexts, resolve type [2] warnings by
> simply suppressing instrumentation, using data_race() rather than
> READ_ONCE() and WRITE_ONCE(); the memory consistency semantics of
> plain accesses will still lead to correct behaviour.
> 
> Signed-off-by: Rohan McLure 
> Reported-by: Michael Ellerman 
> Reported-by: Gautam Menghani 
> Tested-by: Gautam Menghani 
> Acked-by: Arnd Bergmann 
> ---
> Previously discussed here:
> https://lore.kernel.org/linuxppc-dev/20230510033117.1395895-4-rmcl...@linux.ibm.com/
> It was pushed back there due to affecting other architectures.
> Reissuing to linuxppc-dev, as it does not enact a functional change.
> ---
>  include/asm-generic/mmiowb.h | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
> index 5698fca3bf56..f8c7c8a84e9e 100644
> --- a/include/asm-generic/mmiowb.h
> +++ b/include/asm-generic/mmiowb.h
> @@ -37,25 +37,28 @@ static inline void mmiowb_set_pending(void)
>   struct mmiowb_state *ms = __mmiowb_state();
>  
>   if (likely(ms->nesting_count))
> - ms->mmiowb_pending = ms->nesting_count;
> + WRITE_ONCE(ms->mmiowb_pending, ms->nesting_count);
>  }
>  
>  static inline void mmiowb_spin_lock(void)
>  {
>   struct mmiowb_state *ms = __mmiowb_state();
> - ms->nesting_count++;
> +
> + /* Increment need not be atomic. Nestedness is balanced over interrupts. */
> + data_race(ms->nesting_count++);
>  }
>  
>  static inline void mmiowb_spin_unlock(void)
>  {
>   struct mmiowb_state *ms = __mmiowb_state();
> + u16 pending = READ_ONCE(ms->mmiowb_pending);
>  
> - if (unlikely(ms->mmiowb_pending)) {
> - ms->mmiowb_pending = 0;
> + WRITE_ONCE(ms->mmiowb_pending, 0);

Why are you changing this store to be unconditional?

Will
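
For reference, the worst case described in [2] corresponds roughly to
the following interleaving. This is an illustrative sketch of a
hypothetical timeline, not something quoted from the patch:

  Task (inside critical section)        Interrupt handler
  ------------------------------        -----------------
  mmiowb_spin_unlock():
    observes mmiowb_pending != 0
                                        spin_lock();     // nesting_count++
                                        writel(...);     // sets pending again
                                        spin_unlock();   // issues mmiowb(),
                                                         // clears pending
    mmiowb_pending = 0;
    mmiowb();                           // extraneous, but harmless
    nesting_count--;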


[PATCH] asm-generic/mmiowb: Mark accesses to fix KCSAN warnings

2024-04-03 Thread Rohan McLure
Prior to this patch, KCSAN detects data races of the following
forms:

[1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
or otherwise outside of a critical section
[2] Interrupted critical sections, where the interrupt will itself
acquire a lock

In case [1], the calling context does not need an mmiowb() call to be
issued on its behalf; otherwise it would issue one itself. Such calls
to mmiowb_set_pending() are either idempotent or no-ops.

In case [2], irrespective of when the interrupt occurs, the interrupt
will acquire and release its locks prior to returning, so
nesting_count remains balanced. In the worst case, a critical section
observes a pending mmiowb during an mmiowb_spin_unlock() call and is
interrupted immediately afterward, leading to an extraneous call to
mmiowb(). This data race is clearly innocuous.

Resolve KCSAN warnings of type [1] by means of READ_ONCE() and
WRITE_ONCE(). As increments and decrements to nesting_count are
balanced across interrupt contexts, resolve type [2] warnings by
simply suppressing instrumentation, using data_race() rather than
READ_ONCE() and WRITE_ONCE(); the memory consistency semantics of
plain accesses will still lead to correct behaviour.

Signed-off-by: Rohan McLure 
Reported-by: Michael Ellerman 
Reported-by: Gautam Menghani 
Tested-by: Gautam Menghani 
Acked-by: Arnd Bergmann 
---
Previously discussed here:
https://lore.kernel.org/linuxppc-dev/20230510033117.1395895-4-rmcl...@linux.ibm.com/
It was pushed back there due to affecting other architectures.
Reissuing to linuxppc-dev, as it does not enact a functional change.
---
 include/asm-generic/mmiowb.h | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/asm-generic/mmiowb.h b/include/asm-generic/mmiowb.h
index 5698fca3bf56..f8c7c8a84e9e 100644
--- a/include/asm-generic/mmiowb.h
+++ b/include/asm-generic/mmiowb.h
@@ -37,25 +37,28 @@ static inline void mmiowb_set_pending(void)
struct mmiowb_state *ms = __mmiowb_state();
 
if (likely(ms->nesting_count))
-   ms->mmiowb_pending = ms->nesting_count;
+   WRITE_ONCE(ms->mmiowb_pending, ms->nesting_count);
 }
 
 static inline void mmiowb_spin_lock(void)
 {
struct mmiowb_state *ms = __mmiowb_state();
-   ms->nesting_count++;
+
+   /* Increment need not be atomic. Nestedness is balanced over interrupts. */
+   data_race(ms->nesting_count++);
 }
 
 static inline void mmiowb_spin_unlock(void)
 {
struct mmiowb_state *ms = __mmiowb_state();
+   u16 pending = READ_ONCE(ms->mmiowb_pending);
 
-   if (unlikely(ms->mmiowb_pending)) {
-   ms->mmiowb_pending = 0;
+   WRITE_ONCE(ms->mmiowb_pending, 0);
+   if (unlikely(pending))
mmiowb();
-   }
 
-   ms->nesting_count--;
+   /* Decrement need not be atomic. Nestedness is balanced over interrupts. */
+   data_race(ms->nesting_count--);
 }
 #else
 #define mmiowb_set_pending()   do { } while (0)
-- 
2.44.0
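
For orientation, the sketch below shows roughly where these hooks sit
in the wider kernel. It is a paraphrase of include/linux/spinlock.h
and the riscv MMIO accessors of this era, not part of the patch, and
the exact plumbing varies by architecture and kernel version:

/* The generic spinlock paths drive the nesting count (paraphrased): */
static inline void do_raw_spin_lock(raw_spinlock_t *lock)
{
	arch_spin_lock(&lock->raw_lock);
	mmiowb_spin_lock();		/* nesting_count++ */
}

static inline void do_raw_spin_unlock(raw_spinlock_t *lock)
{
	mmiowb_spin_unlock();		/* flush pending mmiowb, nesting_count-- */
	arch_spin_unlock(&lock->raw_lock);
}

/*
 * Architectures that need mmiowb (e.g. riscv) mark a barrier as
 * pending from their MMIO write accessors, roughly:
 */
#define __io_aw()	mmiowb_set_pending()

This split also motivates the choice of annotations in the patch:
mmiowb_pending is written from MMIO accessors and read from unlock
paths, so it takes READ_ONCE()/WRITE_ONCE(), which additionally
prevent compiler tearing; nesting_count is only modified in balanced
lock/unlock pairs, so data_race() merely tells KCSAN that the plain
accesses are intentional.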