Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On Thu, Jul 12, 2018 at 11:29:38AM +0800, Yandong.Zhao wrote:
> From: Yandong Zhao
>
> It does not matter if the caller of may_use_simd() migrates to
> another cpu after the call, but it is still important that the
> kernel_neon_busy percpu instance that is read matches the cpu the
> task is running on at the time of the read.
>
> This means that raw_cpu_read() is not sufficient. kernel_neon_busy
> may appear true if the caller migrates during the execution of
> raw_cpu_read() and the next task to be scheduled in on the initial
> cpu calls kernel_neon_begin().
>
> This patch replaces raw_cpu_read() with this_cpu_read() to protect
> against this race.
>
> Fixes: cb84d11e1625 ("arm64: neon: Remove support for nested or hardirq
> kernel-mode NEON")
> Acked-by: Ard Biesheuvel
> Reviewed-by: Dave Martin
> Reviewed-by: Mark Rutland
> Signed-off-by: Yandong Zhao
> ---
>  arch/arm64/include/asm/simd.h | 19 +++++++------------
>  1 file changed, 7 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> index fa8b3fe..6495cc5 100644
> --- a/arch/arm64/include/asm/simd.h
> +++ b/arch/arm64/include/asm/simd.h
> @@ -29,20 +29,15 @@
>  static __must_check inline bool may_use_simd(void)
>  {
>  	/*
> -	 * The raw_cpu_read() is racy if called with preemption enabled.
> -	 * This is not a bug: kernel_neon_busy is only set when
> -	 * preemption is disabled, so we cannot migrate to another CPU
> -	 * while it is set, nor can we migrate to a CPU where it is set.
> -	 * So, if we find it clear on some CPU then we're guaranteed to
> -	 * find it clear on any CPU we could migrate to.
> -	 *
> -	 * If we are in between kernel_neon_begin()...kernel_neon_end(),
> -	 * the flag will be set, but preemption is also disabled, so we
> -	 * can't migrate to another CPU and spuriously see it become
> -	 * false.
> +	 * kernel_neon_busy is only set while preemption is disabled,
> +	 * and is clear whenever preemption is enabled. Since
> +	 * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
> +	 * cannot change under our feet -- if it's set we cannot be
> +	 * migrated, and if it's clear we cannot be migrated to a CPU
> +	 * where it is set.
> +	 */

This new explanation looks fine to me.

[...]

Cheers
---Dave
Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On Wed, Jul 11, 2018 at 05:03:15PM +0100, Will Deacon wrote:
> On Wed, Jul 11, 2018 at 04:47:58PM +0100, Mark Rutland wrote:
> > On Wed, Jul 11, 2018 at 09:20:03AM +0200, Ard Biesheuvel wrote:
> > > On 11 July 2018 at 03:09, Yandong.Zhao wrote:
> > > > diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> > > > index fa8b3fe..784a8c2 100644
> > > > --- a/arch/arm64/include/asm/simd.h
> > > > +++ b/arch/arm64/include/asm/simd.h
> > > > @@ -29,7 +29,8 @@
> > > >  static __must_check inline bool may_use_simd(void)
> > > >  {
> > > >  	/*
> > > > -	 * The raw_cpu_read() is racy if called with preemption enabled.
> > > > +	 * The this_cpu_read() is racy if called with preemption enabled,
> > > > +	 * since the task may subsequently migrate to another CPU.
> > > >  	 * This is not a bug: kernel_neon_busy is only set when
> > > >  	 * preemption is disabled, so we cannot migrate to another CPU
> > > >  	 * while it is set, nor can we migrate to a CPU where it is set.
> >
> > It would be nice if we could clarify the "is racy" part here.
> >
> > How about:
> >
> > 	/*
> > 	 * kernel_neon_busy is only set while preemption is disabled,
> > 	 * and is clear whenever preemption is enabled. Since
> > 	 * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
> > 	 * cannot change under our feet -- if it's set we cannot be
> > 	 * migrated, and if it's clear we cannot be migrated to a CPU
> > 	 * where it is set.
> > 	 */
> >
> > With that:
> >
> > Reviewed-by: Mark Rutland
>
> Thanks. Applied with the updated comment and your tag.

Cheers,
Mark
Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On Wed, Jul 11, 2018 at 04:47:58PM +0100, Mark Rutland wrote:
> On Wed, Jul 11, 2018 at 09:20:03AM +0200, Ard Biesheuvel wrote:
> > On 11 July 2018 at 03:09, Yandong.Zhao wrote:
> > > diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> > > index fa8b3fe..784a8c2 100644
> > > --- a/arch/arm64/include/asm/simd.h
> > > +++ b/arch/arm64/include/asm/simd.h
> > > @@ -29,7 +29,8 @@
> > >  static __must_check inline bool may_use_simd(void)
> > >  {
> > >  	/*
> > > -	 * The raw_cpu_read() is racy if called with preemption enabled.
> > > +	 * The this_cpu_read() is racy if called with preemption enabled,
> > > +	 * since the task may subsequently migrate to another CPU.
> > >  	 * This is not a bug: kernel_neon_busy is only set when
> > >  	 * preemption is disabled, so we cannot migrate to another CPU
> > >  	 * while it is set, nor can we migrate to a CPU where it is set.
>
> It would be nice if we could clarify the "is racy" part here.
>
> How about:
>
> 	/*
> 	 * kernel_neon_busy is only set while preemption is disabled,
> 	 * and is clear whenever preemption is enabled. Since
> 	 * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
> 	 * cannot change under our feet -- if it's set we cannot be
> 	 * migrated, and if it's clear we cannot be migrated to a CPU
> 	 * where it is set.
> 	 */
>
> With that:
>
> Reviewed-by: Mark Rutland

Thanks. Applied with the updated comment and your tag.

Will
Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On Wed, Jul 11, 2018 at 09:20:03AM +0200, Ard Biesheuvel wrote:
> On 11 July 2018 at 03:09, Yandong.Zhao wrote:
> > From: Yandong Zhao
> >
> > It does not matter if the caller of may_use_simd() migrates to
> > another cpu after the call, but it is still important that the
> > kernel_neon_busy percpu instance that is read matches the cpu the
> > task is running on at the time of the read.
> >
> > This means that raw_cpu_read() is not sufficient. kernel_neon_busy
> > may appear true if the caller migrates during the execution of
> > raw_cpu_read() and the next task to be scheduled in on the initial
> > cpu calls kernel_neon_begin().
> >
> > This patch replaces raw_cpu_read() with this_cpu_read() to protect
> > against this race.
> >
> > Signed-off-by: Yandong Zhao
>
> I had a bit of trouble disentangling the per-cpu spaghetti to decide
> whether this may trigger warnings when CONFIG_DEBUG_PREEMPT=y, but I
> don't think so. So assuming this is *not* the case:

It shouldn't, since:

* this_cpu_*() are preempt-safe
* __this_cpu_*() are not preempt-safe (and warn when preemptible)
* raw_cpu_*() are not preempt-safe (but don't warn when preemptible)

> Acked-by: Ard Biesheuvel
>
> > ---
> >  arch/arm64/include/asm/simd.h | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> > index fa8b3fe..784a8c2 100644
> > --- a/arch/arm64/include/asm/simd.h
> > +++ b/arch/arm64/include/asm/simd.h
> > @@ -29,7 +29,8 @@
> >  static __must_check inline bool may_use_simd(void)
> >  {
> >  	/*
> > -	 * The raw_cpu_read() is racy if called with preemption enabled.
> > +	 * The this_cpu_read() is racy if called with preemption enabled,
> > +	 * since the task may subsequently migrate to another CPU.
> >  	 * This is not a bug: kernel_neon_busy is only set when
> >  	 * preemption is disabled, so we cannot migrate to another CPU
> >  	 * while it is set, nor can we migrate to a CPU where it is set.

It would be nice if we could clarify the "is racy" part here.

How about:

	/*
	 * kernel_neon_busy is only set while preemption is disabled,
	 * and is clear whenever preemption is enabled. Since
	 * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
	 * cannot change under our feet -- if it's set we cannot be
	 * migrated, and if it's clear we cannot be migrated to a CPU
	 * where it is set.
	 */

With that:

Reviewed-by: Mark Rutland

Thanks,
Mark.

> > @@ -42,7 +43,7 @@ static __must_check inline bool may_use_simd(void)
> >  	 * false.
> >  	 */
> >  	return !in_irq() && !irqs_disabled() && !in_nmi() &&
> > -	       !raw_cpu_read(kernel_neon_busy);
> > +	       !this_cpu_read(kernel_neon_busy);
> >  }
> >
> >  #else /* ! CONFIG_KERNEL_MODE_NEON */
> > --
> > 1.9.1
> >
Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On Wed, Jul 11, 2018 at 01:05:19PM +0100, Will Deacon wrote:
> On Wed, Jul 11, 2018 at 07:06:28PM +0800, Yandong.Zhao wrote:
> > From: Yandong Zhao
> >
> > It does not matter if the caller of may_use_simd() migrates to
> > another cpu after the call, but it is still important that the
> > kernel_neon_busy percpu instance that is read matches the cpu the
> > task is running on at the time of the read.
> >
> > This means that raw_cpu_read() is not sufficient. kernel_neon_busy
> > may appear true if the caller migrates during the execution of
> > raw_cpu_read() and the next task to be scheduled in on the initial
> > cpu calls kernel_neon_begin().
> >
> > This patch replaces raw_cpu_read() with this_cpu_read() to protect
> > against this race.
> >
> > Fixes: cb84d11e1625 ("arm64: neon: Remove support for nested or hardirq
> > kernel-mode NEON")
> > Reviewed-by: Dave Martin
> > Signed-off-by: Yandong Zhao
> > ---
> >  arch/arm64/include/asm/simd.h | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
>
> Does this need to go to stable?

It should, yes, so we should probably add

Cc: sta...@vger.kernel.org

Are you OK to pick that up?

Cheers
---Dave
Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On Wed, Jul 11, 2018 at 07:06:28PM +0800, Yandong.Zhao wrote:
> From: Yandong Zhao
>
> It does not matter if the caller of may_use_simd() migrates to
> another cpu after the call, but it is still important that the
> kernel_neon_busy percpu instance that is read matches the cpu the
> task is running on at the time of the read.
>
> This means that raw_cpu_read() is not sufficient. kernel_neon_busy
> may appear true if the caller migrates during the execution of
> raw_cpu_read() and the next task to be scheduled in on the initial
> cpu calls kernel_neon_begin().
>
> This patch replaces raw_cpu_read() with this_cpu_read() to protect
> against this race.
>
> Fixes: cb84d11e1625 ("arm64: neon: Remove support for nested or hardirq
> kernel-mode NEON")
> Reviewed-by: Dave Martin
> Signed-off-by: Yandong Zhao
> ---
>  arch/arm64/include/asm/simd.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

Does this need to go to stable?

Will
Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On Wed, Jul 11, 2018 at 09:09:59AM +0800, Yandong.Zhao wrote:
> From: Yandong Zhao
>
> It does not matter if the caller of may_use_simd() migrates to
> another cpu after the call, but it is still important that the
> kernel_neon_busy percpu instance that is read matches the cpu the
> task is running on at the time of the read.
>
> This means that raw_cpu_read() is not sufficient. kernel_neon_busy
> may appear true if the caller migrates during the execution of
> raw_cpu_read() and the next task to be scheduled in on the initial
> cpu calls kernel_neon_begin().
>
> This patch replaces raw_cpu_read() with this_cpu_read() to protect
> against this race.
>
> Signed-off-by: Yandong Zhao

Looks ok to me. You can add the following tags:

Fixes: cb84d11e1625 ("arm64: neon: Remove support for nested or hardirq
kernel-mode NEON")
Reviewed-by: Dave Martin

Cheers
---Dave

> ---
>  arch/arm64/include/asm/simd.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> index fa8b3fe..784a8c2 100644
> --- a/arch/arm64/include/asm/simd.h
> +++ b/arch/arm64/include/asm/simd.h
> @@ -29,7 +29,8 @@
>  static __must_check inline bool may_use_simd(void)
>  {
>  	/*
> -	 * The raw_cpu_read() is racy if called with preemption enabled.
> +	 * The this_cpu_read() is racy if called with preemption enabled,
> +	 * since the task may subsequently migrate to another CPU.
>  	 * This is not a bug: kernel_neon_busy is only set when
>  	 * preemption is disabled, so we cannot migrate to another CPU
>  	 * while it is set, nor can we migrate to a CPU where it is set.
> @@ -42,7 +43,7 @@ static __must_check inline bool may_use_simd(void)
>  	 * false.
>  	 */
>  	return !in_irq() && !irqs_disabled() && !in_nmi() &&
> -	       !raw_cpu_read(kernel_neon_busy);
> +	       !this_cpu_read(kernel_neon_busy);
>  }
>
>  #else /* ! CONFIG_KERNEL_MODE_NEON */
> --
> 1.9.1
>
>
> ___
> linux-arm-kernel mailing list
> linux-arm-ker...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On 11 July 2018 at 03:09, Yandong.Zhao wrote:
> From: Yandong Zhao
>
> It does not matter if the caller of may_use_simd() migrates to
> another cpu after the call, but it is still important that the
> kernel_neon_busy percpu instance that is read matches the cpu the
> task is running on at the time of the read.
>
> This means that raw_cpu_read() is not sufficient. kernel_neon_busy
> may appear true if the caller migrates during the execution of
> raw_cpu_read() and the next task to be scheduled in on the initial
> cpu calls kernel_neon_begin().
>
> This patch replaces raw_cpu_read() with this_cpu_read() to protect
> against this race.
>
> Signed-off-by: Yandong Zhao

I had a bit of trouble disentangling the per-cpu spaghetti to decide
whether this may trigger warnings when CONFIG_DEBUG_PREEMPT=y, but I
don't think so. So assuming this is *not* the case:

Acked-by: Ard Biesheuvel

> ---
>  arch/arm64/include/asm/simd.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> index fa8b3fe..784a8c2 100644
> --- a/arch/arm64/include/asm/simd.h
> +++ b/arch/arm64/include/asm/simd.h
> @@ -29,7 +29,8 @@
>  static __must_check inline bool may_use_simd(void)
>  {
>  	/*
> -	 * The raw_cpu_read() is racy if called with preemption enabled.
> +	 * The this_cpu_read() is racy if called with preemption enabled,
> +	 * since the task may subsequently migrate to another CPU.
>  	 * This is not a bug: kernel_neon_busy is only set when
>  	 * preemption is disabled, so we cannot migrate to another CPU
>  	 * while it is set, nor can we migrate to a CPU where it is set.
> @@ -42,7 +43,7 @@ static __must_check inline bool may_use_simd(void)
>  	 * false.
>  	 */
>  	return !in_irq() && !irqs_disabled() && !in_nmi() &&
> -	       !raw_cpu_read(kernel_neon_busy);
> +	       !this_cpu_read(kernel_neon_busy);
>  }
>
>  #else /* ! CONFIG_KERNEL_MODE_NEON */
> --
> 1.9.1
>
Re: [PATCH] arm64: neon: Fix function may_use_simd() return error status
On Tue, Jul 10, 2018 at 10:21:40AM +0800, Yandong.Zhao wrote:
> From: Yandong Zhao
>
> Operations for contexts where we do not want to do any checks for
> preemptions. Unless strictly necessary, always use this_cpu_read()
> instead. Because of the kernel_neon_busy here we have to make sure
> that it is the current cpu.

I find this wording a bit confusing. Does the following look OK to you?

--8<--
It does not matter if the caller of may_use_simd() migrates to
another cpu after the call, but it is still important that the
kernel_neon_busy percpu instance that is read matches the cpu the
task is running on at the time of the read.

This means that raw_cpu_read() is not sufficient. kernel_neon_busy
may appear true if the caller migrates during the execution of
raw_cpu_read() and the next task to be scheduled in on the initial
cpu calls kernel_neon_begin().

This patch replaces raw_cpu_read() with this_cpu_read() to protect
against this race.
-->8--

>
> Signed-off-by: Yandong Zhao
> ---
>  arch/arm64/include/asm/simd.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
> index fa8b3fe..8b97f8b 100644
> --- a/arch/arm64/include/asm/simd.h
> +++ b/arch/arm64/include/asm/simd.h
> @@ -29,7 +29,8 @@
>  static __must_check inline bool may_use_simd(void)
>  {
>  	/*
> -	 * The raw_cpu_read() is racy if called with preemption enabled.
> +	 * The this_cpu_read() is racy if called with preemption enabled,
> +	 * since the task my subsequently migrate to another CPU.

"my" -> "may"

(apologies if I was responsible for that typo)

[...]

Cheers
---Dave