[PATCH] arm64: neon: Fix function may_use_simd() return error status

2018-07-11 Thread Yandong.Zhao
From: Yandong Zhao 

It does not matter if the caller of may_use_simd() migrates to
another cpu after the call, but it is still important that the
kernel_neon_busy percpu instance that is read matches the cpu the
task is running on at the time of the read.

This means that raw_cpu_read() is not sufficient. kernel_neon_busy
may appear true if the caller migrates during the execution of
raw_cpu_read() and the next task to be scheduled in on the initial
cpu calls kernel_neon_begin().

This patch replaces raw_cpu_read() with this_cpu_read() to protect
against this race.
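
For illustration, a typical caller of may_use_simd() follows the pattern
below.  This is a minimal sketch: do_work(), struct work_data,
do_work_neon() and do_work_scalar() are hypothetical names; only
may_use_simd(), kernel_neon_begin() and kernel_neon_end() are real APIs.

#include <asm/neon.h>	/* kernel_neon_begin()/kernel_neon_end() */
#include <asm/simd.h>	/* may_use_simd() */

static void do_work(struct work_data *d)
{
	if (may_use_simd()) {
		kernel_neon_begin();	/* disables preemption, claims the NEON unit */
		do_work_neon(d);	/* NEON/SIMD implementation */
		kernel_neon_end();
	} else {
		do_work_scalar(d);	/* integer-only fallback */
	}
}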

Fixes: cb84d11e1625 ("arm64: neon: Remove support for nested or hardirq kernel-mode NEON")
Acked-by: Ard Biesheuvel 
Reviewed-by: Dave Martin 
Reviewed-by: Mark Rutland 
Signed-off-by: Yandong Zhao 
---
 arch/arm64/include/asm/simd.h | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
index fa8b3fe..6495cc5 100644
--- a/arch/arm64/include/asm/simd.h
+++ b/arch/arm64/include/asm/simd.h
@@ -29,20 +29,15 @@
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
-	 * This is not a bug: kernel_neon_busy is only set when
-	 * preemption is disabled, so we cannot migrate to another CPU
-	 * while it is set, nor can we migrate to a CPU where it is set.
-	 * So, if we find it clear on some CPU then we're guaranteed to
-	 * find it clear on any CPU we could migrate to.
-	 *
-	 * If we are in between kernel_neon_begin()...kernel_neon_end(),
-	 * the flag will be set, but preemption is also disabled, so we
-	 * can't migrate to another CPU and spuriously see it become
-	 * false.
+	 * kernel_neon_busy is only set while preemption is disabled,
+	 * and is clear whenever preemption is enabled. Since
+	 * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
+	 * cannot change under our feet -- if it's set we cannot be
+	 * migrated, and if it's clear we cannot be migrated to a CPU
+	 * where it is set.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-	       !raw_cpu_read(kernel_neon_busy);
+	       !this_cpu_read(kernel_neon_busy);
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
-- 
1.9.1



[PATCH] arm64: neon: Fix function may_use_simd() return error status

2018-07-11 Thread Yandong.Zhao
From: Yandong Zhao 

It does not matter if the caller of may_use_simd() migrates to
another cpu after the call, but it is still important that the
kernel_neon_busy percpu instance that is read matches the cpu the
task is running on at the time of the read.

This means that raw_cpu_read() is not sufficient.  kernel_neon_busy
may appear true if the caller migrates during the execution of
raw_cpu_read() and the next task to be scheduled in on the initial
cpu calls kernel_neon_begin().

This patch replaces raw_cpu_read() with this_cpu_read() to protect
against this race.

Fixes: cb84d11e1625 ("arm64: neon: Remove support for nested or hardirq kernel-mode NEON")
Reviewed-by: Dave Martin 
Signed-off-by: Yandong Zhao 
---
 arch/arm64/include/asm/simd.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
index fa8b3fe..784a8c2 100644
--- a/arch/arm64/include/asm/simd.h
+++ b/arch/arm64/include/asm/simd.h
@@ -29,7 +29,8 @@
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
+	 * The this_cpu_read() is racy if called with preemption enabled,
+	 * since the task may subsequently migrate to another CPU.
 	 * This is not a bug: kernel_neon_busy is only set when
 	 * preemption is disabled, so we cannot migrate to another CPU
 	 * while it is set, nor can we migrate to a CPU where it is set.
@@ -42,7 +43,7 @@ static __must_check inline bool may_use_simd(void)
 	 * false.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-	       !raw_cpu_read(kernel_neon_busy);
+	       !this_cpu_read(kernel_neon_busy);
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
-- 
1.9.1



[PATCH] arm64: neon: Fix function may_use_simd() return error status

2018-07-10 Thread Yandong.Zhao
From: Yandong Zhao 

It does not matter if the caller of may_use_simd() migrates to
another cpu after the call, but it is still important that the
kernel_neon_busy percpu instance that is read matches the cpu the
task is running on at the time of the read.

This means that raw_cpu_read() is not sufficient.  kernel_neon_busy
may appear true if the caller migrates during the execution of
raw_cpu_read() and the next task to be scheduled in on the initial
cpu calls kernel_neon_begin().

This patch replaces raw_cpu_read() with this_cpu_read() to protect
against this race.

Signed-off-by: Yandong Zhao 
---
 arch/arm64/include/asm/simd.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
index fa8b3fe..784a8c2 100644
--- a/arch/arm64/include/asm/simd.h
+++ b/arch/arm64/include/asm/simd.h
@@ -29,7 +29,8 @@
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
+	 * The this_cpu_read() is racy if called with preemption enabled,
+	 * since the task may subsequently migrate to another CPU.
 	 * This is not a bug: kernel_neon_busy is only set when
 	 * preemption is disabled, so we cannot migrate to another CPU
 	 * while it is set, nor can we migrate to a CPU where it is set.
@@ -42,7 +43,7 @@ static __must_check inline bool may_use_simd(void)
 	 * false.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-	       !raw_cpu_read(kernel_neon_busy);
+	       !this_cpu_read(kernel_neon_busy);
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
-- 
1.9.1



[PATCH] arm64: neon: Fix function may_use_simd() return error status

2018-07-09 Thread Yandong.Zhao
From: Yandong Zhao 

raw_cpu_read() is meant for contexts where we deliberately skip any
checks for preemption.  Unless that is strictly necessary, always use
this_cpu_read() instead.  For kernel_neon_busy we have to make sure
that the value read belongs to the cpu the task is currently running on.
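
For reference, kernel_neon_busy can only be observed set while preemption
is disabled because of how kernel_neon_begin()/kernel_neon_end() are
structured.  Roughly (a heavily simplified sketch, not the actual arm64
implementation; FPSIMD/SVE state handling and error checking are omitted):

void kernel_neon_begin(void)
{
	BUG_ON(!may_use_simd());
	preempt_disable();			/* no migration from here on */
	__this_cpu_write(kernel_neon_busy, true);
	/* save/invalidate the task's FPSIMD state (omitted) */
}

void kernel_neon_end(void)
{
	__this_cpu_write(kernel_neon_busy, false);
	preempt_enable();			/* migration is possible again */
}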

Signed-off-by: Yandong Zhao 
---
 arch/arm64/include/asm/simd.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
index fa8b3fe..8b97f8b 100644
--- a/arch/arm64/include/asm/simd.h
+++ b/arch/arm64/include/asm/simd.h
@@ -29,7 +29,8 @@
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
+	 * The this_cpu_read() is racy if called with preemption enabled,
+	 * since the task may subsequently migrate to another CPU.
 	 * This is not a bug: kernel_neon_busy is only set when
 	 * preemption is disabled, so we cannot migrate to another CPU
 	 * while it is set, nor can we migrate to a CPU where it is set.
@@ -42,7 +43,7 @@ static __must_check inline bool may_use_simd(void)
 	 * false.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-	       !raw_cpu_read(kernel_neon_busy);
+	       !this_cpu_read(kernel_neon_busy);
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
-- 
1.9.1



[PATCH] arm64: neon: Add preemption protection for kernel_neon_busy

2018-07-09 Thread Yandong.Zhao
From: Yandong Zhao 

Dear Dave,

The scenario for this bug is:
Process A is scheduled out on CPU0 in the middle of
raw_cpu_read(kernel_neon_busy): it has computed the address of CPU0's
kernel_neon_busy but has not yet read it.
Process B then runs kernel_neon_begin() on CPU0, and CPU0's
kernel_neon_busy becomes true. Process A is now scheduled in on CPU1,
but it still reads CPU0's kernel_neon_busy (true), so BUG_ON() fires!

crash64> kernel_neon_busy
PER-CPU DATA TYPE:
  bool kernel_neon_busy;
PER-CPU ADDRESSES:
  [0]: ffc07fee30a0
  [1]: ffc07fef90a0
  [2]: ffc07ff0f0a0
  [3]: ffc07ff250a0

        CPU0                                CPU1
         |                                   |
  task A gets addr ffc07fee30a0              |
    and is sched out                         |
         |                                   |
  task B: kernel_neon_begin()                |
    [ffc07fee30a0] = 1                       |
         |                                   |
         |                   task A is sched in and reads
         |                   [ffc07fee30a0] == 1, so BUG_ON()
         |                                   |
  task B: kernel_neon_end()                  |
    [ffc07fee30a0] = 0                       |
         |                                   |
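
The window above exists because raw_cpu_read() gives no protection against
being preempted between computing the per-cpu address and loading from it,
whereas this_cpu_read() performs the whole access on one cpu.  Roughly (a
sketch of the generic fallback semantics with illustrative _sketch names,
not the real macro definitions and not the arm64-specific implementation):

/*
 * raw_cpu_read(): the per-cpu address generation and the load are not
 * protected against preemption, so they may happen on different cpus.
 */
#define raw_cpu_read_sketch(pcp)	(*raw_cpu_ptr(&(pcp)))

/*
 * this_cpu_read(): the address generation and the load both happen with
 * preemption disabled, i.e. on the same cpu.
 */
#define this_cpu_read_sketch(pcp)		\
({						\
	typeof(pcp) __val;			\
	preempt_disable();			\
	__val = *raw_cpu_ptr(&(pcp));		\
	preempt_enable();			\
	__val;					\
})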

Signed-off-by: Yandong Zhao 
---
 arch/arm64/include/asm/simd.h | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
index fa8b3fe..6580dcd 100644
--- a/arch/arm64/include/asm/simd.h
+++ b/arch/arm64/include/asm/simd.h
@@ -29,20 +29,12 @@
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
-	 * This is not a bug: kernel_neon_busy is only set when
-	 * preemption is disabled, so we cannot migrate to another CPU
-	 * while it is set, nor can we migrate to a CPU where it is set.
-	 * So, if we find it clear on some CPU then we're guaranteed to
-	 * find it clear on any CPU we could migrate to.
-	 *
-	 * If we are in between kernel_neon_begin()...kernel_neon_end(),
-	 * the flag will be set, but preemption is also disabled, so we
-	 * can't migrate to another CPU and spuriously see it become
-	 * false.
+	 * Operations for contexts where we do not want to do any checks for
+	 * preemptions.  Unless strictly necessary, always use this_cpu_*()
+	 * instead.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-	       !raw_cpu_read(kernel_neon_busy);
+	       !this_cpu_read(kernel_neon_busy);
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
-- 
1.9.1



[PATCH] arm64: neon: Add preemption protection for kernel_neon_busy

2018-07-09 Thread Yandong.Zhao
From: Yandong Zhao 

may_use_simd() can be called from any context and reads kernel_neon_busy,
for example in BUG_ON(!may_use_simd()).  This patch ensures that migration
cannot occur in the middle of the access to kernel_neon_busy.

Signed-off-by: Yandong Zhao 
---
 arch/arm64/include/asm/simd.h | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/simd.h b/arch/arm64/include/asm/simd.h
index fa8b3fe..6580dcd 100644
--- a/arch/arm64/include/asm/simd.h
+++ b/arch/arm64/include/asm/simd.h
@@ -29,20 +29,12 @@
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
-	 * This is not a bug: kernel_neon_busy is only set when
-	 * preemption is disabled, so we cannot migrate to another CPU
-	 * while it is set, nor can we migrate to a CPU where it is set.
-	 * So, if we find it clear on some CPU then we're guaranteed to
-	 * find it clear on any CPU we could migrate to.
-	 *
-	 * If we are in between kernel_neon_begin()...kernel_neon_end(),
-	 * the flag will be set, but preemption is also disabled, so we
-	 * can't migrate to another CPU and spuriously see it become
-	 * false.
+	 * Operations for contexts where we do not want to do any checks for
+	 * preemptions.  Unless strictly necessary, always use this_cpu_*()
+	 * instead.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-	       !raw_cpu_read(kernel_neon_busy);
+	       !this_cpu_read(kernel_neon_busy);
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
-- 
1.9.1


