Re: [PATCH 1/2] powerpc/64s: remove PROT_SAO support

2020-08-19 Thread Nicholas Piggin
Excerpts from Shawn Anastasio's message of August 19, 2020 6:59 am:
> On 8/18/20 2:11 AM, Nicholas Piggin wrote:
>> Very reasonable point.
>> 
>> The problem we're trying to get a handle on is live partition migration
>> where a running guest might be using SAO then get migrated to a P10. I
>> don't think we have a good way to handle this case. Potentially the
>> hypervisor could revoke the page tables if the guest is running in hash
>> mode and the guest kernel could be taught about that and sigbus the
>> process, but in radix the guest controls those page tables and the SAO
>> state and I don't think there's a way to cause it to take a fault.
>> 
>> I also don't know what the proprietary hypervisor does here.
>> 
>> We could add it back, default to n, or make it bare metal only, or
>> somehow try to block live migration to a later CPU without the facility.
>> I wouldn't be against that.
> 
> 
> Admittedly I'm not too familiar with the specifics of live migration
> or guest memory management, but restoring the functionality and adding
> a way to prevent migration of SAO-using guests seems like a reasonable
> choice to me. Would this be done with help from the guest using some
> sort of infrastructure to signal to the hypervisor that SAO is in use,
> or entirely on the hypervisor by e.g. scanning through the process
> table for SAO pages?

The first step might be to just re-add the functionality but disable
it by default if firmware_has_feature(FW_FEATURE_LPAR). You could have
a config or boot option to allow guests to use it at the cost of
migration compatibility.

That would probably be good enough for experimenting with the feature.
I think modifying the hypervisor and/or guest to deal with migration
is too much work to justify at the moment.
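
As a very rough sketch of that gating (CONFIG_PPC_PROT_SAO_LPAR is a
hypothetical opt-in Kconfig symbol here, CPU_FTR_SAO would need to come
back with the rest of the support, and a boot option is not shown; this
is not the actual patch):

static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
{
	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
		return false;
	if (prot & PROT_SAO) {
		if (!cpu_has_feature(CPU_FTR_SAO))
			return false;
		/* Default off under an LPAR so live migration stays safe,
		 * unless the (hypothetical) opt-in is enabled. */
		if (firmware_has_feature(FW_FEATURE_LPAR) &&
		    !IS_ENABLED(CONFIG_PPC_PROT_SAO_LPAR))
			return false;
	}
	return true;
}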

>> It would be very interesting to know how it performs in such a "real"
>> situation. I don't know how well POWER9 has optimised it -- it's
>> possible that it's not much better than putting lwsync after every load
>> or store.
> 
> 
> This is definitely worth investigating in depth. That said, even if the
> performance on P9 isn't super great, I think the feature could still be
> useful, since it would offer more granularity than the sledgehammer
> approach of emitting lwsync everywhere.

Sure, we'd be interested to hear the results.

> I'd be happy to put in some of the work required to get this to a point
> where it can be reintroduced without breaking guest migration - I'd just
> need some pointers on getting started with whatever approach is decided on.

I think re-adding it as I said above would be okay. The code itself is 
not complex, so that was not the reason for removal.

Thanks,
Nick



Re: [PATCH 1/2] powerpc/64s: remove PROT_SAO support

2020-08-18 Thread Shawn Anastasio

On 8/18/20 2:11 AM, Nicholas Piggin wrote:
Very reasonable point.


The problem we're trying to get a handle on is live partition migration
where a running guest might be using SAO then get migrated to a P10. I
don't think we have a good way to handle this case. Potentially the
hypervisor could revoke the page tables if the guest is running in hash
mode and the guest kernel could be taught about that and sigbus the
process, but in radix the guest controls those page tables and the SAO
state and I don't think there's a way to cause it to take a fault.

I also don't know what the proprietary hypervisor does here.

We could add it back, default to n, or make it bare metal only, or
somehow try to block live migration to a later CPU without the facility.
I wouldn't be against that.



Admittedly I'm not too familiar with the specifics of live migration
or guest memory management, but restoring the functionality and adding
a way to prevent migration of SAO-using guests seems like a reasonable
choice to me. Would this be done with help from the guest using some
sort of infrastructure to signal to the hypervisor that SAO is in use,
or entirely on the hypervisor by e.g. scanning through the process
table for SAO pages?


It would be very interesting to know how it performs in such a "real"
situation. I don't know how well POWER9 has optimised it -- it's
possible that it's not much better than putting lwsync after every load
or store.



This is definitely worth investigating in depth. That said, even if the
performance on P9 isn't super great, I think the feature could still be
useful, since it would offer more granularity than the sledgehammer
approach of emitting lwsync everywhere.
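
To make the sledgehammer concrete, here is roughly what the no-SAO path
looks like from the emulator's side, with every guest access wrapped in
an explicit barrier (emu_load64/emu_store64 are made-up names, and a
full sync would still be needed where store-to-load ordering matters):

#include <stdint.h>

/* Without SAO, ordering has to be recreated around each emulated access. */
static inline uint64_t emu_load64(const volatile uint64_t *p)
{
	uint64_t v = *p;
	__asm__ __volatile__("lwsync" ::: "memory");	/* acquire-ish */
	return v;
}

static inline void emu_store64(volatile uint64_t *p, uint64_t v)
{
	__asm__ __volatile__("lwsync" ::: "memory");	/* release-ish */
	*p = v;
}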

I'd be happy to put in some of the work required to get this to a point
where it can be reintroduced without breaking guest migration - I'd just
need some pointers on getting started with whatever approach is decided on.

Thanks,
Shawn


Re: [PATCH 1/2] powerpc/64s: remove PROT_SAO support

2020-08-18 Thread Nicholas Piggin
Excerpts from Shawn Anastasio's message of August 18, 2020 5:14 am:
> I'm a bit concerned about the removal of PROT_SAO.
> 
>  From what I can see, a feature like this would be extremely useful for
> emulating architectures with stronger memory models. QEMU's multi-
> threaded TCG project in particular looks like it would be a good
> candidate, since as far as I'm aware it is currently completely
> unable to perform strong-on-weak emulation.
> 
> Without hardware support like SAO provides, the only way I could see
> to achieve this would be by emitting tons of unnecessary and costly
> memory barrier instructions.
> 
> I understand that ISA 3.1 and POWER10 have dropped SAO, but as a POWER9
> user it seems a bit silly to have a potentially useful feature dropped
> from the kernel just because a future processor doesn't support it.
> 
> Curious to hear more thoughts on this.

Very reasonable point.

The problem we're trying to get a handle on is live partition migration
where a running guest might be using SAO then get migrated to a P10. I
don't think we have a good way to handle this case. Potentially the
hypervisor could revoke the page tables if the guest is running in hash
mode and the guest kernel could be taught about that and sigbus the
process, but in radix the guest controls those page tables and the SAO
state and I don't think there's a way to cause it to take a fault.

I also don't know what the proprietary hypervisor does here.

We could add it back, default to n, or make it bare metal only, or
somehow try to block live migration to a later CPU without the facility.
I wouldn't be against that.

It would be very interesting to know how it performs in such a "real"
situation. I don't know how well POWER9 has optimised it -- it's
possible that it's not much better than putting lwsync after every load
or store.

Thanks,
Nick


Re: [PATCH 1/2] powerpc/64s: remove PROT_SAO support

2020-08-17 Thread Shawn Anastasio

I'm a bit concerned about the removal of PROT_SAO.

From what I can see, a feature like this would be extremely useful for
emulating architectures with stronger memory models. QEMU's multi-
threaded TCG project in particular looks like it would be a good
candidate, since as far as I'm aware it is currently completely
unable to perform strong-on-weak emulation.

Without hardware support like SAO provides, the only way I could see
to achieve this would be by emitting tons of unnecessary and costly
memory barrier instructions.
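
For illustration, this is roughly how a process would have asked for SAO
on a mapping while the flag was still accepted (PROT_SAO is the
powerpc-specific 0x10 flag from the uapi header; map_guest_ram and its
size argument are invented for the example):

#include <stdio.h>
#include <sys/mman.h>

#ifndef PROT_SAO
#define PROT_SAO 0x10	/* powerpc: Strong Access Ordering */
#endif

static void *map_guest_ram(size_t size)
{
	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_SAO,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		perror("mmap(PROT_SAO)");	/* fails once the kernel rejects SAO */
	return p;
}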

I understand that ISA 3.1 and POWER10 have dropped SAO, but as a POWER9
user it seems a bit silly to have a potentially useful feature dropped
from the kernel just because a future processor doesn't support it.

Curious to hear more thoughts on this.

Regards,
Shawn

On 6/7/20 7:02 AM, Nicholas Piggin wrote:

ISA v3.1 does not support the SAO storage control attribute required to
implement PROT_SAO. PROT_SAO was used by specialised system software
(Lx86) that has been discontinued for about 7 years, and is not thought
to be used elsewhere, so removal should not cause problems.

We would rather remove it than keep support for older processors, because
live migrating guest partitions to newer processors may not be possible
if SAO is in use.

Signed-off-by: Nicholas Piggin 
---
  arch/powerpc/include/asm/book3s/64/pgtable.h  |  8 ++--
  arch/powerpc/include/asm/cputable.h   |  9 ++--
  arch/powerpc/include/asm/kvm_book3s_64.h  |  3 +-
  arch/powerpc/include/asm/mman.h   | 24 +++
  arch/powerpc/include/asm/nohash/64/pgtable.h  |  2 -
  arch/powerpc/kernel/dt_cpu_ftrs.c |  2 +-
  arch/powerpc/mm/book3s64/hash_utils.c |  2 -
  include/linux/mm.h|  2 -
  include/trace/events/mmflags.h|  2 -
  mm/ksm.c  |  4 --
  tools/testing/selftests/powerpc/mm/.gitignore |  1 -
  tools/testing/selftests/powerpc/mm/Makefile   |  4 +-
  tools/testing/selftests/powerpc/mm/prot_sao.c | 42 ---
  13 files changed, 18 insertions(+), 87 deletions(-)
  delete mode 100644 tools/testing/selftests/powerpc/mm/prot_sao.c

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
b/arch/powerpc/include/asm/book3s/64/pgtable.h
index f17442c3a092..d9e92586f8dc 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -20,9 +20,13 @@
  #define _PAGE_RW  (_PAGE_READ | _PAGE_WRITE)
  #define _PAGE_RWX (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
  #define _PAGE_PRIVILEGED  0x8 /* kernel access only */
-#define _PAGE_SAO  0x00010 /* Strong access order */
+
+#define _PAGE_CACHE_CTL0x00030 /* Bits for the folowing cache 
modes */
+   /*  No bits set is normal cacheable memory */
+   /*  0x00010 unused, is SAO bit on radix POWER9 */
  #define _PAGE_NON_IDEMPOTENT  0x00020 /* non idempotent memory */
  #define _PAGE_TOLERANT0x00030 /* tolerant memory, cache 
inhibited */
+
  #define _PAGE_DIRTY   0x00080 /* C: page changed */
  #define _PAGE_ACCESSED0x00100 /* R: page referenced */
  /*
@@ -825,8 +829,6 @@ static inline void __set_pte_at(struct mm_struct *mm, 
unsigned long addr,
return hash__set_pte_at(mm, addr, ptep, pte, percpu);
  }
  
-#define _PAGE_CACHE_CTL	(_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT)

-
  #define pgprot_noncached pgprot_noncached
  static inline pgprot_t pgprot_noncached(pgprot_t prot)
  {
diff --git a/arch/powerpc/include/asm/cputable.h 
b/arch/powerpc/include/asm/cputable.h
index bac2252c839e..c7e923ba 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -191,7 +191,6 @@ static inline void cpu_feature_keys_init(void) { }
  #define CPU_FTR_SPURR LONG_ASM_CONST(0x0100)
  #define CPU_FTR_DSCR  LONG_ASM_CONST(0x0200)
  #define CPU_FTR_VSX   LONG_ASM_CONST(0x0400)
-#define CPU_FTR_SAOLONG_ASM_CONST(0x0800)
  #define CPU_FTR_CP_USE_DCBTZ  LONG_ASM_CONST(0x1000)
  #define CPU_FTR_UNALIGNED_LD_STD  LONG_ASM_CONST(0x2000)
  #define CPU_FTR_ASYM_SMT  LONG_ASM_CONST(0x4000)
@@ -435,7 +434,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | \
CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \
-   CPU_FTR_DSCR | CPU_FTR_SAO  | CPU_FTR_ASYM_SMT | \
+   CPU_FTR_DSCR | CPU_FTR_ASYM_SMT | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | \
CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX | CPU_FTR_PKEY)
@@ -444,7 +443,7 @@ static inline void cpu_feature_

Re: [PATCH 1/2] powerpc/64s: remove PROT_SAO support

2020-06-28 Thread Nicholas Piggin
Excerpts from Michael Ellerman's message of June 12, 2020 4:14 pm:
> Nicholas Piggin  writes:
>> ISA v3.1 does not support the SAO storage control attribute required to
>> implement PROT_SAO. PROT_SAO was used by specialised system software
>> (Lx86) that has been discontinued for about 7 years, and is not thought
>> to be used elsewhere, so removal should not cause problems.
>>
>> We would rather remove it than keep support for older processors, because
>> live migrating guest partitions to newer processors may not be possible
>> if SAO is in use.
> 

Thanks for the review, sorry, got distracted...

> The key details being:
>  - you don't remove PROT_SAO from the uapi header, so code using the
>definition will still build.
>  - you change arch_validate_prot() to reject PROT_SAO, which means code
>using it will see a failure from mmap() at runtime.

Yes.

> This obviously risks breaking userspace, even if we think it won't in
> practice. I guess we don't really have any option given the hardware
> support is being dropped.
> 
> Can you repost with a wider Cc list, including linux-mm and linux-arch?

Will do.

> I wonder if we should add a comment to the uapi header, eg?
> 
> diff --git a/arch/powerpc/include/uapi/asm/mman.h 
> b/arch/powerpc/include/uapi/asm/mman.h
> index c0c737215b00..d4fdbe768997 100644
> --- a/arch/powerpc/include/uapi/asm/mman.h
> +++ b/arch/powerpc/include/uapi/asm/mman.h
> @@ -11,7 +11,7 @@
>  #include 
>  
>  
> -#define PROT_SAO 0x10/* Strong Access Ordering */
> +#define PROT_SAO 0x10/* Unsupported since v5.9 */
>  
>  #define MAP_RENAME  MAP_ANONYMOUS   /* In SunOS terminology */
>  #define MAP_NORESERVE   0x40/* don't reserve swap pages */

Yeah that makes sense.

>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
>> b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index f17442c3a092..d9e92586f8dc 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -20,9 +20,13 @@
>>  #define _PAGE_RW(_PAGE_READ | _PAGE_WRITE)
>>  #define _PAGE_RWX   (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
>>  #define _PAGE_PRIVILEGED0x8 /* kernel access only */
>> -#define _PAGE_SAO   0x00010 /* Strong access order */
>> +
>> +#define _PAGE_CACHE_CTL 0x00030 /* Bits for the folowing cache 
>> modes */
>> +/*  No bits set is normal cacheable memory */
>> +/*  0x00010 unused, is SAO bit on radix POWER9 */
>>  #define _PAGE_NON_IDEMPOTENT0x00020 /* non idempotent memory */
>>  #define _PAGE_TOLERANT  0x00030 /* tolerant memory, cache 
>> inhibited */
>> +
> 
> Why'd you do it that way vs just dropping _PAGE_SAO from the or below?

Just didn't like _PAGE_CACHE_CTL depending on values of the variants 
that we use.
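
For reference, the two shapes being weighed come out to the same mask
(0x20 | 0x30 == 0x30); the patch just avoids deriving it from the
surviving variants:

/* As in the patch: a literal mask over the cache-mode bits. */
#define _PAGE_CACHE_CTL		0x00030

/* The reviewed alternative would derive the same value instead:
 *	#define _PAGE_CACHE_CTL	(_PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT)
 */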

>> diff --git a/arch/powerpc/include/asm/cputable.h 
>> b/arch/powerpc/include/asm/cputable.h
>> index bac2252c839e..c7e923ba 100644
>> --- a/arch/powerpc/include/asm/cputable.h
>> +++ b/arch/powerpc/include/asm/cputable.h
>> @@ -191,7 +191,6 @@ static inline void cpu_feature_keys_init(void) { }
>>  #define CPU_FTR_SPURR   
>> LONG_ASM_CONST(0x0100)
>>  #define CPU_FTR_DSCR
>> LONG_ASM_CONST(0x0200)
>>  #define CPU_FTR_VSX LONG_ASM_CONST(0x0400)
>> -#define CPU_FTR_SAO LONG_ASM_CONST(0x0800)
> 
> Can you do:
> 
> +// Free  LONG_ASM_CONST(0x0800)

Yes.

> 
>> diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h 
>> b/arch/powerpc/include/asm/kvm_book3s_64.h
>> index 9bb9bb370b53..579c9229124b 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s_64.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s_64.h
>> @@ -400,7 +400,8 @@ static inline bool hpte_cache_flags_ok(unsigned long 
>> hptel, bool is_ci)
>>  
>>  /* Handle SAO */
>>  if (wimg == (HPTE_R_W | HPTE_R_I | HPTE_R_M) &&
>> -cpu_has_feature(CPU_FTR_ARCH_206))
>> +cpu_has_feature(CPU_FTR_ARCH_206) &&
>> +!cpu_has_feature(CPU_FTR_ARCH_31))
>>  wimg = HPTE_R_M;
> 
> Shouldn't it reject that combination if the host can't support it?
> 
> Or I guess it does, but yikes that code is not clear.

Yeah, took me a bit to work that out.

>> diff --git a/arch/powerpc/include/asm/mman.h 
>> b/arch/powerpc/include/asm/mman.h
>> index d610c2e07b28..43a62f3e21a0 100644
>> --- a/arch/powerpc/include/asm/mman.h
>> +++ b/arch/powerpc/include/asm/mman.h
>> @@ -13,38 +13,24 @@
>>  #include 
>>  #include 
>>  
>> -/*
>> - * This file is included by linux/mman.h, so we can't use 
>> cacl_vm_prot_bits()
>> - * here.  How important is the optimization?
>> - */
> 
> This comment seems confused, but also unrelated to this patch?

Yeah.
 
>> diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c 
>> b/arch/powerpc/kernel/dt_cpu_ftrs.c
>> in

Re: [PATCH 1/2] powerpc/64s: remove PROT_SAO support

2020-06-11 Thread Michael Ellerman
Nicholas Piggin  writes:
> ISA v3.1 does not support the SAO storage control attribute required to
> implement PROT_SAO. PROT_SAO was used by specialised system software
> (Lx86) that has been discontinued for about 7 years, and is not thought
> to be used elsewhere, so removal should not cause problems.
>
> We would rather remove it than keep support for older processors, because
> live migrating guest partitions to newer processors may not be possible
> if SAO is in use.

The key details being:
 - you don't remove PROT_SAO from the uapi header, so code using the
   definition will still build.
 - you change arch_validate_prot() to reject PROT_SAO, which means code
   using it will see a failure from mmap() at runtime.


This obviously risks breaking userspace, even if we think it won't in
practice. I guess we don't really have any option given the hardware
support is being dropped.
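
Concretely, the runtime failure comes from arch_validate_prot() no
longer whitelisting the bit; roughly (a sketch of the resulting
behaviour, not the literal hunk):

static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
{
	/* PROT_SAO is no longer accepted, so mmap()/mprotect() passing it
	 * now fails with EINVAL at runtime. */
	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM))
		return false;
	return true;
}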

Can you repost with a wider Cc list, including linux-mm and linux-arch?

I wonder if we should add a comment to the uapi header, eg?

diff --git a/arch/powerpc/include/uapi/asm/mman.h 
b/arch/powerpc/include/uapi/asm/mman.h
index c0c737215b00..d4fdbe768997 100644
--- a/arch/powerpc/include/uapi/asm/mman.h
+++ b/arch/powerpc/include/uapi/asm/mman.h
@@ -11,7 +11,7 @@
 #include 
 
 
-#define PROT_SAO   0x10/* Strong Access Ordering */
+#define PROT_SAO   0x10/* Unsupported since v5.9 */
 
 #define MAP_RENAME  MAP_ANONYMOUS   /* In SunOS terminology */
 #define MAP_NORESERVE   0x40/* don't reserve swap pages */


> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
> b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index f17442c3a092..d9e92586f8dc 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -20,9 +20,13 @@
>  #define _PAGE_RW (_PAGE_READ | _PAGE_WRITE)
>  #define _PAGE_RWX(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
>  #define _PAGE_PRIVILEGED 0x8 /* kernel access only */
> -#define _PAGE_SAO0x00010 /* Strong access order */
> +
> +#define _PAGE_CACHE_CTL  0x00030 /* Bits for the folowing cache 
> modes */
> + /*  No bits set is normal cacheable memory */
> + /*  0x00010 unused, is SAO bit on radix POWER9 */
>  #define _PAGE_NON_IDEMPOTENT 0x00020 /* non idempotent memory */
>  #define _PAGE_TOLERANT   0x00030 /* tolerant memory, cache 
> inhibited */
> +

Why'd you do it that way vs just dropping _PAGE_SAO from the or below?

> diff --git a/arch/powerpc/include/asm/cputable.h 
> b/arch/powerpc/include/asm/cputable.h
> index bac2252c839e..c7e923ba 100644
> --- a/arch/powerpc/include/asm/cputable.h
> +++ b/arch/powerpc/include/asm/cputable.h
> @@ -191,7 +191,6 @@ static inline void cpu_feature_keys_init(void) { }
>  #define CPU_FTR_SPURR
> LONG_ASM_CONST(0x0100)
>  #define CPU_FTR_DSCR LONG_ASM_CONST(0x0200)
>  #define CPU_FTR_VSX  LONG_ASM_CONST(0x0400)
> -#define CPU_FTR_SAO  LONG_ASM_CONST(0x0800)

Can you do:

+// FreeLONG_ASM_CONST(0x0800)

> diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h 
> b/arch/powerpc/include/asm/kvm_book3s_64.h
> index 9bb9bb370b53..579c9229124b 100644
> --- a/arch/powerpc/include/asm/kvm_book3s_64.h
> +++ b/arch/powerpc/include/asm/kvm_book3s_64.h
> @@ -400,7 +400,8 @@ static inline bool hpte_cache_flags_ok(unsigned long 
> hptel, bool is_ci)
>  
>   /* Handle SAO */
>   if (wimg == (HPTE_R_W | HPTE_R_I | HPTE_R_M) &&
> - cpu_has_feature(CPU_FTR_ARCH_206))
> + cpu_has_feature(CPU_FTR_ARCH_206) &&
> + !cpu_has_feature(CPU_FTR_ARCH_31))
>   wimg = HPTE_R_M;

Shouldn't it reject that combination if the host can't support it?

Or I guess it does, but yikes that code is not clear.

> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
> index d610c2e07b28..43a62f3e21a0 100644
> --- a/arch/powerpc/include/asm/mman.h
> +++ b/arch/powerpc/include/asm/mman.h
> @@ -13,38 +13,24 @@
>  #include 
>  #include 
>  
> -/*
> - * This file is included by linux/mman.h, so we can't use cacl_vm_prot_bits()
> - * here.  How important is the optimization?
> - */

This comment seems confused, but also unrelated to this patch?

> diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c 
> b/arch/powerpc/kernel/dt_cpu_ftrs.c
> index 3a409517c031..8d2e4043702f 100644
> --- a/arch/powerpc/kernel/dt_cpu_ftrs.c
> +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
> @@ -622,7 +622,7 @@ static struct dt_cpu_feature_match __initdata
>   {"processor-control-facility-v3", feat_enable_dbell, CPU_FTR_DBELL},
>   {"processor-utilization-of-resources-register", feat_enable_purr, 0},
>   {"no-execute", feat_enable, 0},
> - {"strong-access-ordering", feat_enable, CPU_FTR_SAO

[PATCH 1/2] powerpc/64s: remove PROT_SAO support

2020-06-07 Thread Nicholas Piggin
ISA v3.1 does not support the SAO storage control attribute required to
implement PROT_SAO. PROT_SAO was used by specialised system software
(Lx86) that has been discontinued for about 7 years, and is not thought
to be used elsewhere, so removal should not cause problems.

We would rather remove it than keep support for older processors, because
live migrating guest partitions to newer processors may not be possible
if SAO is in use.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/include/asm/book3s/64/pgtable.h  |  8 ++--
 arch/powerpc/include/asm/cputable.h   |  9 ++--
 arch/powerpc/include/asm/kvm_book3s_64.h  |  3 +-
 arch/powerpc/include/asm/mman.h   | 24 +++
 arch/powerpc/include/asm/nohash/64/pgtable.h  |  2 -
 arch/powerpc/kernel/dt_cpu_ftrs.c |  2 +-
 arch/powerpc/mm/book3s64/hash_utils.c |  2 -
 include/linux/mm.h|  2 -
 include/trace/events/mmflags.h|  2 -
 mm/ksm.c  |  4 --
 tools/testing/selftests/powerpc/mm/.gitignore |  1 -
 tools/testing/selftests/powerpc/mm/Makefile   |  4 +-
 tools/testing/selftests/powerpc/mm/prot_sao.c | 42 ---
 13 files changed, 18 insertions(+), 87 deletions(-)
 delete mode 100644 tools/testing/selftests/powerpc/mm/prot_sao.c

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
b/arch/powerpc/include/asm/book3s/64/pgtable.h
index f17442c3a092..d9e92586f8dc 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -20,9 +20,13 @@
 #define _PAGE_RW   (_PAGE_READ | _PAGE_WRITE)
 #define _PAGE_RWX  (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
 #define _PAGE_PRIVILEGED   0x8 /* kernel access only */
-#define _PAGE_SAO  0x00010 /* Strong access order */
+
+#define _PAGE_CACHE_CTL0x00030 /* Bits for the folowing cache 
modes */
+   /*  No bits set is normal cacheable memory */
+   /*  0x00010 unused, is SAO bit on radix POWER9 */
 #define _PAGE_NON_IDEMPOTENT   0x00020 /* non idempotent memory */
 #define _PAGE_TOLERANT 0x00030 /* tolerant memory, cache inhibited */
+
 #define _PAGE_DIRTY0x00080 /* C: page changed */
 #define _PAGE_ACCESSED 0x00100 /* R: page referenced */
 /*
@@ -825,8 +829,6 @@ static inline void __set_pte_at(struct mm_struct *mm, 
unsigned long addr,
return hash__set_pte_at(mm, addr, ptep, pte, percpu);
 }
 
-#define _PAGE_CACHE_CTL(_PAGE_SAO | _PAGE_NON_IDEMPOTENT | 
_PAGE_TOLERANT)
-
 #define pgprot_noncached pgprot_noncached
 static inline pgprot_t pgprot_noncached(pgprot_t prot)
 {
diff --git a/arch/powerpc/include/asm/cputable.h 
b/arch/powerpc/include/asm/cputable.h
index bac2252c839e..c7e923ba 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -191,7 +191,6 @@ static inline void cpu_feature_keys_init(void) { }
 #define CPU_FTR_SPURR  LONG_ASM_CONST(0x0100)
 #define CPU_FTR_DSCR   LONG_ASM_CONST(0x0200)
 #define CPU_FTR_VSXLONG_ASM_CONST(0x0400)
-#define CPU_FTR_SAOLONG_ASM_CONST(0x0800)
 #define CPU_FTR_CP_USE_DCBTZ   LONG_ASM_CONST(0x1000)
 #define CPU_FTR_UNALIGNED_LD_STD   LONG_ASM_CONST(0x2000)
 #define CPU_FTR_ASYM_SMT   LONG_ASM_CONST(0x4000)
@@ -435,7 +434,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | \
CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \
-   CPU_FTR_DSCR | CPU_FTR_SAO  | CPU_FTR_ASYM_SMT | \
+   CPU_FTR_DSCR | CPU_FTR_ASYM_SMT | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | \
CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX | CPU_FTR_PKEY)
@@ -444,7 +443,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | \
CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \
-   CPU_FTR_DSCR | CPU_FTR_SAO  | \
+   CPU_FTR_DSCR | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
@@ -455,7 +454,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_MMCRA | CPU_FTR_SMT | \
CPU_FTR_COHERENT_ICACHE | \
CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \
-   CPU_FTR_DSCR | CPU_FTR_SAO  | \
+   CPU_FTR_DSCR | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX