Re: [PATCH V1 09/10] arch/kmap: Define kmap_atomic_prot() for all arch's

2020-05-02 Thread Ira Weiny
On Fri, May 01, 2020 at 04:20:20AM +0100, Al Viro wrote:
> On Fri, May 01, 2020 at 03:37:34AM +0100, Al Viro wrote:
> > On Thu, Apr 30, 2020 at 01:38:44PM -0700, ira.we...@intel.com wrote:
> > 
> > > -static inline void *kmap_atomic(struct page *page)
> > > +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
> > >  {
> > >   preempt_disable();
> > >   pagefault_disable();
> > >   if (!PageHighMem(page))
> > >   return page_address(page);
> > > - return kmap_atomic_high(page);
> > > + return kmap_atomic_high_prot(page, prot);
> > >  }
> > > +#define kmap_atomic(page)	kmap_atomic_prot(page, kmap_prot)
> > 
> > OK, so it *was* just a bisect hazard - you return to original semantics
> > wrt preempt_disable()...
> 
> FWIW, how about doing the following: just before #5/10 have a patch
> that would touch only microblaze, ppc and x86 splitting their
> kmap_atomic_prot() into an inline helper + kmap_atomic_high_prot().
> Then your #5 would leave their kmap_atomic_prot() as-is (it would
> use kmap_atomic_high_prot() instead).  The rest of the series plays
> out pretty much the same way it does now, and wrappers on those
> 3 architectures would go away when an identical generic one is
> introduced in this commit (#9/10).
> 
> AFAICS, that would avoid the bisect hazard and might even end
> up with less noise in the patches...

This works.  V2 coming out shortly.

Thanks for catching this,
Ira
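
For reference, a rough sketch (not the actual patch) of the per-arch step Al
describes: microblaze, ppc and x86 would each keep an inline
kmap_atomic_prot() carrying the generic fast path, and only the highmem slow
path would move into an out-of-line kmap_atomic_high_prot(), mirroring the
generic wrapper quoted above:

static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
	preempt_disable();
	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);
	/* arch-provided out-of-line slow path */
	return kmap_atomic_high_prot(page, prot);
}

The arch wrappers then disappear once the identical generic one lands in
patch #9/10, avoiding the bisect hazard.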



Re: [PATCH v3 3/3] mm/page_alloc: Keep memoryless cpuless node 0 offline

2020-05-02 Thread Christopher Lameter
On Fri, 1 May 2020, Srikar Dronamraju wrote:

> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -116,8 +116,10 @@ EXPORT_SYMBOL(latent_entropy);
>   */
>  nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
>   [N_POSSIBLE] = NODE_MASK_ALL,
> +#ifdef CONFIG_NUMA
> + [N_ONLINE] = NODE_MASK_NONE,

Hmmm. I would have expected that you would have added something early
in boot that marks the current node (whatever it is) online instead?
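
A minimal sketch of that alternative, purely for illustration (the helper
name and where it would be called from are assumptions, not existing code):

static void __init mark_boot_node_online(void)
{
	int nid = numa_node_id();	/* node of the CPU we are booting on */

	/* node_set_online() sets N_ONLINE for this node in node_states[] */
	node_set_online(nid);
}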


Re: [PATCH v3 1/3] powerpc/numa: Set numa_node for all possible cpus

2020-05-02 Thread Christopher Lameter
On Fri, 1 May 2020, Srikar Dronamraju wrote:

> - for_each_present_cpu(cpu)
> - numa_setup_cpu(cpu);
> + for_each_possible_cpu(cpu) {
> + /*
> +  * Powerpc with CONFIG_NUMA always used to have a node 0,
> +  * even if it was memoryless or cpuless. For all cpus that
> +  * are possible but not present, cpu_to_node() would point
> +  * to node 0. To remove a cpuless, memoryless dummy node,
> +  * powerpc need to make sure all possible but not present
> +  * cpu_to_node are set to a proper node.
> +  */
> + if (cpu_present(cpu))
> + numa_setup_cpu(cpu);
> + else
> + set_cpu_numa_node(cpu, first_online_node);
> + }
>  }


Can this be folded into numa_setup_cpu()?

This looks more like numa_setup_cpu() itself needs to change?
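
Something along these lines, perhaps (a rough sketch only; the signature and
the helper wrapping the existing lookup are assumptions, not the real powerpc
code):

static int numa_setup_cpu(unsigned long lcpu)
{
	if (!cpu_present(lcpu)) {
		/* possible-but-not-present CPU: fall back to a sane default */
		set_cpu_numa_node(lcpu, first_online_node);
		return first_online_node;
	}

	/* assumed helper holding the existing associativity lookup */
	return numa_setup_present_cpu(lcpu);
}

That keeps the policy in one place rather than in the caller's loop.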



[PATCH] powerpc/5200: update contact email

2020-05-02 Thread Wolfram Sang
My 'pengutronix' address has been defunct for years. Merge the entries and use
the proper contact address.

Signed-off-by: Wolfram Sang 
---
 arch/powerpc/boot/dts/pcm032.dts | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/boot/dts/pcm032.dts b/arch/powerpc/boot/dts/pcm032.dts
index c259c6b3ac5a..780e13d99e7b 100644
--- a/arch/powerpc/boot/dts/pcm032.dts
+++ b/arch/powerpc/boot/dts/pcm032.dts
@@ -3,9 +3,7 @@
  * phyCORE-MPC5200B-IO (pcm032) board Device Tree Source
  *
  * Copyright (C) 2006-2009 Pengutronix
- * Sascha Hauer 
- * Juergen Beisert 
- * Wolfram Sang 
+ * Sascha Hauer, Juergen Beisert, Wolfram Sang 
  */
 
 /include/ "mpc5200b.dtsi"
-- 
2.20.1



Re: [PATCH 0/7] sha1 library cleanup

2020-05-02 Thread Jason A. Donenfeld
Thanks for this series. I like the general idea. I think it might make
sense, though, to separate things out into sha1.h and sha256.h. That
will be nice preparation work for when we eventually move obsolete
primitives into some subdirectory.


[PATCH 0/7] sha1 library cleanup

2020-05-02 Thread Eric Biggers
<linux/cryptohash.h> sounds very generic and important, like it's the
header to include if you're doing cryptographic hashing in the kernel.
But actually it only includes the library implementation of the SHA-1
compression function (not even the full SHA-1).  This should basically
never be used anymore; SHA-1 is no longer considered secure, and there
are much better ways to do cryptographic hashing in the kernel.

Also the function is named just "sha_transform()", which makes it
unclear which version of SHA is meant.

Therefore, this series cleans things up by moving these SHA-1
declarations into <crypto/sha.h> where they better belong, and changing
the names to say SHA-1 rather than just SHA.
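
For a caller of the library, the end result looks roughly like this (an
illustration only; the renamed identifiers follow the patch titles, and the
workspace constant name is an assumption):

#include <crypto/sha.h>		/* was <linux/cryptohash.h> */

static void sha1_one_block(u32 digest[5], const char *data)
{
	u32 workspace[SHA1_WORKSPACE_WORDS];	/* was SHA_WORKSPACE_WORDS */

	sha1_init(digest);			/* was sha_init() */
	sha1_transform(digest, data, workspace);	/* was sha_transform() */
}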

As future work, we should split sha.h into sha1.h and sha2.h and try to
remove the remaining uses of SHA-1.  For example, the remaining use in
drivers/char/random.c is probably one that can be gotten rid of.

This patch series applies to cryptodev/master.

Eric Biggers (7):
  mptcp: use SHA256_BLOCK_SIZE, not SHA_MESSAGE_BYTES
  crypto: powerpc/sha1 - remove unused temporary workspace
  crypto: powerpc/sha1 - prefix the "sha1_" functions
  crypto: s390/sha1 - prefix the "sha1_" functions
  crypto: lib/sha1 - rename "sha" to "sha1"
  crypto: lib/sha1 - remove unnecessary includes of linux/cryptohash.h
  crypto: lib/sha1 - fold linux/cryptohash.h into crypto/sha.h

 Documentation/security/siphash.rst  |  2 +-
 arch/arm/crypto/sha1_glue.c |  1 -
 arch/arm/crypto/sha1_neon_glue.c|  1 -
 arch/arm/crypto/sha256_glue.c   |  1 -
 arch/arm/crypto/sha256_neon_glue.c  |  1 -
 arch/arm/kernel/armksyms.c  |  1 -
 arch/arm64/crypto/sha256-glue.c |  1 -
 arch/arm64/crypto/sha512-glue.c |  1 -
 arch/microblaze/kernel/microblaze_ksyms.c   |  1 -
 arch/mips/cavium-octeon/crypto/octeon-md5.c |  1 -
 arch/powerpc/crypto/md5-glue.c  |  1 -
 arch/powerpc/crypto/sha1-spe-glue.c |  1 -
 arch/powerpc/crypto/sha1.c  | 33 ++---
 arch/powerpc/crypto/sha256-spe-glue.c   |  1 -
 arch/s390/crypto/sha1_s390.c| 12 
 arch/sparc/crypto/md5_glue.c|  1 -
 arch/sparc/crypto/sha1_glue.c   |  1 -
 arch/sparc/crypto/sha256_glue.c |  1 -
 arch/sparc/crypto/sha512_glue.c |  1 -
 arch/unicore32/kernel/ksyms.c   |  1 -
 arch/x86/crypto/sha1_ssse3_glue.c   |  1 -
 arch/x86/crypto/sha256_ssse3_glue.c |  1 -
 arch/x86/crypto/sha512_ssse3_glue.c |  1 -
 crypto/sha1_generic.c   |  5 ++--
 drivers/char/random.c   |  8 ++---
 drivers/crypto/atmel-sha.c  |  1 -
 drivers/crypto/chelsio/chcr_algo.c  |  1 -
 drivers/crypto/chelsio/chcr_ipsec.c |  1 -
 drivers/crypto/omap-sham.c  |  1 -
 fs/f2fs/hash.c  |  1 -
 include/crypto/sha.h| 10 +++
 include/linux/cryptohash.h  | 14 -
 include/linux/filter.h  |  4 +--
 include/net/tcp.h   |  1 -
 kernel/bpf/core.c   | 18 +--
 lib/crypto/chacha.c |  1 -
 lib/sha1.c  | 24 ---
 net/core/secure_seq.c   |  1 -
 net/ipv6/addrconf.c | 10 +++
 net/ipv6/seg6_hmac.c|  1 -
 net/mptcp/crypto.c  |  4 +--
 41 files changed, 69 insertions(+), 104 deletions(-)
 delete mode 100644 include/linux/cryptohash.h


base-commit: 12b3cf9093542d9f752a4968815ece836159013f
-- 
2.26.2



[PATCH 2/7] crypto: powerpc/sha1 - remove unused temporary workspace

2020-05-02 Thread Eric Biggers
From: Eric Biggers 

The PowerPC implementation of SHA-1 doesn't actually use the 16-word
temporary array that's passed to the assembly code.  This was probably
meant to correspond to the 'W' array that lib/sha1.c uses.  However, in
sha1-powerpc-asm.S these values are actually stored in GPRs 16-31.

Referencing SHA_WORKSPACE_WORDS from this code also isn't appropriate,
since it's an implementation detail of lib/sha1.c.

Therefore, just remove this unneeded array.

Tested with:

export ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu-
make mpc85xx_defconfig
cat >> .config << EOF
# CONFIG_MODULES is not set
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
CONFIG_CRYPTO_SHA1_PPC=y
EOF
make olddefconfig
make -j32
qemu-system-ppc -M mpc8544ds -cpu e500 -nographic \
-kernel arch/powerpc/boot/zImage \
-append "cryptomgr.fuzz_iterations=1000 
cryptomgr.panic_on_fail=1"

Cc: linuxppc-dev@lists.ozlabs.org
Cc: Benjamin Herrenschmidt 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Signed-off-by: Eric Biggers 
---
 arch/powerpc/crypto/sha1.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/crypto/sha1.c b/arch/powerpc/crypto/sha1.c
index 7b43fc352089b1..db46b6130a9642 100644
--- a/arch/powerpc/crypto/sha1.c
+++ b/arch/powerpc/crypto/sha1.c
@@ -16,12 +16,11 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
 
-extern void powerpc_sha_transform(u32 *state, const u8 *src, u32 *temp);
+void powerpc_sha_transform(u32 *state, const u8 *src);
 
 static int sha1_init(struct shash_desc *desc)
 {
@@ -47,7 +46,6 @@ static int sha1_update(struct shash_desc *desc, const u8 
*data,
src = data;
 
if ((partial + len) > 63) {
-   u32 temp[SHA_WORKSPACE_WORDS];
 
if (partial) {
done = -partial;
@@ -56,12 +54,11 @@ static int sha1_update(struct shash_desc *desc, const u8 
*data,
}
 
do {
-   powerpc_sha_transform(sctx->state, src, temp);
+   powerpc_sha_transform(sctx->state, src);
done += 64;
src = data + done;
} while (done + 63 < len);
 
-   memzero_explicit(temp, sizeof(temp));
partial = 0;
}
memcpy(sctx->buffer + partial, src, len - done);
-- 
2.26.2



[PATCH 3/7] crypto: powerpc/sha1 - prefix the "sha1_" functions

2020-05-02 Thread Eric Biggers
From: Eric Biggers 

Prefix the PowerPC SHA-1 functions with "powerpc_sha1_" rather than
"sha1_".  This allows us to rename the library function sha_init() to
sha1_init() without causing a naming collision.

Cc: linuxppc-dev@lists.ozlabs.org
Cc: Benjamin Herrenschmidt 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Signed-off-by: Eric Biggers 
---
 arch/powerpc/crypto/sha1.c | 26 +-
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/crypto/sha1.c b/arch/powerpc/crypto/sha1.c
index db46b6130a9642..b40dc50a6908ae 100644
--- a/arch/powerpc/crypto/sha1.c
+++ b/arch/powerpc/crypto/sha1.c
@@ -22,7 +22,7 @@
 
 void powerpc_sha_transform(u32 *state, const u8 *src);
 
-static int sha1_init(struct shash_desc *desc)
+static int powerpc_sha1_init(struct shash_desc *desc)
 {
struct sha1_state *sctx = shash_desc_ctx(desc);
 
@@ -33,8 +33,8 @@ static int sha1_init(struct shash_desc *desc)
return 0;
 }
 
-static int sha1_update(struct shash_desc *desc, const u8 *data,
-   unsigned int len)
+static int powerpc_sha1_update(struct shash_desc *desc, const u8 *data,
+  unsigned int len)
 {
struct sha1_state *sctx = shash_desc_ctx(desc);
unsigned int partial, done;
@@ -68,7 +68,7 @@ static int sha1_update(struct shash_desc *desc, const u8 
*data,
 
 
 /* Add padding and return the message digest. */
-static int sha1_final(struct shash_desc *desc, u8 *out)
+static int powerpc_sha1_final(struct shash_desc *desc, u8 *out)
 {
struct sha1_state *sctx = shash_desc_ctx(desc);
__be32 *dst = (__be32 *)out;
@@ -81,10 +81,10 @@ static int sha1_final(struct shash_desc *desc, u8 *out)
/* Pad out to 56 mod 64 */
index = sctx->count & 0x3f;
padlen = (index < 56) ? (56 - index) : ((64+56) - index);
-   sha1_update(desc, padding, padlen);
+   powerpc_sha1_update(desc, padding, padlen);
 
/* Append length */
-   sha1_update(desc, (const u8 *)&bits, sizeof(bits));
+   powerpc_sha1_update(desc, (const u8 *)&bits, sizeof(bits));
 
/* Store state in digest */
for (i = 0; i < 5; i++)
@@ -96,7 +96,7 @@ static int sha1_final(struct shash_desc *desc, u8 *out)
return 0;
 }
 
-static int sha1_export(struct shash_desc *desc, void *out)
+static int powerpc_sha1_export(struct shash_desc *desc, void *out)
 {
struct sha1_state *sctx = shash_desc_ctx(desc);
 
@@ -104,7 +104,7 @@ static int sha1_export(struct shash_desc *desc, void *out)
return 0;
 }
 
-static int sha1_import(struct shash_desc *desc, const void *in)
+static int powerpc_sha1_import(struct shash_desc *desc, const void *in)
 {
struct sha1_state *sctx = shash_desc_ctx(desc);
 
@@ -114,11 +114,11 @@ static int sha1_import(struct shash_desc *desc, const 
void *in)
 
 static struct shash_alg alg = {
.digestsize =   SHA1_DIGEST_SIZE,
-   .init   =   sha1_init,
-   .update =   sha1_update,
-   .final  =   sha1_final,
-   .export =   sha1_export,
-   .import =   sha1_import,
+   .init   =   powerpc_sha1_init,
+   .update =   powerpc_sha1_update,
+   .final  =   powerpc_sha1_final,
+   .export =   powerpc_sha1_export,
+   .import =   powerpc_sha1_import,
.descsize   =   sizeof(struct sha1_state),
.statesize  =   sizeof(struct sha1_state),
.base   =   {
-- 
2.26.2



Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP

2020-05-02 Thread Dan Williams
On Sat, May 2, 2020 at 2:27 AM David Hildenbrand  wrote:
>
> >> Now, let's clarify what I want regarding virtio-mem:
> >>
> >> 1. kexec should not add virtio-mem memory to the initial firmware
> >>memmap. The driver has to be in charge as discussed.
> >> 2. kexec should not place kexec images onto virtio-mem memory. That
> >>would end badly.
> >> 3. kexec should still dump virtio-mem memory via kdump.
> >
> > Ok, but then seems to say to me that dax/kmem is a different type of
> > (driver managed) than virtio-mem and it's confusing to try to apply
> > the same meaning. Why not just call your type for the distinct type it
> > is "System RAM (virtio-mem)" and let any other driver managed memory
> > follow the same "System RAM ($driver)" format if it wants?
>
> I had the same idea but discarded it because it seemed to uglify the
> add_memory() interface (passing yet another parameter only relevant for
> driver managed memory). Maybe we really want a new one, because I like
> that idea:
>
> /*
>  * Add special, driver-managed memory to the system as system ram.
>  * The resource_name is expected to have the name format "System RAM
>  * ($DRIVER)", so user space (esp. kexec-tools) can special-case it.
>  *
>  * For this memory, no entries in /sys/firmware/memmap are created,
>  * as this memory won't be part of the raw firmware-provided memory map
>  * e.g., after a reboot. Also, the created memory resource is flagged
>  * with IORESOURCE_MEM_DRIVER_MANAGED, so in-kernel users can special-
>  * case this memory (e.g., not place kexec images onto it).
>  */
> int add_memory_driver_managed(int nid, u64 start, u64 size,
>   const char *resource_name);
>
>
> If we'd ever have to special case it even more in the kernel, we could
> allow to specify further resource flags. While passing the driver name
> instead of the resource_name would be an option, this way we don't have
> to hand craft new resource strings for added memory resources.
>
> Thoughts?

Looks useful to me and simplifies walking /proc/iomem. I personally
like the safety of the string just being the $driver component of the
name, but I won't lose sleep if the interface stays freeform like you
propose.
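
For illustration, a driver using the proposed interface would look roughly
like this (hypothetical: add_memory_driver_managed() does not exist yet at
this point in the thread, and the wrapper name is made up):

static int virtio_mem_add_range(int nid, u64 start, u64 size)
{
	/* resource name follows the "System RAM ($DRIVER)" convention */
	return add_memory_driver_managed(nid, start, size,
					 "System RAM (virtio-mem)");
}

kexec-tools could then special-case any "System RAM (...)" resource when
placing kexec images, while kdump still dumps that memory.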


[PATCH] powerpc/64s: Fix unrecoverable SLB crashes due to preemption check

2020-05-02 Thread Michael Ellerman
Hugh reported that his trusty G5 crashed after a few hours under load
with an "Unrecoverable exception 380".

The crash is in interrupt_return() where we check lazy_irq_pending(),
which calls get_paca() and with CONFIG_DEBUG_PREEMPT=y that goes to
check_preemption_disabled() via debug_smp_processor_id().

As Nick explained on the list:

  Problem is MSR[RI] is cleared here, ready to do the last few things
  for interrupt return where we're not allowed to take any other
  interrupts.

  SLB interrupts can happen just about anywhere aside from kernel
  text, global variables, and stack. When that hits, it appears to be
  unrecoverable due to RI=0.

The problematic access is in preempt_count() which is:

return READ_ONCE(current_thread_info()->preempt_count);

Because of THREAD_INFO_IN_TASK, current_thread_info() just points to
current, so the access is to somewhere in kernel memory, but not on
the stack or in .data, which means it can cause an SLB miss. If we
take an SLB miss with RI=0 it is fatal.

The easiest solution is to add a version of lazy_irq_pending() that
doesn't do the preemption check and call it from the interrupt return
path.

Fixes: 68b34588e202 ("powerpc/64/sycall: Implement syscall entry/exit logic in C")
Reported-by: Hugh Dickins 
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/hw_irq.h | 20 +++-
 arch/powerpc/kernel/syscall_64.c  |  6 +++---
 2 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/hw_irq.h 
b/arch/powerpc/include/asm/hw_irq.h
index e0e71777961f..3a0db7b0b46e 100644
--- a/arch/powerpc/include/asm/hw_irq.h
+++ b/arch/powerpc/include/asm/hw_irq.h
@@ -250,9 +250,27 @@ static inline bool arch_irqs_disabled(void)
}   \
 } while(0)
 
+static inline bool __lazy_irq_pending(u8 irq_happened)
+{
+   return !!(irq_happened & ~PACA_IRQ_HARD_DIS);
+}
+
+/*
+ * Check if a lazy IRQ is pending. Should be called with IRQs hard disabled.
+ */
 static inline bool lazy_irq_pending(void)
 {
-   return !!(get_paca()->irq_happened & ~PACA_IRQ_HARD_DIS);
+   return __lazy_irq_pending(get_paca()->irq_happened);
+}
+
+/*
+ * Check if a lazy IRQ is pending, with no debugging checks.
+ * Should be called with IRQs hard disabled.
+ * For use in RI disabled code or other constrained situations.
+ */
+static inline bool lazy_irq_pending_nocheck(void)
+{
+   return __lazy_irq_pending(local_paca->irq_happened);
 }
 
 /*
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index c74295a7765b..1fe94dd9de32 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -189,7 +189,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
 
/* This pattern matches prep_irq_for_idle */
__hard_EE_RI_disable();
-   if (unlikely(lazy_irq_pending())) {
+   if (unlikely(lazy_irq_pending_nocheck())) {
__hard_RI_enable();
trace_hardirqs_off();
local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
@@ -264,7 +264,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct 
pt_regs *regs, unsigned
 
trace_hardirqs_on();
__hard_EE_RI_disable();
-   if (unlikely(lazy_irq_pending())) {
+   if (unlikely(lazy_irq_pending_nocheck())) {
__hard_RI_enable();
trace_hardirqs_off();
local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
@@ -334,7 +334,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct 
pt_regs *regs, unsign
 
trace_hardirqs_on();
__hard_EE_RI_disable();
-   if (unlikely(lazy_irq_pending())) {
+   if (unlikely(lazy_irq_pending_nocheck())) {
__hard_RI_enable();
irq_soft_mask_set(IRQS_ALL_DISABLED);
trace_hardirqs_off();
-- 
2.25.1



Re: [PATCH v7 11/28] powerpc: Use a datatype for instructions

2020-05-02 Thread kbuild test robot
Hi Jordan,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on powerpc/next]
[also build test ERROR on v5.7-rc3 next-20200501]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:
https://github.com/0day-ci/linux/commits/Jordan-Niethe/Initial-Prefixed-Instruction-support/20200501-124644
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-randconfig-a001-20200501 (attached as .config)
compiler: powerpc-linux-gcc (GCC) 9.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day GCC_VERSION=9.3.0 make.cross ARCH=powerpc

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 

All error/warnings (new ones prefixed by >>):

   arch/powerpc/mm/nohash/8xx.c: In function 'mmu_patch_addis':
>> arch/powerpc/mm/nohash/8xx.c:104:31: error: incompatible type for argument 2 
>> of 'patch_instruction_site'
 104 |  patch_instruction_site(site, instr);
 |   ^
 |   |
 |   unsigned int
   In file included from arch/powerpc/mm/nohash/8xx.c:13:
   arch/powerpc/include/asm/code-patching.h:39:69: note: expected 'struct 
ppc_inst' but argument is of type 'unsigned int'
  39 | static inline int patch_instruction_site(s32 *site, struct ppc_inst 
instr)
 | 
^
   In file included from arch/powerpc/include/asm/asm-compat.h:6,
from arch/powerpc/include/asm/bug.h:6,
from include/linux/bug.h:5,
from include/linux/mmdebug.h:5,
from include/linux/mm.h:9,
from include/linux/memblock.h:13,
from arch/powerpc/mm/nohash/8xx.c:10:
   arch/powerpc/mm/nohash/8xx.c: In function 'mmu_mapin_ram':
>> arch/powerpc/include/asm/ppc-opcode.h:234:24: error: incompatible type for 
>> argument 2 of 'patch_instruction_site'
 234 | #define PPC_INST_NOP   0x6000
 |^~
 ||
 |int
>> arch/powerpc/mm/nohash/8xx.c:128:54: note: in expansion of macro 
>> 'PPC_INST_NOP'
 128 |patch_instruction_site(&patch__dtlbmiss_immr_jmp, PPC_INST_NOP);
 |  ^~~~
   In file included from arch/powerpc/mm/nohash/8xx.c:13:
   arch/powerpc/include/asm/code-patching.h:39:69: note: expected 'struct 
ppc_inst' but argument is of type 'int'
  39 | static inline int patch_instruction_site(s32 *site, struct ppc_inst 
instr)
 | 
^
--
   In file included from include/linux/printk.h:7,
from include/linux/kernel.h:15,
from include/linux/list.h:9,
from include/linux/preempt.h:11,
from include/linux/spinlock.h:51,
from arch/powerpc/kernel/trace/ftrace.c:16:
   arch/powerpc/kernel/trace/ftrace.c: In function '__ftrace_make_nop':
>> include/linux/kern_levels.h:5:18: error: format '%x' expects argument of 
>> type 'unsigned int', but argument 2 has type 'struct ppc_inst' 
>> [-Werror=format=]
   5 | #define KERN_SOH "\001"  /* ASCII Start Of Header */
 |  ^~
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
  11 | #define KERN_ERR KERN_SOH "3" /* error conditions */
 |  ^~~~
   include/linux/printk.h:299:9: note: in expansion of macro 'KERN_ERR'
 299 |  printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
 | ^~~~
>> arch/powerpc/kernel/trace/ftrace.c:233:3: note: in expansion of macro 
>> 'pr_err'
 233 |   pr_err("Not expected bl: opcode is %x\n", op);
 |   ^~
   arch/powerpc/kernel/trace/ftrace.c:233:39: note: format string is defined 
here
 233 |   pr_err("Not expected bl: opcode is %x\n", op);
 |  ~^
 |   |
 |   unsigned int
   In file included from include/linux/printk.h:7,
from include/linux/kernel.h:15,
from include/linux/list.h:9,
from include/linux/preempt.h:11,
from include/linux/spinlock.h:51,
from arch/powerpc/kernel/trace/ftrace.c:16:
   arch/powerpc/kernel/trace/ftrac

[powerpc:next-test] BUILD SUCCESS 64c245a2974a376bb02dd94d1d03719d3a167e86

2020-05-02 Thread kbuild test robot
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git  
next-test
branch HEAD: 64c245a2974a376bb02dd94d1d03719d3a167e86  Merge branch 
'topic/uaccess-ppc' into next-test

elapsed time: 766m

configs tested: 209
configs skipped: 0

The following configs have been built successfully.
More configs may be tested in the coming days.

arm64    allyesconfig
arm  allyesconfig
arm64    allmodconfig
arm  allmodconfig
arm64 allnoconfig
arm   allnoconfig
arm   efm32_defconfig
arm at91_dt_defconfig
arm    shmobile_defconfig
arm64   defconfig
arm  exynos_defconfig
arm    multi_v5_defconfig
arm   sunxi_defconfig
arm    multi_v7_defconfig
sparc    allyesconfig
powerpc defconfig
ia64    defconfig
arc defconfig
mips    ar7_defconfig
mips  ath79_defconfig
mips allmodconfig
nios2 3c120_defconfig
sparc64 defconfig
csky    defconfig
sh  rsk7269_defconfig
ia64  allnoconfig
nds32 allnoconfig
m68k   sun3_defconfig
i386  allnoconfig
i386 allyesconfig
i386 alldefconfig
i386    defconfig
i386  debian-10.3
ia64 allmodconfig
ia64    generic_defconfig
ia64  tiger_defconfig
ia64 bigsur_defconfig
ia64 allyesconfig
ia64 alldefconfig
m68k   m5475evb_defconfig
m68k allmodconfig
m68k   bvme6000_defconfig
m68k  multi_defconfig
nios2 10m50_defconfig
c6x    evmc6678_defconfig
c6x  allyesconfig
openrisc simple_smp_defconfig
openrisc    or1ksim_defconfig
nds32   defconfig
alpha   defconfig
h8300   h8s-sim_defconfig
h8300 edosk2674_defconfig
xtensa  iss_defconfig
h8300    h8300h-sim_defconfig
xtensa   common_defconfig
arc  allyesconfig
microblaze  mmu_defconfig
microblaze    nommu_defconfig
mips  fuloong2e_defconfig
mips  malta_kvm_defconfig
mips allyesconfig
mips 64r6el_defconfig
mips  allnoconfig
mips   32r2_defconfig
mips    malta_kvm_guest_defconfig
mips tb0287_defconfig
mips   capcella_defconfig
mips   ip32_defconfig
mips  decstation_64_defconfig
mips  loongson3_defconfig
mips    bcm63xx_defconfig
parisc    allnoconfig
parisc    generic-64bit_defconfig
parisc    generic-32bit_defconfig
parisc   allyesconfig
parisc   allmodconfig
powerpc  chrp32_defconfig
powerpc   holly_defconfig
powerpc   ppc64_defconfig
powerpc  rhel-kconfig
powerpc   allnoconfig
powerpc  mpc866_ads_defconfig
powerpc    amigaone_defconfig
powerpc    adder875_defconfig
powerpc ep8248e_defconfig
powerpc  g5_defconfig
powerpc mpc512x_defconfig
m68k randconfig-a001-20200502
mips randconfig-a001-20200502
nds32    randconfig-a001-20200502
alpha    randconfig-a001-20200502
parisc   randconfig-a001-20200502
riscv    randconfig-a001-20200502
h8300    randconfig-a001-20200502
nios2    randconfig-a001-20200502
microblaze   randconfig-a001-20200502
c6x  randconfig-a001-20200502
sparc64  randconfig-a001-20200502
s390 randconfig-a001-20200502
xtensa   randconfig-a001-20200502
sh   randc

[PATCH] powerpc/powernv: Fix a warning message

2020-05-02 Thread Christophe JAILLET
Fix a cut'n'paste error in a warning message. The message should say
'ibm,cpu-idle-state-residency-ns' to match the property searched for in the
preceding 'of_property_read_u32_array()'.

Fixes: 9c7b185ab2fe ("powernv/cpuidle: Parse dt idle properties into global 
structure")
Signed-off-by: Christophe JAILLET 
---
 arch/powerpc/platforms/powernv/idle.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/powernv/idle.c 
b/arch/powerpc/platforms/powernv/idle.c
index 78599bca66c2..2dd467383a88 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -1270,7 +1270,7 @@ static int pnv_parse_cpuidle_dt(void)
/* Read residencies */
if (of_property_read_u32_array(np, "ibm,cpu-idle-state-residency-ns",
   temp_u32, nr_idle_states)) {
-   pr_warn("cpuidle-powernv: missing 
ibm,cpu-idle-state-latencies-ns in DT\n");
+   pr_warn("cpuidle-powernv: missing 
ibm,cpu-idle-state-residency-ns in DT\n");
rc = -EINVAL;
goto out;
}
-- 
2.25.1



Re: [RFC 1/3] powernv/cpuidle : Support for pre-entry and post exit of stop state in firmware

2020-05-02 Thread Nicholas Piggin
Excerpts from Abhishek's message of April 30, 2020 3:52 pm:
> Hi Nick,
> 
> Have you posted out the kernel side of "opal v4" patchset?
> I could only find the opal patchset.

I just posted some new ones. I have some changes for the cpuidle side,
but I haven't really looked to see what needs reconciling with your
version; I'll try to do that when I get time.

Thanks,
Nick


[powerpc:topic/uaccess] BUILD SUCCESS b44f687386875b714dae2afa768e73401e45c21c

2020-05-02 Thread kbuild test robot
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git  
topic/uaccess
branch HEAD: b44f687386875b714dae2afa768e73401e45c21c  drm/i915/gem: Replace 
user_access_begin by user_write_access_begin

elapsed time: 695m

configs tested: 193
configs skipped: 0

The following configs have been built successfully.
More configs may be tested in the coming days.

arm64    allyesconfig
arm  allyesconfig
arm64    allmodconfig
arm  allmodconfig
arm64 allnoconfig
arm   allnoconfig
arm   efm32_defconfig
arm at91_dt_defconfig
arm    shmobile_defconfig
arm64   defconfig
arm  exynos_defconfig
arm    multi_v5_defconfig
arm   sunxi_defconfig
arm    multi_v7_defconfig
sparc    allyesconfig
arc  allyesconfig
powerpc defconfig
ia64    defconfig
i386 alldefconfig
openrisc simple_smp_defconfig
arc defconfig
mips    ar7_defconfig
s390  allnoconfig
mips  allnoconfig
mips allmodconfig
sparc64 defconfig
csky    defconfig
sh  rsk7269_defconfig
ia64  allnoconfig
powerpc mpc512x_defconfig
sh  sh7785lcr_32bit_defconfig
xtensa  iss_defconfig
um  defconfig
nds32 allnoconfig
m68k   sun3_defconfig
i386  allnoconfig
i386 allyesconfig
i386    defconfig
i386  debian-10.3
ia64 allmodconfig
ia64    generic_defconfig
ia64  tiger_defconfig
ia64 bigsur_defconfig
ia64 allyesconfig
ia64 alldefconfig
m68k   m5475evb_defconfig
m68k allmodconfig
m68k   bvme6000_defconfig
m68k  multi_defconfig
nios2 3c120_defconfig
nios2 10m50_defconfig
c6x    evmc6678_defconfig
c6x  allyesconfig
openrisc    or1ksim_defconfig
nds32   defconfig
alpha   defconfig
h8300   h8s-sim_defconfig
h8300 edosk2674_defconfig
h8300    h8300h-sim_defconfig
xtensa   common_defconfig
microblaze  mmu_defconfig
microblaze    nommu_defconfig
mips  fuloong2e_defconfig
mips  malta_kvm_defconfig
mips allyesconfig
mips 64r6el_defconfig
mips   32r2_defconfig
mips    malta_kvm_guest_defconfig
mips tb0287_defconfig
mips   capcella_defconfig
mips   ip32_defconfig
mips  decstation_64_defconfig
mips  loongson3_defconfig
mips  ath79_defconfig
mips    bcm63xx_defconfig
parisc    allnoconfig
parisc    generic-64bit_defconfig
parisc    generic-32bit_defconfig
parisc   allyesconfig
parisc   allmodconfig
powerpc  chrp32_defconfig
powerpc   holly_defconfig
powerpc   ppc64_defconfig
powerpc  rhel-kconfig
powerpc   allnoconfig
powerpc  mpc866_ads_defconfig
powerpc    amigaone_defconfig
powerpc    adder875_defconfig
powerpc ep8248e_defconfig
powerpc  g5_defconfig
m68k randconfig-a001-20200502
mips randconfig-a001-20200502
nds32    randconfig-a001-20200502
alpha    randconfig-a001-20200502
parisc   randconfig-a001-20200502
riscvrandconfig-a001-20200502
h8300    randconfig-a001-20200502
nios2    randconfig-a001-20200502
microblaze   randconfig-a001-20200502
c6x  randconfig-a001-20200502
sparc64

[PATCH v2 28/28] powerpc/book3s64/keys/kuap: Reset AMR/IAMR values on kexec

2020-05-02 Thread Aneesh Kumar K.V
We can kexec into a kernel that doesn't use memory keys for kernel
mapping (such as an older kernel which doesn't support kuap/kuep with hash
translation). We need to make sure we reset the AMR/IAMR values on kexec;
otherwise, the new kernel will use key 0 for the kernel mapping and the old
AMR value will prevent access to key 0.

This patch also removes the reset of IAMR and AMOR in kexec_sequence. The AMOR
reset is not needed, and the IAMR reset is partial (it isn't done on
secondary cpus) and is redundant with this patch.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h | 20 
 arch/powerpc/include/asm/kup.h   | 14 ++
 arch/powerpc/kernel/misc_64.S| 14 --
 arch/powerpc/kexec/core_64.c |  3 +++
 arch/powerpc/mm/book3s64/pgtable.c   |  3 +++
 5 files changed, 40 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index 5b00592479d1..1cd0d849bd1b 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -341,6 +341,26 @@ static inline bool bad_kuap_fault(struct pt_regs *regs, 
unsigned long address,
return !!(error_code & DSISR_KEYFAULT);
 }
 
+#define reset_kuap reset_kuap
+static inline void reset_kuap(void)
+{
+   if (mmu_has_feature(MMU_FTR_KUAP)) {
+   mtspr(SPRN_AMR, 0);
+   /*  Do we need isync()? We are going via a kexec reset */
+   isync();
+   }
+}
+
+#define reset_kuep reset_kuep
+static inline void reset_kuep(void)
+{
+   if (mmu_has_feature(MMU_FTR_KUEP)) {
+   mtspr(SPRN_IAMR, 0);
+   /*  Do we need isync()? We are going via a kexec reset */
+   isync();
+   }
+}
+
 #else /* CONFIG_PPC_MEM_KEYS */
 static inline void kuap_restore_user_amr(struct pt_regs *regs)
 {
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 249eed77a06b..b22becc1705c 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -101,6 +101,20 @@ static inline void prevent_current_access_user(void)
prevent_user_access(NULL, NULL, ~0UL, KUAP_CURRENT);
 }
 
+#ifndef reset_kuap
+#define reset_kuap reset_kuap
+static inline void reset_kuap(void)
+{
+}
+#endif
+
+#ifndef reset_kuep
+#define reset_kuep reset_kuep
+static inline void reset_kuep(void)
+{
+}
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_KUAP_H_ */
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index 1864605eca29..7bb46ad98207 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -413,20 +413,6 @@ _GLOBAL(kexec_sequence)
li  r0,0
std r0,16(r1)
 
-BEGIN_FTR_SECTION
-   /*
-* This is the best time to turn AMR/IAMR off.
-* key 0 is used in radix for supervisor<->user
-* protection, but on hash key 0 is reserved
-* ideally we want to enter with a clean state.
-* NOTE, we rely on r0 being 0 from above.
-*/
-   mtspr   SPRN_IAMR,r0
-BEGIN_FTR_SECTION_NESTED(42)
-   mtspr   SPRN_AMOR,r0
-END_FTR_SECTION_NESTED_IFSET(CPU_FTR_HVMODE, 42)
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
-
/* save regs for local vars on new stack.
 * yes, we won't go back, but ...
 */
diff --git a/arch/powerpc/kexec/core_64.c b/arch/powerpc/kexec/core_64.c
index b4184092172a..a124715f33ea 100644
--- a/arch/powerpc/kexec/core_64.c
+++ b/arch/powerpc/kexec/core_64.c
@@ -152,6 +152,9 @@ static void kexec_smp_down(void *arg)
if (ppc_md.kexec_cpu_down)
ppc_md.kexec_cpu_down(0, 1);
 
+   reset_kuap();
+   reset_kuep();
+
kexec_smp_wait();
/* NOTREACHED */
 }
diff --git a/arch/powerpc/mm/book3s64/pgtable.c 
b/arch/powerpc/mm/book3s64/pgtable.c
index e0bb69c616e4..cf3d65067d48 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -168,6 +168,9 @@ void mmu_cleanup_all(void)
radix__mmu_cleanup_all();
else if (mmu_hash_ops.hpte_clear_all)
mmu_hash_ops.hpte_clear_all();
+
+   reset_kuap();
+   reset_kuep();
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-- 
2.26.2



[PATCH v2 27/28] powerpc/selftest/ptrace-pkey: IAMR and uamor cannot be updated by ptrace

2020-05-02 Thread Aneesh Kumar K.V
Both IAMR and UAMOR are privileged and cannot be updated by userspace, hence
we also don't allow the ptrace interface to update them. Don't update them in
the test. Also, expected_iamr is only changed if we can allocate a
DISABLE_EXECUTE pkey.

Signed-off-by: Aneesh Kumar K.V 
---
 tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c 
b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
index bc33d748d95b..5c3c8222de46 100644
--- a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+++ b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
@@ -101,15 +101,12 @@ static int child(struct shared_info *info)
 */
info->invalid_amr = info->amr2 | (~0x0UL & ~info->expected_uamor);
 
+   /*
+* if PKEY_DISABLE_EXECUTE succeeded we should update the expected_iamr
+*/
if (disable_execute)
info->expected_iamr |= 1ul << pkeyshift(pkey1);
-   else
-   info->expected_iamr &= ~(1ul << pkeyshift(pkey1));
-
-   info->expected_iamr &= ~(1ul << pkeyshift(pkey2) | 1ul << 
pkeyshift(pkey3));
 
-   info->expected_uamor |= 3ul << pkeyshift(pkey1) |
-   3ul << pkeyshift(pkey2);
/*
 * Create an IAMR value different from expected value.
 * Kernel will reject an IAMR and UAMOR change.
-- 
2.26.2



[PATCH v2 26/28] powerpc/selftest/ptrace-pkey: Update the test to mark an invalid pkey correctly

2020-05-02 Thread Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V 
---
 .../selftests/powerpc/ptrace/ptrace-pkey.c| 30 ---
 1 file changed, 12 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c 
b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
index f9216c7a1829..bc33d748d95b 100644
--- a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+++ b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
@@ -66,11 +66,6 @@ static int sys_pkey_alloc(unsigned long flags, unsigned long 
init_access_rights)
return syscall(__NR_pkey_alloc, flags, init_access_rights);
 }
 
-static int sys_pkey_free(int pkey)
-{
-   return syscall(__NR_pkey_free, pkey);
-}
-
 static int child(struct shared_info *info)
 {
unsigned long reg;
@@ -100,7 +95,11 @@ static int child(struct shared_info *info)
 
info->amr1 |= 3ul << pkeyshift(pkey1);
info->amr2 |= 3ul << pkeyshift(pkey2);
-   info->invalid_amr |= info->amr2 | 3ul << pkeyshift(pkey3);
+   /*
+* invalid amr value where we try to force write
+* things which are denied by a uamor setting.
+*/
+   info->invalid_amr = info->amr2 | (~0x0UL & ~info->expected_uamor);
 
if (disable_execute)
info->expected_iamr |= 1ul << pkeyshift(pkey1);
@@ -111,17 +110,12 @@ static int child(struct shared_info *info)
 
info->expected_uamor |= 3ul << pkeyshift(pkey1) |
3ul << pkeyshift(pkey2);
-   info->invalid_iamr |= 1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2);
-   info->invalid_uamor |= 3ul << pkeyshift(pkey1);
-
/*
-* We won't use pkey3. We just want a plausible but invalid key to test
-* whether ptrace will let us write to AMR bits we are not supposed to.
-*
-* This also tests whether the kernel restores the UAMOR permissions
-* after a key is freed.
+* Create an IAMR value different from expected value.
+* Kernel will reject an IAMR and UAMOR change.
 */
-   sys_pkey_free(pkey3);
+   info->invalid_iamr = info->expected_iamr | (1ul << pkeyshift(pkey1) | 
1ul << pkeyshift(pkey2));
+   info->invalid_uamor = info->expected_uamor & ~(0x3ul << 
pkeyshift(pkey1));
 
printf("%-30s AMR: %016lx pkey1: %d pkey2: %d pkey3: %d\n",
   user_write, info->amr1, pkey1, pkey2, pkey3);
@@ -196,9 +190,9 @@ static int parent(struct shared_info *info, pid_t pid)
PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync);
PARENT_FAIL_IF(ret, &info->child_sync);
 
-   info->amr1 = info->amr2 = info->invalid_amr = regs[0];
-   info->expected_iamr = info->invalid_iamr = regs[1];
-   info->expected_uamor = info->invalid_uamor = regs[2];
+   info->amr1 = info->amr2 = regs[0];
+   info->expected_iamr = regs[1];
+   info->expected_uamor = regs[2];
 
/* Wake up child so that it can set itself up. */
ret = prod_child(&info->child_sync);
-- 
2.26.2



[PATCH v2 25/28] powerpc/selftest/ptrave-pkey: Rename variables to make it easier to follow code

2020-05-02 Thread Aneesh Kumar K.V
Rename variables to indicate that they hold invalid values, which we will use
to test ptrace updates of the pkey registers.

Signed-off-by: Aneesh Kumar K.V 
---
 .../selftests/powerpc/ptrace/ptrace-pkey.c| 26 +--
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c 
b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
index bdbbbe8431e0..f9216c7a1829 100644
--- a/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
+++ b/tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
@@ -44,7 +44,7 @@ struct shared_info {
unsigned long amr2;
 
/* AMR value that ptrace should refuse to write to the child. */
-   unsigned long amr3;
+   unsigned long invalid_amr;
 
/* IAMR value the parent expects to read from the child. */
unsigned long expected_iamr;
@@ -57,8 +57,8 @@ struct shared_info {
 * (even though they're valid ones) because userspace doesn't have
 * access to those registers.
 */
-   unsigned long new_iamr;
-   unsigned long new_uamor;
+   unsigned long invalid_iamr;
+   unsigned long invalid_uamor;
 };
 
 static int sys_pkey_alloc(unsigned long flags, unsigned long 
init_access_rights)
@@ -100,7 +100,7 @@ static int child(struct shared_info *info)
 
info->amr1 |= 3ul << pkeyshift(pkey1);
info->amr2 |= 3ul << pkeyshift(pkey2);
-   info->amr3 |= info->amr2 | 3ul << pkeyshift(pkey3);
+   info->invalid_amr |= info->amr2 | 3ul << pkeyshift(pkey3);
 
if (disable_execute)
info->expected_iamr |= 1ul << pkeyshift(pkey1);
@@ -111,8 +111,8 @@ static int child(struct shared_info *info)
 
info->expected_uamor |= 3ul << pkeyshift(pkey1) |
3ul << pkeyshift(pkey2);
-   info->new_iamr |= 1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2);
-   info->new_uamor |= 3ul << pkeyshift(pkey1);
+   info->invalid_iamr |= 1ul << pkeyshift(pkey1) | 1ul << pkeyshift(pkey2);
+   info->invalid_uamor |= 3ul << pkeyshift(pkey1);
 
/*
 * We won't use pkey3. We just want a plausible but invalid key to test
@@ -196,9 +196,9 @@ static int parent(struct shared_info *info, pid_t pid)
PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync);
PARENT_FAIL_IF(ret, &info->child_sync);
 
-   info->amr1 = info->amr2 = info->amr3 = regs[0];
-   info->expected_iamr = info->new_iamr = regs[1];
-   info->expected_uamor = info->new_uamor = regs[2];
+   info->amr1 = info->amr2 = info->invalid_amr = regs[0];
+   info->expected_iamr = info->invalid_iamr = regs[1];
+   info->expected_uamor = info->invalid_uamor = regs[2];
 
/* Wake up child so that it can set itself up. */
ret = prod_child(&info->child_sync);
@@ -234,10 +234,10 @@ static int parent(struct shared_info *info, pid_t pid)
return ret;
 
/* Write invalid AMR value in child. */
-   ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->amr3, 1);
+   ret = ptrace_write_regs(pid, NT_PPC_PKEY, &info->invalid_amr, 1);
PARENT_FAIL_IF(ret, &info->child_sync);
 
-   printf("%-30s AMR: %016lx\n", ptrace_write_running, info->amr3);
+   printf("%-30s AMR: %016lx\n", ptrace_write_running, info->invalid_amr);
 
/* Wake up child so that it can verify it didn't change. */
ret = prod_child(&info->child_sync);
@@ -249,7 +249,7 @@ static int parent(struct shared_info *info, pid_t pid)
 
/* Try to write to IAMR. */
regs[0] = info->amr1;
-   regs[1] = info->new_iamr;
+   regs[1] = info->invalid_iamr;
ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 2);
PARENT_FAIL_IF(!ret, &info->child_sync);
 
@@ -257,7 +257,7 @@ static int parent(struct shared_info *info, pid_t pid)
   ptrace_write_running, regs[0], regs[1]);
 
/* Try to write to IAMR and UAMOR. */
-   regs[2] = info->new_uamor;
+   regs[2] = info->invalid_uamor;
ret = ptrace_write_regs(pid, NT_PPC_PKEY, regs, 3);
PARENT_FAIL_IF(!ret, &info->child_sync);
 
-- 
2.26.2



[PATCH v2 24/28] powerpc/book3s64/keys: Print information during boot.

2020-05-02 Thread Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/book3s64/pkeys.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index bb127e4e2dd2..5d320ac2ba04 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -207,6 +207,7 @@ void __init pkey_early_init_devtree(void)
 */
initial_allocation_mask |= reserved_allocation_mask;
 
+   pr_info("Enabling Memory keys with max key count %d", max_pkey);
 err_out:
/*
 * Setup uamor on boot cpu
-- 
2.26.2



[PATCH v2 23/28] powerpc/book3s64/hash/kuep: Enable KUEP on hash

2020-05-02 Thread Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/book3s64/pkeys.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index e94585fad5c4..bb127e4e2dd2 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -219,7 +219,12 @@ void __init pkey_early_init_devtree(void)
 #ifdef CONFIG_PPC_KUEP
 void __init setup_kuep(bool disabled)
 {
-   if (disabled || !early_radix_enabled())
+   if (disabled)
+   return;
+   /*
+* On hash if PKEY feature is not enabled, disable KUEP too.
+*/
+   if (!early_radix_enabled() && !early_mmu_has_feature(MMU_FTR_PKEY))
return;
 
if (smp_processor_id() == boot_cpuid) {
-- 
2.26.2



[PATCH v2 22/28] powerpc/book3s64/hash/kuap: Enable kuap on hash

2020-05-02 Thread Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/book3s64/pkeys.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 0f4fc2876fc8..e94585fad5c4 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -240,7 +240,12 @@ void __init setup_kuep(bool disabled)
 #ifdef CONFIG_PPC_KUAP
 void __init setup_kuap(bool disabled)
 {
-   if (disabled || !early_radix_enabled())
+   if (disabled)
+   return;
+   /*
+* On hash if PKEY feature is not enabled, disable KUAP too.
+*/
+   if (!early_radix_enabled() && !early_mmu_has_feature(MMU_FTR_PKEY))
return;
 
if (smp_processor_id() == boot_cpuid) {
-- 
2.26.2



[PATCH v2 21/28] powerpc/book3s64/kuep: Use Key 3 to implement KUEP with hash translation.

2020-05-02 Thread Aneesh Kumar K.V
Radix uses IAMR key 0 and hash translation uses IAMR key 3.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index f704fb615dd5..5b00592479d1 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -7,7 +7,7 @@
 
 #define AMR_KUAP_BLOCK_READUL(0x5455)
 #define AMR_KUAP_BLOCK_WRITE   UL(0xa8aa)
-#define AMR_KUEP_BLOCKED   (1UL << 62)
+#define AMR_KUEP_BLOCKED   UL(0x5455)
 #define AMR_KUAP_BLOCKED   (AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)
 
 #ifdef __ASSEMBLY__
-- 
2.26.2



[PATCH v2 20/28] powerpc/book3s64/kuap: Use Key 3 to implement KUAP with hash translation.

2020-05-02 Thread Aneesh Kumar K.V
Radix uses AMR key 0 and hash translation uses AMR key 3.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index 66c97d9d2ed8..f704fb615dd5 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -5,11 +5,10 @@
 #include 
 #include 
 
-#define AMR_KUAP_BLOCK_READUL(0x4000)
-#define AMR_KUAP_BLOCK_WRITE   UL(0x8000)
+#define AMR_KUAP_BLOCK_READUL(0x5455)
+#define AMR_KUAP_BLOCK_WRITE   UL(0xa8aa)
 #define AMR_KUEP_BLOCKED   (1UL << 62)
 #define AMR_KUAP_BLOCKED   (AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)
-#define AMR_KUAP_SHIFT 62
 
 #ifdef __ASSEMBLY__
 
@@ -75,8 +74,8 @@
 #ifdef CONFIG_PPC_KUAP_DEBUG
BEGIN_MMU_FTR_SECTION_NESTED(67)
mfspr   \gpr1, SPRN_AMR
-   li  \gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
-   sldi\gpr2, \gpr2, AMR_KUAP_SHIFT
+   /* Prevent access to userspace using any key values */
+   LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
 999:   tdne\gpr1, \gpr2
EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | 
BUGFLAG_ONCE)
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
-- 
2.26.2



[PATCH v2 19/28] powerpc/book3s64/kuap: Improve error reporting with KUAP

2020-05-02 Thread Aneesh Kumar K.V
With hash translation, use DSISR_KEYFAULT to identify a wrong access.
With radix, we look at the AMR value and the type of fault.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/32/kup.h |  4 +--
 arch/powerpc/include/asm/book3s/64/kup.h | 28 
 arch/powerpc/include/asm/kup.h   |  4 +--
 arch/powerpc/include/asm/nohash/32/kup-8xx.h |  4 +--
 arch/powerpc/mm/fault.c  |  2 +-
 5 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h 
b/arch/powerpc/include/asm/book3s/32/kup.h
index 3c0ba22dc360..213d3ab40d2d 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -176,8 +176,8 @@ static inline void restore_user_access(unsigned long flags)
allow_user_access(to, to, end - addr, KUAP_READ_WRITE);
 }
 
-static inline bool
-bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
+ bool is_write, unsigned long error_code)
 {
unsigned long begin = regs->kuap & 0xf000;
unsigned long end = regs->kuap << 28;
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index f097d69ec2c8..66c97d9d2ed8 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -317,13 +317,31 @@ static inline void restore_user_access(unsigned long 
flags)
set_kuap(flags);
 }
 
-static inline bool
-bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+#define RADIX_KUAP_BLOCK_READ  UL(0x4000)
+#define RADIX_KUAP_BLOCK_WRITE UL(0x8000)
+
+static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
+ bool is_write, unsigned long error_code)
 {
-   return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
-   (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : 
AMR_KUAP_BLOCK_READ)),
-   "Bug: %s fault blocked by AMR!", is_write ? "Write" : 
"Read");
+   if (!mmu_has_feature(MMU_FTR_KUAP))
+   return false;
+
+   if (radix_enabled()) {
+   /*
+* Will be a storage protection fault.
+* Only check the details of AMR[0]
+*/
+   return WARN((regs->kuap & (is_write ? RADIX_KUAP_BLOCK_WRITE : 
RADIX_KUAP_BLOCK_READ)),
+   "Bug: %s fault blocked by AMR!", is_write ? "Write" 
: "Read");
+   }
+   /*
+* We don't want to WARN here because userspace can setup
+* keys such that a kernel access to user address can cause
+* fault
+*/
+   return !!(error_code & DSISR_KEYFAULT);
 }
+
 #else /* CONFIG_PPC_MEM_KEYS */
 static inline void kuap_restore_user_amr(struct pt_regs *regs)
 {
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 248438dff74a..249eed77a06b 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -57,8 +57,8 @@ static inline void prevent_user_access(void __user *to, const 
void __user *from,
   unsigned long size, unsigned long dir) { 
}
 static inline unsigned long prevent_user_access_return(void) { return 0UL; }
 static inline void restore_user_access(unsigned long flags) { }
-static inline bool
-bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
+ bool is_write, unsigned long error_code)
 {
return false;
 }
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h 
b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 85ed2390fb99..c401e4e404d4 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -60,8 +60,8 @@ static inline void restore_user_access(unsigned long flags)
mtspr(SPRN_MD_AP, flags);
 }
 
-static inline bool
-bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
+ bool is_write, unsigned long error_code)
 {
return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xf000),
"Bug: fault blocked by AP register !");
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 84af6c8eecf7..4e6e7e0fea21 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -233,7 +233,7 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned 
long error_code,
 
// Read/write fault in a valid region (the exception table search passed
// above), but blocked by KUAP is bad, it can never succeed.
-   if (bad_kuap_fault(regs, address, is_write))
+   if (bad_

[PATCH v2 18/28] powerpc/book3s64/kuap: Restrict access to userspace based on userspace AMR

2020-05-02 Thread Aneesh Kumar K.V
If an application has configured address protection such that read/write is
denied using a pkey, then even the kernel should receive a fault when accessing
that address.

This patch uses the user AMR value stored in pt_regs.kuap to achieve that.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index 31eb0acddea9..f097d69ec2c8 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -279,14 +279,20 @@ static inline void set_kuap(unsigned long value)
 static __always_inline void allow_user_access(void __user *to, const void 
__user *from,
  unsigned long size, unsigned long 
dir)
 {
+   unsigned long thread_amr = 0;
+
// This is written so we can resolve to a single case at build time
BUILD_BUG_ON(!__builtin_constant_p(dir));
+
+   if (mmu_has_feature(MMU_FTR_PKEY))
+   thread_amr = current_thread_amr();
+
if (dir == KUAP_READ)
-   set_kuap(AMR_KUAP_BLOCK_WRITE);
+   set_kuap(thread_amr | AMR_KUAP_BLOCK_WRITE);
else if (dir == KUAP_WRITE)
-   set_kuap(AMR_KUAP_BLOCK_READ);
+   set_kuap(thread_amr | AMR_KUAP_BLOCK_READ);
else if (dir == KUAP_READ_WRITE)
-   set_kuap(0);
+   set_kuap(thread_amr);
else
BUILD_BUG();
 }
-- 
2.26.2



[PATCH v2 17/28] powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.

2020-05-02 Thread Aneesh Kumar K.V
Now that the kernel correctly stores/restores the userspace AMR/IAMR values,
avoid manipulating AMR and IAMR from the kernel on behalf of userspace.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h | 23 
 arch/powerpc/include/asm/processor.h |  5 --
 arch/powerpc/kernel/process.c|  4 --
 arch/powerpc/kernel/traps.c  |  6 --
 arch/powerpc/mm/book3s64/pkeys.c | 71 
 5 files changed, 34 insertions(+), 75 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index fe1818954e51..31eb0acddea9 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -174,6 +174,29 @@ extern u64 default_uamor;
 extern u64 default_amr;
 extern u64 default_iamr;
 
+/*
+ * For kernel thread that doesn't have thread.regs return
+ * default AMR/IAMR values.
+ */
+static inline u64 current_thread_amr(void)
+{
+   if (current->thread.regs)
+   return current->thread.regs->kuap;
+   return AMR_KUAP_BLOCKED;
+}
+
+static inline u64 current_thread_iamr(void)
+{
+   if (current->thread.regs)
+   return current->thread.regs->kuep;
+   return AMR_KUEP_BLOCKED;
+}
+
+static inline u64 read_uamor(void)
+{
+   return default_uamor;
+}
+
 static inline void kuap_restore_user_amr(struct pt_regs *regs)
 {
if (!mmu_has_feature(MMU_FTR_PKEY))
diff --git a/arch/powerpc/include/asm/processor.h 
b/arch/powerpc/include/asm/processor.h
index a51964b4ec42..591987da44e2 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -234,11 +234,6 @@ struct thread_struct {
struct thread_vr_state ckvr_state; /* Checkpointed VR state */
unsigned long   ckvrsave; /* Checkpointed VRSAVE */
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-#ifdef CONFIG_PPC_MEM_KEYS
-   unsigned long   amr;
-   unsigned long   iamr;
-   unsigned long   uamor;
-#endif
 #ifdef CONFIG_KVM_BOOK3S_32_HANDLER
void*   kvm_shadow_vcpu; /* KVM internal data */
 #endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 0ab9a8cf1bcb..682d421f 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -585,7 +585,6 @@ static void save_all(struct task_struct *tsk)
__giveup_spe(tsk);
 
msr_check_and_clear(msr_all_available);
-   thread_pkey_regs_save(&tsk->thread);
 }
 
 void flush_all_to_thread(struct task_struct *tsk)
@@ -1097,8 +1096,6 @@ static inline void save_sprs(struct thread_struct *t)
t->tar = mfspr(SPRN_TAR);
}
 #endif
-
-   thread_pkey_regs_save(t);
 }
 
 static inline void restore_sprs(struct thread_struct *old_thread,
@@ -1139,7 +1136,6 @@ static inline void restore_sprs(struct thread_struct 
*old_thread,
mtspr(SPRN_TIDR, new_thread->tidr);
 #endif
 
-   thread_pkey_regs_restore(new_thread, old_thread);
 }
 
 struct task_struct *__switch_to(struct task_struct *prev,
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 3fca22276bb1..a47fb49b7af8 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -348,12 +348,6 @@ static bool exception_common(int signr, struct pt_regs 
*regs, int code,
 
current->thread.trap_nr = code;
 
-   /*
-* Save all the pkey registers AMR/IAMR/UAMOR. Eg: Core dumps need
-* to capture the content, if the task gets killed.
-*/
-   thread_pkey_regs_save(¤t->thread);
-
return true;
 }
 
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 5012b57af808..0f4fc2876fc8 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -264,40 +264,17 @@ void pkey_mm_init(struct mm_struct *mm)
mm->context.execute_only_pkey = execute_only_key;
 }
 
-static inline u64 read_amr(void)
+static inline void update_current_thread_amr(u64 value)
 {
-   return mfspr(SPRN_AMR);
+   current->thread.regs->kuap = value;
 }
 
-static inline void write_amr(u64 value)
-{
-   mtspr(SPRN_AMR, value);
-}
-
-static inline u64 read_iamr(void)
-{
-   if (static_branch_unlikely(&execute_pkey_disabled))
-   return 0x0UL;
-
-   return mfspr(SPRN_IAMR);
-}
-
-static inline void write_iamr(u64 value)
+static inline void update_current_thread_iamr(u64 value)
 {
if (static_branch_unlikely(&execute_pkey_disabled))
return;
 
-   mtspr(SPRN_IAMR, value);
-}
-
-static inline u64 read_uamor(void)
-{
-   return mfspr(SPRN_UAMOR);
-}
-
-static inline void write_uamor(u64 value)
-{
-   mtspr(SPRN_UAMOR, value);
+   current->thread.regs->kuep = value;
 }
 
 static bool is_pkey_enabled(int pkey)
@@ -314,20 +291,21 @@ static bool is_pkey_enabled(int pkey)
return !!(uamor_pkey_bits);
 }
 
+/*  FI

[PATCH v2 16/28] powerpc/ptrace-view: Use pt_regs values instead of thread_struct based one.

2020-05-02 Thread Aneesh Kumar K.V
We will remove thread.amr/iamr/uamor in a later patch

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/kernel/ptrace/ptrace-view.c | 23 +--
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/ptrace/ptrace-view.c 
b/arch/powerpc/kernel/ptrace/ptrace-view.c
index 15e3b79b6395..5b7bea41c699 100644
--- a/arch/powerpc/kernel/ptrace/ptrace-view.c
+++ b/arch/powerpc/kernel/ptrace/ptrace-view.c
@@ -488,14 +488,25 @@ static int pkey_active(struct task_struct *target, const 
struct user_regset *reg
 static int pkey_get(struct task_struct *target, const struct user_regset 
*regset,
unsigned int pos, unsigned int count, void *kbuf, void 
__user *ubuf)
 {
-   BUILD_BUG_ON(TSO(amr) + sizeof(unsigned long) != TSO(iamr));
-   BUILD_BUG_ON(TSO(iamr) + sizeof(unsigned long) != TSO(uamor));
+   int ret;
 
if (!arch_pkeys_enabled())
return -ENODEV;
 
-   return user_regset_copyout(&pos, &count, &kbuf, &ubuf, 
&target->thread.amr,
-  0, ELF_NPKEY * sizeof(unsigned long));
+   ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, 
&target->thread.regs->kuap,
+ 0, 1 * sizeof(unsigned long));
+   if (ret)
+   goto err_out;
+
+   ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, 
&target->thread.regs->kuep,
+ 1 * sizeof(unsigned long), 2 * 
sizeof(unsigned long));
+   if (ret)
+   goto err_out;
+
+   ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &default_uamor,
+ 2 * sizeof(unsigned long), 3 * 
sizeof(unsigned long));
+err_out:
+   return ret;
 }
 
 static int pkey_set(struct task_struct *target, const struct user_regset 
*regset,
@@ -518,8 +529,8 @@ static int pkey_set(struct task_struct *target, const 
struct user_regset *regset
return ret;
 
/* UAMOR determines which bits of the AMR can be set from userspace. */
-   target->thread.amr = (new_amr & target->thread.uamor) |
-(target->thread.amr & ~target->thread.uamor);
+   target->thread.regs->kuap = (new_amr & default_uamor) |
+   (target->thread.regs->kuap & ~default_uamor);
 
return 0;
 }
-- 
2.26.2



[PATCH v2 15/28] powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec

2020-05-02 Thread Aneesh Kumar K.V
On fork, we inherit from the parent and on exec, we should switch to 
default_amr values.

Also, avoid changing the AMR register value within the kernel. The kernel now
runs with different AMR values.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h |  2 ++
 arch/powerpc/kernel/process.c| 19 ++-
 arch/powerpc/mm/book3s64/pkeys.c | 18 ++
 3 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index 67320a990f3f..fe1818954e51 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -171,6 +171,8 @@
 #include 
 
 extern u64 default_uamor;
+extern u64 default_amr;
+extern u64 default_iamr;
 
 static inline void kuap_restore_user_amr(struct pt_regs *regs)
 {
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 9ef95a1217ef..0ab9a8cf1bcb 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1474,7 +1474,25 @@ void arch_setup_new_exec(void)
current->thread.regs = regs - 1;
}
 
+#ifdef CONFIG_PPC_MEM_KEYS
+   current->thread.regs->kuap  = default_amr;
+   current->thread.regs->kuep  = default_iamr;
+#endif
+
 }
+#else
+void arch_setup_new_exec(void)
+{
+   /*
+* If we exec out of a kernel thread then thread.regs will not be
+* set.  Do it now.
+*/
+   if (!current->thread.regs) {
+   struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
+   current->thread.regs = regs - 1;
+   }
+}
+
 #endif
 
 #ifdef CONFIG_PPC64
@@ -1809,7 +1827,6 @@ void start_thread(struct pt_regs *regs, unsigned long 
start, unsigned long sp)
current->thread.load_tm = 0;
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
 
-   thread_pkey_regs_init(¤t->thread);
 }
 EXPORT_SYMBOL(start_thread);
 
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 976f65f27324..5012b57af808 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -20,8 +20,8 @@ int  max_pkey;/* Maximum key value 
supported */
  */
 u32  reserved_allocation_mask;
 static u32  initial_allocation_mask;   /* Bits set for the initially allocated 
keys */
-static u64 default_amr;
-static u64 default_iamr;
+u64 default_amr;
+u64 default_iamr;
 /* Allow all keys to be modified by default */
 u64 default_uamor = ~0x0UL;
 /*
@@ -387,20 +387,6 @@ void thread_pkey_regs_restore(struct thread_struct 
*new_thread,
write_uamor(new_thread->uamor);
 }
 
-void thread_pkey_regs_init(struct thread_struct *thread)
-{
-   if (!mmu_has_feature(MMU_FTR_PKEY))
-   return;
-
-   thread->amr   = default_amr;
-   thread->iamr  = default_iamr;
-   thread->uamor = default_uamor;
-
-   write_amr(default_amr);
-   write_iamr(default_iamr);
-   write_uamor(default_uamor);
-}
-
 int execute_only_pkey(struct mm_struct *mm)
 {
if (static_branch_likely(&execute_pkey_disabled))
-- 
2.26.2



[PATCH v2 14/28] powerpc/book3s64/pkeys: Inherit correctly on fork.

2020-05-02 Thread Aneesh Kumar K.V
The child's thread.kuap value is inherited from the parent in copy_thread_tls. We
still need to make sure that when the child returns from fork in the kernel, it
starts with the kernel default AMR value.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/kernel/process.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index abbe545ed88c..9ef95a1217ef 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1677,6 +1677,13 @@ int copy_thread_tls(unsigned long clone_flags, unsigned 
long usp,
childregs->ppr = DEFAULT_PPR;
 
p->thread.tidr = 0;
+#endif
+   /*
+* Run with the current AMR value of the kernel
+*/
+#if defined(CONFIG_PPC_MEM_KEYS)
+   kregs->kuap = AMR_KUAP_BLOCKED;
+   kregs->kuep = AMR_KUEP_BLOCKED;
 #endif
kregs->nip = ppc_function_entry(f);
return 0;
-- 
2.26.2



[PATCH v2 13/28] powerpc/book3s64/kuep: Store/restore userspace IAMR correctly on entry and exit from kernel

2020-05-02 Thread Aneesh Kumar K.V
This prepares the kernel to operate with a different IAMR value than userspace.
For this, IAMR needs to be saved and restored on entry and return from the
kernel.

If MMU_FTR_PKEY is enabled we always use the key mechanism to implement the
KUEP feature. If MMU_FTR_PKEY is not supported and if we support MMU_FTR_KUEP
(radix translation on POWER9), we can skip restoring IAMR on return
to userspace. Userspace won't be using IAMR in that specific config.

We don't need to save/restore IAMR on reentry into the kernel due to interrupt
because the kernel doesn't modify IAMR internally.
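
For illustration only (not part of the patch), the exit-path rule above
amounts to roughly the following C, assuming the pt_regs->kuep field added
here and the usual mtspr helper:

  /* Sketch: restore the user IAMR saved at interrupt entry only when
   * returning to userspace; kernel-to-kernel returns skip this because
   * the kernel never changes IAMR internally. */
  static inline void restore_iamr_on_exit(struct pt_regs *regs, bool to_user)
  {
          if (!to_user)
                  return;         /* IAMR unchanged while in the kernel */
          if (!mmu_has_feature(MMU_FTR_PKEY))
                  return;         /* userspace is not using IAMR here */
          mtspr(SPRN_IAMR, regs->kuep);
  }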

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h | 46 ++--
 arch/powerpc/include/asm/ptrace.h|  6 +++-
 arch/powerpc/kernel/asm-offsets.c|  4 +++
 arch/powerpc/kernel/syscall_64.c |  7 ++--
 4 files changed, 58 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index 82bef3901672..67320a990f3f 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -17,15 +17,26 @@
 #if defined(CONFIG_PPC_MEM_KEYS)
BEGIN_MMU_FTR_SECTION_NESTED(67)
/*
-* AMR is going to be different when
+* AMR and IAMR are going to be different when
 * returning to userspace.
 */
ld  \gpr1, STACK_REGS_KUAP(r1)
isync
mtspr   SPRN_AMR, \gpr1
+   /*
+* Restore IAMR only when returning to userspace
+*/
+   ld  \gpr1, STACK_REGS_KUEP(r1)
+   mtspr   SPRN_IAMR, \gpr1
 
/* No isync required, see kuap_restore_user_amr() */
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY , 67)
+
+   /*
+* We don't check KUEP feature here, because if FTR_PKEY
+* is not enabled we don't need to restore IAMR on
+	 * return to userspace.
+*/
 #endif
 .endm
 
@@ -53,6 +64,9 @@
isync
mtspr   SPRN_AMR, \gpr2
/* No isync required, see kuap_restore_amr() */
+   /*
+* No need to restore IAMR when returning to kernel space.
+*/
 100:  // skip_restore_amr
 #endif
 .endm
@@ -90,6 +104,12 @@
b   100f  // skip_save_amr
ALT_MMU_FTR_SECTION_END_NESTED_IFSET(MMU_FTR_KUAP, 68)
 
+   /*
+* We don't check KUEP feature here, because if FTR_PKEY
+* is not enabled we don't need to save IAMR on
+	 * entry from userspace. That is handled by either
+* handle_kuap_save_amr or skip_save_amr
+*/
 
 99: // handle_kuap_save_amr
.ifnb \msr_pr_cr
@@ -120,6 +140,25 @@
 102:
END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 69)
 
+   .ifnb \msr_pr_cr
+   beq \msr_pr_cr, 103f // from kernel space
+   mfspr   \gpr1, SPRN_IAMR
+   std \gpr1, STACK_REGS_KUEP(r1)
+
+   /*
+* update kernel IAMR with AMR_KUEP_BLOCKED only
+* if KUAP feature is enabled
+	 * if the KUEP feature is enabled
+   BEGIN_MMU_FTR_SECTION_NESTED(70)
+   LOAD_REG_IMMEDIATE(\gpr2, AMR_KUEP_BLOCKED)
+   cmpd\use_cr, \gpr1, \gpr2
+   beq \use_cr, 103f
+   mtspr   SPRN_IAMR, \gpr2
+   isync
+103:
+   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUEP, 70)
+   .endif
+
 100: // skip_save_amr
 #endif
 .endm
@@ -140,13 +179,13 @@ static inline void kuap_restore_user_amr(struct pt_regs 
*regs)
 
isync();
mtspr(SPRN_AMR, regs->kuap);
+   mtspr(SPRN_IAMR, regs->kuep);
/*
 * No isync required here because we are about to rfi
 * back to previous context before any user accesses
 * would be made, which is a CSI.
 */
 }
-
 static inline void kuap_restore_kernel_amr(struct pt_regs *regs,
   unsigned long amr)
 {
@@ -162,6 +201,9 @@ static inline void kuap_restore_kernel_amr(struct pt_regs 
*regs,
 */
}
}
+   /*
+* No need to restore IAMR when returning to kernel space.
+*/
 }
 
 static inline unsigned long kuap_get_and_check_amr(void)
diff --git a/arch/powerpc/include/asm/ptrace.h 
b/arch/powerpc/include/asm/ptrace.h
index e0195e6b892b..2bfd2b6a72ab 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -56,8 +56,12 @@ struct pt_regs
 #ifdef CONFIG_PPC_KUAP
unsigned long kuap;
 #endif
+#ifdef CONFIG_PPC_KUEP
+   unsigned long kuep;
+#endif
+
};
-   unsigned long __pad[2]; /* Maintain 16 byte interrupt stack 
alignment */
+   unsigned long __pad[4]; /* Maintain 16 byte interrupt stack 
alignment */
};
 };
 #endif
diff --git a/arch/powerpc/kernel/asm-offsets.c 
b/arch/powerpc/kernel/asm-offsets.c
index fcf24a365fc0..6c7326fc73b9 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -353,6 +353,10 @@ int main(void)
 #ifdef CONFIG_PPC_KUAP
  

[PATCH v2 12/28] powerpc/book3s64/pkeys: Store/restore userspace AMR correctly on entry and exit from kernel

2020-05-02 Thread Aneesh Kumar K.V
This prepares the kernel to operate with a different AMR value than userspace.
For this, AMR needs to be saved and restored on entry and return from the
kernel.

With KUAP we modify kernel AMR when accessing user address from the kernel
via copy_to/from_user interfaces.

If MMU_FTR_PKEY is enabled we always use the key mechanism to implement the
KUAP feature. If MMU_FTR_PKEY is not supported and if we support MMU_FTR_KUAP
(radix translation on POWER9), we can skip restoring AMR on return
to userspace. Userspace won't be using AMR in that specific config.
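
For illustration only, the entry-side policy described above can be
summarised in C along these lines (the real implementation is the
kuap_save_amr_and_lock assembly macro in this patch):

  /* Sketch: on interrupt entry, save the interrupted context's AMR and
   * switch to the blocked kernel value.  With KUAP-only (no pkeys), an
   * entry from userspace can skip the save, because userspace cannot
   * have changed the AMR in that configuration. */
  static inline void save_amr_on_entry(struct pt_regs *regs, bool from_user)
  {
          if (!mmu_has_feature(MMU_FTR_PKEY) && from_user)
                  return;
          regs->kuap = mfspr(SPRN_AMR);
          if (regs->kuap != AMR_KUAP_BLOCKED) {
                  mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
                  isync();
          }
  }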

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h | 129 +++
 arch/powerpc/kernel/entry_64.S   |   6 +-
 arch/powerpc/kernel/exceptions-64s.S |   4 +-
 arch/powerpc/kernel/syscall_64.c |  25 -
 4 files changed, 136 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index 1b350bf781ec..82bef3901672 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -13,18 +13,47 @@
 
 #ifdef __ASSEMBLY__
 
-.macro kuap_restore_amrgpr1, gpr2
-#ifdef CONFIG_PPC_KUAP
+.macro kuap_restore_user_amr gpr1
+#if defined(CONFIG_PPC_MEM_KEYS)
BEGIN_MMU_FTR_SECTION_NESTED(67)
-   mfspr   \gpr1, SPRN_AMR
+   /*
+* AMR is going to be different when
+* returning to userspace.
+*/
+   ld  \gpr1, STACK_REGS_KUAP(r1)
+   isync
+   mtspr   SPRN_AMR, \gpr1
+
+   /* No isync required, see kuap_restore_user_amr() */
+   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY , 67)
+#endif
+.endm
+
+.macro kuap_restore_kernel_amr gpr1, gpr2
+#if defined(CONFIG_PPC_MEM_KEYS)
+   BEGIN_MMU_FTR_SECTION_NESTED(67)
+   b   99f  // handle_pkey_restore_amr
+   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY , 67)
+
+   BEGIN_MMU_FTR_SECTION_NESTED(68)
+   b   99f  // handle_kuap_restore_amr
+   MMU_FTR_SECTION_ELSE_NESTED(68)
+   b   100f  // skip_restore_amr
+   ALT_MMU_FTR_SECTION_END_NESTED_IFSET(MMU_FTR_KUAP, 68)
+
+99:
+   /*
+* AMR is going to be mostly the same since we are
+* returning to the kernel. Compare and do a mtspr.
+*/
ld  \gpr2, STACK_REGS_KUAP(r1)
+   mfspr   \gpr1, SPRN_AMR
cmpd\gpr1, \gpr2
-   beq 998f
+   beq 100f
isync
mtspr   SPRN_AMR, \gpr2
/* No isync required, see kuap_restore_amr() */
-998:
-   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
+100:  // skip_restore_amr
 #endif
 .endm
 
@@ -40,38 +69,89 @@
 #endif
 .endm
 
+/*
+ * MMU_FTR_PKEY and MMU_FTR_KUAP can both be enabled on a platform. We prefer
+ * PKEY over KUAP if both can be enabled on the platform.
+ *
+ * With only KUAP enabled, on an exception from userspace we don't save the
+ * AMR at all, because the expectation is that userspace can't change the
+ * AMR when the KUAP feature is enabled.
+ */
 .macro kuap_save_amr_and_lock gpr1, gpr2, use_cr, msr_pr_cr
-#ifdef CONFIG_PPC_KUAP
+#if defined(CONFIG_PPC_MEM_KEYS)
+
BEGIN_MMU_FTR_SECTION_NESTED(67)
+   b   101f   // handle_pkey_save_amr
+END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY , 67)
+
+   BEGIN_MMU_FTR_SECTION_NESTED(68)
+   b   99f  // handle_kuap_save_amr
+   MMU_FTR_SECTION_ELSE_NESTED(68)
+   b   100f  // skip_save_amr
+   ALT_MMU_FTR_SECTION_END_NESTED_IFSET(MMU_FTR_KUAP, 68)
+
+
+99: // handle_kuap_save_amr
.ifnb \msr_pr_cr
-   bne \msr_pr_cr, 99f
+   /*
+* We avoid changing AMR outside the kernel
+* hence skip this completely.
+*/
+   bne \msr_pr_cr, 100f  // from userspace
.endif
+
+101:   // handle_pkey_save_amr
mfspr   \gpr1, SPRN_AMR
std \gpr1, STACK_REGS_KUAP(r1)
-   li  \gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
-   sldi\gpr2, \gpr2, AMR_KUAP_SHIFT
+
+   /*
+* update kernel AMR with AMR_KUAP_BLOCKED only
+* if KUAP feature is enabled
+*/
+   BEGIN_MMU_FTR_SECTION_NESTED(69)
+   LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
cmpd\use_cr, \gpr1, \gpr2
-   beq \use_cr, 99f
-   // We don't isync here because we very recently entered via rfid
+   beq \use_cr, 102f
+   /*
+* We don't isync here because we very recently entered via an interrupt
+*/
mtspr   SPRN_AMR, \gpr2
isync
-99:
-   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
+102:
+   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 69)
+
+100: // skip_save_amr
 #endif
 .endm
 
 #else /* !__ASSEMBLY__ */
 
-#ifdef CONFIG_PPC_KUAP
+#ifdef CONFIG_PPC_MEM_KEYS
 
 #include 
 #include 
 
 extern u64 default_uamor;
 
-static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
+static inline void kuap_restore_user_amr(struct pt_

[PATCH v2 11/28] powerpc/exec: Set thread.regs early during exec

2020-05-02 Thread Aneesh Kumar K.V
In later patches during exec, we would like to access default regs.kuap to
control access to the user mapping. Having thread.regs set early makes the
code changes simpler.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/kernel/process.c | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 8479c762aef2..abbe545ed88c 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1462,9 +1462,18 @@ void flush_thread(void)
 #ifdef CONFIG_PPC_BOOK3S_64
 void arch_setup_new_exec(void)
 {
-   if (radix_enabled())
-   return;
-   hash__setup_new_exec();
+   if (!radix_enabled())
+   hash__setup_new_exec();
+
+   /*
+* If we exec out of a kernel thread then thread.regs will not be
+* set.  Do it now.
+*/
+   if (!current->thread.regs) {
+   struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
+   current->thread.regs = regs - 1;
+   }
+
 }
 #endif
 
@@ -1689,15 +1698,6 @@ void start_thread(struct pt_regs *regs, unsigned long 
start, unsigned long sp)
 #endif
 #endif
 
-   /*
-* If we exec out of a kernel thread then thread.regs will not be
-* set.  Do it now.
-*/
-   if (!current->thread.regs) {
-   struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
-   current->thread.regs = regs - 1;
-   }
-
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
/*
 * Clear any transactional state, we're exec()ing. The cause is
-- 
2.26.2



[PATCH v2 10/28] powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation

2020-05-02 Thread Aneesh Kumar K.V
This patch updates kernel hash page table entries to use storage key 3
for its mapping. This implies all kernel access will now use key 3 to
control READ/WRITE. The patch also prevents the allocation of key 3 from
userspace and UAMOR value is updated such that userspace cannot modify key 3.
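
As a worked example (not taken from the patch): with AMR_BITS_PER_PKEY == 2
and the pkeyshift() convention used in pkeys.c, the read/write-deny bits for
key 3 sit at AMR bit positions 56 and 57, so the mask that later patches in
the series use to deny all access through the kernel key is:

  /* Illustrative arithmetic only. */
  #define KERNEL_KEY              3
  #define KERNEL_KEY_SHIFT        (64 - (KERNEL_KEY + 1) * 2)    /* == 56 */
  #define KERNEL_KEY_AMR_BLOCKED  (0x3UL << KERNEL_KEY_SHIFT)    /* 0x0300000000000000UL */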

Signed-off-by: Aneesh Kumar K.V 
---
 .../powerpc/include/asm/book3s/64/hash-pkey.h | 24 ++-
 arch/powerpc/include/asm/book3s/64/hash.h |  3 ++-
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |  1 +
 arch/powerpc/include/asm/mmu_context.h|  2 +-
 arch/powerpc/mm/book3s64/hash_4k.c|  2 +-
 arch/powerpc/mm/book3s64/hash_64k.c   |  4 ++--
 arch/powerpc/mm/book3s64/hash_hugepage.c  |  2 +-
 arch/powerpc/mm/book3s64/hash_hugetlbpage.c   |  2 +-
 arch/powerpc/mm/book3s64/hash_pgtable.c   |  2 +-
 arch/powerpc/mm/book3s64/hash_utils.c | 10 
 arch/powerpc/mm/book3s64/pkeys.c  |  4 
 11 files changed, 38 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash-pkey.h 
b/arch/powerpc/include/asm/book3s/64/hash-pkey.h
index 795010897e5d..fc75b815c9ca 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-pkey.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-pkey.h
@@ -2,6 +2,9 @@
 #ifndef _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
 #define _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
 
+/*  We use key 3 for KERNEL */
+#define HASH_DEFAULT_KERNEL_KEY (HPTE_R_KEY_BIT0 | HPTE_R_KEY_BIT1)
+
 static inline u64 hash__vmflag_to_pte_pkey_bits(u64 vm_flags)
 {
return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT0 : 0x0UL) |
@@ -11,13 +14,22 @@ static inline u64 hash__vmflag_to_pte_pkey_bits(u64 
vm_flags)
((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT4 : 0x0UL));
 }
 
-static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+static inline u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
 {
-   return (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
+   unsigned long pte_pkey;
+
+   pte_pkey = (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
+
+   if (mmu_has_feature(MMU_FTR_KUAP)) {
+   if ((pte_pkey == 0) && (flags & HPTE_USE_KERNEL_KEY))
+   return HASH_DEFAULT_KERNEL_KEY;
+   }
+
+   return pte_pkey;
 }
 
 static inline u16 hash__pte_to_pkey_bits(u64 pteflags)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h 
b/arch/powerpc/include/asm/book3s/64/hash.h
index 6fc4520092c7..12b65d3d79aa 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -145,7 +145,8 @@ extern void hash__mark_initmem_nx(void);
 
 extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long pte, int huge);
-extern unsigned long htab_convert_pte_flags(unsigned long pteflags);
+extern unsigned long htab_convert_pte_flags(unsigned long pteflags,
+   unsigned long flags);
 /* Atomic PTE updates */
 static inline unsigned long hash__pte_update(struct mm_struct *mm,
 unsigned long addr,
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h 
b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 58fcc959f9d5..eb9950043b78 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -452,6 +452,7 @@ static inline unsigned long hpt_hash(unsigned long vpn,
 
 #define HPTE_LOCAL_UPDATE  0x1
 #define HPTE_NOHPTE_UPDATE 0x2
+#define HPTE_USE_KERNEL_KEY0x4
 
 extern int __hash_page_4K(unsigned long ea, unsigned long access,
  unsigned long vsid, pte_t *ptep, unsigned long trap,
diff --git a/arch/powerpc/include/asm/mmu_context.h 
b/arch/powerpc/include/asm/mmu_context.h
index 1a474f6b1992..2d85e0ea5f1c 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -286,7 +286,7 @@ static inline bool arch_vma_access_permitted(struct 
vm_area_struct *vma,
 #define thread_pkey_regs_init(thread)
 #define arch_dup_pkeys(oldmm, mm)
 
-static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+static inline u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
 {
return 0x0UL;
 }
diff --git a/arch/powerpc/mm/book3s64/hash_4k.c 
b/arch/powerpc/mm/book

[PATCH v2 09/28] powerpc/book3s64/kuap: Move UAMOR setup to key init function

2020-05-02 Thread Aneesh Kumar K.V
With hash translation, the kernel will use key 3 for implementing
KUAP feature. Hence the default UAMOR value depends on what other
keys are marked reserved. Move the UAMOR initialization to pkeys init.
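
A minimal sketch (assuming 2 AMR bits per key and the reserved-key bitmask
kept in pkeys.c; this helper does not exist in the patch) of how the default
UAMOR follows from the set of reserved keys, which is why its setup now lives
in the pkey init code:

  /* Sketch: userspace may only modify a key whose UAMOR bits are set,
   * so clear the bits of every reserved key (including the kernel's
   * key 3 on hash). */
  static u64 compute_default_uamor(u32 reserved_mask)
  {
          u64 uamor = ~0x0UL;
          int pkey;

          for (pkey = 0; pkey < 32; pkey++) {
                  if (reserved_mask & (0x1u << pkey))
                          uamor &= ~(0x3UL << (64 - (pkey + 1) * 2));
          }
          return uamor;
  }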

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h |  2 ++
 arch/powerpc/kernel/smp.c|  5 +
 arch/powerpc/mm/book3s64/pkeys.c | 25 +++-
 3 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index f4fb651f5850..1b350bf781ec 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -67,6 +67,8 @@
 #include 
 #include 
 
+extern u64 default_uamor;
+
 static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
 {
if (mmu_has_feature(MMU_FTR_KUAP)) {
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 6d2a3a3666f0..4cd5b620c08c 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -59,6 +59,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #ifdef DEBUG
 #include 
@@ -1256,6 +1257,10 @@ void start_secondary(void *unused)
mmgrab(&init_mm);
current->active_mm = &init_mm;
 
+#ifdef CONFIG_PPC_MEM_KEYS
+   mtspr(SPRN_UAMOR, default_uamor);
+#endif
+
smp_store_cpu_info(cpu);
set_dec(tb_ticks_per_jiffy);
preempt_disable();
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 7498c9a8ef74..12a9ac169f5d 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -23,7 +23,7 @@ static u32  initial_allocation_mask;   /* Bits set for the 
initially allocated k
 static u64 default_amr;
 static u64 default_iamr;
 /* Allow all keys to be modified by default */
-static u64 default_uamor = ~0x0UL;
+u64 default_uamor = ~0x0UL;
 /*
  * Key used to implement PROT_EXEC mmap. Denies READ/WRITE
  * We pick key 2 because 0 is special key and 1 is reserved as per ISA.
@@ -112,8 +112,16 @@ void __init pkey_early_init_devtree(void)
/* scan the device tree for pkey feature */
pkeys_total = scan_pkey_feature();
if (!pkeys_total) {
-   /* No support for pkey. Mark it disabled */
-   return;
+   /*
+* No key support but on radix we can use key 0
+* to implement kuap.
+*/
+   if (early_radix_enabled())
+   /*
+* Make sure userspace can't change the AMR
+*/
+   default_uamor = 0;
+   goto err_out;
}
 
cur_cpu_spec->mmu_features |= MMU_FTR_PKEY;
@@ -195,6 +203,12 @@ void __init pkey_early_init_devtree(void)
 */
initial_allocation_mask |= reserved_allocation_mask;
 
+err_out:
+   /*
+* Setup uamor on boot cpu
+*/
+   mtspr(SPRN_UAMOR, default_uamor);
+
return;
 }
 
@@ -230,8 +244,9 @@ void __init setup_kuap(bool disabled)
cur_cpu_spec->mmu_features |= MMU_FTR_KUAP;
}
 
-   /* Make sure userspace can't change the AMR */
-   mtspr(SPRN_UAMOR, 0);
+   /*
+* Set the default kernel AMR values on all cpus.
+*/
mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
isync();
 }
-- 
2.26.2



[PATCH v2 08/28] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS

2020-05-02 Thread Aneesh Kumar K.V
The next set of patches adds support for kuap with hash translation.
Hence make KUAP a BOOK3S_64 feature. Also make it a subfeature of
PPC_MEM_KEYS. Hash translation is going to use pkeys to support
KUAP/KUEP. Adding this dependency reduces the code complexity and
enables us to move some of the initialization code to pkeys.c

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/platforms/Kconfig.cputype | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/Kconfig.cputype 
b/arch/powerpc/platforms/Kconfig.cputype
index 27a81c291be8..eb36a6007a94 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -99,6 +99,8 @@ config PPC_BOOK3S_64
select ARCH_SUPPORTS_NUMA_BALANCING
select IRQ_WORK
select PPC_MM_SLICES
+   select PPC_HAVE_KUAP if PPC_MEM_KEYS
+   select PPC_HAVE_KUEP if PPC_MEM_KEYS
 
 config PPC_BOOK3E_64
bool "Embedded processors"
@@ -350,8 +352,6 @@ config PPC_RADIX_MMU
bool "Radix MMU Support"
depends on PPC_BOOK3S_64
select ARCH_HAS_GIGANTIC_PAGE
-   select PPC_HAVE_KUEP
-   select PPC_HAVE_KUAP
default y
help
  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
-- 
2.26.2



[PATCH v2 07/28] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP

2020-05-02 Thread Aneesh Kumar K.V
The next set of patches adds support for kuap with hash translation.
In preparation for that, rename/move the KUAP related functions to
non-radix names.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h | 18 +-
 arch/powerpc/include/asm/mmu.h   |  6 +++---
 arch/powerpc/mm/book3s64/pkeys.c |  2 +-
 3 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index 21008cc7af6f..f4fb651f5850 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -24,7 +24,7 @@
mtspr   SPRN_AMR, \gpr2
/* No isync required, see kuap_restore_amr() */
 998:
-   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
+   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
 #endif
 .endm
 
@@ -36,7 +36,7 @@
sldi\gpr2, \gpr2, AMR_KUAP_SHIFT
 999:   tdne\gpr1, \gpr2
EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | 
BUGFLAG_ONCE)
-   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
+   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
 #endif
 .endm
 
@@ -56,7 +56,7 @@
mtspr   SPRN_AMR, \gpr2
isync
 99:
-   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
+   END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
 #endif
 .endm
 
@@ -69,7 +69,7 @@
 
 static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
 {
-   if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
+   if (mmu_has_feature(MMU_FTR_KUAP)) {
if (unlikely(regs->kuap != amr)) {
isync();
mtspr(SPRN_AMR, regs->kuap);
@@ -84,7 +84,7 @@ static inline void kuap_restore_amr(struct pt_regs *regs, 
unsigned long amr)
 
 static inline unsigned long kuap_get_and_check_amr(void)
 {
-   if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
+   if (mmu_has_feature(MMU_FTR_KUAP)) {
unsigned long amr = mfspr(SPRN_AMR);
if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) /* kuap_check_amr() */
WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
@@ -95,7 +95,7 @@ static inline unsigned long kuap_get_and_check_amr(void)
 
 static inline void kuap_check_amr(void)
 {
-   if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && 
mmu_has_feature(MMU_FTR_RADIX_KUAP))
+   if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_KUAP))
WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
 }
 
@@ -106,7 +106,7 @@ static inline void kuap_check_amr(void)
 
 static inline unsigned long get_kuap(void)
 {
-   if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
+   if (!early_mmu_has_feature(MMU_FTR_KUAP))
return 0;
 
return mfspr(SPRN_AMR);
@@ -114,7 +114,7 @@ static inline unsigned long get_kuap(void)
 
 static inline void set_kuap(unsigned long value)
 {
-   if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
+   if (!early_mmu_has_feature(MMU_FTR_KUAP))
return;
 
/*
@@ -164,7 +164,7 @@ static inline void restore_user_access(unsigned long flags)
 static inline bool
 bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
-   return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
+   return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
(regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : 
AMR_KUAP_BLOCK_READ)),
"Bug: %s fault blocked by AMR!", is_write ? "Write" : 
"Read");
 }
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index b26af5aac5a6..a7cc2b83836f 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -112,7 +112,7 @@
 /*
  * Supports KUAP (key 0 controlling userspace addresses) on radix
  */
-#define MMU_FTR_RADIX_KUAP ASM_CONST(0x8000)
+#define MMU_FTR_KUAP   ASM_CONST(0x8000)
 
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2  \
@@ -174,10 +174,10 @@ enum {
 #endif
 #ifdef CONFIG_PPC_RADIX_MMU
MMU_FTR_TYPE_RADIX |
+#endif /* CONFIG_PPC_RADIX_MMU */
 #ifdef CONFIG_PPC_KUAP
-   MMU_FTR_RADIX_KUAP |
+   MMU_FTR_KUAP |
 #endif /* CONFIG_PPC_KUAP */
-#endif /* CONFIG_PPC_RADIX_MMU */
 #ifdef CONFIG_PPC_MEM_KEYS
MMU_FTR_PKEY |
 #endif
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index b6ea4fec787b..7498c9a8ef74 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -227,7 +227,7 @@ void __init setup_kuap(bool disabled)
 
if (smp_processor_id() == boot_cpuid) {
pr_info("Activating Kernel Userspace Access Prevention\n");
-   cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+   cur_cpu_spec->mmu_features |= MMU_FTR_KUAP;
}
 
/* Make sure userspace can't change the AMR */
-- 
2.2

[PATCH v2 06/28] powerpc/book3s64/kuep: Move KUEP related function outside radix

2020-05-02 Thread Aneesh Kumar K.V
The next set of patches adds support for kuep with hash translation.
In preparation for that, rename/move the KUEP related functions to
non-radix names.

Also set MMU_FTR_KUEP and add the missing isync().

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/kup.h |  1 +
 arch/powerpc/mm/book3s64/pkeys.c | 21 +
 arch/powerpc/mm/book3s64/radix_pgtable.c | 18 --
 3 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
index 21de441762d5..21008cc7af6f 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -7,6 +7,7 @@
 
 #define AMR_KUAP_BLOCK_READUL(0x4000)
 #define AMR_KUAP_BLOCK_WRITE   UL(0x8000)
+#define AMR_KUEP_BLOCKED   (1UL << 62)
 #define AMR_KUAP_BLOCKED   (AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)
 #define AMR_KUAP_SHIFT 62
 
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index e923be3b52e7..b6ea4fec787b 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -198,6 +198,27 @@ void __init pkey_early_init_devtree(void)
return;
 }
 
+#ifdef CONFIG_PPC_KUEP
+void __init setup_kuep(bool disabled)
+{
+   if (disabled || !early_radix_enabled())
+   return;
+
+   if (smp_processor_id() == boot_cpuid) {
+   pr_info("Activating Kernel Userspace Execution Prevention\n");
+   cur_cpu_spec->mmu_features |= MMU_FTR_KUEP;
+   }
+
+   /*
+* Radix always uses key0 of the IAMR to determine if an access is
+* allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
+* fetch.
+*/
+   mtspr(SPRN_IAMR, AMR_KUEP_BLOCKED);
+   isync();
+}
+#endif
+
 #ifdef CONFIG_PPC_KUAP
 void __init setup_kuap(bool disabled)
 {
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c 
b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 74558ce6b5cb..3fb088eecece 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -508,24 +508,6 @@ static void radix_init_amor(void)
mtspr(SPRN_AMOR, (3ul << 62));
 }
 
-#ifdef CONFIG_PPC_KUEP
-void setup_kuep(bool disabled)
-{
-   if (disabled || !early_radix_enabled())
-   return;
-
-   if (smp_processor_id() == boot_cpuid)
-   pr_info("Activating Kernel Userspace Execution Prevention\n");
-
-   /*
-* Radix always uses key0 of the IAMR to determine if an access is
-* allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
-* fetch.
-*/
-   mtspr(SPRN_IAMR, (1ul << 62));
-}
-#endif
-
 void __init radix__early_init_mmu(void)
 {
unsigned long lpcr;
-- 
2.26.2



[PATCH v2 05/28] powerpc/book3s64/kuap: Move KUAP related function outside radix

2020-05-02 Thread Aneesh Kumar K.V
The next set of patches adds support for kuap with hash translation.
In preparation for that, rename/move the KUAP related functions to
non-radix names.

Signed-off-by: Aneesh Kumar K.V 
---
 .../asm/book3s/64/{kup-radix.h => kup.h}   |  6 +++---
 arch/powerpc/include/asm/kup.h |  2 +-
 arch/powerpc/kernel/syscall_64.c   |  2 +-
 arch/powerpc/mm/book3s64/pkeys.c   | 18 ++
 arch/powerpc/mm/book3s64/radix_pgtable.c   | 18 --
 5 files changed, 23 insertions(+), 23 deletions(-)
 rename arch/powerpc/include/asm/book3s/64/{kup-radix.h => kup.h} (97%)

diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h 
b/arch/powerpc/include/asm/book3s/64/kup.h
similarity index 97%
rename from arch/powerpc/include/asm/book3s/64/kup-radix.h
rename to arch/powerpc/include/asm/book3s/64/kup.h
index e82df54f5681..21de441762d5 100644
--- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H
-#define _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H
+#ifndef _ASM_POWERPC_BOOK3S_64_KUP_H
+#define _ASM_POWERPC_BOOK3S_64_KUP_H
 
 #include 
 #include 
@@ -184,4 +184,4 @@ static inline unsigned long kuap_get_and_check_amr(void)
 
 #endif /* __ASSEMBLY__ */
 
-#endif /* _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H */
+#endif /* _ASM_POWERPC_BOOK3S_64_KUP_H */
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 92bcd1a26d73..248438dff74a 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -13,7 +13,7 @@
 #define KUAP_CURRENT   4
 
 #ifdef CONFIG_PPC64
-#include 
+#include 
 #endif
 #ifdef CONFIG_PPC_8xx
 #include 
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index bfb161a3a0ea..f704f657e1f7 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -2,7 +2,7 @@
 
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 810118123e70..e923be3b52e7 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -198,6 +198,24 @@ void __init pkey_early_init_devtree(void)
return;
 }
 
+#ifdef CONFIG_PPC_KUAP
+void __init setup_kuap(bool disabled)
+{
+   if (disabled || !early_radix_enabled())
+   return;
+
+   if (smp_processor_id() == boot_cpuid) {
+   pr_info("Activating Kernel Userspace Access Prevention\n");
+   cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+   }
+
+   /* Make sure userspace can't change the AMR */
+   mtspr(SPRN_UAMOR, 0);
+   mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
+   isync();
+}
+#endif
+
 void pkey_mm_init(struct mm_struct *mm)
 {
if (!mmu_has_feature(MMU_FTR_PKEY))
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c 
b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 8f9edf07063a..74558ce6b5cb 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -526,24 +526,6 @@ void setup_kuep(bool disabled)
 }
 #endif
 
-#ifdef CONFIG_PPC_KUAP
-void setup_kuap(bool disabled)
-{
-   if (disabled || !early_radix_enabled())
-   return;
-
-   if (smp_processor_id() == boot_cpuid) {
-   pr_info("Activating Kernel Userspace Access Prevention\n");
-   cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
-   }
-
-   /* Make sure userspace can't change the AMR */
-   mtspr(SPRN_UAMOR, 0);
-   mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
-   isync();
-}
-#endif
-
 void __init radix__early_init_mmu(void)
 {
unsigned long lpcr;
-- 
2.26.2



[PATCH v2 04/28] powerpc/book3s64/pkeys: Use MMU_FTR_PKEY instead of pkey_disabled static key

2020-05-02 Thread Aneesh Kumar K.V
Instead of pkey_disabled static key use mmu feature MMU_FTR_PKEY.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/pkeys.h |  2 +-
 arch/powerpc/include/asm/pkeys.h   | 14 ++
 arch/powerpc/mm/book3s64/pkeys.c   | 16 +++-
 3 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pkeys.h 
b/arch/powerpc/include/asm/book3s/64/pkeys.h
index 8174662a9173..5b178139f3c0 100644
--- a/arch/powerpc/include/asm/book3s/64/pkeys.h
+++ b/arch/powerpc/include/asm/book3s/64/pkeys.h
@@ -7,7 +7,7 @@
 
 static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
 {
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return 0x0UL;
 
if (radix_enabled())
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 09fbaa409ac4..b1d448c53209 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -11,7 +11,6 @@
 #include 
 #include 
 
-DECLARE_STATIC_KEY_FALSE(pkey_disabled);
 extern int max_pkey;
 extern u32 reserved_allocation_mask; /* bits set for reserved keys */
 
@@ -38,7 +37,7 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey)
 
 static inline int vma_pkey(struct vm_area_struct *vma)
 {
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return 0;
return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
 }
@@ -93,9 +92,8 @@ static inline int mm_pkey_alloc(struct mm_struct *mm)
u32 all_pkeys_mask = (u32)(~(0x0));
int ret;
 
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return -1;
-
/*
 * Are we out of pkeys? We must handle this specially because ffz()
 * behavior is undefined if there are no zeros.
@@ -111,7 +109,7 @@ static inline int mm_pkey_alloc(struct mm_struct *mm)
 
 static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
 {
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return -1;
 
if (!mm_pkey_is_allocated(mm, pkey))
@@ -132,7 +130,7 @@ extern int __arch_override_mprotect_pkey(struct 
vm_area_struct *vma,
 static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
  int prot, int pkey)
 {
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return 0;
 
/*
@@ -150,7 +148,7 @@ extern int __arch_set_user_pkey_access(struct task_struct 
*tsk, int pkey,
 static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
unsigned long init_val)
 {
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return -EINVAL;
 
/*
@@ -167,7 +165,7 @@ static inline int arch_set_user_pkey_access(struct 
task_struct *tsk, int pkey,
 
 static inline bool arch_pkeys_enabled(void)
 {
-   return !static_branch_likely(&pkey_disabled);
+   return mmu_has_feature(MMU_FTR_PKEY);
 }
 
 extern void pkey_mm_init(struct mm_struct *mm);
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index fed4f159011b..810118123e70 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -13,7 +13,6 @@
 #include 
 
 
-DEFINE_STATIC_KEY_FALSE(pkey_disabled);
 DEFINE_STATIC_KEY_FALSE(execute_pkey_disabled);
 int  max_pkey; /* Maximum key value supported */
 /*
@@ -114,7 +113,6 @@ void __init pkey_early_init_devtree(void)
pkeys_total = scan_pkey_feature();
if (!pkeys_total) {
/* No support for pkey. Mark it disabled */
-   static_branch_enable(&pkey_disabled);
return;
}
 
@@ -202,7 +200,7 @@ void __init pkey_early_init_devtree(void)
 
 void pkey_mm_init(struct mm_struct *mm)
 {
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return;
mm_pkey_allocation_map(mm) = initial_allocation_mask;
mm->context.execute_only_pkey = execute_only_key;
@@ -306,7 +304,7 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, 
int pkey,
 
 void thread_pkey_regs_save(struct thread_struct *thread)
 {
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return;
 
/*
@@ -320,7 +318,7 @@ void thread_pkey_regs_save(struct thread_struct *thread)
 void thread_pkey_regs_restore(struct thread_struct *new_thread,
  struct thread_struct *old_thread)
 {
-   if (static_branch_likely(&pkey_disabled))
+   if (!mmu_has_feature(MMU_FTR_PKEY))
return;
 
if (old_thread->amr != new_thread->amr)
@@ -333,7 +331,7 @@ void thread_pkey_regs

[PATCH v2 03/28] powerpc/book3s64/pkeys: Use execute_pkey_disable static key

2020-05-02 Thread Aneesh Kumar K.V
Use execute_pkey_disabled static key to check for execute key support instead
of pkey_disabled.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/pkeys.h | 10 +-
 arch/powerpc/mm/book3s64/pkeys.c |  5 -
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 47c81d41ea9a..09fbaa409ac4 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -126,15 +126,7 @@ static inline int mm_pkey_free(struct mm_struct *mm, int 
pkey)
  * Try to dedicate one of the protection keys to be used as an
  * execute-only protection key.
  */
-extern int __execute_only_pkey(struct mm_struct *mm);
-static inline int execute_only_pkey(struct mm_struct *mm)
-{
-   if (static_branch_likely(&pkey_disabled))
-   return -1;
-
-   return __execute_only_pkey(mm);
-}
-
+extern int execute_only_pkey(struct mm_struct *mm);
 extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma,
 int prot, int pkey);
 static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index bbba9c601e14..fed4f159011b 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -345,8 +345,11 @@ void thread_pkey_regs_init(struct thread_struct *thread)
write_uamor(default_uamor);
 }
 
-int __execute_only_pkey(struct mm_struct *mm)
+int execute_only_pkey(struct mm_struct *mm)
 {
+   if (static_branch_likely(&execute_pkey_disabled))
+   return -1;
+
return mm->context.execute_only_pkey;
 }
 
-- 
2.26.2



[PATCH v2 02/28] powerpc/book3s64/kuep: Add MMU_FTR_KUEP

2020-05-02 Thread Aneesh Kumar K.V
This will be used to enable/disable Kernel Userspace Execution
Prevention (KUEP).

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/mmu.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 0e5d7ed9fcd6..b26af5aac5a6 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -24,6 +24,7 @@
 /* Radix page table supported and enabled */
 #define MMU_FTR_TYPE_RADIX ASM_CONST(0x0040)
 #define MMU_FTR_PKEY   ASM_CONST(0x0080)
+#define MMU_FTR_KUEP   ASM_CONST(0x0100)
 
 /*
  * Individual features below.
@@ -180,6 +181,10 @@ enum {
 #ifdef CONFIG_PPC_MEM_KEYS
MMU_FTR_PKEY |
 #endif
+#ifdef CONFIG_PPC_KUEP
+   MMU_FTR_KUEP |
+#endif /* CONFIG_PPC_KUAP */
+
0,
 };
 
-- 
2.26.2



[PATCH v2 01/28] powerpc/book3s64/pkeys: Enable MMU_FTR_PKEY

2020-05-02 Thread Aneesh Kumar K.V
Parse storage keys related device tree entry in early_init_devtree
and enable MMU feature MMU_FTR_PKEY if pkeys are supported.

MMU feature is used instead of CPU feature because this enables us
to group MMU_FTR_KUAP and MMU_FTR_PKEY in asm feature fixup code.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/mmu.h |  6 +++
 arch/powerpc/include/asm/mmu.h   |  6 +++
 arch/powerpc/kernel/prom.c   |  5 +++
 arch/powerpc/mm/book3s64/pkeys.c | 54 ++--
 4 files changed, 48 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h 
b/arch/powerpc/include/asm/book3s/64/mmu.h
index f0a9ff690881..10f54288b3b7 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -209,6 +209,12 @@ extern int mmu_io_psize;
 void mmu_early_init_devtree(void);
 void hash__early_init_devtree(void);
 void radix__early_init_devtree(void);
+#ifdef CONFIG_PPC_MEM_KEYS
+void pkey_early_init_devtree(void);
+#else
+static inline void pkey_early_init_devtree(void) {}
+#endif
+
 extern void hash__early_init_mmu(void);
 extern void radix__early_init_mmu(void);
 static inline void early_init_mmu(void)
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 0699cfeeb8c9..0e5d7ed9fcd6 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -23,6 +23,7 @@
 
 /* Radix page table supported and enabled */
 #define MMU_FTR_TYPE_RADIX ASM_CONST(0x0040)
+#define MMU_FTR_PKEY   ASM_CONST(0x0080)
 
 /*
  * Individual features below.
@@ -176,6 +177,9 @@ enum {
MMU_FTR_RADIX_KUAP |
 #endif /* CONFIG_PPC_KUAP */
 #endif /* CONFIG_PPC_RADIX_MMU */
+#ifdef CONFIG_PPC_MEM_KEYS
+   MMU_FTR_PKEY |
+#endif
0,
 };
 
@@ -364,6 +368,8 @@ extern void setup_initial_memory_limit(phys_addr_t 
first_memblock_base,
   phys_addr_t first_memblock_size);
 static inline void mmu_early_init_devtree(void) { }
 
+static inline void pkey_early_init_devtree(void) {}
+
 extern void *abatron_pteptrs[2];
 #endif /* __ASSEMBLY__ */
 #endif
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 6620f37abe73..6266bfb72aae 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -791,6 +791,11 @@ void __init early_init_devtree(void *params)
/* Now try to figure out if we are running on LPAR and so on */
pseries_probe_fw_features();
 
+   /*
+* Initialize pkey features and default AMR/IAMR values
+*/
+   pkey_early_init_devtree();
+
 #ifdef CONFIG_PPC_PS3
/* Identify PS3 firmware */
if (of_flat_dt_is_compatible(of_get_flat_dt_root(), "sony,ps3"))
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 0ff59acdbb84..bbba9c601e14 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -10,7 +10,8 @@
 #include 
 #include 
 #include 
-#include 
+#include 
+
 
 DEFINE_STATIC_KEY_FALSE(pkey_disabled);
 DEFINE_STATIC_KEY_FALSE(execute_pkey_disabled);
@@ -38,38 +39,45 @@ static int execute_only_key = 2;
 #define PKEY_REG_BITS (sizeof(u64) * 8)
 #define pkeyshift(pkey) (PKEY_REG_BITS - ((pkey+1) * AMR_BITS_PER_PKEY))
 
+static int __init dt_scan_storage_keys(unsigned long node,
+  const char *uname, int depth,
+  void *data)
+{
+   const char *type = of_get_flat_dt_prop(node, "device_type", NULL);
+   const __be32 *prop;
+   int pkeys_total;
+
+   /* We are scanning "cpu" nodes only */
+   if (type == NULL || strcmp(type, "cpu") != 0)
+   return 0;
+
+   prop = of_get_flat_dt_prop(node, "ibm,processor-storage-keys", NULL);
+   if (!prop)
+   return 0;
+   pkeys_total = be32_to_cpu(prop[0]);
+   return pkeys_total;
+}
+
 static int scan_pkey_feature(void)
 {
-   u32 vals[2];
-   int pkeys_total = 0;
-   struct device_node *cpu;
+   int pkeys_total;
 
/*
 * Pkey is not supported with Radix translation.
 */
-   if (radix_enabled())
+   if (early_radix_enabled())
return 0;
 
-   cpu = of_find_node_by_type(NULL, "cpu");
-   if (!cpu)
-   return 0;
+   pkeys_total = of_scan_flat_dt(dt_scan_storage_keys, NULL);
+   if (pkeys_total == 0) {
 
-   if (of_property_read_u32_array(cpu,
-  "ibm,processor-storage-keys", vals, 2) 
== 0) {
-   /*
-* Since any pkey can be used for data or execute, we will
-* just treat all keys as equal and track them as one entity.
-*/
-   pkeys_total = vals[0];
-   /*  Should we check for IAMR support FIXME!! */
-   } else {
/*
 * Let's assume 3

[PATCH v2 00/28] Kernel userspace access/execution prevention with hash translation

2020-05-02 Thread Aneesh Kumar K.V
This patch series implements KUAP and KUEP with hash translation mode using
memory keys. The kernel now uses memory protection key 3 to control access
to the kernel. Kernel page table entries are now configured with key 3.
Access to locations configured with any other key value is denied when in
kernel mode (MSR_PR=0). This includes userspace which is by default configured
with key 0.
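
For readers new to memory protection keys, the enforcement the series relies
on can be pictured with a small sketch (illustrative only; the real checks
are performed by the MMU against the AMR/IAMR SPRs):

  /*
   * Every translation carries a storage key.  On an access the MMU looks
   * at the two AMR bits for that key; if the relevant read- or write-deny
   * bit is set, the access faults.  The kernel runs with an AMR that
   * permits key 3 (its own mappings) and denies key 0 (userspace), until
   * it explicitly opens a user-access window.
   */
  static bool access_allowed(u64 amr, int key, bool is_write)
  {
          int shift = 64 - (key + 1) * 2;        /* 2 AMR bits per key */
          u64 deny = is_write ? 0x2UL : 0x1UL;   /* write-deny : read-deny */

          return !((amr >> shift) & deny);
  }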

Changes from V1:
* Rebased on latest kernel
* Depends on the below patch sets.

https://lore.kernel.org/linuxppc-dev/20200429065654.1677541-1-npig...@gmail.com
https://lore.kernel.org/linuxppc-dev/20200428123130.73078-1-...@ellerman.id.au
https://lore.kernel.org/linuxppc-dev/20200502111347.541836-1-aneesh.ku...@linux.ibm.com

Aneesh Kumar K.V (28):
  powerpc/book3s64/pkeys: Enable MMU_FTR_PKEY
  powerpc/book3s64/kuep: Add MMU_FTR_KUEP
  powerpc/book3s64/pkeys: Use execute_pkey_disable static key
  powerpc/book3s64/pkeys: Use MMU_FTR_PKEY instead of pkey_disabled
static key
  powerpc/book3s64/kuap: Move KUAP related function outside radix
  powerpc/book3s64/kuep: Move KUEP related function outside radix
  powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP
  powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of
PPC_MEM_KEYS
  powerpc/book3s64/kuap: Move UAMOR setup to key init function
  powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash
translation
  powerpc/exec: Set thread.regs early during exec
  powerpc/book3s64/pkeys: Store/restore userspace AMR correctly on entry
and exit from kernel
  powerpc/book3s64/kuep: Store/restore userspace IAMR correctly on entry
and exit from kernel
  powerpc/book3s64/pkeys: Inherit correctly on fork.
  powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec
  powerpc/ptrace-view: Use pt_regs values instead of thread_struct based
one.
  powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.
  powerpc/book3s64/kuap: Restrict access to userspace based on userspace
AMR
  powerpc/book3s64/kuap: Improve error reporting with KUAP
  powerpc/book3s64/kuap: Use Key 3 to implement KUAP with hash
translation.
  powerpc/book3s64/kuep: Use Key 3 to implement KUEP with hash
translation.
  powerpc/book3s64/hash/kuap: Enable kuap on hash
  powerpc/book3s64/hash/kuep: Enable KUEP on hash
  powerpc/book3s64/keys: Print information during boot.
  powerpc/selftest/ptrave-pkey: Rename variables to make it easier to
follow code
  powerpc/selftest/ptrace-pkey: Update the test to mark an invalid pkey
correctly
  powerpc/selftest/ptrace-pkey: IAMR and uamor cannot be updated by
ptrace
  powerpc/book3s64/keys/kuap: Reset AMR/IAMR values on kexec

 arch/powerpc/include/asm/book3s/32/kup.h  |   4 +-
 .../powerpc/include/asm/book3s/64/hash-pkey.h |  24 +-
 arch/powerpc/include/asm/book3s/64/hash.h |   3 +-
 .../powerpc/include/asm/book3s/64/kup-radix.h | 187 -
 arch/powerpc/include/asm/book3s/64/kup.h  | 385 ++
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |   1 +
 arch/powerpc/include/asm/book3s/64/mmu.h  |   6 +
 arch/powerpc/include/asm/book3s/64/pkeys.h|   2 +-
 arch/powerpc/include/asm/kup.h|  20 +-
 arch/powerpc/include/asm/mmu.h|  17 +-
 arch/powerpc/include/asm/mmu_context.h|   2 +-
 arch/powerpc/include/asm/nohash/32/kup-8xx.h  |   4 +-
 arch/powerpc/include/asm/pkeys.h  |  24 +-
 arch/powerpc/include/asm/processor.h  |   5 -
 arch/powerpc/include/asm/ptrace.h |   6 +-
 arch/powerpc/kernel/asm-offsets.c |   4 +
 arch/powerpc/kernel/entry_64.S|   6 +-
 arch/powerpc/kernel/exceptions-64s.S  |   4 +-
 arch/powerpc/kernel/misc_64.S |  14 -
 arch/powerpc/kernel/process.c |  54 ++-
 arch/powerpc/kernel/prom.c|   5 +
 arch/powerpc/kernel/ptrace/ptrace-view.c  |  23 +-
 arch/powerpc/kernel/smp.c |   5 +
 arch/powerpc/kernel/syscall_64.c  |  30 +-
 arch/powerpc/kernel/traps.c   |   6 -
 arch/powerpc/kexec/core_64.c  |   3 +
 arch/powerpc/mm/book3s64/hash_4k.c|   2 +-
 arch/powerpc/mm/book3s64/hash_64k.c   |   4 +-
 arch/powerpc/mm/book3s64/hash_hugepage.c  |   2 +-
 arch/powerpc/mm/book3s64/hash_hugetlbpage.c   |   2 +-
 arch/powerpc/mm/book3s64/hash_pgtable.c   |   2 +-
 arch/powerpc/mm/book3s64/hash_utils.c |  10 +-
 arch/powerpc/mm/book3s64/pgtable.c|   3 +
 arch/powerpc/mm/book3s64/pkeys.c  | 221 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c  |  36 --
 arch/powerpc/mm/fault.c   |   2 +-
 arch/powerpc/platforms/Kconfig.cputype|   4 +-
 .../selftests/powerpc/ptrace/ptrace-pkey.c|  53 +--
 38 files changed, 723 insertions(+), 462 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h
 create mode 100644 arch/powerpc/include/asm/b

[RFC PATCH 10/10] powerpc/powernv: OPAL V4 Implement vm_map/unmap service

2020-05-02 Thread Nicholas Piggin
This implements os_vm_map and os_vm_unmap. OPAL uses EA regions that
it specifies via OPAL_FIND_VM_AREA for these mappings, so provided
the page tables are allocated at init-time and not freed, these
services can be provided without memory allocation / blocking.
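
A purely hypothetical OS-side sketch (not in this patch): the EA handed to
these callbacks could be sanity-checked against the regions OPAL advertises,
assuming, for the sake of the example, that opal_find_vm_area() returns
OPAL_SUCCESS for an address inside a firmware-advertised VM area:

  /* Hypothetical helper: accept only EAs that fall inside an OPAL VM area. */
  static bool opal_ea_is_valid(uint64_t ea)
  {
          struct opal_vm_area area;

          return opal_find_vm_area(ea, &area) == OPAL_SUCCESS;
  }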

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/include/asm/opal-api.h   |  2 +
 arch/powerpc/platforms/powernv/opal.c | 57 +++
 2 files changed, 59 insertions(+)

diff --git a/arch/powerpc/include/asm/opal-api.h 
b/arch/powerpc/include/asm/opal-api.h
index 1b2f176677fc..97c5e5423827 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -1205,6 +1205,8 @@ struct opal_vm_area {
 
 struct opal_os_ops {
__be64  os_printf; /* void printf(int32_t level, const char *str) */
+   __be64  os_vm_map; /* int64_t os_vm_map(uint64_t ea, uint64_t pa, 
uint64_t flags) */
+   __be64  os_vm_unmap; /* void os_vm_unmap(uint64_t ea) */
 };
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/platforms/powernv/opal.c 
b/arch/powerpc/platforms/powernv/opal.c
index 0fbfcd088c58..93b9afaf33b3 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -1095,6 +1095,61 @@ static pgprot_t opal_vm_flags_to_prot(uint64_t flags)
return prot;
 }
 
+static int64_t os_vm_map(uint64_t ea, uint64_t pa, uint64_t flags)
+{
+   struct mm_struct *mm = opal_mm;
+   spinlock_t *ptl;
+   pte_t pte, *ptep;
+   pgprot_t prot;
+
+   if (WARN_ON_ONCE(!opal_mm_enabled))
+   return -EINVAL;
+
+   if (WARN_ON_ONCE(!(mfmsr() & (MSR_IR|MSR_DR
+   return -EINVAL;
+
+   /* mm should be active_mm if MMU is on here */
+
+// printk("os_vm_map 0x%llx->0x%llx flags=0x%llx\n", ea, pa, flags);
+
+   prot = opal_vm_flags_to_prot(flags);
+
+   pte = pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL_X);
+
+   ptep = get_locked_pte(mm, ea, &ptl);
+   set_pte_at(mm, ea, ptep, pte);
+   pte_unmap_unlock(ptep, ptl);
+
+   return 0;
+}
+
+static void os_vm_unmap(uint64_t ea)
+{
+   struct mm_struct *mm = opal_mm;
+   spinlock_t *ptl;
+   pte_t *ptep;
+
+   if (WARN_ON_ONCE(!opal_mm_enabled))
+   return;
+
+   if (WARN_ON_ONCE(!(mfmsr() & (MSR_IR|MSR_DR
+   return;
+
+// printk("os_vm_unmap 0x%llx\n", ea);
+
+   /* mm should be active_mm if MMU is on here */
+
+   ptep = get_locked_pte(mm, ea, &ptl);
+   pte_clear(mm, ea, ptep);
+   pte_unmap_unlock(ptep, ptl);
+
+   /*
+* This leaves potential TLBs in other CPUs for this EA, but it is
+* only used by this CPU. Can't do a broadcast flush here, no IPIs.
+*/
+   local_flush_tlb_mm(mm);
+}
+
 static int __init opal_init_mm(void)
 {
struct mm_struct *mm;
@@ -1174,6 +1229,8 @@ static int __init opal_init_early(void)
 
memset(&opal_os_ops, 0, sizeof(opal_os_ops));
opal_os_ops.os_printf = cpu_to_be64(&os_printf);
+   opal_os_ops.os_vm_map = cpu_to_be64(&os_vm_map);
+   opal_os_ops.os_vm_unmap = cpu_to_be64(&os_vm_unmap);
if (opal_register_os_ops(&opal_os_ops, sizeof(opal_os_ops))) {
pr_warn("OPAL register OS ops failed, firmware will run 
in v3 mode.\n");
} else {
-- 
2.23.0



[RFC PATCH 09/10] powerpc/powernv: OPAL V4 OS services

2020-05-02 Thread Nicholas Piggin
This implements OPAL_REGISTER_OS_OPS and the printf service.

When this API is called, OPAL switches to V4 mode, which requires
the OS to subsequently handle OPAL's program interrupts and printf
calls.
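
A condensed sketch of the registration flow on the Linux side (the wrapper
name is illustrative; os_printf is the handler added by this patch):

  static void __init sketch_register_os_ops(void)
  {
          struct opal_os_ops ops;

          /* fields are __be64, so function addresses are stored big-endian */
          memset(&ops, 0, sizeof(ops));
          ops.os_printf = cpu_to_be64((uint64_t)&os_printf);

          /* on failure, firmware simply keeps running in v3 mode */
          if (opal_register_os_ops(&ops, sizeof(ops)))
                  pr_warn("OPAL register OS ops failed, firmware will run in v3 mode\n");
  }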

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/include/asm/opal-api.h|  7 -
 arch/powerpc/include/asm/opal.h|  1 +
 arch/powerpc/platforms/powernv/opal-call.c |  1 +
 arch/powerpc/platforms/powernv/opal.c  | 36 ++
 4 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/opal-api.h 
b/arch/powerpc/include/asm/opal-api.h
index 0be5ff4e51b5..1b2f176677fc 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -218,7 +218,8 @@
 #define OPAL_SYM_TO_ADDR   182
 #define OPAL_REPORT_TRAP   183
 #define OPAL_FIND_VM_AREA  184
-#define OPAL_LAST  184
+#define OPAL_REGISTER_OS_OPS   185
+#define OPAL_LAST  185
 
 #define QUIESCE_HOLD   1 /* Spin all calls at entry */
 #define QUIESCE_REJECT 2 /* Fail all calls with OPAL_BUSY */
@@ -1202,6 +1203,10 @@ struct opal_vm_area {
__be64  vm_flags;
 };
 
+struct opal_os_ops {
+   __be64  os_printf; /* void printf(int32_t level, const char *str) */
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __OPAL_API_H */
diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index 199b5582b700..09985b7718b3 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -406,6 +406,7 @@ void opal_psr_init(void);
 void opal_sensor_groups_init(void);
 
 int64_t opal_find_vm_area(uint64_t addr, struct opal_vm_area *opal_vm_area);
+int64_t opal_register_os_ops(struct opal_os_ops *ops, uint64_t size);
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/arch/powerpc/platforms/powernv/opal-call.c 
b/arch/powerpc/platforms/powernv/opal-call.c
index 4bdad3d2fa18..11f419e76059 100644
--- a/arch/powerpc/platforms/powernv/opal-call.c
+++ b/arch/powerpc/platforms/powernv/opal-call.c
@@ -350,3 +350,4 @@ OPAL_CALL(opal_addr_to_sym, 
OPAL_ADDR_TO_SYM);
 OPAL_CALL(opal_sym_to_addr,OPAL_SYM_TO_ADDR);
 OPAL_CALL(opal_report_trap,OPAL_REPORT_TRAP);
 OPAL_CALL(opal_find_vm_area,   OPAL_FIND_VM_AREA);
+OPAL_CALL(opal_register_os_ops,OPAL_REGISTER_OS_OPS);
diff --git a/arch/powerpc/platforms/powernv/opal.c 
b/arch/powerpc/platforms/powernv/opal.c
index 98d6d7fc5411..0fbfcd088c58 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -47,6 +47,7 @@ static LIST_HEAD(msg_list);
 
 struct mm_struct *opal_mm __read_mostly;
 bool opal_v4_present __read_mostly;
+bool opal_v4_enabled __read_mostly;
 bool opal_mm_enabled __read_mostly;
 
 /* /sys/firmware/opal */
@@ -152,6 +153,8 @@ unsigned long arch_symbol_lookup_name(const char *name)
return be64_to_cpu(addr);
 }
 
+static void os_printf(int32_t level, const char *str);
+
 int __init early_init_dt_scan_opal(unsigned long node,
   const char *uname, int depth, void *data)
 {
@@ -1045,6 +1048,28 @@ static void opal_init_heartbeat(void)
kopald_tsk = kthread_run(kopald, NULL, "kopald");
 }
 
+static void os_printf(int32_t level, const char *str)
+{
+   const char *l;
+
+   /* Assuming printk does not work in real mode */
+   if (WARN_ON_ONCE(!(mfmsr() & (MSR_IR|MSR_DR
+   return;
+
+   switch (level) {
+   case 0: l = KERN_EMERG; break;
+   case 1: l = KERN_ALERT; break;
+   case 2: l = KERN_CRIT; break;
+   case 3: l = KERN_ERR; break;
+   case 4: l = KERN_WARNING; break;
+   case 5: l = KERN_NOTICE; break;
+   case 6: l = KERN_INFO; break;
+   case 7: l = KERN_DEBUG; break;
+   default: l = KERN_ERR;
+   }
+   printk("%s[OPAL] %s", l, str);
+}
+
 static pgprot_t opal_vm_flags_to_prot(uint64_t flags)
 {
pgprot_t prot;
@@ -1137,6 +1162,8 @@ static int __init opal_init_early(void)
int rc;
 
if (opal_v4_present) {
+   struct opal_os_ops opal_os_ops;
+
if (radix_enabled()) {
/* Hash can't resolve SLB faults to the switched mm */
rc = opal_init_mm();
@@ -1144,6 +1171,15 @@ static int __init opal_init_early(void)
pr_warn("OPAL virtual memory init failed, 
firmware will run in real-mode.\n");
}
}
+
+   memset(&opal_os_ops, 0, sizeof(opal_os_ops));
+   opal_os_ops.os_printf = cpu_to_be64(&os_printf);
+   if (opal_register_os_ops(&opal_os_ops, sizeof(opal_os_ops))) {
+   pr_warn("OPAL register OS ops failed, firmware will run 
in v3 mode.\n");
+  

[RFC PATCH 08/10] powerpc/powernv: Set up an mm context to call OPAL in

2020-05-02 Thread Nicholas Piggin
This creates an mm context to be used for OPAL V4 calls. It is
populated with PTEs based on the areas returned by OPAL_FIND_VM_AREA.
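
The init side boils down to walking the areas firmware advertises and
installing PTEs for them into the dedicated mm. A sketch of that loop; the
iteration convention and the map_one() helper are assumptions for
illustration, not the patch's literal code (the real opal_init_mm() is in
the hunk below):

  static int __init sketch_populate_opal_mm(struct mm_struct *mm)
  {
          struct opal_vm_area vma;
          uint64_t addr = 0;

          while (opal_find_vm_area(addr, &vma) == OPAL_SUCCESS) {
                  uint64_t ea = be64_to_cpu(vma.address);
                  uint64_t len = be64_to_cpu(vma.length);
                  uint64_t pa = be64_to_cpu(vma.pa);
                  pgprot_t prot = opal_vm_flags_to_prot(be64_to_cpu(vma.vm_flags));

                  /* map_one() stands in for installing PTEs for [ea, ea + len) */
                  if (map_one(mm, ea, pa, len, prot))
                          return -ENOMEM;

                  addr = ea + len;        /* assumed: query the next area by address */
          }
          return 0;
  }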

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/platforms/powernv/opal-call.c |  21 +++-
 arch/powerpc/platforms/powernv/opal.c  | 119 -
 2 files changed, 137 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/opal-call.c 
b/arch/powerpc/platforms/powernv/opal-call.c
index e62a74dfb3d0..4bdad3d2fa18 100644
--- a/arch/powerpc/platforms/powernv/opal-call.c
+++ b/arch/powerpc/platforms/powernv/opal-call.c
@@ -104,6 +104,9 @@ typedef int64_t (*opal_v4_le_entry_fn)(uint64_t r3, 
uint64_t r4, uint64_t r5,
uint64_t r6, uint64_t r7, uint64_t r8,
uint64_t r9, uint64_t r10);
 
+extern struct mm_struct *opal_mm;
+extern bool opal_mm_enabled;
+
 static int64_t opal_call(int64_t a0, int64_t a1, int64_t a2, int64_t a3,
 int64_t a4, int64_t a5, int64_t a6, int64_t a7, int64_t opcode)
 {
@@ -117,6 +120,8 @@ static int64_t opal_call(int64_t a0, int64_t a1, int64_t 
a2, int64_t a3,
fn = (opal_v4_le_entry_fn)(opal.v4_le_entry);
 
if (fn) {
+   struct mm_struct *old_mm = current->active_mm;
+
if (!mmu) {
BUG_ON(msr & MSR_EE);
ret = fn(opcode, a0, a1, a2, a3, a4, a5, a6);
@@ -126,11 +131,23 @@ static int64_t opal_call(int64_t a0, int64_t a1, int64_t 
a2, int64_t a3,
local_irq_save(flags);
hard_irq_disable(); /* XXX r13 */
msr &= ~MSR_EE;
-   mtmsr(msr & ~(MSR_IR|MSR_DR));
+   if (!opal_mm_enabled)
+   mtmsr(msr & ~(MSR_IR|MSR_DR));
+
+   if (opal_mm_enabled && old_mm != opal_mm) {
+   current->active_mm = opal_mm;
+   switch_mm_irqs_off(NULL, opal_mm, current);
+   }
 
ret = fn(opcode, a0, a1, a2, a3, a4, a5, a6);
 
-   mtmsr(msr);
+   if (opal_mm_enabled && old_mm != opal_mm) {
+   current->active_mm = old_mm;
+   switch_mm_irqs_off(NULL, old_mm, current);
+   }
+
+   if (!opal_mm_enabled)
+   mtmsr(msr);
local_irq_restore(flags);
 
return ret;
diff --git a/arch/powerpc/platforms/powernv/opal.c 
b/arch/powerpc/platforms/powernv/opal.c
index d00772d40680..98d6d7fc5411 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -45,6 +45,10 @@ struct opal_msg_node {
 static DEFINE_SPINLOCK(msg_list_lock);
 static LIST_HEAD(msg_list);
 
+struct mm_struct *opal_mm __read_mostly;
+bool opal_v4_present __read_mostly;
+bool opal_mm_enabled __read_mostly;
+
 /* /sys/firmware/opal */
 struct kobject *opal_kobj __read_mostly;
 
@@ -172,7 +176,12 @@ int __init early_init_dt_scan_opal(unsigned long node,
 
if (of_flat_dt_is_compatible(node, "ibm,opal-v3")) {
powerpc_firmware_features |= FW_FEATURE_OPAL;
-   pr_debug("OPAL detected !\n");
+   if (of_flat_dt_is_compatible(node, "ibm,opal-v4")) {
+   opal_v4_present = true;
+   pr_debug("OPAL v4 runtime firmware\n");
+   } else {
+   pr_debug("OPAL detected !\n");
+   }
} else {
panic("OPAL v3 compatible firmware not detected, can not 
continue.\n");
}
@@ -187,6 +196,9 @@ int __init early_init_dt_scan_opal(unsigned long node,
 
pr_debug("OPAL v4 Entry = 0x%llx (v4_le_entryp=%p 
v4_le_entrysz=%d)\n",
 opal.v4_le_entry, v4_le_entryp, v4_le_entrysz);
+   } else {
+   /* Can't use v4 entry */
+   opal_v4_present = false;
}
 
return 1;
@@ -1033,6 +1045,111 @@ static void opal_init_heartbeat(void)
kopald_tsk = kthread_run(kopald, NULL, "kopald");
 }
 
+static pgprot_t opal_vm_flags_to_prot(uint64_t flags)
+{
+   pgprot_t prot;
+
+   BUG_ON(!flags);
+   if (flags & OS_VM_FLAG_EXECUTE) {
+   if (flags & OS_VM_FLAG_CI)
+   BUG();
+   if (flags & OS_VM_FLAG_WRITE)
+   prot = PAGE_KERNEL_X;
+   else
+   prot = PAGE_KERNEL_X /* XXX!? PAGE_KERNEL_ROX */;
+   } else {
+   if (flags & OS_VM_FLAG_WRITE)
+   prot = PAGE_KERNEL;
+   else if (flags & OS_VM_FLAG_READ)
+   prot = PAGE_KERNEL_RO;
+   else
+   BUG();
+   if (flags & OS_VM_FLAG_CI)
+   prot = pgprot_noncached(prot);
+   }
+   return prot;
+}
+
+static int __init opal_init_mm(void)
+{
+   struct mm_struct *mm;
+   unsigned long addr;
+   struct

[RFC PATCH 07/10] powerpc/powernv: Add OPAL_FIND_VM_AREA API

2020-05-02 Thread Nicholas Piggin
This will be used in the next patch.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/include/asm/opal-api.h| 15 ++-
 arch/powerpc/include/asm/opal.h|  2 ++
 arch/powerpc/platforms/powernv/opal-call.c |  1 +
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/opal-api.h 
b/arch/powerpc/include/asm/opal-api.h
index 018d4734c323..0be5ff4e51b5 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -217,7 +217,8 @@
 #define OPAL_ADDR_TO_SYM   181
 #define OPAL_SYM_TO_ADDR   182
 #define OPAL_REPORT_TRAP   183
-#define OPAL_LAST  183
+#define OPAL_FIND_VM_AREA  184
+#define OPAL_LAST  184
 
 #define QUIESCE_HOLD   1 /* Spin all calls at entry */
 #define QUIESCE_REJECT 2 /* Fail all calls with OPAL_BUSY */
@@ -1189,6 +1190,18 @@ struct opal_mpipl_fadump {
 #define OPAL_TRAP_WARN 2
 #define OPAL_TRAP_PANIC3
 
+#define OS_VM_FLAG_READ0x1
+#define OS_VM_FLAG_WRITE   0x2
+#define OS_VM_FLAG_EXECUTE 0x4
+#define OS_VM_FLAG_CI  0x8
+
+struct opal_vm_area {
+   __be64  address;
+   __be64  length;
+   __be64  pa;
+   __be64  vm_flags;
+};
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __OPAL_API_H */
diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index dc77c2d5e036..199b5582b700 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -405,6 +405,8 @@ void opal_powercap_init(void);
 void opal_psr_init(void);
 void opal_sensor_groups_init(void);
 
+int64_t opal_find_vm_area(uint64_t addr, struct opal_vm_area *opal_vm_area);
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_OPAL_H */
diff --git a/arch/powerpc/platforms/powernv/opal-call.c 
b/arch/powerpc/platforms/powernv/opal-call.c
index 32857254d268..e62a74dfb3d0 100644
--- a/arch/powerpc/platforms/powernv/opal-call.c
+++ b/arch/powerpc/platforms/powernv/opal-call.c
@@ -332,3 +332,4 @@ OPAL_CALL(opal_secvar_enqueue_update,   
OPAL_SECVAR_ENQUEUE_UPDATE);
 OPAL_CALL(opal_addr_to_sym,OPAL_ADDR_TO_SYM);
 OPAL_CALL(opal_sym_to_addr,OPAL_SYM_TO_ADDR);
 OPAL_CALL(opal_report_trap,OPAL_REPORT_TRAP);
+OPAL_CALL(opal_find_vm_area,   OPAL_FIND_VM_AREA);
-- 
2.23.0



[RFC PATCH 06/10] powerpc/powernv: opal use new opal call entry point if it exists

2020-05-02 Thread Nicholas Piggin
OPAL may advertise a new endian-specific entry point which has different
calling conventions including using the caller's stack, but otherwise
provides the standard OPAL call API without any changes required to
the OS.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/boot/opal.c   |  5 +++
 arch/powerpc/platforms/powernv/opal-call.c | 36 ++
 arch/powerpc/platforms/powernv/opal.c  | 30 +++---
 3 files changed, 60 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/boot/opal.c b/arch/powerpc/boot/opal.c
index b69818ce592b..8b006a0282ac 100644
--- a/arch/powerpc/boot/opal.c
+++ b/arch/powerpc/boot/opal.c
@@ -13,6 +13,7 @@
 struct opal {
u64 base;
u64 entry;
+   u64 v4_le_entry;
 } opal;
 
 static u32 opal_con_id;
@@ -75,6 +76,10 @@ static void opal_init(void)
if (getprop(opal_node, "opal-entry-address", &opal.entry, sizeof(u64)) 
< 0)
return;
opal.entry = be64_to_cpu(opal.entry);
+
+   if (getprop(opal_node, "opal-v4-le-entry-address", &opal.v4_le_entry, 
sizeof(u64)) < 0)
+   return;
+   opal.v4_le_entry = be64_to_cpu(opal.v4_le_entry);
 }
 
 int opal_console_init(void *devp, struct serial_console_data *scdp)
diff --git a/arch/powerpc/platforms/powernv/opal-call.c 
b/arch/powerpc/platforms/powernv/opal-call.c
index 506b1798081a..32857254d268 100644
--- a/arch/powerpc/platforms/powernv/opal-call.c
+++ b/arch/powerpc/platforms/powernv/opal-call.c
@@ -92,6 +92,18 @@ static s64 __opal_call_trace(s64 a0, s64 a1, s64 a2, s64 a3,
 #define DO_TRACE false
 #endif /* CONFIG_TRACEPOINTS */
 
+struct opal {
+   u64 base;
+   u64 entry;
+   u64 size;
+   u64 v4_le_entry;
+};
+extern struct opal opal;
+
+typedef int64_t (*opal_v4_le_entry_fn)(uint64_t r3, uint64_t r4, uint64_t r5,
+   uint64_t r6, uint64_t r7, uint64_t r8,
+   uint64_t r9, uint64_t r10);
+
 static int64_t opal_call(int64_t a0, int64_t a1, int64_t a2, int64_t a3,
 int64_t a4, int64_t a5, int64_t a6, int64_t a7, int64_t opcode)
 {
@@ -99,6 +111,30 @@ static int64_t opal_call(int64_t a0, int64_t a1, int64_t 
a2, int64_t a3,
unsigned long msr = mfmsr();
bool mmu = (msr & (MSR_IR|MSR_DR));
int64_t ret;
+   opal_v4_le_entry_fn fn;
+
+   if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
+   fn = (opal_v4_le_entry_fn)(opal.v4_le_entry);
+
+   if (fn) {
+   if (!mmu) {
+   BUG_ON(msr & MSR_EE);
+   ret = fn(opcode, a0, a1, a2, a3, a4, a5, a6);
+   return ret;
+   }
+
+   local_irq_save(flags);
+   hard_irq_disable(); /* XXX r13 */
+   msr &= ~MSR_EE;
+   mtmsr(msr & ~(MSR_IR|MSR_DR));
+
+   ret = fn(opcode, a0, a1, a2, a3, a4, a5, a6);
+
+   mtmsr(msr);
+   local_irq_restore(flags);
+
+   return ret;
+   }
 
msr &= ~MSR_EE;
 
diff --git a/arch/powerpc/platforms/powernv/opal.c 
b/arch/powerpc/platforms/powernv/opal.c
index a0e9808237b2..d00772d40680 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -46,13 +46,14 @@ static DEFINE_SPINLOCK(msg_list_lock);
 static LIST_HEAD(msg_list);
 
 /* /sys/firmware/opal */
-struct kobject *opal_kobj;
+struct kobject *opal_kobj __read_mostly;
 
 struct opal {
u64 base;
u64 entry;
u64 size;
-} opal;
+   u64 v4_le_entry;
+} opal __read_mostly;
 
 struct mcheck_recoverable_range {
u64 start_addr;
@@ -150,14 +151,15 @@ unsigned long arch_symbol_lookup_name(const char *name)
 int __init early_init_dt_scan_opal(unsigned long node,
   const char *uname, int depth, void *data)
 {
-   const void *basep, *entryp, *sizep;
-   int basesz, entrysz, runtimesz;
+   const void *basep, *entryp, *v4_le_entryp, *sizep;
+   int basesz, entrysz, v4_le_entrysz, runtimesz;
 
if (depth != 1 || strcmp(uname, "ibm,opal") != 0)
return 0;
 
basep  = of_get_flat_dt_prop(node, "opal-base-address", &basesz);
entryp = of_get_flat_dt_prop(node, "opal-entry-address", &entrysz);
+   v4_le_entryp = of_get_flat_dt_prop(node, "opal-v4-le-entry-address", 
&v4_le_entrysz);
sizep = of_get_flat_dt_prop(node, "opal-runtime-size", &runtimesz);
 
if (!basep || !entryp || !sizep)
@@ -166,19 +168,25 @@ int __init early_init_dt_scan_opal(unsigned long node,
opal.base = of_read_number(basep, basesz/4);
opal.entry = of_read_number(entryp, entrysz/4);
opal.size = of_read_number(sizep, runtimesz/4);
+   opal.v4_le_entry = of_read_number(v4_le_entryp, v4_le_entrysz/4);
+
+   if (of_flat_dt_is_compatible(node, "ibm,opal-v3")) {
+   powerpc_firmware_features |= FW_FEATURE_OPAL;
+   pr_debug("OPAL detect

[RFC PATCH 05/10] powerpc/powernv: Don't translate kernel addresses to real addresses for OPAL

2020-05-02 Thread Nicholas Piggin
A random assortment of OPAL callers use __pa() on pointers (others don't).

This is not required because __pa() behaves the same as __va() when
translation is off. In order to run OPAL with translation on,
effective addresses have to be used.
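
As a concrete example of the conversion (taken from the opal-elog hunk
below), the buffer's effective address is now passed directly instead of
its real address:

  /* before: only correct while OPAL runs with translation off */
  rc = opal_read_elog(__pa(elog->buffer), elog->size, elog->id);

  /* after: works whether OPAL runs in real mode or virtual mode */
  rc = opal_read_elog((u64)elog->buffer, elog->size, elog->id);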

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/perf/imc-pmu.c|  4 ++--
 arch/powerpc/platforms/powernv/npu-dma.c   |  2 +-
 arch/powerpc/platforms/powernv/opal-dump.c |  2 +-
 arch/powerpc/platforms/powernv/opal-elog.c |  4 ++--
 arch/powerpc/platforms/powernv/opal-flash.c|  6 +++---
 arch/powerpc/platforms/powernv/opal-hmi.c  |  2 +-
 arch/powerpc/platforms/powernv/opal-nvram.c|  4 ++--
 arch/powerpc/platforms/powernv/opal-powercap.c |  2 +-
 arch/powerpc/platforms/powernv/opal-psr.c  |  2 +-
 arch/powerpc/platforms/powernv/opal-xscom.c|  2 +-
 arch/powerpc/platforms/powernv/opal.c  |  6 +++---
 arch/powerpc/platforms/powernv/pci-ioda.c  |  2 +-
 arch/powerpc/sysdev/xive/native.c  |  2 +-
 drivers/char/powernv-op-panel.c|  3 +--
 drivers/i2c/busses/i2c-opal.c  | 12 ++--
 drivers/mtd/devices/powernv_flash.c|  4 ++--
 16 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index eb82dda884e5..332c7a3398f3 100644
--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -610,7 +610,7 @@ static int core_imc_mem_init(int cpu, int size)
mutex_init(&core_imc_refc[core_id].lock);
 
rc = opal_imc_counters_init(OPAL_IMC_COUNTERS_CORE,
-   __pa((void *)mem_info->vbase),
+   (u64)mem_info->vbase,
get_hard_smp_processor_id(cpu));
if (rc) {
free_pages((u64)mem_info->vbase, get_order(size));
@@ -1209,7 +1209,7 @@ static int trace_imc_mem_alloc(int cpu_id, int size)
per_cpu(trace_imc_mem, cpu_id) = local_mem;
 
/* Initialise the counters for trace mode */
-   rc = opal_imc_counters_init(OPAL_IMC_COUNTERS_TRACE, __pa((void 
*)local_mem),
+   rc = opal_imc_counters_init(OPAL_IMC_COUNTERS_TRACE, 
(u64)local_mem,
get_hard_smp_processor_id(cpu_id));
if (rc) {
pr_info("IMC:opal init failed for trace imc\n");
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c 
b/arch/powerpc/platforms/powernv/npu-dma.c
index b95b9e3c4c98..9d38a30cc27e 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -149,7 +149,7 @@ static long pnv_npu_set_window(struct iommu_table_group 
*table_group, int num,
npe->pe_number,
npe->pe_number,
tbl->it_indirect_levels + 1,
-   __pa(tbl->it_base),
+   __pa(tbl->it_base), /* XXX? */
size << 3,
IOMMU_PAGE_SIZE(tbl));
if (rc) {
diff --git a/arch/powerpc/platforms/powernv/opal-dump.c 
b/arch/powerpc/platforms/powernv/opal-dump.c
index 543c816fa99e..94d5fb716a32 100644
--- a/arch/powerpc/platforms/powernv/opal-dump.c
+++ b/arch/powerpc/platforms/powernv/opal-dump.c
@@ -256,7 +256,7 @@ static int64_t dump_read_data(struct dump_obj *dump)
}
 
/* First entry address */
-   addr = __pa(list);
+   addr = (u64)list;
 
/* Fetch data */
rc = OPAL_BUSY_EVENT;
diff --git a/arch/powerpc/platforms/powernv/opal-elog.c 
b/arch/powerpc/platforms/powernv/opal-elog.c
index 62ef7ad995da..6af5ff892195 100644
--- a/arch/powerpc/platforms/powernv/opal-elog.c
+++ b/arch/powerpc/platforms/powernv/opal-elog.c
@@ -163,7 +163,7 @@ static ssize_t raw_attr_read(struct file *filep, struct 
kobject *kobj,
if (!elog->buffer)
return -EIO;
 
-   opal_rc = opal_read_elog(__pa(elog->buffer),
+   opal_rc = opal_read_elog((u64)elog->buffer,
 elog->size, elog->id);
if (opal_rc != OPAL_SUCCESS) {
pr_err("ELOG: log read failed for log-id=%llx\n",
@@ -206,7 +206,7 @@ static struct elog_obj *create_elog_obj(uint64_t id, size_t 
size, uint64_t type)
elog->buffer = kzalloc(elog->size, GFP_KERNEL);
 
if (elog->buffer) {
-   rc = opal_read_elog(__pa(elog->buffer),
+   rc = opal_read_elog((u64)elog->buffer,
 elog->size, elog->id);
if (rc != OPAL_SUCCESS) {
pr_err("ELOG: log read failed for log-id=%llx\n",
diff --git a/arch/powerpc/platforms/powernv/opal-flash.c 
b/arch/powerpc/platforms/powernv/opal-flash.c
index 7e7d38b17420..46f02279d36a 100644
--- a/arch/powerpc/platforms/powernv/opal-flash.c
+++ b/arch/powerpc/platforms/powernv

[RFC PATCH 04/10] powerpc/powernv: avoid polling in opal_get_chars

2020-05-02 Thread Nicholas Piggin
OPAL console IO should avoid locks and complexity where possible, to
maximise the chance of it working if there are crashes or bugs. This
poll is not necessary; opal_console_read can handle having no input.

In a future patch, Linux will provide a console service to OPAL via the
OPAL console, so it must avoid using any other OPAL calls in this path.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/platforms/powernv/opal.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/opal.c 
b/arch/powerpc/platforms/powernv/opal.c
index 1bf2e0b31ecf..e8eba210a92d 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -467,13 +467,10 @@ static int __init opal_message_init(struct device_node 
*opal_node)
 int opal_get_chars(uint32_t vtermno, char *buf, int count)
 {
s64 rc;
-   __be64 evt, len;
+   __be64 len;
 
if (!opal.entry)
return -ENODEV;
-   opal_poll_events(&evt);
-   if ((be64_to_cpu(evt) & OPAL_EVENT_CONSOLE_INPUT) == 0)
-   return 0;
len = cpu_to_be64(count);
rc = opal_console_read(vtermno, &len, buf);
if (rc == OPAL_SUCCESS)
-- 
2.23.0



[RFC PATCH 03/10] powerpc/powernv: Use OPAL_REPORT_TRAP to cope with trap interrupts from OPAL

2020-05-02 Thread Nicholas Piggin
This isn't used yet, because OPAL is nice enough not to cause unexpected
program check interrupts to the OS. A future patch will allow OPAL to
start using traps. Like so.

  [OPAL] < assert failed at core/opal.c:814 >
  [OPAL] .
  [OPAL]  .
  [OPAL]   .
  [OPAL] OO__)
  [OPAL]<"__/
  [OPAL] ^ ^
   cpu 0x0: Vector: 700 (Program Check) at [c00080287770]
   pc: 3002f360: opal_poll_events+0x54/0x174 [OPAL]
   lr: 3002f344: opal_poll_events+0x38/0x174 [OPAL]
   sp: c00080287a00
  msr: 90021033
 current = 0xc16fa100
 paca= 0xc12c  irqmask: 0x03  irq_happened: 0x01
   pid   = 19, comm = kopald
   Linux version 5.7.0-rc3-00053-g2d9c3c965178-dirty
   enter ? for help
   [c00080287a80] 3002e6b8 opal_v4_le_entry+0x224/0x29c [OPAL]
   [c00080287b50] c0096ce8 opal_call+0x1c8/0x580
   [c00080287c90] c0097448 opal_poll_events+0x28/0x40
   [c00080287d00] c00a26e0 opal_handle_events+0x70/0x140
   [c00080287d50] c009a198 kopald+0x98/0x140
   [c00080287db0] c012139c kthread+0x18c/0x1a0
   [c00080287e20] c000cc28 ret_from_kernel_thread+0x5c/0x74
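
When the trapping address is not a kernel address (i.e. it belongs to
firmware), the handler asks OPAL what kind of trap it was. That part of the
hunk is truncated below, so the recovery action shown here is an
illustrative assumption rather than the patch's literal code:

  /* regs->nip is a firmware address: ask OPAL about its own trap */
  int64_t trap = opal_report_trap(bugaddr);

  if (trap == OPAL_TRAP_WARN) {
          regs->nip += 4;         /* firmware WARN: step over it and continue */
          goto bail;
  }
  /* OPAL_TRAP_FATAL / OPAL_TRAP_PANIC fall through to the die()/panic() path */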

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/include/asm/opal-api.h|  7 +++-
 arch/powerpc/include/asm/opal.h|  2 ++
 arch/powerpc/kernel/traps.c| 39 --
 arch/powerpc/platforms/powernv/opal-call.c |  1 +
 4 files changed, 37 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/opal-api.h 
b/arch/powerpc/include/asm/opal-api.h
index 8eb31b9aeb27..018d4734c323 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -216,7 +216,8 @@
 #define OPAL_SECVAR_ENQUEUE_UPDATE 178
 #define OPAL_ADDR_TO_SYM   181
 #define OPAL_SYM_TO_ADDR   182
-#define OPAL_LAST  182
+#define OPAL_REPORT_TRAP   183
+#define OPAL_LAST  183
 
 #define QUIESCE_HOLD   1 /* Spin all calls at entry */
 #define QUIESCE_REJECT 2 /* Fail all calls with OPAL_BUSY */
@@ -1184,6 +1185,10 @@ struct opal_mpipl_fadump {
struct  opal_mpipl_region region[];
 } __packed;
 
+#define OPAL_TRAP_FATAL1
+#define OPAL_TRAP_WARN 2
+#define OPAL_TRAP_PANIC3
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __OPAL_API_H */
diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index 56b6994aefb7..dc77c2d5e036 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -314,6 +314,8 @@ s64 opal_quiesce(u64 shutdown_type, s32 cpu);
 
 int64_t opal_addr_to_sym(uint64_t addr, __be64 *symaddr, __be64 *symsize, char 
*namebuf, uint64_t buflen);
 int64_t opal_sym_to_addr(const char *name, __be64 *symaddr, __be64 *symsize);
+int64_t opal_report_trap(uint64_t nip);
+
 
 /* Internal functions */
 extern int early_init_dt_scan_opal(unsigned long node, const char *uname,
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 3fca22276bb1..0274ae7b8a03 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -52,6 +52,7 @@
 #endif
 #ifdef CONFIG_PPC64
 #include 
+#include 
 #include 
 #include 
 #endif
@@ -1471,7 +1472,6 @@ void program_check_exception(struct pt_regs *regs)
goto bail;
}
if (reason & REASON_TRAP) {
-   unsigned long bugaddr;
/* Debugger is first in line to stop recursive faults in
 * rcu_lock, notify_die, or atomic_notifier_call_chain */
if (debugger_bpt(regs))
@@ -1485,18 +1485,35 @@ void program_check_exception(struct pt_regs *regs)
== NOTIFY_STOP)
goto bail;
 
-   bugaddr = regs->nip;
-   /*
-* Fixup bugaddr for BUG_ON() in real mode
-*/
-   if (!is_kernel_addr(bugaddr) && !(regs->msr & MSR_IR))
-   bugaddr += PAGE_OFFSET;
+   if (!(regs->msr & MSR_PR)) { /* not user-mode */
+   unsigned long bugaddr;
+   enum bug_trap_type t;
+
+   /*
+* Fixup bugaddr for BUG_ON() in real mode
+*/
+   bugaddr = regs->nip;
+   if (!is_kernel_addr(bugaddr) && !(regs->msr & MSR_IR))
+   bugaddr += PAGE_OFFSET;
+   t = report_bug(bugaddr, regs);
+   if (t == BUG_TRAP_TYPE_WARN) {
+   regs->nip += 4;
+   goto bail;
+   }
+   if (t == BUG_TRAP_TYPE_BUG)
+   goto bug;
 
-   if (!(regs->m

[RFC PATCH 02/10] powerpc/powernv: Wire up OPAL address lookups

2020-05-02 Thread Nicholas Piggin
Use ARCH_HAS_SYMBOL_LOOKUP to look up the opal symbol table. This
allows crashes and xmon debugging to print firmware symbols.

  Oops: System Reset, sig: 6 [#1]
  LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in:
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.0-rc2-dirty #903
  NIP:  30020434 LR: 3000378c CTR: 30020414
  REGS: c000fffc3d70 TRAP: 0100   Not tainted  (5.6.0-rc2-dirty)
  MSR:  92101002   CR: 28022284  XER: 2004
  CFAR: 30003788 IRQMASK: 3
  GPR00: 3000378c 31c13c90 30136200 c12cfa10
  GPR04: c12cfa10 0010  31c10060
  GPR08: c12cfaaf 30003640  0001
  GPR12: 300e c149  c139c588
  GPR16: 31c1 c125a900  c12076a8
  GPR20: c12a3950 0001 31c10060 c12cfaaf
  GPR24: 0019 30003640  
  GPR28: 0010 c12cfa10  
  NIP [30020434] .dummy_console_write_buffer_space+0x20/0x64 [OPAL]
  LR [3000378c] opal_entry+0x14c/0x17c [OPAL]

This won't unwind the firmware stack (or its Linux caller) properly if
firmware and kernel endians don't match, but that problem could be solved
in powerpc's unwinder.
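
The powerpc address-lookup hook is truncated in the hunk below; it amounts
to roughly the following sketch. The "OPAL" module-name string is inferred
from the backtrace above, and the buffer length and firmware-feature check
are assumptions here:

  const char *arch_symbol_lookup_address(unsigned long addr,
                                         unsigned long *symbolsize,
                                         unsigned long *offset,
                                         char **modname, char *namebuf)
  {
          __be64 symaddr, symsize;

          if (!firmware_has_feature(FW_FEATURE_OPAL))
                  return NULL;

          if (opal_addr_to_sym(addr, &symaddr, &symsize, namebuf,
                               KSYM_NAME_LEN) != OPAL_SUCCESS)
                  return NULL;

          *symbolsize = be64_to_cpu(symsize);
          *offset = addr - be64_to_cpu(symaddr);
          *modname = "OPAL";      /* prints as the [OPAL] suffix in backtraces */
          return namebuf;
  }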

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/Kconfig   |  1 +
 arch/powerpc/include/asm/opal-api.h|  4 ++-
 arch/powerpc/include/asm/opal.h|  3 ++
 arch/powerpc/platforms/powernv/opal-call.c |  2 ++
 arch/powerpc/platforms/powernv/opal.c  | 40 ++
 5 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 924c541a9260..0be717291e38 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -115,6 +115,7 @@ config PPC
# Please keep this list sorted alphabetically.
#
select ARCH_32BIT_OFF_T if PPC32
+   select ARCH_HAS_SYMBOL_LOOKUP   if PPC_POWERNV
select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ELF_RANDOMIZE
diff --git a/arch/powerpc/include/asm/opal-api.h 
b/arch/powerpc/include/asm/opal-api.h
index 1dffa3cb16ba..8eb31b9aeb27 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -214,7 +214,9 @@
 #define OPAL_SECVAR_GET176
 #define OPAL_SECVAR_GET_NEXT   177
 #define OPAL_SECVAR_ENQUEUE_UPDATE 178
-#define OPAL_LAST  178
+#define OPAL_ADDR_TO_SYM   181
+#define OPAL_SYM_TO_ADDR   182
+#define OPAL_LAST  182
 
 #define QUIESCE_HOLD   1 /* Spin all calls at entry */
 #define QUIESCE_REJECT 2 /* Fail all calls with OPAL_BUSY */
diff --git a/arch/powerpc/include/asm/opal.h b/arch/powerpc/include/asm/opal.h
index 9986ac34b8e2..56b6994aefb7 100644
--- a/arch/powerpc/include/asm/opal.h
+++ b/arch/powerpc/include/asm/opal.h
@@ -312,6 +312,9 @@ s64 opal_mpipl_query_tag(enum opal_mpipl_tags tag, u64 
*addr);
 s64 opal_signal_system_reset(s32 cpu);
 s64 opal_quiesce(u64 shutdown_type, s32 cpu);
 
+int64_t opal_addr_to_sym(uint64_t addr, __be64 *symaddr, __be64 *symsize, char 
*namebuf, uint64_t buflen);
+int64_t opal_sym_to_addr(const char *name, __be64 *symaddr, __be64 *symsize);
+
 /* Internal functions */
 extern int early_init_dt_scan_opal(unsigned long node, const char *uname,
   int depth, void *data);
diff --git a/arch/powerpc/platforms/powernv/opal-call.c 
b/arch/powerpc/platforms/powernv/opal-call.c
index 5cd0f52d258f..2233a58924cb 100644
--- a/arch/powerpc/platforms/powernv/opal-call.c
+++ b/arch/powerpc/platforms/powernv/opal-call.c
@@ -293,3 +293,5 @@ OPAL_CALL(opal_mpipl_query_tag, 
OPAL_MPIPL_QUERY_TAG);
 OPAL_CALL(opal_secvar_get, OPAL_SECVAR_GET);
 OPAL_CALL(opal_secvar_get_next,OPAL_SECVAR_GET_NEXT);
 OPAL_CALL(opal_secvar_enqueue_update,  OPAL_SECVAR_ENQUEUE_UPDATE);
+OPAL_CALL(opal_addr_to_sym,OPAL_ADDR_TO_SYM);
+OPAL_CALL(opal_sym_to_addr,OPAL_SYM_TO_ADDR);
diff --git a/arch/powerpc/platforms/powernv/opal.c 
b/arch/powerpc/platforms/powernv/opal.c
index 2b3dfd0b6cdd..1bf2e0b31ecf 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -107,6 +107,46 @@ void opal_configure_cores(void)
cur_cpu_spec->cpu_restore();
 }
 
+const char *arch_symbol_lookup_address(unsigned long addr,
+   unsigned long *symbolsize,
+   unsigned long *offset,
+   char **modname, char *namebuf

[RFC PATCH 00/10] OPAL V4

2020-05-02 Thread Nicholas Piggin
"OPAL V4" is a proposed new approach to running and calling PowerNV
OPAL firmware.

OPAL calls use the caller's (kernel) stack, which vastly simplifies
re-entrancy concerns around doing things like idle and machine check
OPAL drivers.

The OS can get at symbol and assert metadata to help with debugging
firmware.

OPAL may be called in (and will run in) virtual mode, in its own address
space.

And the operating system provides some services to the firmware, such
as message logging.

This is fairly close to the point where we could run OPAL in user mode,
with a few services for privileged instructions (scv could be used to
call back to the OS). We may yet do this, but one thing that has stopped
me is that it would require a slower API. As it is now, with LE skiboot
and LE Linux, the OPAL call is basically a shared-library function call,
which is fast enough that a performant CPU idle driver is feasible, and
that is a significant motivation.

Anyway, this is up and running and coming together pretty well; it just
needs a bit of polishing and more documentation. I'll post the skiboot
patches on the skiboot list.
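
To make the "shared-library function call" point concrete, the core of a V4
call on an LE kernel is roughly the following (real-mode handling, interrupt
masking and the mm switch from patches 6 and 8 are omitted):

  typedef int64_t (*opal_v4_le_entry_fn)(uint64_t r3, uint64_t r4, uint64_t r5,
                                         uint64_t r6, uint64_t r7, uint64_t r8,
                                         uint64_t r9, uint64_t r10);

  /* the opcode goes in r3, arguments follow; it runs on the caller's stack */
  opal_v4_le_entry_fn fn = (opal_v4_le_entry_fn)opal.v4_le_entry;
  int64_t ret = fn(opcode, a0, a1, a2, a3, a4, a5, a6);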

Nicholas Piggin (10):
  kallsyms: architecture specific symbol lookups
  powerpc/powernv: Wire up OPAL address lookups
  powerpc/powernv: Use OPAL_REPORT_TRAP to cope with trap interrupts
from OPAL
  powerpc/powernv: avoid polling in opal_get_chars
  powerpc/powernv: Don't translate kernel addresses to real addresses
for OPAL
  powerpc/powernv: opal use new opal call entry point if it exists
  powerpc/powernv: Add OPAL_FIND_VM_AREA API
  powerpc/powernv: Set up an mm context to call OPAL in
  powerpc/powernv: OPAL V4 OS services
  powerpc/powernv: OPAL V4 Implement vm_map/unmap service

 arch/powerpc/Kconfig  |   1 +
 arch/powerpc/boot/opal.c  |   5 +
 arch/powerpc/include/asm/opal-api.h   |  29 +-
 arch/powerpc/include/asm/opal.h   |   8 +
 arch/powerpc/kernel/traps.c   |  39 ++-
 arch/powerpc/perf/imc-pmu.c   |   4 +-
 arch/powerpc/platforms/powernv/npu-dma.c  |   2 +-
 arch/powerpc/platforms/powernv/opal-call.c|  58 
 arch/powerpc/platforms/powernv/opal-dump.c|   2 +-
 arch/powerpc/platforms/powernv/opal-elog.c|   4 +-
 arch/powerpc/platforms/powernv/opal-flash.c   |   6 +-
 arch/powerpc/platforms/powernv/opal-hmi.c |   2 +-
 arch/powerpc/platforms/powernv/opal-nvram.c   |   4 +-
 .../powerpc/platforms/powernv/opal-powercap.c |   2 +-
 arch/powerpc/platforms/powernv/opal-psr.c |   2 +-
 arch/powerpc/platforms/powernv/opal-xscom.c   |   2 +-
 arch/powerpc/platforms/powernv/opal.c | 289 --
 arch/powerpc/platforms/powernv/pci-ioda.c |   2 +-
 arch/powerpc/sysdev/xive/native.c |   2 +-
 drivers/char/powernv-op-panel.c   |   3 +-
 drivers/i2c/busses/i2c-opal.c |  12 +-
 drivers/mtd/devices/powernv_flash.c   |   4 +-
 include/linux/kallsyms.h  |  20 ++
 kernel/kallsyms.c |  13 +-
 lib/Kconfig   |   3 +
 25 files changed, 461 insertions(+), 57 deletions(-)

-- 
2.23.0



[RFC PATCH 01/10] kallsyms: architecture specific symbol lookups

2020-05-02 Thread Nicholas Piggin
Provide CONFIG_ARCH_HAS_SYMBOL_LOOKUP which allows architectures to
do their own symbol/address lookup if kernel and module lookups miss.

powerpc will use this to deal with firmware symbols.
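
The contract for both hooks is to fall through quietly: return NULL (address
lookup) or 0 (name lookup) when the architecture has nothing to add, so
existing behaviour is unchanged. A minimal sketch of the name hook, where
firmware_symtab_lookup() is a placeholder for whatever table the
architecture consults:

  unsigned long arch_symbol_lookup_name(const char *name)
  {
          unsigned long addr;

          if (firmware_symtab_lookup(name, &addr))        /* placeholder */
                  return addr;

          return 0;       /* unknown to the arch: keep the old behaviour */
  }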

Signed-off-by: Nicholas Piggin 
---
 include/linux/kallsyms.h | 20 
 kernel/kallsyms.c| 13 -
 lib/Kconfig  |  3 +++
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/linux/kallsyms.h b/include/linux/kallsyms.h
index 657a83b943f0..e17c1e7c01c0 100644
--- a/include/linux/kallsyms.h
+++ b/include/linux/kallsyms.h
@@ -83,6 +83,26 @@ extern int kallsyms_lookup_size_offset(unsigned long addr,
  unsigned long *symbolsize,
  unsigned long *offset);
 
+#ifdef CONFIG_ARCH_HAS_SYMBOL_LOOKUP
+const char *arch_symbol_lookup_address(unsigned long addr,
+   unsigned long *symbolsize,
+   unsigned long *offset,
+   char **modname, char *namebuf);
+unsigned long arch_symbol_lookup_name(const char *name);
+#else
+static inline const char *arch_symbol_lookup_address(unsigned long addr,
+   unsigned long *symbolsize,
+   unsigned long *offset,
+   char **modname, char *namebuf)
+{
+   return NULL;
+}
+static inline unsigned long arch_symbol_lookup_name(const char *name)
+{
+   return 0;
+}
+#endif
+
 /* Lookup an address.  modname is set to NULL if it's in the kernel. */
 const char *kallsyms_lookup(unsigned long addr,
unsigned long *symbolsize,
diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
index 16c8c605f4b0..1e403e616126 100644
--- a/kernel/kallsyms.c
+++ b/kernel/kallsyms.c
@@ -164,6 +164,7 @@ static unsigned long kallsyms_sym_address(int idx)
 unsigned long kallsyms_lookup_name(const char *name)
 {
char namebuf[KSYM_NAME_LEN];
+   unsigned long ret;
unsigned long i;
unsigned int off;
 
@@ -173,7 +174,12 @@ unsigned long kallsyms_lookup_name(const char *name)
if (strcmp(namebuf, name) == 0)
return kallsyms_sym_address(i);
}
-   return module_kallsyms_lookup_name(name);
+
+   ret = module_kallsyms_lookup_name(name);
+   if (ret)
+   return ret;
+
+   return arch_symbol_lookup_name(name);
 }
 
 int kallsyms_on_each_symbol(int (*fn)(void *, const char *, struct module *,
@@ -309,6 +315,11 @@ const char *kallsyms_lookup(unsigned long addr,
if (!ret)
ret = ftrace_mod_address_lookup(addr, symbolsize,
offset, modname, namebuf);
+
+   if (!ret)
+   ret = arch_symbol_lookup_address(addr, symbolsize,
+   offset, modname, namebuf);
+
return ret;
 }
 
diff --git a/lib/Kconfig b/lib/Kconfig
index 5d53f9609c25..9f86f649a712 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -80,6 +80,9 @@ config ARCH_USE_CMPXCHG_LOCKREF
 config ARCH_HAS_FAST_MULTIPLIER
bool
 
+config ARCH_HAS_SYMBOL_LOOKUP
+   bool
+
 config INDIRECT_PIO
bool "Access I/O in non-MMIO mode"
depends on ARM64
-- 
2.23.0



[PATCH v2 12/12] powerpc/book3s64/pkeys: Mark all the pkeys above max pkey as reserved

2020-05-02 Thread Aneesh Kumar K.V
The hypervisor can return fewer keys than the maximum allowed (for example
31 instead of 32). Mark all the keys above the maximum allowed as reserved
so that userspace cannot allocate a key the hypervisor never granted (key 31
in the above example).
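
As a worked example, assume the hypervisor reports 31 keys while the
UAMOR/AMR layout covers 32. With this change the loop sweeps key 31 into
the reserved set:

  for (i = 31; i < 32; i++) {                             /* max_pkey == 31 */
          reserved_allocation_mask |= (0x1 << i);         /* sets bit 31 */
          default_uamor &= ~(0x3ul << pkeyshift(i));      /* pkeyshift(31) == 0 */
  }

Previously the loop stopped at pkeys_total (also 31), so key 31 stayed
allocatable even though the hypervisor never granted it.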

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/book3s64/pkeys.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 73b5ef1490c8..0ff59acdbb84 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -175,9 +175,10 @@ static int pkey_initialize(void)
 
/*
 * Prevent the usage of OS reserved keys. Update UAMOR
-* for those keys.
+* for those keys. Also mark the rest of the bits in the
+* 32 bit mask as reserved.
 */
-   for (i = max_pkey; i < pkeys_total; i++) {
+   for (i = max_pkey; i < 32 ; i++) {
reserved_allocation_mask |= (0x1 << i);
default_uamor &= ~(0x3ul << pkeyshift(i));
}
-- 
2.26.2



[PATCH v2 11/12] powerpc/book3s64/pkeys: Make initial_allocation_mask static

2020-05-02 Thread Aneesh Kumar K.V
initial_allocation_mask is not used outside this file.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/pkeys.h | 1 -
 arch/powerpc/mm/book3s64/pkeys.c | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 652bad7334f3..47c81d41ea9a 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -13,7 +13,6 @@
 
 DECLARE_STATIC_KEY_FALSE(pkey_disabled);
 extern int max_pkey;
-extern u32 initial_allocation_mask; /*  bits set for the initially allocated 
keys */
 extern u32 reserved_allocation_mask; /* bits set for reserved keys */
 
 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index a4d7287082a8..73b5ef1490c8 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -15,11 +15,11 @@
 DEFINE_STATIC_KEY_FALSE(pkey_disabled);
 DEFINE_STATIC_KEY_FALSE(execute_pkey_disabled);
 int  max_pkey; /* Maximum key value supported */
-u32  initial_allocation_mask;   /* Bits set for the initially allocated keys */
 /*
  *  Keys marked in the reservation list cannot be allocated by  userspace
  */
 u32  reserved_allocation_mask;
+static u32  initial_allocation_mask;   /* Bits set for the initially allocated 
keys */
 static u64 default_amr;
 static u64 default_iamr;
 /* Allow all keys to be modified by default */
-- 
2.26.2



[PATCH v2 10/12] powerpc/book3s64/pkeys: Convert pkey_total to max_pkey

2020-05-02 Thread Aneesh Kumar K.V
max_pkey now represents the maximum key value that userspace can allocate.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/pkeys.h |  7 +--
 arch/powerpc/mm/book3s64/pkeys.c | 14 +++---
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 75d2a2c19c04..652bad7334f3 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -12,7 +12,7 @@
 #include 
 
 DECLARE_STATIC_KEY_FALSE(pkey_disabled);
-extern int pkeys_total; /* total pkeys as per device tree */
+extern int max_pkey;
 extern u32 initial_allocation_mask; /*  bits set for the initially allocated 
keys */
 extern u32 reserved_allocation_mask; /* bits set for reserved keys */
 
@@ -44,7 +44,10 @@ static inline int vma_pkey(struct vm_area_struct *vma)
return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
 }
 
-#define arch_max_pkey() pkeys_total
+static inline int arch_max_pkey(void)
+{
+   return max_pkey;
+}
 
 #define pkey_alloc_mask(pkey) (0x1 << pkey)
 
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 87d882a9aaf2..a4d7287082a8 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -14,7 +14,7 @@
 
 DEFINE_STATIC_KEY_FALSE(pkey_disabled);
 DEFINE_STATIC_KEY_FALSE(execute_pkey_disabled);
-int  pkeys_total;  /* Total pkeys as per device tree */
+int  max_pkey; /* Maximum key value supported */
 u32  initial_allocation_mask;   /* Bits set for the initially allocated keys */
 /*
  *  Keys marked in the reservation list cannot be allocated by  userspace
@@ -84,7 +84,7 @@ static int scan_pkey_feature(void)
 
 static int pkey_initialize(void)
 {
-   int os_reserved, i;
+   int pkeys_total, i;
 
/*
 * We define PKEY_DISABLE_EXECUTE in addition to the arch-neutral
@@ -122,12 +122,12 @@ static int pkey_initialize(void)
 * The OS can manage only 8 pkeys due to its inability to represent them
 * in the Linux 4K PTE. Mark all other keys reserved.
 */
-   os_reserved = pkeys_total - 8;
+   max_pkey = min(8, pkeys_total);
 #else
-   os_reserved = 0;
+   max_pkey = pkeys_total;
 #endif
 
-   if (unlikely((pkeys_total - os_reserved) <= execute_only_key)) {
+   if (unlikely(max_pkey <= execute_only_key)) {
/*
 * Insufficient number of keys to support
 * execute only key. Mark it unavailable.
@@ -174,10 +174,10 @@ static int pkey_initialize(void)
default_uamor &= ~(0x3ul << pkeyshift(1));
 
/*
-* Prevent the usage of OS reserved the keys. Update UAMOR
+* Prevent the usage of OS reserved keys. Update UAMOR
 * for those keys.
 */
-   for (i = (pkeys_total - os_reserved); i < pkeys_total; i++) {
+   for (i = max_pkey; i < pkeys_total; i++) {
reserved_allocation_mask |= (0x1 << i);
default_uamor &= ~(0x3ul << pkeyshift(i));
}
-- 
2.26.2



[PATCH v2 09/12] powerpc/book3s64/pkeys: Simplify pkey disable branch

2020-05-02 Thread Aneesh Kumar K.V
Make the default value FALSE (pkeys enabled) and set it to TRUE when we
find that the total number of keys supported is zero.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/pkeys.h | 2 +-
 arch/powerpc/mm/book3s64/pkeys.c | 7 +++
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 5dd0a79d1809..75d2a2c19c04 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -11,7 +11,7 @@
 #include 
 #include 
 
-DECLARE_STATIC_KEY_TRUE(pkey_disabled);
+DECLARE_STATIC_KEY_FALSE(pkey_disabled);
 extern int pkeys_total; /* total pkeys as per device tree */
 extern u32 initial_allocation_mask; /*  bits set for the initially allocated 
keys */
 extern u32 reserved_allocation_mask; /* bits set for reserved keys */
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 7d400d5a4076..87d882a9aaf2 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -12,7 +12,7 @@
 #include 
 #include 
 
-DEFINE_STATIC_KEY_TRUE(pkey_disabled);
+DEFINE_STATIC_KEY_FALSE(pkey_disabled);
 DEFINE_STATIC_KEY_FALSE(execute_pkey_disabled);
 int  pkeys_total;  /* Total pkeys as per device tree */
 u32  initial_allocation_mask;   /* Bits set for the initially allocated keys */
@@ -104,9 +104,8 @@ static int pkey_initialize(void)
 
/* scan the device tree for pkey feature */
pkeys_total = scan_pkey_feature();
-   if (pkeys_total)
-   static_branch_disable(&pkey_disabled);
-   else {
+   if (!pkeys_total) {
+   /* No support for pkey. Mark it disabled */
static_branch_enable(&pkey_disabled);
return 0;
}
-- 
2.26.2



[PATCH v2 08/12] powerpc/book3s64/pkeys: Convert execute key support to static key

2020-05-02 Thread Aneesh Kumar K.V
Convert the bool to a static key like pkey_disabled.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/book3s64/pkeys.c | 12 +---
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 9e68a08799ee..7d400d5a4076 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -13,13 +13,13 @@
 #include 
 
 DEFINE_STATIC_KEY_TRUE(pkey_disabled);
+DEFINE_STATIC_KEY_FALSE(execute_pkey_disabled);
 int  pkeys_total;  /* Total pkeys as per device tree */
 u32  initial_allocation_mask;   /* Bits set for the initially allocated keys */
 /*
  *  Keys marked in the reservation list cannot be allocated by  userspace
  */
 u32  reserved_allocation_mask;
-static bool pkey_execute_disable_supported;
 static u64 default_amr;
 static u64 default_iamr;
 /* Allow all keys to be modified by default */
@@ -116,9 +116,7 @@ static int pkey_initialize(void)
 * execute_disable support. Instead we use a PVR check.
 */
if (pvr_version_is(PVR_POWER7) || pvr_version_is(PVR_POWER7p))
-   pkey_execute_disable_supported = false;
-   else
-   pkey_execute_disable_supported = true;
+   static_branch_enable(&execute_pkey_disabled);
 
 #ifdef CONFIG_PPC_4K_PAGES
/*
@@ -214,7 +212,7 @@ static inline void write_amr(u64 value)
 
 static inline u64 read_iamr(void)
 {
-   if (!likely(pkey_execute_disable_supported))
+   if (static_branch_unlikely(&execute_pkey_disabled))
return 0x0UL;
 
return mfspr(SPRN_IAMR);
@@ -222,7 +220,7 @@ static inline u64 read_iamr(void)
 
 static inline void write_iamr(u64 value)
 {
-   if (!likely(pkey_execute_disable_supported))
+   if (static_branch_unlikely(&execute_pkey_disabled))
return;
 
mtspr(SPRN_IAMR, value);
@@ -282,7 +280,7 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, 
int pkey,
return -EINVAL;
 
if (init_val & PKEY_DISABLE_EXECUTE) {
-   if (!pkey_execute_disable_supported)
+   if (static_branch_unlikely(&execute_pkey_disabled))
return -EINVAL;
new_iamr_bits |= IAMR_EX_BIT;
}
-- 
2.26.2



[PATCH v2 07/12] powerpc/book3s64/pkeys: kill cpu feature key CPU_FTR_PKEY

2020-05-02 Thread Aneesh Kumar K.V
We don't use CPU_FTR_PKEY anymore. Remove the feature bit and mark it
free.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/cputable.h | 10 +-
 arch/powerpc/kernel/dt_cpu_ftrs.c   |  6 --
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/cputable.h 
b/arch/powerpc/include/asm/cputable.h
index 40a4d3c6fd99..b77f8258ee8c 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -198,7 +198,7 @@ static inline void cpu_feature_keys_init(void) { }
 #define CPU_FTR_STCX_CHECKS_ADDRESSLONG_ASM_CONST(0x8000)
 #define CPU_FTR_POPCNTB
LONG_ASM_CONST(0x0001)
 #define CPU_FTR_POPCNTD
LONG_ASM_CONST(0x0002)
-#define CPU_FTR_PKEY   LONG_ASM_CONST(0x0004)
+/* LONG_ASM_CONST(0x0004) Free */
 #define CPU_FTR_VMX_COPY   LONG_ASM_CONST(0x0008)
 #define CPU_FTR_TM LONG_ASM_CONST(0x0010)
 #define CPU_FTR_CFAR   LONG_ASM_CONST(0x0020)
@@ -437,7 +437,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_DSCR | CPU_FTR_SAO  | CPU_FTR_ASYM_SMT | \
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | \
-   CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX | CPU_FTR_PKEY)
+   CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX )
 #define CPU_FTRS_POWER8 (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
CPU_FTR_MMCRA | CPU_FTR_SMT | \
@@ -447,7 +447,7 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
-   CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_PKEY)
+   CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP )
 #define CPU_FTRS_POWER8E (CPU_FTRS_POWER8 | CPU_FTR_PMAO_BUG)
 #define CPU_FTRS_POWER9 (CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\
@@ -458,8 +458,8 @@ static inline void cpu_feature_keys_init(void) { }
CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \
CPU_FTR_CFAR | CPU_FTR_HVMODE | CPU_FTR_VMX_COPY | \
CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_ARCH_207S | \
-   CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | CPU_FTR_PKEY | \
-   CPU_FTR_P9_TLBIE_STQ_BUG | CPU_FTR_P9_TLBIE_ERAT_BUG | 
CPU_FTR_P9_TIDR)
+   CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | CPU_FTR_P9_TLBIE_STQ_BUG | \
+   CPU_FTR_P9_TLBIE_ERAT_BUG | CPU_FTR_P9_TIDR)
 #define CPU_FTRS_POWER9_DD2_0 (CPU_FTRS_POWER9 | CPU_FTR_P9_RADIX_PREFETCH_BUG)
 #define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | \
   CPU_FTR_P9_RADIX_PREFETCH_BUG | \
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c 
b/arch/powerpc/kernel/dt_cpu_ftrs.c
index 36bc0d5c4f3a..120ea339ffda 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -747,12 +747,6 @@ static __init void cpufeatures_cpu_quirks(void)
}
 
update_tlbie_feature_flag(version);
-   /*
-* PKEY was not in the initial base or feature node
-* specification, but it should become optional in the next
-* cpu feature version sequence.
-*/
-   cur_cpu_spec->cpu_features |= CPU_FTR_PKEY;
 }
 
 static void __init cpufeatures_setup_finished(void)
-- 
2.26.2



[PATCH v2 06/12] powerpc/book3s64/pkeys: Prevent key 1 modification from userspace.

2020-05-02 Thread Aneesh Kumar K.V
Key 1 is marked reserved by the ISA. Set up UAMOR to prevent userspace
modification of it.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/book3s64/pkeys.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 3db0b3cfc322..9e68a08799ee 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -174,6 +174,7 @@ static int pkey_initialize(void)
 * programming note.
 */
reserved_allocation_mask |= (0x1 << 1);
+   default_uamor &= ~(0x3ul << pkeyshift(1));
 
/*
 * Prevent the usage of OS reserved the keys. Update UAMOR
-- 
2.26.2



[PATCH v2 05/12] powerpc/book3s64/pkeys: Simplify the key initialization

2020-05-02 Thread Aneesh Kumar K.V
Add documentation explaining the execute_only_key. The reservation and 
initialization mask
details are also explained in this patch.

No functional change in this patch.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/book3s64/pkeys.c | 186 ++-
 1 file changed, 107 insertions(+), 79 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index d60e6bfa3e03..3db0b3cfc322 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -15,48 +15,71 @@
 DEFINE_STATIC_KEY_TRUE(pkey_disabled);
 int  pkeys_total;  /* Total pkeys as per device tree */
 u32  initial_allocation_mask;   /* Bits set for the initially allocated keys */
-u32  reserved_allocation_mask;  /* Bits set for reserved keys */
+/*
+ *  Keys marked in the reservation list cannot be allocated by  userspace
+ */
+u32  reserved_allocation_mask;
 static bool pkey_execute_disable_supported;
-static bool pkeys_devtree_defined; /* property exported by device tree */
-static u64 pkey_amr_mask;  /* Bits in AMR not to be touched */
-static u64 pkey_iamr_mask; /* Bits in AMR not to be touched */
-static u64 pkey_uamor_mask;/* Bits in UMOR not to be touched */
+static u64 default_amr;
+static u64 default_iamr;
+/* Allow all keys to be modified by default */
+static u64 default_uamor = ~0x0UL;
+/*
+ * Key used to implement PROT_EXEC mmap. Denies READ/WRITE
+ * We pick key 2 because 0 is special key and 1 is reserved as per ISA.
+ */
 static int execute_only_key = 2;
 
+
 #define AMR_BITS_PER_PKEY 2
 #define AMR_RD_BIT 0x1UL
 #define AMR_WR_BIT 0x2UL
 #define IAMR_EX_BIT 0x1UL
-#define PKEY_REG_BITS (sizeof(u64)*8)
+#define PKEY_REG_BITS (sizeof(u64) * 8)
 #define pkeyshift(pkey) (PKEY_REG_BITS - ((pkey+1) * AMR_BITS_PER_PKEY))
 
-static void scan_pkey_feature(void)
+static int scan_pkey_feature(void)
 {
u32 vals[2];
+   int pkeys_total = 0;
struct device_node *cpu;
 
+   /*
+* Pkey is not supported with Radix translation.
+*/
+   if (radix_enabled())
+   return 0;
+
cpu = of_find_node_by_type(NULL, "cpu");
if (!cpu)
-   return;
+   return 0;
 
if (of_property_read_u32_array(cpu,
-   "ibm,processor-storage-keys", vals, 2))
-   return;
+  "ibm,processor-storage-keys", vals, 2) 
== 0) {
+   /*
+* Since any pkey can be used for data or execute, we will
+* just treat all keys as equal and track them as one entity.
+*/
+   pkeys_total = vals[0];
+   /*  Should we check for IAMR support FIXME!! */
+   } else {
+   /*
+* Let's assume 32 pkeys on P8 bare metal, if its not defined 
by device
+* tree. We make this exception since skiboot forgot to expose 
this
+* property on power8.
+*/
+   if (!firmware_has_feature(FW_FEATURE_LPAR) &&
+   cpu_has_feature(CPU_FTRS_POWER8))
+   pkeys_total = 32;
+   }
 
/*
-* Since any pkey can be used for data or execute, we will just treat
-* all keys as equal and track them as one entity.
+* Adjust the upper limit, based on the number of bits supported by
+* arch-neutral code.
 */
-   pkeys_total = vals[0];
-   pkeys_devtree_defined = true;
-}
-
-static inline bool pkey_mmu_enabled(void)
-{
-   if (firmware_has_feature(FW_FEATURE_LPAR))
-   return pkeys_total;
-   else
-   return cpu_has_feature(CPU_FTR_PKEY);
+   pkeys_total = min_t(int, pkeys_total,
+   ((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) + 1));
+   return pkeys_total;
 }
 
 static int pkey_initialize(void)
@@ -80,31 +103,13 @@ static int pkey_initialize(void)
!= (sizeof(u64) * BITS_PER_BYTE));
 
/* scan the device tree for pkey feature */
-   scan_pkey_feature();
-
-   /*
-* Let's assume 32 pkeys on P8 bare metal, if its not defined by device
-* tree. We make this exception since skiboot forgot to expose this
-* property on power8.
-*/
-   if (!pkeys_devtree_defined && !firmware_has_feature(FW_FEATURE_LPAR) &&
-   cpu_has_feature(CPU_FTRS_POWER8))
-   pkeys_total = 32;
-
-   /*
-* Adjust the upper limit, based on the number of bits supported by
-* arch-neutral code.
-*/
-   pkeys_total = min_t(int, pkeys_total,
-   ((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT)+1));
-
-   if (!pkey_mmu_enabled() || radix_enabled() || !pkeys_total)
-   static_branch_enable(&pkey_disabled);
-   else
+   pkeys_total = scan_pkey_feature();
+   if (pkeys_total)
   

[PATCH v2 04/12] powerpc/book3s64/pkeys: Explain key 1 reservation details

2020-05-02 Thread Aneesh Kumar K.V
This explains the reservation details for key 1.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/book3s64/pkeys.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 1199fc2bfaec..d60e6bfa3e03 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -124,7 +124,10 @@ static int pkey_initialize(void)
 #else
os_reserved = 0;
 #endif
-   /* Bits are in LE format. */
+   /*
+* key 1 is recommended not to be used. PowerISA(3.0) page 1015,
+* programming note.
+*/
reserved_allocation_mask = (0x1 << 1) | (0x1 << execute_only_key);
 
/* register mask is in BE format */
-- 
2.26.2



[PATCH v2 03/12] powerpc/book3s64/pkeys: Move pkey related bits in the linux page table

2020-05-02 Thread Aneesh Kumar K.V
To keep things simple, all the pkey-related bits are kept together in the
Linux page table for the 64K config with hash translation. With hash-4k,
the kernel requires 4 bits to store the slot details. This is done by
overloading some of the RPN bits for storing the slot details. Because of
this, PKEY_BIT0 on the 4K config is used for storing hash slot details.

64K before

||RSV1| RSV2| RSV3 | RSV4 | RPN44| RPN43   | | RSV5|
|| P4 |  P3 |  P2  |  P1  | Busy | HASHPTE | |  P0 |

after

||RSV1| RSV2| RSV3 | RSV4 | RPN44 | RPN43   | | RSV5 |
|| P4 |  P3 |  P2  |  P1  | P0| HASHPTE | | Busy |

4k before

|| RSV1 | RSV2 | RSV3 | RSV4 | RPN44| RPN43 | RSV5|
|| Busy |  HASHPTE |  P2  |  P1  | F_SEC| F_GIX |  P0 |

after

|| RSV1| RSV2| RSV3 | RSV4 | Free | RPN43 | RSV5 |
|| HASHPTE |  P2 |  P1  |  P0  | F_SEC| F_GIX | BUSY |

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/hash-4k.h  | 16 
 arch/powerpc/include/asm/book3s/64/hash-64k.h | 12 ++--
 arch/powerpc/include/asm/book3s/64/pgtable.h  | 17 -
 3 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h 
b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index f889d56bf8cf..082b98808701 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -34,11 +34,11 @@
 #define H_PUD_TABLE_SIZE   (sizeof(pud_t) << H_PUD_INDEX_SIZE)
 #define H_PGD_TABLE_SIZE   (sizeof(pgd_t) << H_PGD_INDEX_SIZE)
 
-#define H_PAGE_F_GIX_SHIFT 53
-#define H_PAGE_F_SECOND_RPAGE_RPN44/* HPTE is in 2ndary HPTEG */
-#define H_PAGE_F_GIX   (_RPAGE_RPN43 | _RPAGE_RPN42 | _RPAGE_RPN41)
-#define H_PAGE_BUSY_RPAGE_RSV1 /* software: PTE & hash are busy */
-#define H_PAGE_HASHPTE _RPAGE_RSV2 /* software: PTE & hash are busy */
+#define H_PAGE_F_GIX_SHIFT _PAGE_PA_MAX
+#define H_PAGE_F_SECOND_RPAGE_PKEY_BIT0 /* HPTE is in 2ndary 
HPTEG */
+#define H_PAGE_F_GIX   (_RPAGE_RPN43 | _RPAGE_RPN42 | _RPAGE_RPN41)
+#define H_PAGE_BUSY_RPAGE_RSV1
+#define H_PAGE_HASHPTE _RPAGE_PKEY_BIT4
 
 /* PTE flags to conserve for HPTE identification */
 #define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | \
@@ -59,9 +59,9 @@
 /* memory key bits, only 8 keys supported */
 #define H_PTE_PKEY_BIT40
 #define H_PTE_PKEY_BIT30
-#define H_PTE_PKEY_BIT2_RPAGE_RSV3
-#define H_PTE_PKEY_BIT1_RPAGE_RSV4
-#define H_PTE_PKEY_BIT0_RPAGE_RSV5
+#define H_PTE_PKEY_BIT2_RPAGE_PKEY_BIT3
+#define H_PTE_PKEY_BIT1_RPAGE_PKEY_BIT2
+#define H_PTE_PKEY_BIT0_RPAGE_PKEY_BIT1
 
 
 /*
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h 
b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 0a15fd14cf72..f20de1149ebe 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -32,15 +32,15 @@
  */
 #define H_PAGE_COMBO   _RPAGE_RPN0 /* this is a combo 4k page */
 #define H_PAGE_4K_PFN  _RPAGE_RPN1 /* PFN is for a single 4k page */
-#define H_PAGE_BUSY_RPAGE_RPN44 /* software: PTE & hash are busy */
+#define H_PAGE_BUSY_RPAGE_RSV1 /* software: PTE & hash are busy */
 #define H_PAGE_HASHPTE _RPAGE_RPN43/* PTE has associated HPTE */
 
 /* memory key bits. */
-#define H_PTE_PKEY_BIT4_RPAGE_RSV1
-#define H_PTE_PKEY_BIT3_RPAGE_RSV2
-#define H_PTE_PKEY_BIT2_RPAGE_RSV3
-#define H_PTE_PKEY_BIT1_RPAGE_RSV4
-#define H_PTE_PKEY_BIT0_RPAGE_RSV5
+#define H_PTE_PKEY_BIT4_RPAGE_PKEY_BIT4
+#define H_PTE_PKEY_BIT3_RPAGE_PKEY_BIT3
+#define H_PTE_PKEY_BIT2_RPAGE_PKEY_BIT2
+#define H_PTE_PKEY_BIT1_RPAGE_PKEY_BIT1
+#define H_PTE_PKEY_BIT0_RPAGE_PKEY_BIT0
 
 /*
  * We need to differentiate between explicit huge page and THP huge
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h 
b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 368b136517e0..e31369707f9f 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -32,11 +32,13 @@
 #define _RPAGE_SW1 0x00800
 #define _RPAGE_SW2 0x00400
 #define _RPAGE_SW3 0x00200
-#define _RPAGE_RSV10x1000UL
-#define _RPAGE_RSV20x0800UL
-#define _RPAGE_RSV30x0400UL
-#define _RPAGE_RSV40x0200UL
-#define _RPAGE_RSV50x00040UL
+#define _RPAGE_RSV10x00040UL
+
+#define _RPAGE_PKEY_BIT4   0x1000UL
+#define _RPAGE_PKEY_BIT3   0x0800UL
+#define _RPAGE_PKEY_BIT2   0x0400UL
+#define _RPAGE_PKEY_BIT1   0x0200UL
+#define _RPAGE_PKEY_BIT0   0x0100UL
 
 #define _PAGE_PTE  0x40

[PATCH v2 02/12] powerpc/book3s64/pkeys: pkeys are supported only on hash on book3s.

2020-05-02 Thread Aneesh Kumar K.V
Move them to a hash-specific file and add BUG() for the radix path.
---
 .../powerpc/include/asm/book3s/64/hash-pkey.h | 32 
 arch/powerpc/include/asm/book3s/64/pkeys.h| 25 +
 arch/powerpc/include/asm/pkeys.h  | 37 ---
 3 files changed, 64 insertions(+), 30 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/hash-pkey.h
 create mode 100644 arch/powerpc/include/asm/book3s/64/pkeys.h

diff --git a/arch/powerpc/include/asm/book3s/64/hash-pkey.h 
b/arch/powerpc/include/asm/book3s/64/hash-pkey.h
new file mode 100644
index ..795010897e5d
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/hash-pkey.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
+#define _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
+
+static inline u64 hash__vmflag_to_pte_pkey_bits(u64 vm_flags)
+{
+   return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT0 : 0x0UL) |
+   ((vm_flags & VM_PKEY_BIT1) ? H_PTE_PKEY_BIT1 : 0x0UL) |
+   ((vm_flags & VM_PKEY_BIT2) ? H_PTE_PKEY_BIT2 : 0x0UL) |
+   ((vm_flags & VM_PKEY_BIT3) ? H_PTE_PKEY_BIT3 : 0x0UL) |
+   ((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT4 : 0x0UL));
+}
+
+static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+{
+   return (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
+}
+
+static inline u16 hash__pte_to_pkey_bits(u64 pteflags)
+{
+   return (((pteflags & H_PTE_PKEY_BIT4) ? 0x10 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT3) ? 0x8 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT2) ? 0x4 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT1) ? 0x2 : 0x0UL) |
+   ((pteflags & H_PTE_PKEY_BIT0) ? 0x1 : 0x0UL));
+}
+
+#endif
diff --git a/arch/powerpc/include/asm/book3s/64/pkeys.h 
b/arch/powerpc/include/asm/book3s/64/pkeys.h
new file mode 100644
index ..8174662a9173
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/pkeys.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+
+#ifndef _ASM_POWERPC_BOOK3S_64_PKEYS_H
+#define _ASM_POWERPC_BOOK3S_64_PKEYS_H
+
+#include <asm/book3s/64/hash-pkey.h>
+
+static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
+{
+   if (static_branch_likely(&pkey_disabled))
+   return 0x0UL;
+
+   if (radix_enabled())
+   BUG();
+   return hash__vmflag_to_pte_pkey_bits(vm_flags);
+}
+
+static inline u16 pte_to_pkey_bits(u64 pteflags)
+{
+   if (radix_enabled())
+   BUG();
+   return hash__pte_to_pkey_bits(pteflags);
+}
+
+#endif /*_ASM_POWERPC_KEYS_H */
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index f8f4d0793789..5dd0a79d1809 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -25,23 +25,18 @@ extern u32 reserved_allocation_mask; /* bits set for 
reserved keys */
PKEY_DISABLE_WRITE  | \
PKEY_DISABLE_EXECUTE)
 
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/book3s/64/pkeys.h>
+#else
+#error "Not supported"
+#endif
+
+
 static inline u64 pkey_to_vmflag_bits(u16 pkey)
 {
return (((u64)pkey << VM_PKEY_SHIFT) & ARCH_VM_PKEY_FLAGS);
 }
 
-static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
-{
-   if (static_branch_likely(&pkey_disabled))
-   return 0x0UL;
-
-   return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT0 : 0x0UL) |
-   ((vm_flags & VM_PKEY_BIT1) ? H_PTE_PKEY_BIT1 : 0x0UL) |
-   ((vm_flags & VM_PKEY_BIT2) ? H_PTE_PKEY_BIT2 : 0x0UL) |
-   ((vm_flags & VM_PKEY_BIT3) ? H_PTE_PKEY_BIT3 : 0x0UL) |
-   ((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT4 : 0x0UL));
-}
-
 static inline int vma_pkey(struct vm_area_struct *vma)
 {
if (static_branch_likely(&pkey_disabled))
@@ -51,24 +46,6 @@ static inline int vma_pkey(struct vm_area_struct *vma)
 
 #define arch_max_pkey() pkeys_total
 
-static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
-{
-   return (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
-}
-
-static inline u16 pte_to_pkey_bits(u64 pteflags)
-{
-   return (((pteflags & H_PTE_PKEY_BIT4) ? 0x10 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT3) ? 0x8 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT2) ? 0x4 : 0x0UL) |
-   ((pteflags & H_PTE_PKEY_BIT1) ? 0x2 : 0x0UL) |
- 

[PATCH v2 01/12] powerpc/book3s64/pkeys: Fixup bit numbering

2020-05-02 Thread Aneesh Kumar K.V
This numbers the pkey bits such that they are easy to follow. PKEY_BIT0 is
the lowest-order bit. This makes further changes easier to follow.

No functional change in this patch other than that the Linux page table for
hash translation now maps pkeys differently.
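
As a purely illustrative sketch (not part of the diff), with the
renumbering in place the vmflag-to-PTE translation becomes an
index-preserving mapping, which is what the pkeys.h hunk below ends up
expressing:

	/* illustration only -- matches the reworked vmflag_to_pte_pkey_bits() */
	pte_bits = ((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT0 : 0x0UL) |
		   ((vm_flags & VM_PKEY_BIT1) ? H_PTE_PKEY_BIT1 : 0x0UL) |
		   ((vm_flags & VM_PKEY_BIT2) ? H_PTE_PKEY_BIT2 : 0x0UL) |
		   ((vm_flags & VM_PKEY_BIT3) ? H_PTE_PKEY_BIT3 : 0x0UL) |
		   ((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT4 : 0x0UL);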

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/hash-4k.h  |  9 +++
 arch/powerpc/include/asm/book3s/64/hash-64k.h |  8 +++
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |  8 +++
 arch/powerpc/include/asm/pkeys.h  | 24 +--
 4 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h 
b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index 3f9ae3585ab9..f889d56bf8cf 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -57,11 +57,12 @@
 #define H_PMD_FRAG_NR  (PAGE_SIZE >> H_PMD_FRAG_SIZE_SHIFT)
 
 /* memory key bits, only 8 keys supported */
-#define H_PTE_PKEY_BIT00
-#define H_PTE_PKEY_BIT10
+#define H_PTE_PKEY_BIT40
+#define H_PTE_PKEY_BIT30
 #define H_PTE_PKEY_BIT2_RPAGE_RSV3
-#define H_PTE_PKEY_BIT3_RPAGE_RSV4
-#define H_PTE_PKEY_BIT4_RPAGE_RSV5
+#define H_PTE_PKEY_BIT1_RPAGE_RSV4
+#define H_PTE_PKEY_BIT0_RPAGE_RSV5
+
 
 /*
  * On all 4K setups, remap_4k_pfn() equates to remap_pfn_range()
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h 
b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 0729c034e56f..0a15fd14cf72 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -36,11 +36,11 @@
 #define H_PAGE_HASHPTE _RPAGE_RPN43/* PTE has associated HPTE */
 
 /* memory key bits. */
-#define H_PTE_PKEY_BIT0_RPAGE_RSV1
-#define H_PTE_PKEY_BIT1_RPAGE_RSV2
+#define H_PTE_PKEY_BIT4_RPAGE_RSV1
+#define H_PTE_PKEY_BIT3_RPAGE_RSV2
 #define H_PTE_PKEY_BIT2_RPAGE_RSV3
-#define H_PTE_PKEY_BIT3_RPAGE_RSV4
-#define H_PTE_PKEY_BIT4_RPAGE_RSV5
+#define H_PTE_PKEY_BIT1_RPAGE_RSV4
+#define H_PTE_PKEY_BIT0_RPAGE_RSV5
 
 /*
  * We need to differentiate between explicit huge page and THP huge
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h 
b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 3fa1b962dc27..58fcc959f9d5 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -86,8 +86,8 @@
 #define HPTE_R_PP0 ASM_CONST(0x8000)
 #define HPTE_R_TS  ASM_CONST(0x4000)
 #define HPTE_R_KEY_HI  ASM_CONST(0x3000)
-#define HPTE_R_KEY_BIT0ASM_CONST(0x2000)
-#define HPTE_R_KEY_BIT1ASM_CONST(0x1000)
+#define HPTE_R_KEY_BIT4ASM_CONST(0x2000)
+#define HPTE_R_KEY_BIT3ASM_CONST(0x1000)
 #define HPTE_R_RPN_SHIFT   12
 #define HPTE_R_RPN ASM_CONST(0x0000)
 #define HPTE_R_RPN_3_0 ASM_CONST(0x01fff000)
@@ -103,8 +103,8 @@
 #define HPTE_R_R   ASM_CONST(0x0100)
 #define HPTE_R_KEY_LO  ASM_CONST(0x0e00)
 #define HPTE_R_KEY_BIT2ASM_CONST(0x0800)
-#define HPTE_R_KEY_BIT3ASM_CONST(0x0400)
-#define HPTE_R_KEY_BIT4ASM_CONST(0x0200)
+#define HPTE_R_KEY_BIT1ASM_CONST(0x0400)
+#define HPTE_R_KEY_BIT0ASM_CONST(0x0200)
 #define HPTE_R_KEY (HPTE_R_KEY_LO | HPTE_R_KEY_HI)
 
 #define HPTE_V_1TB_SEG ASM_CONST(0x4000)
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 20ebf153c871..f8f4d0793789 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -35,11 +35,11 @@ static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
if (static_branch_likely(&pkey_disabled))
return 0x0UL;
 
-   return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT4 : 0x0UL) |
-   ((vm_flags & VM_PKEY_BIT1) ? H_PTE_PKEY_BIT3 : 0x0UL) |
+   return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT0 : 0x0UL) |
+   ((vm_flags & VM_PKEY_BIT1) ? H_PTE_PKEY_BIT1 : 0x0UL) |
((vm_flags & VM_PKEY_BIT2) ? H_PTE_PKEY_BIT2 : 0x0UL) |
-   ((vm_flags & VM_PKEY_BIT3) ? H_PTE_PKEY_BIT1 : 0x0UL) |
-   ((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT0 : 0x0UL));
+   ((vm_flags & VM_PKEY_BIT3) ? H_PTE_PKEY_BIT3 : 0x0UL) |
+   ((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT4 : 0x0UL));
 }
 
 static inline int vma_pkey(struct vm_area_struct *vma)
@@ -53,20 +53,20 @@ static inline int vma_pkey(struct vm_area_struct *vma)
 
 static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
 {
-   return (((pteflags & H_PTE_PKEY_BIT0)

[PATCH v2 00/12] powerpc/book3s/64/pkeys: Simplify the code

2020-05-02 Thread Aneesh Kumar K.V
This patch series updates the pkey subsystem with more documentation and
renames variables so that the code is easier to follow. The last patch
fixes a problem where keys above max_pkey were treated as available.
Userspace is not impacted, because using such a key in mprotect_pkey
returns an error due to the limit check there. Also, the UAMOR value set
by the platform is such that it denies modification of keys above the
max pkey.

Changes from V1:
* Rebase to the latest kernel.
* Added two new patches 6 and 12.


Aneesh Kumar K.V (12):
  powerpc/book3s64/pkeys: Fixup bit numbering
  powerpc/book3s64/pkeys: pkeys are supported only on hash on book3s.
  powerpc/book3s64/pkeys: Move pkey related bits in the linux page table
  powerpc/book3s64/pkeys: Explain key 1 reservation details
  powerpc/book3s64/pkeys: Simplify the key initialization
  powerpc/book3s64/pkeys: Prevent key 1 modification from userspace.
  powerpc/book3s64/pkeys: kill cpu feature key CPU_FTR_PKEY
  powerpc/book3s64/pkeys: Convert execute key support to static key
  powerpc/book3s64/pkeys: Simplify pkey disable branch
  powerpc/book3s64/pkeys: Convert pkey_total to max_pkey
  powerpc/book3s64/pkeys: Make initial_allocation_mask static
  powerpc/book3s64/pkeys: Mark all the pkeys above max pkey as reserved

 arch/powerpc/include/asm/book3s/64/hash-4k.h  |  21 +-
 arch/powerpc/include/asm/book3s/64/hash-64k.h |  12 +-
 .../powerpc/include/asm/book3s/64/hash-pkey.h |  32 +++
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |   8 +-
 arch/powerpc/include/asm/book3s/64/pgtable.h  |  17 +-
 arch/powerpc/include/asm/book3s/64/pkeys.h|  25 +++
 arch/powerpc/include/asm/cputable.h   |  10 +-
 arch/powerpc/include/asm/pkeys.h  |  43 +---
 arch/powerpc/kernel/dt_cpu_ftrs.c |   6 -
 arch/powerpc/mm/book3s64/pkeys.c  | 210 ++
 10 files changed, 222 insertions(+), 162 deletions(-)
 create mode 100644 arch/powerpc/include/asm/book3s/64/hash-pkey.h
 create mode 100644 arch/powerpc/include/asm/book3s/64/pkeys.h

-- 
2.26.2



[powerpc:fixes-test] BUILD SUCCESS e2abb0f00606ece8b191679bbc3f9246738fb88e

2020-05-02 Thread kbuild test robot
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git fixes-test
branch HEAD: e2abb0f00606ece8b191679bbc3f9246738fb88e  Merge KUAP fix from topic/uaccess-ppc into fixes-test

elapsed time: 689m

configs tested: 204
configs skipped: 0

The following configs have been built successfully.
More configs may be tested in the coming days.

arm64allyesconfig
arm  allyesconfig
arm64allmodconfig
arm  allmodconfig
arm64 allnoconfig
arm   allnoconfig
arm   efm32_defconfig
arm at91_dt_defconfig
armshmobile_defconfig
arm64   defconfig
arm  exynos_defconfig
armmulti_v5_defconfig
arm   sunxi_defconfig
armmulti_v7_defconfig
sparcallyesconfig
powerpc defconfig
ia64defconfig
arc defconfig
mipsar7_defconfig
mips  ath79_defconfig
mips allmodconfig
nios2 3c120_defconfig
sparc64 defconfig
cskydefconfig
sh  rsk7269_defconfig
ia64  allnoconfig
nds32 allnoconfig
m68k   sun3_defconfig
i386  allnoconfig
i386 allyesconfig
i386 alldefconfig
i386defconfig
i386  debian-10.3
ia64 allmodconfig
ia64generic_defconfig
ia64  tiger_defconfig
ia64 bigsur_defconfig
ia64 allyesconfig
ia64 alldefconfig
m68k   m5475evb_defconfig
m68k allmodconfig
m68k   bvme6000_defconfig
m68k  multi_defconfig
nios2 10m50_defconfig
c6xevmc6678_defconfig
c6x  allyesconfig
openrisc simple_smp_defconfig
openriscor1ksim_defconfig
nds32   defconfig
alpha   defconfig
h8300   h8s-sim_defconfig
h8300 edosk2674_defconfig
xtensa  iss_defconfig
h8300h8300h-sim_defconfig
xtensa   common_defconfig
arc  allyesconfig
microblaze  mmu_defconfig
microblazenommu_defconfig
mips  fuloong2e_defconfig
mips  malta_kvm_defconfig
mips allyesconfig
mips 64r6el_defconfig
mips  allnoconfig
mips   32r2_defconfig
mipsmalta_kvm_guest_defconfig
mips tb0287_defconfig
mips   capcella_defconfig
mips   ip32_defconfig
mips  decstation_64_defconfig
mips  loongson3_defconfig
mipsbcm63xx_defconfig
pariscallnoconfig
pariscgeneric-64bit_defconfig
pariscgeneric-32bit_defconfig
parisc   allyesconfig
parisc   allmodconfig
powerpc  chrp32_defconfig
powerpc   holly_defconfig
powerpc   ppc64_defconfig
powerpc  rhel-kconfig
powerpc   allnoconfig
powerpc  mpc866_ads_defconfig
powerpcamigaone_defconfig
powerpcadder875_defconfig
powerpc ep8248e_defconfig
powerpc  g5_defconfig
powerpc mpc512x_defconfig
m68k randconfig-a001-20200502
mips randconfig-a001-20200502
nds32randconfig-a001-20200502
alpharandconfig-a001-20200502
parisc   randconfig-a001-20200502
riscvrandconfig-a001-20200502
h8300randconfig-a001-20200502
nios2randconfig-a001-20200502
microblaze   randconfig-a001-20200502
c6x  randconfig-a001-20200502
sparc64  randconfig-a001-20200502
s390 randconfig-a001-20200502
xtensa   randconfig-a001-20200502
sh   randconfig

Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP

2020-05-02 Thread David Hildenbrand
>> Now, let's clarify what I want regarding virtio-mem:
>>
>> 1. kexec should not add virtio-mem memory to the initial firmware
>>memmap. The driver has to be in charge as discussed.
>> 2. kexec should not place kexec images onto virtio-mem memory. That
>>would end badly.
>> 3. kexec should still dump virtio-mem memory via kdump.
> 
> Ok, but then seems to say to me that dax/kmem is a different type of
> (driver managed) than virtio-mem and it's confusing to try to apply
> the same meaning. Why not just call your type for the distinct type it
> is "System RAM (virtio-mem)" and let any other driver managed memory
> follow the same "System RAM ($driver)" format if it wants?

I had the same idea but discarded it because it seemed to uglify the
add_memory() interface (passing yet another parameter only relevant for
driver managed memory). Maybe we really want a new one, because I like
that idea:

/*
 * Add special, driver-managed memory to the system as system ram.
 * The resource_name is expected to have the name format "System RAM
 * ($DRIVER)", so user space (esp. kexec-tools) can special-case it.
 *
 * For this memory, no entries in /sys/firmware/memmap are created,
 * as this memory won't be part of the raw firmware-provided memory map
 * e.g., after a reboot. Also, the created memory resource is flagged
 * with IORESOURCE_MEM_DRIVER_MANAGED, so in-kernel users can special-
 * case this memory (e.g., not place kexec images onto it).
 */
int add_memory_driver_managed(int nid, u64 start, u64 size,
  const char *resource_name);


If we'd ever have to special-case it even more in the kernel, we could
allow specifying further resource flags. While passing the driver name
instead of the resource_name would be an option, this way we don't have
to hand-craft new resource strings for added memory resources.
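
Purely as a sketch of how a caller would then look (virtio-mem picked as
the example; nid/addr are placeholders, nothing here is final):

	/* hypothetical virtio-mem call site, for illustration only */
	rc = add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
				       "System RAM (virtio-mem)");
	if (rc)
		return rc;	/* block was not added, back off */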

Thoughts?

-- 
Thanks,

David / dhildenb



[powerpc:topic/uaccess-ppc] BUILD SUCCESS 4fe5cda9f89d0aea8e915b7c96ae34bda4e12e51

2020-05-02 Thread kbuild test robot
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git topic/uaccess-ppc
branch HEAD: 4fe5cda9f89d0aea8e915b7c96ae34bda4e12e51  powerpc/uaccess: Implement user_read_access_begin and user_write_access_begin

elapsed time: 533m

configs tested: 216
configs skipped: 0

The following configs have been built successfully.
More configs may be tested in the coming days.

arm64allyesconfig
arm  allyesconfig
arm64allmodconfig
arm  allmodconfig
arm64 allnoconfig
arm   allnoconfig
arm   efm32_defconfig
arm at91_dt_defconfig
armshmobile_defconfig
arm64   defconfig
arm  exynos_defconfig
armmulti_v5_defconfig
arm   sunxi_defconfig
armmulti_v7_defconfig
arc defconfig
mipsar7_defconfig
mips allmodconfig
nios2 3c120_defconfig
sparc64 defconfig
cskydefconfig
sh  rsk7269_defconfig
ia64  allnoconfig
i386  allnoconfig
i386 allyesconfig
i386 alldefconfig
i386defconfig
i386  debian-10.3
ia64 allmodconfig
ia64defconfig
ia64generic_defconfig
ia64  tiger_defconfig
ia64 bigsur_defconfig
ia64 allyesconfig
ia64 alldefconfig
m68k   m5475evb_defconfig
m68k allmodconfig
m68k   bvme6000_defconfig
m68k   sun3_defconfig
m68k  multi_defconfig
nios2 10m50_defconfig
c6xevmc6678_defconfig
c6x  allyesconfig
openrisc simple_smp_defconfig
openriscor1ksim_defconfig
nds32   defconfig
nds32 allnoconfig
alpha   defconfig
h8300   h8s-sim_defconfig
h8300 edosk2674_defconfig
xtensa  iss_defconfig
h8300h8300h-sim_defconfig
xtensa   common_defconfig
arc  allyesconfig
microblaze  mmu_defconfig
microblazenommu_defconfig
mips  fuloong2e_defconfig
mips  malta_kvm_defconfig
mips allyesconfig
mips 64r6el_defconfig
mips  allnoconfig
mips   32r2_defconfig
mipsmalta_kvm_guest_defconfig
mips tb0287_defconfig
mips   capcella_defconfig
mips   ip32_defconfig
mips  decstation_64_defconfig
mips  loongson3_defconfig
mips  ath79_defconfig
mipsbcm63xx_defconfig
pariscallnoconfig
pariscgeneric-64bit_defconfig
pariscgeneric-32bit_defconfig
parisc   allyesconfig
parisc   allmodconfig
powerpc  chrp32_defconfig
powerpc defconfig
powerpc   holly_defconfig
powerpc   ppc64_defconfig
powerpc  rhel-kconfig
powerpc   allnoconfig
powerpc  mpc866_ads_defconfig
powerpcamigaone_defconfig
powerpcadder875_defconfig
powerpc ep8248e_defconfig
powerpc  g5_defconfig
powerpc mpc512x_defconfig
m68k randconfig-a001-20200502
mips randconfig-a001-20200502
nds32randconfig-a001-20200502
alpharandconfig-a001-20200502
parisc   randconfig-a001-20200502
riscvrandconfig-a001-20200502
parisc   randconfig-a001-20200430
mips randconfig-a001-20200430
m68k randconfig-a001-20200430
riscvrandconfig-a001-20200430
alpharandconfig-a001-20200430
nds32randconfig-a001-20200430
h8300randconfig-a001-20200502
nios2randconfig-a001-20200502