linux-next: manual merge of the kvm tree with the powerpc tree

2016-07-20 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the kvm tree got a conflict in:

  arch/powerpc/kernel/Makefile

between commit:

  27d114966735 ("powerpc/32: Remove RELOCATABLE_PPC32")

from the powerpc tree and commit:

  fd7bacbca47a ("KVM: PPC: Book3S HV: Fix TB corruption in guest exit path on HMI interrupt")

from the kvm tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc arch/powerpc/kernel/Makefile
index 62df36c3f138,6972a23433d3..
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@@ -46,7 -41,8 +46,7 @@@ obj-$(CONFIG_VDSO32)  += vdso32
  obj-$(CONFIG_HAVE_HW_BREAKPOINT)  += hw_breakpoint.o
  obj-$(CONFIG_PPC_BOOK3S_64)   += cpu_setup_ppc970.o cpu_setup_pa6t.o
  obj-$(CONFIG_PPC_BOOK3S_64)   += cpu_setup_power.o
- obj-$(CONFIG_PPC_BOOK3S_64)   += mce.o mce_power.o
+ obj-$(CONFIG_PPC_BOOK3S_64)   += mce.o mce_power.o hmi.o
 -obj64-$(CONFIG_RELOCATABLE)   += reloc_64.o
  obj-$(CONFIG_PPC_BOOK3E_64)   += exceptions-64e.o idle_book3e.o
  obj-$(CONFIG_PPC64)   += vdso64/
  obj-$(CONFIG_ALTIVEC) += vecemu.o
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

[PATCH v3] powerpc: Merge 32-bit and 64-bit setup_arch()

2016-07-20 Thread Michael Ellerman
From: Benjamin Herrenschmidt 

There are few enough differences now.

mpe: Add a/p/k/setup.h to contain the prototypes and empty versions of
functions we need, rather than using weak functions. Add a few other
empty versions to avoid as many #ifdefs as possible in the code.

Signed-off-by: Benjamin Herrenschmidt 
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/rtas.h|   3 +-
 arch/powerpc/include/asm/smp.h |   9 +-
 arch/powerpc/kernel/setup-common.c | 173 
 arch/powerpc/kernel/setup.h|  58 
 arch/powerpc/kernel/setup_32.c |  65 +-
 arch/powerpc/kernel/setup_64.c | 176 ++---
 6 files changed, 250 insertions(+), 234 deletions(-)
 create mode 100644 arch/powerpc/kernel/setup.h

v3: Move empty definitions to arch/powerpc/kernel/setup.h, they're not needed
by other parts of the kernel so it's neater to keep them private. Fix build
break for SMP=n BOOK3E=y.
v2: Add empty versions using #ifdef in setup.h rather than weak functions.

diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
index fa3e3c4367bd..9c23baa10b81 100644
--- a/arch/powerpc/include/asm/rtas.h
+++ b/arch/powerpc/include/asm/rtas.h
@@ -351,7 +351,6 @@ extern bool rtas_indicator_present(int token, int *maxindex);
 extern int rtas_set_indicator(int indicator, int index, int new_value);
 extern int rtas_set_indicator_fast(int indicator, int index, int new_value);
 extern void rtas_progress(char *s, unsigned short hex);
-extern void rtas_initialize(void);
 extern int rtas_suspend_cpu(struct rtas_suspend_me_data *data);
 extern int rtas_suspend_last_cpu(struct rtas_suspend_me_data *data);
 extern int rtas_online_cpus_mask(cpumask_var_t cpus);
@@ -460,9 +459,11 @@ static inline int page_is_rtas_user_buf(unsigned long pfn)
 /* Not the best place to put pSeries_coalesce_init, will be fixed when we
  * move some of the rtas suspend-me stuff to pseries */
 extern void pSeries_coalesce_init(void);
+void rtas_initialize(void);
 #else
 static inline int page_is_rtas_user_buf(unsigned long pfn) { return 0;}
 static inline void pSeries_coalesce_init(void) { }
+static inline void rtas_initialize(void) { };
 #endif
 
 extern int call_rtas(const char *, int, int, unsigned long *, ...);
diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
index e1afd4c4f695..0d02c11dc331 100644
--- a/arch/powerpc/include/asm/smp.h
+++ b/arch/powerpc/include/asm/smp.h
@@ -160,9 +160,6 @@ static inline void set_hard_smp_processor_id(int cpu, int phys)
 {
paca[cpu].hw_cpu_id = phys;
 }
-
-extern void smp_release_cpus(void);
-
 #else
 /* 32-bit */
 #ifndef CONFIG_SMP
@@ -179,6 +176,12 @@ static inline void set_hard_smp_processor_id(int cpu, int phys)
 #endif /* !CONFIG_SMP */
 #endif /* !CONFIG_PPC64 */
 
+#if defined(CONFIG_PPC64) && (defined(CONFIG_SMP) || defined(CONFIG_KEXEC))
+extern void smp_release_cpus(void);
+#else
+static inline void smp_release_cpus(void) { };
+#endif
+
 extern int smt_enabled_at_boot;
 
 extern void smp_mpic_probe(void);
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index ca9255e3b763..abb81144fb8e 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -61,6 +62,12 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
+#include 
+
+#include "setup.h"
 
 #ifdef DEBUG
 #include 
@@ -758,3 +765,169 @@ void arch_setup_pdev_archdata(struct platform_device *pdev)
	pdev->dev.dma_mask = &pdev->archdata.dma_mask;
	set_dma_ops(&pdev->dev, &dma_direct_ops);
 }
+
+static __init void print_system_info(void)
+{
+   pr_info("-\n");
+#ifdef CONFIG_PPC_STD_MMU_64
+   pr_info("ppc64_pft_size= 0x%llx\n", ppc64_pft_size);
+#endif
+#ifdef CONFIG_PPC_STD_MMU_32
+   pr_info("Hash_size = 0x%lx\n", Hash_size);
+#endif
+   pr_info("phys_mem_size = 0x%llx\n",
+   (unsigned long long)memblock_phys_mem_size());
+
+   pr_info("dcache_bsize  = 0x%x\n", dcache_bsize);
+   pr_info("icache_bsize  = 0x%x\n", icache_bsize);
+   if (ucache_bsize != 0)
+   pr_info("ucache_bsize  = 0x%x\n", ucache_bsize);
+
+   pr_info("cpu_features  = 0x%016lx\n", cur_cpu_spec->cpu_features);
+   pr_info("  possible= 0x%016lx\n",
+   (unsigned long)CPU_FTRS_POSSIBLE);
+   pr_info("  always  = 0x%016lx\n",
+   (unsigned long)CPU_FTRS_ALWAYS);
+   pr_info("cpu_user_features = 0x%08x 0x%08x\n",
+   cur_cpu_spec->cpu_user_features,
+   cur_cpu_spec->cpu_user_features2);
+   pr_info("mmu_features  = 0x%08x\n", cur_cpu_spec->mmu_features);

Re: [PATCH v2] powerpc: Merge 32-bit and 64-bit setup_arch()

2016-07-20 Thread Michael Ellerman
Michael Ellerman  writes:

> From: Benjamin Herrenschmidt 
>
> There are few enough differences now.
>
> Signed-off-by: Benjamin Herrenschmidt 
> [mpe: Add empty versions using #ifdef in setup.h rather than weak functions]
> Signed-off-by: Michael Ellerman 
> ---
>  arch/powerpc/include/asm/kvm_ppc.h |   4 -
>  arch/powerpc/include/asm/rtas.h|   3 +-
>  arch/powerpc/include/asm/setup.h   |  46 +-
>  arch/powerpc/kernel/setup-common.c | 169 +++
>  arch/powerpc/kernel/setup_32.c |  65 +-
>  arch/powerpc/kernel/setup_64.c | 178 ++---
>  6 files changed, 228 insertions(+), 237 deletions(-)
>
> v2: Add empty versions using #ifdef in setup.h rather than weak functions.

This breaks an SMP=n BOOK3E=y config.

New version incoming.

cheers

Re: [PATCH -next] wan/fsl_ucc_hdlc: remove .owner field for driver

2016-07-20 Thread David Miller
From: Wei Yongjun 
Date: Tue, 19 Jul 2016 11:25:03 +

> From: Wei Yongjun 
> 
> Remove .owner field if calls are used which set it automatically.
> 
> Generated by: scripts/coccinelle/api/platform_no_drv_owner.cocci
> 
> Signed-off-by: Wei Yongjun 

Applied.

Re: [PATCH -next] wan/fsl_ucc_hdlc: use module_platform_driver to simplify the code

2016-07-20 Thread David Miller
From: Wei Yongjun 
Date: Tue, 19 Jul 2016 11:25:16 +

> From: Wei Yongjun 
> 
> module_platform_driver() makes the code simpler by eliminating
> boilerplate code.
> 
> Signed-off-by: Wei Yongjun 

Applied.

[PATCH v4 10/12] s390/uaccess: Enable hardened usercopy

2016-07-20 Thread Kees Cook
Enables CONFIG_HARDENED_USERCOPY checks on s390.

Signed-off-by: Kees Cook 
---
 arch/s390/Kconfig   | 1 +
 arch/s390/lib/uaccess.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a8c259059adf..9f694311c9ed 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -122,6 +122,7 @@ config S390
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_EARLY_PFN_TO_NID
+   select HAVE_ARCH_HARDENED_USERCOPY
select HAVE_ARCH_JUMP_LABEL
select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index ae4de559e3a0..6986c20166f0 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -104,6 +104,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 
 unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+   check_object_size(to, n, false);
if (static_branch_likely(_mvcos))
return copy_from_user_mvcos(to, from, n);
return copy_from_user_mvcp(to, from, n);
@@ -177,6 +178,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 
 unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+   check_object_size(from, n, true);
if (static_branch_likely(_mvcos))
return copy_to_user_mvcos(to, from, n);
return copy_to_user_mvcs(to, from, n);
-- 
2.7.4


Re: [PATCH v11 4/5] powerpc/fsl: move mpc85xx.h to include/linux/fsl

2016-07-20 Thread Arnd Bergmann
On Wednesday, July 20, 2016 1:31:48 PM CEST Scott Wood wrote:
> On Wed, 2016-07-20 at 13:24 +0200, Arnd Bergmann wrote:
> > On Saturday, July 16, 2016 9:50:21 PM CEST Scott Wood wrote:
> > > 
> > > From: yangbo lu 
> > > 
> > > Move mpc85xx.h to include/linux/fsl and rename it to svr.h as a common
> > > header file.  This SVR numberspace is used on some ARM chips as well as
> > > PPC, and even to check for a PPC SVR multi-arch drivers would otherwise
> > > need to ifdef the header inclusion and all references to the SVR symbols.
> > > 
> > > Signed-off-by: Yangbo Lu 
> > > Acked-by: Wolfram Sang 
> > > Acked-by: Stephen Boyd 
> > > Acked-by: Joerg Roedel 
> > > [scottwood: update description]
> > > Signed-off-by: Scott Wood 
> > > 
> > As discussed before, please don't introduce yet another vendor specific
> > way to match a SoC ID from a device driver.
> > 
> > I've posted a patch for an extension to the soc_device infrastructure
> > to allow comparing the running SoC to a table of devices, use that
> > instead.
> 
> As I asked before, in which relevant maintainership capacity are you NACKing
> this?

I don't know why that's important, but I suggested the creation of
drivers/soc/ as a more general home for platform-specific drivers as
part of being maintainer for arm-soc, and almost all changes to
drivers/soc go through our tree.

Olof does about half the merges, but I do the majority of the reviews
for drivers/soc patches. See also

git log --graph --format="%an %s" --merges drivers/soc/ 

Arnd

[PATCH v4 09/12] sparc/uaccess: Enable hardened usercopy

2016-07-20 Thread Kees Cook
Enables CONFIG_HARDENED_USERCOPY checks on sparc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook 
---
 arch/sparc/Kconfig  |  1 +
 arch/sparc/include/asm/uaccess_32.h | 14 ++
 arch/sparc/include/asm/uaccess_64.h | 11 +--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 546293d9e6c5..59b09600dd32 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -43,6 +43,7 @@ config SPARC
select OLD_SIGSUSPEND
select ARCH_HAS_SG_CHAIN
select CPU_NO_EFFICIENT_FFS
+   select HAVE_ARCH_HARDENED_USERCOPY
 
 config SPARC32
def_bool !64BIT
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 57aca2792d29..341a5a133f48 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -248,22 +248,28 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-   if (n && __access_ok((unsigned long) to, n))
+   if (n && __access_ok((unsigned long) to, n)) {
+   if (!__builtin_constant_p(n))
+   check_object_size(from, n, true);
return __copy_user(to, (__force void __user *) from, n);
-   else
+   } else
return n;
 }
 
 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+   if (!__builtin_constant_p(n))
+   check_object_size(from, n, true);
return __copy_user(to, (__force void __user *) from, n);
 }
 
 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-   if (n && __access_ok((unsigned long) from, n))
+   if (n && __access_ok((unsigned long) from, n)) {
+   if (!__builtin_constant_p(n))
+   check_object_size(to, n, false);
return __copy_user((__force void __user *) to, from, n);
-   else
+   } else
return n;
 }
 
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index e9a51d64974d..8bda94fab8e8 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -210,8 +210,12 @@ unsigned long copy_from_user_fixup(void *to, const void __user *from,
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
-   unsigned long ret = ___copy_from_user(to, from, size);
+   unsigned long ret;
 
+   if (!__builtin_constant_p(size))
+   check_object_size(to, size, false);
+
+   ret = ___copy_from_user(to, from, size);
if (unlikely(ret))
ret = copy_from_user_fixup(to, from, size);
 
@@ -227,8 +231,11 @@ unsigned long copy_to_user_fixup(void __user *to, const void *from,
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
-   unsigned long ret = ___copy_to_user(to, from, size);
+   unsigned long ret;
 
+   if (!__builtin_constant_p(size))
+   check_object_size(from, size, true);
+   ret = ___copy_to_user(to, from, size);
if (unlikely(ret))
ret = copy_to_user_fixup(to, from, size);
return ret;
-- 
2.7.4


[PATCH v4 12/12] mm: SLUB hardened usercopy support

2016-07-20 Thread Kees Cook
Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLUB allocator to catch any copies that may span objects. Includes a
redzone handling fix discovered by Michael Ellerman.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook 
Tested-by: Michael Ellerman 
---
 init/Kconfig |  1 +
 mm/slub.c| 36 
 2 files changed, 37 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index 798c2020ee7c..1c4711819dfd 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1765,6 +1765,7 @@ config SLAB
 
 config SLUB
bool "SLUB (Unqueued Allocator)"
+   select HAVE_HARDENED_USERCOPY_ALLOCATOR
help
   SLUB is a slab allocator that minimizes cache line usage
   instead of managing queues of cached objects (SLAB approach).
diff --git a/mm/slub.c b/mm/slub.c
index 825ff4505336..7dee3d9a5843 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+   struct page *page)
+{
+   struct kmem_cache *s;
+   unsigned long offset;
+   size_t object_size;
+
+   /* Find object and usable object size. */
+   s = page->slab_cache;
+   object_size = slab_ksize(s);
+
+   /* Find offset within object. */
+   offset = (ptr - page_address(page)) % s->size;
+
+   /* Adjust for redzone and reject if within the redzone. */
+   if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
+   if (offset < s->red_left_pad)
+   return s->name;
+   offset -= s->red_left_pad;
+   }
+
+   /* Allow address range falling entirely within object size. */
+   if (offset <= object_size && n <= object_size - offset)
+   return NULL;
+
+   return s->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 static size_t __ksize(const void *object)
 {
struct page *page;
-- 
2.7.4


[PATCH v4 11/12] mm: SLAB hardened usercopy support

2016-07-20 Thread Kees Cook
Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLAB allocator to catch any copies that may span objects.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook 
Tested-by: Valdis Kletnieks 
---
 init/Kconfig |  1 +
 mm/slab.c| 30 ++
 2 files changed, 31 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index f755a602d4a1..798c2020ee7c 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1757,6 +1757,7 @@ choice
 
 config SLAB
bool "SLAB"
+   select HAVE_HARDENED_USERCOPY_ALLOCATOR
help
  The regular slab allocator that is established and known to work
  well in all environments. It organizes cache hot objects in
diff --git a/mm/slab.c b/mm/slab.c
index cc8bbc1e6bc9..5e2d5f349aca 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4477,6 +4477,36 @@ static int __init slab_proc_init(void)
 module_init(slab_proc_init);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+   struct page *page)
+{
+   struct kmem_cache *cachep;
+   unsigned int objnr;
+   unsigned long offset;
+
+   /* Find and validate object. */
+   cachep = page->slab_cache;
+   objnr = obj_to_index(cachep, page, (void *)ptr);
+   BUG_ON(objnr >= cachep->num);
+
+   /* Find offset within object. */
+   offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
+
+   /* Allow address range falling entirely within object size. */
+   if (offset <= cachep->object_size && n <= cachep->object_size - offset)
+   return NULL;
+
+   return cachep->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 /**
  * ksize - get the actual amount of memory allocated for a given object
  * @objp: Pointer to the object
-- 
2.7.4


[PATCH v4 03/12] mm: Hardened usercopy

2016-07-20 Thread Kees Cook
This is the start of porting PAX_USERCOPY into the mainline kernel. This
is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
work is based on code by PaX Team and Brad Spengler, and an earlier port
from Casey Schaufler. Additional non-slab page tests are from Rik van Riel.

This patch contains the logic for validating several conditions when
performing copy_to_user() and copy_from_user() on the kernel object
being copied to/from:
- address range doesn't wrap around
- address range isn't NULL or zero-allocated (with a non-zero copy size)
- if on the slab allocator:
  - object size must be less than or equal to copy size (when the check is
    implemented in the allocator, which appears in subsequent patches)
- otherwise, object must not span page allocations (excepting Reserved
  and CMA ranges)
- if on the stack
  - object must not extend before/after the current process stack
  - object must be contained by a valid stack frame (when there is
arch/build support for identifying stack frames)
- object must not overlap with kernel text

Signed-off-by: Kees Cook 
Tested-by: Valdis Kletnieks 
Tested-by: Michael Ellerman 
---
 include/linux/slab.h|  12 ++
 include/linux/thread_info.h |  15 +++
 mm/Makefile |   4 +
 mm/usercopy.c   | 268 
 security/Kconfig|  28 +
 5 files changed, 327 insertions(+)
 create mode 100644 mm/usercopy.c

diff --git a/include/linux/slab.h b/include/linux/slab.h
index aeb3e6d00a66..96a16a3fb7cb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -155,6 +155,18 @@ void kfree(const void *);
 void kzfree(const void *);
 size_t ksize(const void *);
 
+#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
+const char *__check_heap_object(const void *ptr, unsigned long n,
+   struct page *page);
+#else
+static inline const char *__check_heap_object(const void *ptr,
+ unsigned long n,
+ struct page *page)
+{
+   return NULL;
+}
+#endif
+
 /*
  * Some archs want to perform DMA into kmalloc caches and need a guaranteed
  * alignment larger than the alignment of a 64-bit integer.
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 3d5c80b4391d..f24b99eac969 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -155,6 +155,21 @@ static inline int arch_within_stack_frames(const void * const stack,
 }
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+extern void __check_object_size(const void *ptr, unsigned long n,
+   bool to_user);
+
+static inline void check_object_size(const void *ptr, unsigned long n,
+bool to_user)
+{
+   __check_object_size(ptr, n, to_user);
+}
+#else
+static inline void check_object_size(const void *ptr, unsigned long n,
+bool to_user)
+{ }
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 #endif /* __KERNEL__ */
 
 #endif /* _LINUX_THREAD_INFO_H */
diff --git a/mm/Makefile b/mm/Makefile
index 78c6f7dedb83..32d37247c7e5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -21,6 +21,9 @@ KCOV_INSTRUMENT_memcontrol.o := n
 KCOV_INSTRUMENT_mmzone.o := n
 KCOV_INSTRUMENT_vmstat.o := n
 
+# Since __builtin_frame_address does work as used, disable the warning.
+CFLAGS_usercopy.o += $(call cc-disable-warning, frame-address)
+
 mmu-y  := nommu.o
 mmu-$(CONFIG_MMU)  := gup.o highmem.o memory.o mincore.o \
   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
@@ -99,3 +102,4 @@ obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
 obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
 obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
+obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
diff --git a/mm/usercopy.c b/mm/usercopy.c
new file mode 100644
index ..8ebae91a6b55
--- /dev/null
+++ b/mm/usercopy.c
@@ -0,0 +1,268 @@
+/*
+ * This implements the various checks for CONFIG_HARDENED_USERCOPY*,
+ * which are designed to protect kernel memory from needless exposure
+ * and overwrite under many unintended conditions. This code is based
+ * on PAX_USERCOPY, which is:
+ *
+ * Copyright (C) 2001-2016 PaX Team, Bradley Spengler, Open Source
+ * Security Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+
+enum {
+   BAD_STACK = -1,
+   NOT_STACK = 0,
+   GOOD_FRAME,
+   GOOD_STACK,
+};
+
+/*
+ * Checks if a given pointer and length is contained by the current
+ * stack frame (if possible).
+ *
+ * Returns:
+ * 

[PATCH v4 08/12] powerpc/uaccess: Enable hardened usercopy

2016-07-20 Thread Kees Cook
Enables CONFIG_HARDENED_USERCOPY checks on powerpc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook 
Tested-by: Michael Ellerman 
---
 arch/powerpc/Kconfig   |  1 +
 arch/powerpc/include/asm/uaccess.h | 21 +++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 01f7464d9fea..b7a18b2604be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -164,6 +164,7 @@ config PPC
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
+   select HAVE_ARCH_HARDENED_USERCOPY
 
 config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index b7c20f0b8fbe..c1dc6c14deb8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -310,10 +310,15 @@ static inline unsigned long copy_from_user(void *to,
 {
unsigned long over;
 
-   if (access_ok(VERIFY_READ, from, n))
+   if (access_ok(VERIFY_READ, from, n)) {
+   if (!__builtin_constant_p(n))
+   check_object_size(to, n, false);
return __copy_tofrom_user((__force void __user *)to, from, n);
+   }
if ((unsigned long)from < TASK_SIZE) {
over = (unsigned long)from + n - TASK_SIZE;
+   if (!__builtin_constant_p(n - over))
+   check_object_size(to, n - over, false);
return __copy_tofrom_user((__force void __user *)to, from,
n - over) + over;
}
@@ -325,10 +330,15 @@ static inline unsigned long copy_to_user(void __user *to,
 {
unsigned long over;
 
-   if (access_ok(VERIFY_WRITE, to, n))
+   if (access_ok(VERIFY_WRITE, to, n)) {
+   if (!__builtin_constant_p(n))
+   check_object_size(from, n, true);
return __copy_tofrom_user(to, (__force void __user *)from, n);
+   }
if ((unsigned long)to < TASK_SIZE) {
over = (unsigned long)to + n - TASK_SIZE;
+   if (!__builtin_constant_p(n))
+   check_object_size(from, n - over, true);
return __copy_tofrom_user(to, (__force void __user *)from,
n - over) + over;
}
@@ -372,6 +382,10 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
if (ret == 0)
return 0;
}
+
+   if (!__builtin_constant_p(n))
+   check_object_size(to, n, false);
+
return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -398,6 +412,9 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
if (ret == 0)
return 0;
}
+   if (!__builtin_constant_p(n))
+   check_object_size(from, n, true);
+
return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.7.4


[PATCH v4 07/12] ia64/uaccess: Enable hardened usercopy

2016-07-20 Thread Kees Cook
Enables CONFIG_HARDENED_USERCOPY checks on ia64.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook 
---
 arch/ia64/Kconfig   |  1 +
 arch/ia64/include/asm/uaccess.h | 18 +++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index f80758cb7157..32a87ef516a0 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -53,6 +53,7 @@ config IA64
select MODULES_USE_ELF_RELA
select ARCH_USE_CMPXCHG_LOCKREF
select HAVE_ARCH_AUDITSYSCALL
+   select HAVE_ARCH_HARDENED_USERCOPY
default y
help
  The Itanium Processor Family is Intel's 64-bit successor to
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 2189d5ddc1ee..465c70982f40 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -241,12 +241,18 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
 static inline unsigned long
 __copy_to_user (void __user *to, const void *from, unsigned long count)
 {
+   if (!__builtin_constant_p(count))
+   check_object_size(from, count, true);
+
return __copy_user(to, (__force void __user *) from, count);
 }
 
 static inline unsigned long
 __copy_from_user (void *to, const void __user *from, unsigned long count)
 {
+   if (!__builtin_constant_p(count))
+   check_object_size(to, count, false);
+
return __copy_user((__force void __user *) to, from, count);
 }
 
@@ -258,8 +264,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	const void *__cu_from = (from);							\
 	long __cu_len = (n);								\
 											\
-	if (__access_ok(__cu_to, __cu_len, get_fs()))					\
-		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	if (__access_ok(__cu_to, __cu_len, get_fs())) {					\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_from, __cu_len, true);			\
+		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
@@ -270,8 +279,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	long __cu_len = (n);								\
 											\
 	__chk_user_ptr(__cu_from);							\
-	if (__access_ok(__cu_from, __cu_len, get_fs()))					\
+	if (__access_ok(__cu_from, __cu_len, get_fs())) {				\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_to, __cu_len, false);			\
 		__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
-- 
2.7.4


[PATCH v4 06/12] arm64/uaccess: Enable hardened usercopy

2016-07-20 Thread Kees Cook
Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next,
renames the low-level functions to __arch_copy_*_user() so a static inline
can do additional work before the copy.

Signed-off-by: Kees Cook 
---
 arch/arm64/Kconfig   |  1 +
 arch/arm64/include/asm/uaccess.h | 29 ++---
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/copy_from_user.S  |  4 ++--
 arch/arm64/lib/copy_to_user.S|  4 ++--
 5 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..9cdb2322c811 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -51,6 +51,7 @@ config ARM64
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_BITREVERSE
+   select HAVE_ARCH_HARDENED_USERCOPY
select HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && 
ARM64_VA_BITS_48)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9e397a542756..92848b00e3cd 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -256,24 +256,39 @@ do {  
\
-EFAULT;\
 })
 
-extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
 
+static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+   check_object_size(to, n, false);
+   return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+   check_object_size(from, n, true);
+   return __arch_copy_to_user(to, from, n);
+}
+
 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-   if (access_ok(VERIFY_READ, from, n))
-   n = __copy_from_user(to, from, n);
-   else /* security hole - plug it */
+   if (access_ok(VERIFY_READ, from, n)) {
+   check_object_size(to, n, false);
+   n = __arch_copy_from_user(to, from, n);
+   } else /* security hole - plug it */
memset(to, 0, n);
return n;
 }
 
 static inline unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-   if (access_ok(VERIFY_WRITE, to, n))
-   n = __copy_to_user(to, from, n);
+   if (access_ok(VERIFY_WRITE, to, n)) {
+   check_object_size(from, n, true);
+   n = __arch_copy_to_user(to, from, n);
+   }
return n;
 }
 
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b05a45..2dc44406a7ad 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -34,8 +34,8 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(clear_page);
 
/* user mem (segment) */
-EXPORT_SYMBOL(__copy_from_user);
-EXPORT_SYMBOL(__copy_to_user);
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(__copy_in_user);
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 17e8306dca29..0b90497d4424 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -66,7 +66,7 @@
.endm
 
 end.reqx5
-ENTRY(__copy_from_user)
+ENTRY(__arch_copy_from_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
CONFIG_ARM64_PAN)
add end, x0, x2
@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
CONFIG_ARM64_PAN)
mov x0, #0  // Nothing to copy
ret
-ENDPROC(__copy_from_user)
+ENDPROC(__arch_copy_from_user)
 
.section .fixup,"ax"
.align  2
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 21faae60f988..7a7efe255034 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -65,7 +65,7 @@
.endm
 
 end.reqx5
-ENTRY(__copy_to_user)
+ENTRY(__arch_copy_to_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
  

[PATCH v4 05/12] ARM: uaccess: Enable hardened usercopy

2016-07-20 Thread Kees Cook
Enables CONFIG_HARDENED_USERCOPY checks on arm.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook 
---
 arch/arm/Kconfig   |  1 +
 arch/arm/include/asm/uaccess.h | 11 +--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 90542db1220d..f56b29b3f57e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
select HARDIRQS_SW_RESEND
select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+   select HAVE_ARCH_HARDENED_USERCOPY
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 35c9db857ebe..7fb59199c6bb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -496,7 +496,10 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-   unsigned int __ua_flags = uaccess_save_and_enable();
+   unsigned int __ua_flags;
+
+   check_object_size(to, n, false);
+   __ua_flags = uaccess_save_and_enable();
n = arm_copy_from_user(to, from, n);
uaccess_restore(__ua_flags);
return n;
@@ -511,11 +514,15 @@ static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
-   unsigned int __ua_flags = uaccess_save_and_enable();
+   unsigned int __ua_flags;
+
+   check_object_size(from, n, true);
+   __ua_flags = uaccess_save_and_enable();
n = arm_copy_to_user(to, from, n);
uaccess_restore(__ua_flags);
return n;
 #else
+   check_object_size(from, n, true);
return arm_copy_to_user(to, from, n);
 #endif
 }
-- 
2.7.4

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

[PATCH v4 04/12] x86/uaccess: Enable hardened usercopy

2016-07-20 Thread Kees Cook
Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook 
Tested-by: Valdis Kletnieks 
---
 arch/x86/Kconfig  |  1 +
 arch/x86/include/asm/uaccess.h| 10 ++
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..762a0349633c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,6 +80,7 @@ config X86
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_AOUTif X86_32
select HAVE_ARCH_AUDITSYSCALL
+   select HAVE_ARCH_HARDENED_USERCOPY
select HAVE_ARCH_HUGE_VMAP  if X86_64 || X86_PAE
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_KASAN  if X86_64 && SPARSEMEM_VMEMMAP
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..d3312f0fcdfc 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 * case, and do only runtime checking for non-constant sizes.
 */
 
-   if (likely(sz < 0 || sz >= n))
+   if (likely(sz < 0 || sz >= n)) {
+   check_object_size(to, n, false);
n = _copy_from_user(to, from, n);
-   else if(__builtin_constant_p(n))
+   } else if (__builtin_constant_p(n))
copy_from_user_overflow();
else
__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
might_fault();
 
/* See the comment in copy_from_user() above. */
-   if (likely(sz < 0 || sz >= n))
+   if (likely(sz < 0 || sz >= n)) {
+   check_object_size(from, n, true);
n = _copy_to_user(to, from, n);
-   else if(__builtin_constant_p(n))
+   } else if (__builtin_constant_p(n))
copy_to_user_overflow();
else
__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+   check_object_size(from, n, true);
return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
might_fault();
+   check_object_size(to, n, false);
if (__builtin_constant_p(n)) {
unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
int ret = 0;
 
+   check_object_size(dst, size, false);
if (!__builtin_constant_p(size))
return copy_user_generic(dst, (__force void *)src, size);
switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
int ret = 0;
 
+   check_object_size(src, size, true);
if (!__builtin_constant_p(size))
return copy_user_generic((__force void *)dst, src, size);
switch (size) {
-- 
2.7.4


[PATCH v4 01/12] mm: Add is_migrate_cma_page

2016-07-20 Thread Kees Cook
From: Laura Abbott 

Code such as hardened user copy[1] needs a way to tell if a
page is CMA or not. Add is_migrate_cma_page in a similar way
to is_migrate_isolate_page.

[1]http://article.gmane.org/gmane.linux.kernel.mm/155238

Signed-off-by: Laura Abbott 
Signed-off-by: Kees Cook 
---
 include/linux/mmzone.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 02069c23486d..c8478b29f070 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -68,8 +68,10 @@ extern char * const migratetype_names[MIGRATE_TYPES];
 
 #ifdef CONFIG_CMA
 #  define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
+#  define is_migrate_cma_page(_page) (get_pageblock_migratetype(_page) == MIGRATE_CMA)
 #else
 #  define is_migrate_cma(migratetype) false
+#  define is_migrate_cma_page(_page) false
 #endif
 
 #define for_each_migratetype_order(order, type) \
-- 
2.7.4


[PATCH v4 00/12] mm: Hardened usercopy

2016-07-20 Thread Kees Cook
Hi,

[This is now in my kspp -next tree, though I'd really love to add some
additional explicit Tested-bys, Reviewed-bys, or Acked-bys. If you've
looked through any part of this or have done any testing, please consider
sending an email with your "*-by:" line. :)]

This is a start of the mainline port of PAX_USERCOPY[1]. After writing
tests (now in lkdtm in -next) for Casey's earlier port[2], I kept tweaking
things further and further until I ended up with a whole new patch series.
To that end, I took Rik, Laura, and other people's feedback along with
additional changes and clean-ups.

Based on my understanding, PAX_USERCOPY was designed to catch a
few classes of flaws (mainly bad bounds checking) around the use of
copy_to_user()/copy_from_user(). These changes don't touch get_user() and
put_user(), since these operate on constant sized lengths, and tend to be
much less vulnerable. There are effectively three distinct protections in
the whole series, each of which I've given a separate CONFIG, though this
patch set is only the first of the three intended protections. (Generally
speaking, PAX_USERCOPY covers what I'm calling CONFIG_HARDENED_USERCOPY
(this) and CONFIG_HARDENED_USERCOPY_WHITELIST (future), and
PAX_USERCOPY_SLABS covers CONFIG_HARDENED_USERCOPY_SPLIT_KMALLOC
(future).)

This series, which adds CONFIG_HARDENED_USERCOPY, checks that objects
being copied to/from userspace meet certain criteria:
- if address is a heap object, the size must not exceed the object's
  allocated size. (This will catch all kinds of heap overflow flaws.)
- if address range is in the current process stack, it must be within
  a valid stack frame (if such checking is possible) or at least entirely
  within the current process's stack. (This could catch large lengths that
  would have extended beyond the current process stack, or overflows if
  their length extends back into the original stack.)
- if the address range is part of kernel data, rodata, or bss, allow it.
- if address range is page-allocated, that it doesn't span multiple
  allocations (excepting Reserved and CMA pages).
- if address is within the kernel text, reject it.
- everything else is accepted

The patches in the series are:
- Support for examination of CMA page types:
1- mm: Add is_migrate_cma_page
- Support for arch-specific stack frame checking (which will likely be
  replaced in the future by Josh's more comprehensive unwinder):
2- mm: Implement stack frame object validation
- The core copy_to/from_user() checks, without the slab object checks:
3- mm: Hardened usercopy
- Per-arch enablement of the protection:
4- x86/uaccess: Enable hardened usercopy
5- ARM: uaccess: Enable hardened usercopy
6- arm64/uaccess: Enable hardened usercopy
7- ia64/uaccess: Enable hardened usercopy
8- powerpc/uaccess: Enable hardened usercopy
9- sparc/uaccess: Enable hardened usercopy
   10- s390/uaccess: Enable hardened usercopy
- The heap allocator implementation of object size checking:
   11- mm: SLAB hardened usercopy support
   12- mm: SLUB hardened usercopy support

Some notes:

- This is expected to apply on top of -next which contains fixes for the
  position of _etext on both arm and arm64, though it has some conflicts
  with KASAN that should be trivial to fix up. Also in -next are the
  tests for this protection (in lkdtm), prefixed with USERCOPY_.

- I couldn't detect a measurable performance change with these features
  enabled. Kernel build times were unchanged, hackbench was unchanged,
  etc. I think we could flip this to "on by default" at some point, but
  for now, I'm leaving it off until I can get some more definitive
  measurements. I would love if someone with greater familiarity with
  perf could give this a spin and report results.

- The SLOB support extracted from grsecurity seems entirely broken. I
  have no idea what's going on there, I spent my time testing SLAB and
  SLUB. Having someone else look at SLOB would be nice, but this series
  doesn't depend on it.

Additional features that would be nice, but aren't blocking this series:

- Needs more architecture support for stack frame checking (only x86 now,
  but it seems Josh will have a good solution for this soon).


Thanks!

-Kees

[1] https://grsecurity.net/download.php "grsecurity - test kernel patch"
[2] http://www.openwall.com/lists/kernel-hardening/2016/05/19/5

v4:
- handle CMA pages, labbott
- update stack checker comments, labbott
- check for vmalloc addresses, labbott
- deal with KASAN in -next changing arm64 copy*user calls
- check for linear mappings at runtime instead of via CONFIG

v3:
- switch to using BUG for better Oops integration
- when checking page allocations, check each for Reserved
- use enums for the stack check return for readability

v2:
- added s390 support
- handle slub red zone
- disallow writes to rodata area
- stack frame walker now CONFIG-controlled arch-specific helper


[PATCH v4 02/12] mm: Implement stack frame object validation

2016-07-20 Thread Kees Cook
This creates per-architecture function arch_within_stack_frames() that
should validate if a given object is contained by a kernel stack frame.
Initial implementation is on x86.

This is based on code from PaX.

Signed-off-by: Kees Cook 
---
 arch/Kconfig   |  9 
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/thread_info.h | 44 ++
 include/linux/thread_info.h|  9 
 4 files changed, 63 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index d794384a0404..5e2776562035 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -424,6 +424,15 @@ config CC_STACKPROTECTOR_STRONG
 
 endchoice
 
+config HAVE_ARCH_WITHIN_STACK_FRAMES
+   bool
+   help
+ An architecture should select this if it can walk the kernel stack
+ frames to determine if an object is part of either the arguments
+ or local variables (i.e. that it excludes saved return addresses,
+ and similar) by implementing an inline arch_within_stack_frames(),
+ which is used by CONFIG_HARDENED_USERCOPY.
+
 config HAVE_CONTEXT_TRACKING
bool
help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0a7b885964ba..4407f596b72c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config X86
select HAVE_ARCH_SOFT_DIRTY if X86_64
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+   select HAVE_ARCH_WITHIN_STACK_FRAMES
select HAVE_EBPF_JITif X86_64
select HAVE_CC_STACKPROTECTOR
select HAVE_CMPXCHG_DOUBLE
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 30c133ac05cd..ab386f1336f2 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -180,6 +180,50 @@ static inline unsigned long current_stack_pointer(void)
return sp;
 }
 
+/*
+ * Walks up the stack frames to make sure that the specified object is
+ * entirely contained by a single stack frame.
+ *
+ * Returns:
+ *  1 if within a frame
+ * -1 if placed across a frame boundary (or outside stack)
+ *  0 unable to determine (no frame pointers, etc)
+ */
+static inline int arch_within_stack_frames(const void * const stack,
+  const void * const stackend,
+  const void *obj, unsigned long len)
+{
+#if defined(CONFIG_FRAME_POINTER)
+   const void *frame = NULL;
+   const void *oldframe;
+
+   oldframe = __builtin_frame_address(1);
+   if (oldframe)
+   frame = __builtin_frame_address(2);
+   /*
+* low --> high
+* [saved bp][saved ip][args][local vars][saved bp][saved ip]
+* ^^
+*   allow copies only within here
+*/
+   while (stack <= frame && frame < stackend) {
+   /*
+* If obj + len extends past the last frame, this
+* check won't pass and the next frame will be 0,
+* causing us to bail out and correctly report
+* the copy as invalid.
+*/
+   if (obj + len <= frame)
+   return obj >= oldframe + 2 * sizeof(void *) ? 1 : -1;
+   oldframe = frame;
+   frame = *(const void * const *)frame;
+   }
+   return -1;
+#else
+   return 0;
+#endif
+}
+
 #else /* !__ASSEMBLY__ */
 
 #ifdef CONFIG_X86_64
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index b4c2a485b28a..3d5c80b4391d 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -146,6 +146,15 @@ static inline bool test_and_clear_restore_sigmask(void)
 #error "no set_restore_sigmask() provided and default one won't work"
 #endif
 
+#ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
+static inline int arch_within_stack_frames(const void * const stack,
+  const void * const stackend,
+  const void *obj, unsigned long len)
+{
+   return 0;
+}
+#endif
+
 #endif /* __KERNEL__ */
 
 #endif /* _LINUX_THREAD_INFO_H */
-- 
2.7.4


Re: [PATCH v11 4/5] powerpc/fsl: move mpc85xx.h to include/linux/fsl

2016-07-20 Thread Scott Wood
On Wed, 2016-07-20 at 13:24 +0200, Arnd Bergmann wrote:
> On Saturday, July 16, 2016 9:50:21 PM CEST Scott Wood wrote:
> > 
> > From: yangbo lu 
> > 
> > Move mpc85xx.h to include/linux/fsl and rename it to svr.h as a common
> > header file.  This SVR numberspace is used on some ARM chips as well as
> > PPC, and even to check for a PPC SVR multi-arch drivers would otherwise
> > need to ifdef the header inclusion and all references to the SVR symbols.
> > 
> > Signed-off-by: Yangbo Lu 
> > Acked-by: Wolfram Sang 
> > Acked-by: Stephen Boyd 
> > Acked-by: Joerg Roedel 
> > [scottwood: update description]
> > Signed-off-by: Scott Wood 
> > 
> As discussed before, please don't introduce yet another vendor specific
> way to match a SoC ID from a device driver.
> 
> I've posted a patch for an extension to the soc_device infrastructure
> to allow comparing the running SoC to a table of devices, use that
> instead.

As I asked before, in which relevant maintainership capacity are you NACKing
this?

-Scott


Re: [PATCH v3 00/11] mm: Hardened usercopy

2016-07-20 Thread Kees Cook
On Wed, Jul 20, 2016 at 9:02 AM, David Laight  wrote:
> From: Kees Cook
>> Sent: 20 July 2016 16:32
> ...
>> Yup: that's exactly what it's doing: walking up the stack. :)
>
> Remind me to make sure all our customers run kernels with it disabled.

What's your concern with stack walking?

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

Re: [PATCH v3 00/11] mm: Hardened usercopy

2016-07-20 Thread Rik van Riel
On Wed, 2016-07-20 at 16:02 +, David Laight wrote:
> From: Kees Cook
> > Sent: 20 July 2016 16:32
> ...
> > Yup: that's exactly what it's doing: walking up the stack. :)
> 
> Remind me to make sure all our customers run kernels with it
> disabled.

You want a single copy_from_user to write to data in
multiple stack frames?

-- 

All Rights Reversed.


RE: [PATCH v3 00/11] mm: Hardened usercopy

2016-07-20 Thread David Laight
From: Kees Cook
> Sent: 20 July 2016 16:32
...
> Yup: that's exactly what it's doing: walking up the stack. :)

Remind me to make sure all our customers run kernels with it disabled.

David


Re: [RFC 0/3] extend kexec_file_load system call

2016-07-20 Thread Thiago Jung Bauermann
Am Mittwoch, 20 Juli 2016, 13:12:20 schrieb Arnd Bergmann:
> On Wednesday, July 20, 2016 8:47:45 PM CEST Michael Ellerman wrote:
> > At least for stdout-path, I can't really see how that would
> > significantly help an attacker, but I'm all ears if anyone has ideas.
> 
> That's actually an easy one that came up before: If an attacker controls
> a tty device (e.g. network console) that can be used to enter a debugger
> (kdb, kgdb, xmon, ...), enabling that to be the console device
> gives you a direct attack vector. The same thing will happen if you
> have a piece of software that intentially gives extra rights to the
> owner of the console device by treating it as "physical presence".

I think people are talking past each other a bit in these arguments about 
what is relevant to security or not.

For the kexec maintainers, kexec_file_load has one very specific and narrow 
purpose: enable Secure Boot as defined by UEFI.

And from what I understand of their arguments so far, there is one and only 
one security concern: when in Secure Boot mode, a system must not allow 
execution of unsigned code with kernel privileges. So even if one can 
specify a different root filesystem and do a lot of nasty things to the 
system with a rogue userspace in that root filesystem, as long as the kernel 
won't load unsigned modules that's not a problem as far as they're 
concerned.

Also, AFAIK attacks requiring "physical presence" are out of scope for the 
UEFI Secure Boot security model. Thus an attack that involves control of a 
console or plugging in a USB device is also not a concern.

One thing I don't know is whether an attack involving a networked IPMI 
console or a USB device that can be "plugged" virtually by a managing system 
(BMC) is considered a physical attack or a remote attack in the context of 
UEFI Secure Boot.

-- 
[]'s
Thiago Jung Bauermann
IBM Linux Technology Center


Re: [PATCH v3 02/11] mm: Hardened usercopy

2016-07-20 Thread Laura Abbott

On 07/20/2016 03:24 AM, Balbir Singh wrote:

On Tue, 2016-07-19 at 11:48 -0700, Kees Cook wrote:

On Mon, Jul 18, 2016 at 6:06 PM, Laura Abbott  wrote:


On 07/15/2016 02:44 PM, Kees Cook wrote:

This doesn't work when copying CMA allocated memory since CMA purposely
allocates larger than a page block size without setting head pages.
Given CMA may be used with drivers doing zero copy buffers, I think it
should be permitted.

Something like the following lets it pass (I can clean up and submit
the is_migrate_cma_page APIs as a separate patch for review)

Yeah, this would be great. I'd rather use an accessor to check this
than a direct check for MIGRATE_CMA.


 */
for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr))
{
-   if (!PageReserved(page))
+   if (!PageReserved(page) && !is_migrate_cma_page(page))
return "";
}

Yeah, I'll modify this a bit so that which type it starts as is
maintained for all pages (rather than allowing to flip back and forth
-- even though that is likely impossible).


Sorry, I completely missed the MIGRATE_CMA bits. Could you clarify if you
caught this in testing/review?

Balbir Singh.



I caught it while looking at the code and then wrote a test case to confirm
I was correct because I wasn't sure how to easily find an in tree user.

Thanks,
Laura

Re: [PATCH v3 00/11] mm: Hardened usercopy

2016-07-20 Thread Kees Cook
On Wed, Jul 20, 2016 at 2:52 AM, David Laight  wrote:
> From: Kees Cook
>> Sent: 15 July 2016 22:44
>> This is a start of the mainline port of PAX_USERCOPY[1].
> ...
>> - if address range is in the current process stack, it must be within the
>>   current stack frame (if such checking is possible) or at least entirely
>>   within the current process's stack.
> ...
>
> That description doesn't seem quite right to me.
> I presume the check is:
>   Within the current process's stack and not crossing the ends of the
>   current stack frame.

Actually, it's a bad description all around. :) The check is that the
range is within a valid stack frame (current or any prior caller's
frame). i.e. it does not cross a frame or touch the saved frame
pointer nor instruction pointer.

> The 'current' stack frame is likely to be that of copy_to/from_user().
> Even if you use the stack of the caller, any problematic buffers
> are likely to have been passed in from a calling function.
> So unless you are going to walk the stack (good luck on that)
> I'm not sure checking the stack frames is worth it.

Yup: that's exactly what it's doing: walking up the stack. :)

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

Re: [PATCH] crypto: vmx - Ignore generated files

2016-07-20 Thread Herbert Xu
On Tue, Jul 19, 2016 at 10:36:26AM -0300, Paulo Flabiano Smorigo wrote:
> Ignore assembly files generated by the perl script.
> 
> Signed-off-by: Paulo Flabiano Smorigo 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

Re: [PATCH] cxl: Delete an unnecessary check before the function call "of_node_put"

2016-07-20 Thread Julia Lawall


On Wed, 20 Jul 2016, SF Markus Elfring wrote:

> From: Markus Elfring 
> Date: Wed, 20 Jul 2016 15:10:32 +0200
>
> The of_node_put() function tests whether its argument is NULL
> and then returns immediately.
> Thus the test around the call is not needed.
>
> This issue was detected by using the Coccinelle software.
>
> Signed-off-by: Markus Elfring 
> ---
>  drivers/misc/cxl/of.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/misc/cxl/of.c b/drivers/misc/cxl/of.c
> index edc4583..333256a 100644
> --- a/drivers/misc/cxl/of.c
> +++ b/drivers/misc/cxl/of.c
> @@ -490,8 +490,7 @@ int cxl_of_probe(struct platform_device *pdev)
>   adapter->slices = 0;
>   }
>
> - if (afu_np)
> - of_node_put(afu_np);
> + of_node_put(afu_np);
>   return 0;
>  }

I don't think that the call should be there at all.  The loop only exits
when afu_np is NULL.  Furthermore, the loop should not be written as a for
loop, but rather with for_each_child_of_node.

julia

[PATCH] cxl: Delete an unnecessary check before the function call "of_node_put"

2016-07-20 Thread SF Markus Elfring
From: Markus Elfring 
Date: Wed, 20 Jul 2016 15:10:32 +0200

The of_node_put() function tests whether its argument is NULL
and then returns immediately.
Thus the test around the call is not needed.

This issue was detected by using the Coccinelle software.

Signed-off-by: Markus Elfring 
---
 drivers/misc/cxl/of.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/misc/cxl/of.c b/drivers/misc/cxl/of.c
index edc4583..333256a 100644
--- a/drivers/misc/cxl/of.c
+++ b/drivers/misc/cxl/of.c
@@ -490,8 +490,7 @@ int cxl_of_probe(struct platform_device *pdev)
adapter->slices = 0;
}
 
-   if (afu_np)
-   of_node_put(afu_np);
+   of_node_put(afu_np);
return 0;
 }
 
-- 
2.9.2


RE: [PATCH] crypto: vmx - Ignore generated files

2016-07-20 Thread David Laight
From:  Paulo Flabiano Smorigo
> Sent: 19 July 2016 14:36
> Ignore assembly files generated by the perl script.
...
> diff --git a/drivers/crypto/vmx/.gitignore b/drivers/crypto/vmx/.gitignore
> new file mode 100644
> index 000..af4a7ce
> --- /dev/null
> +++ b/drivers/crypto/vmx/.gitignore
> @@ -0,0 +1,2 @@
> +aesp8-ppc.S
> +ghashp8-ppc.S

Shouldn't the generated files be written to the object tree?

I would hope the linux kernel builds from a readonly source tree.

David


Re: [RFC 0/3] extend kexec_file_load system call

2016-07-20 Thread Vivek Goyal
On Wed, Jul 20, 2016 at 09:35:30AM +0100, Russell King - ARM Linux wrote:
> On Wed, Jul 20, 2016 at 01:45:42PM +1000, Balbir Singh wrote:
> > > IOW, if your kernel forced signature verification, you should not be
> > > able to do sig_enforce=0. If you kernel did not have
> > > CONFIG_MODULE_SIG_FORCE=y, then sig_enforce should be 0 by default anyway
> > > and you are not making it worse using command line.
> > 
> > OK.. I checked and you are right, but that is an example and there are
> > other things like security=, thermal.*, nosmep, nosmap that need auditing
> > for safety and might hurt the system security if used. I still think
> > that assuming you can pass any command line without breaking security
> > is a broken argument.
> 
> Quite, and you don't need to run code in a privileged environment to do
> any of that.
> 
> It's also not trivial to protect against: new kernels gain new arguments
> which older kernels may not know about.  No matter how much protection
> is built into older kernels, newer kernels can become vulnerable through
> the addition of further arguments.

If a new kernel command line option becomes an issue, a new kernel can
block it in a secure boot environment. That way it helps kexec
boot as well as regular boot.

Vivek

Re: [RFC 0/3] extend kexec_file_load system call

2016-07-20 Thread Vivek Goyal
On Wed, Jul 20, 2016 at 01:45:42PM +1000, Balbir Singh wrote:
> >  
> > Command line options are not signed. I thought idea behind secureboot
> > was to execute only trusted code and command line options don't enforce
> > you to execute unsigned code.
> >  
> >>
> >> You can set module.sig_enforce=0 and open up the system a bit assuming
> >> that you can get a module to load with another attack
> > 
> > IIUC, sig_enforce bool_enable_only so it can only be enabled. Default
> > value of it is 0 if CONFIG_MODULE_SIG_FORCE=n.
> > 
> > IOW, if your kernel forced signature verification, you should not be
> > able to do sig_enforce=0. If you kernel did not have
> > CONFIG_MODULE_SIG_FORCE=y, then sig_enforce should be 0 by default anyway
> > and you are not making it worse using command line.
> > 
> 
> OK.. I checked and you are right, but that is an example and there are
> other things like security=, thermal.*, nosmep, nosmap that need auditing
> for safety and might hurt the system security if used. I still think
> that assuming you can pass any command line without breaking security
> is a broken argument.

I agree that if some command line option allows running unsigned code
at ring 0, then we probably should disable that on secureboot enabled
boot.

In fact, there were bunch of patches which made things tighter on
secureboot enabled machines from matthew garrett. AFAIK, these patches
never went upstream.

Vivek

RE: [RFC 3/3] kexec: extend kexec_file_load system call

2016-07-20 Thread David Laight
From: Dave Young
> On 07/15/16 at 02:19pm, Mark Rutland wrote:
> > On Fri, Jul 15, 2016 at 09:09:55AM -0400, Vivek Goyal wrote:
> > > On Tue, Jul 12, 2016 at 10:42:01AM +0900, AKASHI Takahiro wrote:
> > >
> > > [..]
> > > > -SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
> > > > +SYSCALL_DEFINE6(kexec_file_load, int, kernel_fd, int, initrd_fd,
> > > > unsigned long, cmdline_len, const char __user *, 
> > > > cmdline_ptr,
> > > > -   unsigned long, flags)
> > > > +   unsigned long, flags, const struct kexec_fdset __user 
> > > > *, ufdset)
> > >
> > > Can one add more parameters to an existing syscall? Can it break existing
> > > programs with a new kernel? I was of the impression that one can't do that,
> > > but maybe I am missing something.
> >
> > I think the idea was that we would only look at the new params if a new
> > flag was set, and otherwise it would behave as the old syscall.
> >
> > Regardless, I think it makes far more sense to add a kexec_file_load2
> > syscall if we're going to modify the prototype at all. It's a rather
> > different proposition to the existing syscall, and needs to be treated
> > as such.
> 
> I do not think it is worth adding another syscall for extra fds.
> We have open(2) as an example for different numbers of arguments
> already.

Probably works 'by luck' and no one has actually thought about why.
That ioctl() works is (probably) even more lucky.

There are ABIs that use different calling conventions for varargs functions
(eg always stack all the arguments). I guess Linux doesn't run on any of them.

ioctl() is a particular problem because the 'arg' might be an integer or a pointer.
Fortunately all the 64-bit ABIs Linux uses pass the arg parameter in a register
(and don't use different registers for pointer and data arguments).

You could have two 'libc' functions that refer to the same system call entry.
Certainly safer than a varargs function.

David


RE: [PATCH v3 2/2] cpufreq: qoriq: Don't look at clock implementation details

2016-07-20 Thread Yuantian Tang
PING.

Regards,
Yuantian

> -Original Message-
> From: Scott Wood [mailto:o...@buserror.net]
> Sent: Saturday, July 09, 2016 5:07 AM
> To: Michael Turquette ; Russell King
> ; Stephen Boyd ; Viresh
> Kumar ; Rafael J. Wysocki 
> Cc: linux-...@vger.kernel.org; linux...@vger.kernel.org; linuxppc-
> d...@lists.ozlabs.org; Yuantian Tang ; Yang-Leo Li
> ; Xiaofeng Ren 
> Subject: Re: [PATCH v3 2/2] cpufreq: qoriq: Don't look at clock
> implementation details
> 
> On Thu, 2016-07-07 at 19:26 -0700, Michael Turquette wrote:
> > Quoting Scott Wood (2016-07-06 21:13:23)
> > >
> > > On Wed, 2016-07-06 at 18:30 -0700, Michael Turquette wrote:
> > > >
> > > > Quoting Scott Wood (2016-06-15 23:21:25)
> > > > >
> > > > >
> > > > > -static struct device_node *cpu_to_clk_node(int cpu)
> > > > > +static struct clk *cpu_to_clk(int cpu)
> > > > >  {
> > > > > -   struct device_node *np, *clk_np;
> > > > > +   struct device_node *np;
> > > > > +   struct clk *clk;
> > > > >
> > > > > if (!cpu_present(cpu))
> > > > > return NULL;
> > > > > @@ -112,37 +80,28 @@ static struct device_node
> > > > > *cpu_to_clk_node(int
> > > > > cpu)
> > > > > if (!np)
> > > > > return NULL;
> > > > >
> > > > > -   clk_np = of_parse_phandle(np, "clocks", 0);
> > > > > -   if (!clk_np)
> > > > > -   return NULL;
> > > > > -
> > > > > +   clk = of_clk_get(np, 0);
> > > > Why not use devm_clk_get here?
> > > devm_clk_get() is a wrapper around clk_get() which is not the same
> > > as of_clk_get().  What device would you pass to devm_clk_get(), and
> > > what name would you pass?
> > I'm fuzzy on whether or not you get a struct device from a cpufreq
> > driver. If so, then that would be the one to use. I would hope that
> > cpufreq drivers model cpus as devices, but I'm really not sure without
> > looking into the code.
> 
> It's not the cpufreq code that provides it, but get_cpu_device() could be
> used.
> 
> Do you have any comments on the first patch of this set?
> 
> -Scott


Re: [PATCH v11 4/5] powerpc/fsl: move mpc85xx.h to include/linux/fsl

2016-07-20 Thread Arnd Bergmann
On Saturday, July 16, 2016 9:50:21 PM CEST Scott Wood wrote:
> From: yangbo lu 
> 
> Move mpc85xx.h to include/linux/fsl and rename it to svr.h as a common
> header file.  This SVR numberspace is used on some ARM chips as well as
> PPC, and even to check for a PPC SVR multi-arch drivers would otherwise
> need to ifdef the header inclusion and all references to the SVR symbols.
> 
> Signed-off-by: Yangbo Lu 
> Acked-by: Wolfram Sang 
> Acked-by: Stephen Boyd 
> Acked-by: Joerg Roedel 
> [scottwood: update description]
> Signed-off-by: Scott Wood 
> 

As discussed before, please don't introduce yet another vendor specific
way to match a SoC ID from a device driver.

I've posted a patch for an extension to the soc_device infrastructure
to allow comparing the running SoC to a table of devices, use that
instead.

Arnd

Re: [RFC 0/3] extend kexec_file_load system call

2016-07-20 Thread Arnd Bergmann
On Wednesday, July 20, 2016 8:47:45 PM CEST Michael Ellerman wrote:
> At least for stdout-path, I can't really see how that would significantly help
> an attacker, but I'm all ears if anyone has ideas.

That's actually an easy one that came up before: If an attacker controls
a tty device (e.g. network console) that can be used to enter a debugger
(kdb, kgdb, xmon, ...), enabling that to be the console device
gives you a direct attack vector. The same thing will happen if you
have a piece of software that intentionally gives extra rights to the
owner of the console device by treating it as "physical presence".

Arnd


Re: [RFC 0/3] extend kexec_file_load system call

2016-07-20 Thread Michael Ellerman
Russell King - ARM Linux  writes:

> On Wed, Jul 20, 2016 at 01:45:42PM +1000, Balbir Singh wrote:
>> > IOW, if your kernel forced signature verification, you should not be
>> > able to do sig_enforce=0. If you kernel did not have
>> > CONFIG_MODULE_SIG_FORCE=y, then sig_enforce should be 0 by default anyway
>> > and you are not making it worse using command line.
>> 
>> OK.. I checked and you are right, but that is an example and there are
>> other things like security=, thermal.*, nosmep, nosmap that need auditing
>> for safety and might hurt the system security if used. I still think
>> that assuming you can pass any command line without breaking security
>> is a broken argument.
>
> Quite, and you don't need to run code in a privileged environment to do
> any of that.
>
> It's also not trivial to protect against: new kernels gain new arguments
> which older kernels may not know about.  No matter how much protection
> is built into older kernels, newer kernels can become vulnerable through
> the addition of further arguments.

Indeed. A whitelist of allowed command line arguments is the only option.

But given the existing syscall has shipped without a whitelist of command line
arguments, you can't add a whitelist now without potentially breaking someone's
setup.

Getting back to the device tree, we could similarly have a whitelist of
nodes/properties that we allow to be passed in.

At least for stdout-path, I can't really see how that would significantly help
an attacker, but I'm all ears if anyone has ideas.

> Also, how sure are we that there are no stack overflow issues with kernel
> command line parsing?  Can we be sure that there's none?  This is
> something which happens early in the kernel boot, before the full memory
> protections have been set up.

Yeah that's also a good point. More so for the device tree, because the parsing
is more complicated. I think there has been some work done on fuzzing libfdt,
but we should probably do more.

cheers

Re: [PATCH v3 02/11] mm: Hardened usercopy

2016-07-20 Thread Balbir Singh
On Tue, 2016-07-19 at 11:48 -0700, Kees Cook wrote:
> On Mon, Jul 18, 2016 at 6:06 PM, Laura Abbott  wrote:
> > 
> > On 07/15/2016 02:44 PM, Kees Cook wrote:
> > 
> > This doesn't work when copying CMA-allocated memory, since CMA purposely
> > allocates blocks larger than a page without setting head pages.
> > Given CMA may be used with drivers doing zero copy buffers, I think it
> > should be permitted.
> > 
> > Something like the following lets it pass (I can clean up and submit
> > the is_migrate_cma_page APIs as a separate patch for review)
> Yeah, this would be great. I'd rather use an accessor to check this
> than a direct check for MIGRATE_CMA.
>
> >  */
> > for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr))
> > {
> > -   if (!PageReserved(page))
> > +   if (!PageReserved(page) && !is_migrate_cma_page(page))
> > return "";
> > }
> Yeah, I'll modify this a bit so that which type it starts as is
> maintained for all pages (rather than allowing to flip back and forth
> -- even though that is likely impossible).
> 
Sorry, I completely missed the MIGRATE_CMA bits. Could you clarify if you
caught this in testing/review?

Balbir Singh.

Re: [PATCH v2 1/1] KVM: PPC: Introduce KVM_CAP_PPC_HTM

2016-07-20 Thread Michael Ellerman
Paolo Bonzini  writes:

> On 20/07/2016 07:46, Michael Ellerman wrote:
>> Thanks.
>> 
>> Acked-by: Michael Ellerman 
>> 
>> Or do you want me to merge this before Paul gets back?
>
> No, this should be merged through the KVM tree.  Please Cc the KVM
> maintainers before offering to apply a patch that formally belongs to
> another tree.

Yeah OK. It was just an offer, because I know the Qemu side is blocked
until this goes in.

> In particular this patch would indeed have a conflict, because you have
>
> +#define KVM_CAP_PPC_HTM 129
>
> but cap numbers 129 and 130 are already taken.  So whoever applies it
> should bump the number to 131.

Yep, I know about KVM caps, I probably would have remembered to check
the KVM tree. At the very least it would have got caught in linux-next.

cheers

RE: [PATCH v3 00/11] mm: Hardened usercopy

2016-07-20 Thread David Laight
From: Kees Cook
> Sent: 15 July 2016 22:44
> This is a start of the mainline port of PAX_USERCOPY[1]. 
...
> - if address range is in the current process stack, it must be within the
>   current stack frame (if such checking is possible) or at least entirely
>   within the current process's stack.
...

That description doesn't seem quite right to me.
I presume the check is:
  Within the current process's stack and not crossing the ends of the
  current stack frame.

The 'current' stack frame is likely to be that of copy_to/from_user().
Even if you use the stack of the caller, any problematic buffers
are likely to have been passed in from a calling function.
So unless you are going to walk the stack (good luck on that)
I'm not sure checking the stack frames is worth it.

I'd also guess that a lot of copies are from the middle of structures
so cannot fail the tests you are adding.

David


Re: [v2] rpaphp: fix slot registration for multiple slots under a PHB

2016-07-20 Thread Michael Ellerman
On Mon, 2016-11-07 at 22:16:27 UTC, Tyrel Datwyler wrote:
> PowerVM seems to only ever provide a single hotplug slot per PHB.
> The underlying slot hotplug registration code assumed multiple slots,
> but the actual implementation is broken for multiple slots. This went
> unnoticed for years due to the nature of PowerVM as mentioned
> previously. Under qemu/kvm the hotplug slot model aligns more with
> x86 where multiple slots are presented under a single PHB. As seen
> in the following each additional slot after the first fails to
> register due to each slot always being compared against the first
> child node of the PHB in the device tree.
...
> 
> Signed-off-by: Tyrel Datwyler 

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/e2413a7dae52fab290b7a8d11e

cheers

Re: [v2] powerpc/powernv: fix pci-cxl.c build when CONFIG_MODULES=n

2016-07-20 Thread Michael Ellerman
On Tue, 2016-19-07 at 02:33:35 UTC, Ian Munsie wrote:
> From: Ian Munsie 
> 
> pnv_cxl_enable_phb_kernel_api() grabs a reference to the cxl module to
> prevent it from being unloaded after the PHB has been switched to CX4 mode.
> This breaks the build when CONFIG_MODULES=n as module_mutex doesn't exist.
> 
> However, if we don't have modules, we don't need to protect against the
> case of the cxl module being unloaded. As such, split the relevant
> code out into a function surrounded with #if IS_MODULE(CXL) so we don't try
> to compile it if cxl isn't being compiled as a module.
> 
> Fixes: 5918dbc9b4ec ("powerpc/powernv: Add support for the cxl kernel api on the real phb")
> Reported-by: Michael Ellerman 
> Signed-off-by: Ian Munsie 
> Signed-off-by: Andrew Donnellan 

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/c2ca9f6b4cc4c45eb598b24b8b

cheers

Re: [v2] cxl: remove dead Kconfig options

2016-07-20 Thread Michael Ellerman
On Mon, 2016-18-07 at 04:52:57 UTC, Andrew Donnellan wrote:
> Remove the CXL_KERNEL_API and CXL_EEH Kconfig options, as they were only
> needed to coordinate the merging of the cxlflash driver. Also remove the
> stub implementation of cxl_perst_reloads_same_image() in cxlflash which is
> only used if CXL_EEH isn't defined (i.e. never).
> 
> Suggested-by: Ian Munsie 
> Signed-off-by: Andrew Donnellan 
> Acked-by: Ian Munsie 
> Acked-by: Matthew R. Ochs 

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/1e44727a0b220f6ead12fefcff

cheers

Re: cxl: fix potential NULL dereference in free_adapter()

2016-07-20 Thread Michael Ellerman
On Fri, 2016-15-07 at 07:20:36 UTC, Andrew Donnellan wrote:
> If kzalloc() fails when allocating adapter->guest in
> cxl_guest_init_adapter(), we call free_adapter() before erroring out.
> free_adapter() in turn attempts to dereference adapter->guest, which in
> this case is NULL.
> 
> In free_adapter(), skip the adapter->guest cleanup if adapter->guest is
> NULL.
> 
> Fixes: 14baf4d9c739 ("cxl: Add guest-specific code")
> Reported-by: Dan Carpenter 
> Signed-off-by: Andrew Donnellan 

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/8fbaa51d43ef2c6a72849ec340

cheers

Re: powerpc/mm: Cleanup LPCR defines

2016-07-20 Thread Michael Ellerman
On Fri, 2016-15-07 at 11:04:26 UTC, Michael Ellerman wrote:
> From: "Aneesh Kumar K.V" 
> 
> This makes it easy to verify we are not overloading the bits.
> No functionality change by this patch.
> 
> mpe: Cleanup more. Completely fixup whitespace, convert all UL values to
> ASM_CONST(), and replace all occurrences of 63-x with the actual shift.
> 
> Signed-off-by: Aneesh Kumar K.V 
> Signed-off-by: Michael Ellerman 

Applied to powerpc next.

https://git.kernel.org/powerpc/c/a4b349540a26af9a544e2e8582

cheers

Re: [v2, 1/3] powerpc32: booke: fix the build error when CRASH_DUMP is enabled

2016-07-20 Thread Michael Ellerman
On Wed, 2016-13-07 at 01:14:38 UTC, Kevin Hao wrote:
> In the current code, RELOCATABLE will be forcibly enabled when
> enabling CRASH_DUMP. But for ppc32, RELOCATABLE also depends on
> ADVANCED_OPTIONS and selects NONSTATIC_KERNEL. This will cause the
> following build error when CRASH_DUMP=y && ADVANCED_OPTIONS=n
> because the select of NONSTATIC_KERNEL doesn't take effect.
>   arch/powerpc/include/asm/io.h: In function 'virt_to_phys':
>   arch/powerpc/include/asm/page.h:113:26: error: 'virt_phys_offset' undeclared (first use in this function)
>#define VIRT_PHYS_OFFSET virt_phys_offset
>   ^
> There aren't any strong reasons to make RELOCATABLE depend on
> ADVANCED_OPTIONS. So remove this dependency to fix this issue.
> 
> Signed-off-by: Kevin Hao 

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/da4230714662278781d007fb2b

cheers

Re: [1/3] powerpc/pseries: Add pseries hotplug workqueue

2016-07-20 Thread Michael Ellerman
On Thu, 2016-07-07 at 15:00:34 UTC, John Allen wrote:
> In support of PAPR changes to add a new hotplug interrupt, introduce a
> hotplug workqueue to avoid processing hotplug events in interrupt context.
> We will also take advantage of the queue on PowerVM to ensure hotplug
> events initiated from different sources (HMC and PRRN events) are handled
> and serialized properly.
> 
> Signed-off-by: John Allen 
> Reviewed-by: Nathan Fontenot 

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/9054619ef54a3a832863ae25d1

cheers

Re: [PATCH 3/5] powerpc: tm: Always use fp_state and vr_state to store live registers

2016-07-20 Thread Simon Guo
On Mon, Jul 18, 2016 at 11:28:30AM +1000, Cyril Bur wrote:
> On Sun, 17 Jul 2016 11:25:43 +0800
> 
> The aim of this patch is to ensure that pt_regs, fp_state and vr_state always
> hold a thread's 'live' registers. So, after a recheckpoint, fp_state is where
> the state should be. tm_reclaim_thread() does a save_all() before doing the
> reclaim.
> 
> This means that the call to restore_math() is a replacement for all deleted
> lines above it.
> 
> I added it here because I'd prefer to be safe but I left that comment in
> because I suspect restore_math() will be called later and we can get away with
> not calling it here.
> 
> > And, should the thread's MSR now set the FP bit in tm_recheckpoint(), to 
> > indicate that the FP register content is "fresh", in contrast to thread.fp_state?
> > 
> 
> I'm not sure what you mean by 'fresh'. You do highlight that we'll have to be
> sure that the MSR bits are off (so that restore_math() doesn't assume the
> registers are already loaded) which makes me think that tm_reclaim_thread()
> should be doing a giveup_all(), I'll fix that.
> 
> I hope that helps,
> 

Thanks Cyril. The explanation is detailed and helpful.

- Simon

Re: [PATCH v2 1/1] KVM: PPC: Introduce KVM_CAP_PPC_HTM

2016-07-20 Thread David Gibson
On Wed, Jul 20, 2016 at 01:41:36PM +1000, Sam Bobroff wrote:
> Introduce a new KVM capability, KVM_CAP_PPC_HTM, that can be queried to
> determine if a PowerPC KVM guest should use HTM (Hardware Transactional
> Memory).
> 
> This will be used by QEMU to populate the pa-features bits in the
> guest's device tree.
> 
> Signed-off-by: Sam Bobroff 

Reviewed-by: David Gibson 

> ---
> 
> v2:
> 
> * Use CPU_FTR_TM_COMP instead of CPU_FTR_TM.
> * I didn't unbreak the line, as with the extra characters checkpatch will
>   complain if I do. I did move the break to a more usual place.
> 
>  arch/powerpc/kvm/powerpc.c | 4 
>  include/uapi/linux/kvm.h   | 1 +
>  2 files changed, 5 insertions(+)
> 
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index 02416fe..5ebc8ff 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -588,6 +588,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>   r = 1;
>   break;
>  #endif
> + case KVM_CAP_PPC_HTM:
> + r = cpu_has_feature(CPU_FTR_TM_COMP) &&
> + is_kvmppc_hv_enabled(kvm);
> + break;
>   default:
>   r = 0;
>   break;
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 05ebf47..f421d0e 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -866,6 +866,7 @@ struct kvm_ppc_smmu_info {
>  #define KVM_CAP_ARM_PMU_V3 126
>  #define KVM_CAP_VCPU_ATTRIBUTES 127
>  #define KVM_CAP_MAX_VCPU_ID 128
> +#define KVM_CAP_PPC_HTM 129
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson



Re: [PATCH kernel] powerpc/powernv/ioda: Fix endianness when reading TCEs

2016-07-20 Thread David Gibson
On Wed, Jul 20, 2016 at 02:26:51PM +1000, Alexey Kardashevskiy wrote:
> The iommu_table_ops::exchange() callback writes new TCE to the table
> and returns old value and permission mask. The old TCE value is
> correctly converted from BE to CPU endian; however permission mask
> was calculated from BE value and therefore always returned DMA_NONE
> which could cause memory leak on LE systems using VFIO SPAPR TCE IOMMU v1
> driver.
> 
> This fixes pnv_tce_xchg() to return @oldtce in CPU endian.
> 
> Fixes: 05c6cfb9dce0d13d37e9d007ee6a4af36f1c0a58
> Cc: sta...@vger.kernel.org # 4.2+
> Signed-off-by: Alexey Kardashevskiy 

Reviewed-by: David Gibson 

> ---
>  arch/powerpc/platforms/powernv/pci.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
> index 1d92bd9..7b17f88 100644
> --- a/arch/powerpc/platforms/powernv/pci.c
> +++ b/arch/powerpc/platforms/powernv/pci.c
> @@ -620,8 +620,8 @@ int pnv_tce_xchg(struct iommu_table *tbl, long index,
>   if (newtce & TCE_PCI_WRITE)
>   newtce |= TCE_PCI_READ;
>  
> - oldtce = xchg(pnv_tce(tbl, idx), cpu_to_be64(newtce));
> - *hpa = be64_to_cpu(oldtce) & ~(TCE_PCI_READ | TCE_PCI_WRITE);
> + oldtce = be64_to_cpu(xchg(pnv_tce(tbl, idx), cpu_to_be64(newtce)));
> + *hpa = oldtce & ~(TCE_PCI_READ | TCE_PCI_WRITE);
>   *direction = iommu_tce_direction(oldtce);
>  
>   return 0;

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson



[PATCH v2] powerpc: Merge 32-bit and 64-bit setup_arch()

2016-07-20 Thread Michael Ellerman
From: Benjamin Herrenschmidt 

There are few enough differences now.

Signed-off-by: Benjamin Herrenschmidt 
[mpe: Add empty versions using #ifdef in setup.h rather than weak functions]
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/kvm_ppc.h |   4 -
 arch/powerpc/include/asm/rtas.h|   3 +-
 arch/powerpc/include/asm/setup.h   |  46 +-
 arch/powerpc/kernel/setup-common.c | 169 +++
 arch/powerpc/kernel/setup_32.c |  65 +-
 arch/powerpc/kernel/setup_64.c | 178 ++---
 6 files changed, 228 insertions(+), 237 deletions(-)

v2: Add empty versions using #ifdef in setup.h rather than weak functions.

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 2544edabe7f3..bad829aae794 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -395,7 +395,6 @@ void kvmppc_set_pid(struct kvm_vcpu *vcpu, u32 pid);
 struct openpic;
 
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
-extern void kvm_cma_reserve(void) __init;
 static inline void kvmppc_set_xics_phys(int cpu, unsigned long addr)
 {
paca[cpu].kvm_hstate.xics_phys = addr;
@@ -425,9 +424,6 @@ extern void kvm_hv_vm_deactivated(void);
 extern bool kvm_hv_mode_active(void);
 
 #else
-static inline void __init kvm_cma_reserve(void)
-{}
-
 static inline void kvmppc_set_xics_phys(int cpu, unsigned long addr)
 {}
 
diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
index fa3e3c4367bd..9c23baa10b81 100644
--- a/arch/powerpc/include/asm/rtas.h
+++ b/arch/powerpc/include/asm/rtas.h
@@ -351,7 +351,6 @@ extern bool rtas_indicator_present(int token, int 
*maxindex);
 extern int rtas_set_indicator(int indicator, int index, int new_value);
 extern int rtas_set_indicator_fast(int indicator, int index, int new_value);
 extern void rtas_progress(char *s, unsigned short hex);
-extern void rtas_initialize(void);
 extern int rtas_suspend_cpu(struct rtas_suspend_me_data *data);
 extern int rtas_suspend_last_cpu(struct rtas_suspend_me_data *data);
 extern int rtas_online_cpus_mask(cpumask_var_t cpus);
@@ -460,9 +459,11 @@ static inline int page_is_rtas_user_buf(unsigned long pfn)
 /* Not the best place to put pSeries_coalesce_init, will be fixed when we
  * move some of the rtas suspend-me stuff to pseries */
 extern void pSeries_coalesce_init(void);
+void rtas_initialize(void);
 #else
 static inline int page_is_rtas_user_buf(unsigned long pfn) { return 0;}
 static inline void pSeries_coalesce_init(void) { }
+static inline void rtas_initialize(void) { };
 #endif
 
 extern int call_rtas(const char *, int, int, unsigned long *, ...);
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 654d64c9f3ac..3d171fd315c0 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -19,6 +19,9 @@ extern unsigned long reloc_offset(void);
 extern unsigned long add_reloc_offset(unsigned long);
 extern void reloc_got2(unsigned long);
 
+extern void initialize_cache_info(void);
+extern void irqstack_early_init(void);
+
 #define PTRRELOC(x)((typeof(x)) add_reloc_offset((unsigned long)(x)))
 
 void check_for_initrd(void);
@@ -38,7 +41,48 @@ static inline void pseries_big_endian_exceptions(void) {}
 static inline void pseries_little_endian_exceptions(void) {}
 #endif /* CONFIG_PPC_PSERIES */
 
+#ifdef CONFIG_PPC32
+void setup_power_save(void);
+#else
+static inline void setup_power_save(void) { };
+#endif
+
+#if defined(CONFIG_PPC64) && defined(CONFIG_SMP)
+void check_smt_enabled(void);
+#else
+static inline void check_smt_enabled(void) { };
+#endif
+
+#if defined(CONFIG_PPC_BOOK3E) && defined(CONFIG_SMP)
+void setup_tlb_core_data(void);
+#else
+static inline void setup_tlb_core_data(void) { };
+#endif
+
+#if defined(CONFIG_PPC_BOOK3E) || defined(CONFIG_BOOKE) || defined(CONFIG_40x)
+void exc_lvl_early_init(void);
+#else
+static inline void exc_lvl_early_init(void) { };
+#endif
+
+#ifdef CONFIG_PPC64
+void emergency_stack_init(void);
+void smp_release_cpus(void);
+#else
+static inline void emergency_stack_init(void) { };
+static inline void smp_release_cpus(void) { };
+#endif
+
+/*
+ * Having this in kvm_ppc.h makes include dependencies too
+ * tricky to solve for setup-common.c so have it here.
+ */
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+void __init kvm_cma_reserve(void);
+#else
+static inline void kvm_cma_reserve(void) { };
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_SETUP_H */
-
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index ca9255e3b763..c6eda53d18c5 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -61,6 +62,10 @@
 #include 
 #include 
 #include 
+#include 

Re: [RFC 0/3] extend kexec_file_load system call

2016-07-20 Thread Russell King - ARM Linux
On Wed, Jul 20, 2016 at 01:45:42PM +1000, Balbir Singh wrote:
> > IOW, if your kernel forced signature verification, you should not be
> > able to do sig_enforce=0. If you kernel did not have
> > CONFIG_MODULE_SIG_FORCE=y, then sig_enforce should be 0 by default anyway
> > and you are not making it worse using command line.
> 
> OK.. I checked and you are right, but that is an example and there are
> other things like security=, thermal.*, nosmep, nosmap that need auditing
> for safety and might hurt the system security if used. I still think
> that assuming you can pass any command line without breaking security
> is a broken argument.

Quite, and you don't need to run code in a privileged environment to do
any of that.

It's also not trivial to protect against: new kernels gain new arguments
which older kernels may not know about.  No matter how much protection
is built into older kernels, newer kernels can become vulnerable through
the addition of further arguments.

Also, how sure are we that there are no stack overflow issues with kernel
command line parsing?  Can we be sure that there's none?  This is
something which happens early in the kernel boot, before the full memory
protections have been set up.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

Re: [PATCH v2 1/1] KVM: PPC: Introduce KVM_CAP_PPC_HTM

2016-07-20 Thread Paolo Bonzini


On 20/07/2016 07:46, Michael Ellerman wrote:
> Thanks.
> 
> Acked-by: Michael Ellerman 
> 
> Or do you want me to merge this before Paul gets back?

No, this should be merged through the KVM tree.  Please Cc the KVM
maintainers before offering to apply a patch that formally belongs to
another tree.

I don't care if Paul merges the patch or Radim and I do, but we're
getting lots of unnecessary conflicts from patches that go through the
main architecture tree and that shouldn't really happen.  Please let's
keep some discipline, as I want to minimize the number of conflicts that
reach Linus (and 4.8 is going to be *bad* in this respect, with both PPC
and s390 having conflicts between the KVM and arch tree).

In particular this patch would indeed have a conflict, because you have

+#define KVM_CAP_PPC_HTM 129

but cap numbers 129 and 130 are already taken.  So whoever applies it
should bump the number to 131.

Paolo

Re: [PATCH v3] of: fix memory leak related to safe_name()

2016-07-20 Thread Mathieu Malaterre
On Fri, Jun 24, 2016 at 10:38 PM, Rob Herring  wrote:
> On Fri, Jun 17, 2016 at 2:51 AM, Mathieu Malaterre
>  wrote:
>> v3 tested here multiple times ! memleak is now gone.
>>
>> Tested-by: Mathieu Malaterre 
>>
>> Thanks
>>
>> On Thu, Jun 16, 2016 at 7:51 PM, Frank Rowand  wrote:
>>> From: Frank Rowand 
>>>
>>> Fix a memory leak resulting from memory allocation in safe_name().
>>> This patch fixes all call sites of safe_name().
>
> Applied, thanks.
>
> Rob

Could this patch be considered for stable?

Thx
-- 
Mathieu