Re: [PATCH v2] powerpc/mm/hash: Fix get_region_id() for invalid addresses

2019-05-21 Thread Michael Ellerman
On Fri, 2019-05-17 at 13:29:58 UTC, Michael Ellerman wrote:
> From: "Aneesh Kumar K.V" 
> 
> Accesses by userspace to random addresses outside the user or kernel
> address range will generate an SLB fault. When we handle that fault we
> classify the effective address into several classes, eg. user, kernel
> linear, kernel virtual etc.
> 
> For addresses that are completely outside of any valid range, we
> should not insert an SLB entry at all, and instead immediately take
> an exception.
> 
> In the past this was handled in two ways. Firstly we would check the
> top nibble of the address (using REGION_ID(ea)) and that would tell us
> if the address was user (0), kernel linear (c), kernel virtual (d), or
> vmemmap (f). If the address didn't match any of these it was invalid.
> 
> Then for each type of address we would do a secondary check. For the
> user region we would check against H_PGTABLE_RANGE; for kernel linear
> we would mask the top nibble of the address and then check the result
> against MAX_PHYSMEM_BITS.
> 
> As part of commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel
> regions in the same 0xc range") we replaced REGION_ID() with
> get_region_id() and changed the masking of the top nibble to only mask
> the top two bits, which introduced a bug.
> 
> Addresses less than (4 << 60) are still handled correctly, they are
> either less than (1 << 60) in which case they are subject to the
> H_PGTABLE_RANGE check, or they are correctly checked against
> MAX_PHYSMEM_BITS.
> 
> However addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly
> treated as kernel linear addresses in get_region_id(). Then the top
> two bits are cleared by EA_MASK in slb_allocate_kernel() and the
> address is checked against MAX_PHYSMEM_BITS, which it passes due to
> the masking. The end result is we incorrectly insert SLB entries for
> those addresses.
> 
> That is not actually catastrophic: having inserted the SLB entry we
> will then go on to take a page fault for the address, and at that
> point we detect the problem and report it as a bad fault.
> 
> Still we should not be inserting those entries, or treating them as
> kernel linear addresses in the first place. So fix get_region_id() to
> detect addresses in that range and return an invalid region id, which
> we can use to not insert an SLB entry and to directly report an
> exception.
> 
> Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range")
> Signed-off-by: Aneesh Kumar K.V 
> [mpe: Drop change to EA_MASK for now, rewrite change log]
> Signed-off-by: Michael Ellerman 

Applied to powerpc fixes.

https://git.kernel.org/powerpc/c/c179976cf4cbd2e65f29741d5bc07ccf

cheers


Re: [PATCH v2] powerpc/mm/hash: Fix get_region_id() for invalid addresses

2019-05-18 Thread Michael Ellerman
Nicholas Piggin  writes:
> Michael Ellerman's message of May 17, 2019 11:29 pm:
>> [ ... quoted changelog snipped ... ]
>
> Looks good to me.

Thanks for reviewing.

cheers


Re: [PATCH v2] powerpc/mm/hash: Fix get_region_id() for invalid addresses

2019-05-17 Thread Nicholas Piggin
Michael Ellerman's message of May 17, 2019 11:29 pm:
> [ ... quoted changelog snipped ... ]

Looks good to me.

> [ ... quoted patch snipped ... ]


[PATCH v2] powerpc/mm/hash: Fix get_region_id() for invalid addresses

2019-05-17 Thread Michael Ellerman
From: "Aneesh Kumar K.V" 

Accesses by userspace to random addresses outside the user or kernel
address range will generate an SLB fault. When we handle that fault we
classify the effective address into several classes, eg. user, kernel
linear, kernel virtual etc.

For addresses that are completely outside of any valid range, we
should not insert an SLB entry at all, and instead immediately take an
exception.

In the past this was handled in two ways. Firstly we would check the
top nibble of the address (using REGION_ID(ea)) and that would tell us
if the address was user (0), kernel linear (c), kernel virtual (d), or
vmemmap (f). If the address didn't match any of these it was invalid.

Then for each type of address we would do a secondary check. For the
user region we would check against H_PGTABLE_RANGE; for kernel linear
we would mask the top nibble of the address and then check the result
against MAX_PHYSMEM_BITS.

As part of commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel
regions in the same 0xc range") we replaced REGION_ID() with
get_region_id() and changed the masking of the top nibble to only mask
the top two bits, which introduced a bug.

Addresses less than (4 << 60) are still handled correctly, they are
either less than (1 << 60) in which case they are subject to the
H_PGTABLE_RANGE check, or they are correctly checked against
MAX_PHYSMEM_BITS.

However addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly
treated as kernel linear addresses in get_region_id(). Then the top
two bits are cleared by EA_MASK in slb_allocate_kernel() and the
address is checked against MAX_PHYSMEM_BITS, which it passes due to
the masking. The end result is we incorrectly insert SLB entries for
those addresses.

That is not actually catastrophic: having inserted the SLB entry we
will then go on to take a page fault for the address, and at that
point we detect the problem and report it as a bad fault.

Still we should not be inserting those entries, or treating them as
kernel linear addresses in the first place. So fix get_region_id() to
detect addresses in that range and return an invalid region id, which
we can use to not insert an SLB entry and to directly report an
exception.

Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range")
Signed-off-by: Aneesh Kumar K.V 
[mpe: Drop change to EA_MASK for now, rewrite change log]
Signed-off-by: Michael Ellerman 
---
 arch/powerpc/include/asm/book3s/64/hash.h | 4 ++++
 1 file changed, 4 insertions(+)

v2: Drop change to EA_MASK for now, rewrite change log.

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 5486087e64ea..2781ebf6add4 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -93,6 +93,7 @@
 #define VMALLOC_REGION_ID  NON_LINEAR_REGION_ID(H_VMALLOC_START)
 #define IO_REGION_ID   NON_LINEAR_REGION_ID(H_KERN_IO_START)
 #define VMEMMAP_REGION_ID  NON_LINEAR_REGION_ID(H_VMEMMAP_START)
+#define INVALID_REGION_ID  (VMEMMAP_REGION_ID + 1)
 
 /*
  * Defines the address of the vmemap area, in its own region on
@@ -119,6 +120,9 @@ static inline int get_region_id(unsigned long ea)
if (id == 0)
return USER_REGION_ID;
 
+   if (id != (PAGE_OFFSET >> 60))
+   return INVALID_REGION_ID;
+
if (ea < H_KERN_VIRT_START)
return LINEAR_MAP_REGION_ID;
 
-- 
2.20.1