Re: [PATCH] powerpc/mm/hash: Improve address limit checks

2019-05-21 Thread Michael Ellerman
Michael Ellerman  writes:
> On Thu, 2019-05-16 at 11:50:54 UTC, "Aneesh Kumar K.V" wrote:
>> Different parts of the code do the limit check by ignoring the top nibble
>> of the EA, i.e. we do checks like
>> 
>>  if ((ea & EA_MASK)  >= H_PGTABLE_RANGE)
>>  error
>> 
>> This patch makes sure we don't insert SLB entries for addresses whose top
>> nibble doesn't match the ignored bits.
>> 
>> With an address like 0x4800, we can end up with wrong SLB entries like
>> 
>> 13 4800 400ea1b217000510   1T ESID=   40 VSID=   ea1b217000 LLP:110
>> 
>> Without this patch we will map that EA with LINEAR_MAP_REGION_ID, and the
>> subsequent address limit checks will return false.
>> 
>> Signed-off-by: Aneesh Kumar K.V 
>
> Applied to powerpc fixes, thanks.
>
> https://git.kernel.org/powerpc/c/c179976cf4cbd2e65f29741d5bc07ccf

Actually this patch was superseded. This should have been a reply to:

  
https://lore.kernel.org/linuxppc-dev/20190517132958.22299-1-...@ellerman.id.au/

cheers


Re: [PATCH] powerpc/mm/hash: Improve address limit checks

2019-05-18 Thread Michael Ellerman
On Thu, 2019-05-16 at 11:50:54 UTC, "Aneesh Kumar K.V" wrote:
> Different parts of the code do the limit check by ignoring the top nibble
> of the EA, i.e. we do checks like
> 
>   if ((ea & EA_MASK)  >= H_PGTABLE_RANGE)
>   error
> 
> This patch makes sure we don't insert SLB entries for addresses whose top
> nibble doesn't match the ignored bits.
> 
> With an address like 0x4800, we can end up with wrong SLB entries like
> 
> 13 4800 400ea1b217000510   1T ESID=   40 VSID=   ea1b217000 LLP:110
> 
> Without this patch we will map that EA with LINEAR_MAP_REGION_ID, and the
> subsequent address limit checks will return false.
> 
> Signed-off-by: Aneesh Kumar K.V 

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/c179976cf4cbd2e65f29741d5bc07ccf

cheers


[PATCH] powerpc/mm/hash: Improve address limit checks

2019-05-16 Thread Aneesh Kumar K.V
Different parts of the code do the limit check by ignoring the top nibble
of the EA, i.e. we do checks like

if ((ea & EA_MASK)  >= H_PGTABLE_RANGE)
error

This patch makes sure we don't insert SLB entries for addresses whose top nibble
doesn't match the ignored bits.

With an address like 0x4800, we can end up with wrong SLB entries like

13 4800 400ea1b217000510   1T ESID=   40 VSID=   ea1b217000 LLP:110

Without this patch we will map that EA with LINEAR_MAP_REGION_ID, and the
subsequent address limit checks will return false.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/include/asm/book3s/64/hash.h | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 5486087e64ea..1060fadb4a56 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -29,10 +29,8 @@
 #define H_PGTABLE_EADDR_SIZE	(H_PTE_INDEX_SIZE + H_PMD_INDEX_SIZE + \
				 H_PUD_INDEX_SIZE + H_PGD_INDEX_SIZE + PAGE_SHIFT)
 #define H_PGTABLE_RANGE	(ASM_CONST(1) << H_PGTABLE_EADDR_SIZE)
-/*
- * Top 2 bits are ignored in page table walk.
- */
-#define EA_MASK		(~(0xcUL << 60))
+
+#define EA_MASK		(~PAGE_OFFSET)
 
 /*
  * We store the slot details in the second half of page table.
@@ -93,6 +91,7 @@
 #define VMALLOC_REGION_ID	NON_LINEAR_REGION_ID(H_VMALLOC_START)
 #define IO_REGION_ID		NON_LINEAR_REGION_ID(H_KERN_IO_START)
 #define VMEMMAP_REGION_ID	NON_LINEAR_REGION_ID(H_VMEMMAP_START)
+#define INVALID_REGION_ID	(VMEMMAP_REGION_ID + 1)
 
 /*
  * Defines the address of the vmemap area, in its own region on
@@ -119,6 +118,9 @@ static inline int get_region_id(unsigned long ea)
 	if (id == 0)
 		return USER_REGION_ID;
 
+	if (id != (PAGE_OFFSET >> 60))
+		return INVALID_REGION_ID;
+
 	if (ea < H_KERN_VIRT_START)
 		return LINEAR_MAP_REGION_ID;
 
-- 
2.21.0
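

[Editorial note: below is a minimal userspace sketch of the behaviour the
get_region_id() hunk above changes, for readers without the source handy.
It is not the real arch/powerpc code: PAGE_OFFSET is taken as the usual
book3s64 value of 0xc000000000000000 (which is also why ~PAGE_OFFSET and
~(0xcUL << 60) describe the same mask), while H_KERN_VIRT_START, the region
id values, and the example EA with a 0x4 top nibble are simplified,
illustrative assumptions rather than the exact values from the report.
The point it demonstrates: before the added check, any EA below
H_KERN_VIRT_START with a non-zero top nibble fell through to
LINEAR_MAP_REGION_ID; with the check, only the PAGE_OFFSET nibble (0xc)
is accepted and everything else is flagged INVALID_REGION_ID.]

/*
 * Userspace model only -- simplified constants, not the real
 * arch/powerpc definitions.  Region id values and H_KERN_VIRT_START
 * below are illustrative assumptions.
 */
#include <stdio.h>

#define PAGE_OFFSET           0xc000000000000000UL
#define H_KERN_VIRT_START     0xc008000000000000UL   /* placeholder value */

#define USER_REGION_ID        0
#define LINEAR_MAP_REGION_ID  1
#define NON_LINEAR_REGION_ID  2   /* stands in for vmalloc/io/vmemmap ids */
#define INVALID_REGION_ID     3   /* what the patch introduces */

static int get_region_id(unsigned long ea)
{
	int id = ea >> 60UL;

	if (id == 0)
		return USER_REGION_ID;

	/* The added check: only the PAGE_OFFSET top nibble (0xc) is valid. */
	if (id != (PAGE_OFFSET >> 60))
		return INVALID_REGION_ID;

	if (ea < H_KERN_VIRT_START)
		return LINEAR_MAP_REGION_ID;

	return NON_LINEAR_REGION_ID;
}

int main(void)
{
	/*
	 * Arbitrary EA with a 0x4 top nibble, in the spirit of the report.
	 * Without the added check it would be classified as
	 * LINEAR_MAP_REGION_ID, since id != 0 and ea < H_KERN_VIRT_START.
	 */
	unsigned long bad_ea = 0x4800000000000000UL;

	printf("region id for %#lx: %d\n", bad_ea, get_region_id(bad_ea));		/* 3 */
	printf("region id for %#lx: %d\n", PAGE_OFFSET, get_region_id(PAGE_OFFSET));	/* 1 */
	return 0;
}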