Re: [kernel-hardening] [PATCH v5 06/10] arm64/mm: Disable section mappings if XPFO is enabled
On Mon, Aug 14, 2017 at 11:42:45AM -0700, Laura Abbott wrote:
> On 08/14/2017 09:22 AM, Tycho Andersen wrote:
> > On Sat, Aug 12, 2017 at 12:17:34PM +0100, Mark Rutland wrote:
> >> Hi,
> >>
> >> On Fri, Aug 11, 2017 at 03:13:02PM -0600, Tycho Andersen wrote:
> >>> On Fri, Aug 11, 2017 at 10:25:14AM -0700, Laura Abbott wrote:
> >>>> On 08/09/2017 01:07 PM, Tycho Andersen wrote:
> >>>>> @@ -190,7 +202,7 @@ static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> >>>>>  		next = pmd_addr_end(addr, end);
> >>>>>
> >>>>>  		/* try section mapping first */
> >>>>> -		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
> >>>>> +		if (use_section_mapping(addr, next, phys) &&
> >>>>>  		    (flags & NO_BLOCK_MAPPINGS) == 0) {
> >>>>>  			pmd_set_huge(pmd, phys, prot);
> >>>>
> >>>> There is already similar logic to disable section mappings for
> >>>> debug_pagealloc at the start of map_mem, can you take advantage
> >>>> of that?
> >>>
> >>> You're suggesting something like this instead? Seems to work fine.
> >>>
> >>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> >>> index 38026b3ccb46..3b2c17bbbf12 100644
> >>> --- a/arch/arm64/mm/mmu.c
> >>> +++ b/arch/arm64/mm/mmu.c
> >>> @@ -434,6 +434,8 @@ static void __init map_mem(pgd_t *pgd)
> >>>
> >>>  	if (debug_pagealloc_enabled())
> >>>  		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> >>> +	if (IS_ENABLED(CONFIG_XPFO))
> >>> +		flags |= NO_BLOCK_MAPPINGS;
> >>>
> >>
> >> IIUC, XPFO carves out individual pages just like DEBUG_PAGEALLOC, so you'll
> >> also need NO_CONT_MAPPINGS.
> >
> > Yes, thanks!
> >
> > Tycho
>
> Setting NO_CONT_MAPPINGS fixes the TLB conflict aborts I was seeing
> on my machine.

Great, thanks for testing! I've also fixed the lookup_page_ext bug you
noted in the other thread.

Tycho
Re: [kernel-hardening] [PATCH v5 06/10] arm64/mm: Disable section mappings if XPFO is enabled
On 08/14/2017 09:22 AM, Tycho Andersen wrote:
> On Sat, Aug 12, 2017 at 12:17:34PM +0100, Mark Rutland wrote:
>> Hi,
>>
>> On Fri, Aug 11, 2017 at 03:13:02PM -0600, Tycho Andersen wrote:
>>> On Fri, Aug 11, 2017 at 10:25:14AM -0700, Laura Abbott wrote:
>>>> On 08/09/2017 01:07 PM, Tycho Andersen wrote:
>>>>> @@ -190,7 +202,7 @@ static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
>>>>>  		next = pmd_addr_end(addr, end);
>>>>>
>>>>>  		/* try section mapping first */
>>>>> -		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
>>>>> +		if (use_section_mapping(addr, next, phys) &&
>>>>>  		    (flags & NO_BLOCK_MAPPINGS) == 0) {
>>>>>  			pmd_set_huge(pmd, phys, prot);
>>>>
>>>> There is already similar logic to disable section mappings for
>>>> debug_pagealloc at the start of map_mem, can you take advantage
>>>> of that?
>>>
>>> You're suggesting something like this instead? Seems to work fine.
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 38026b3ccb46..3b2c17bbbf12 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -434,6 +434,8 @@ static void __init map_mem(pgd_t *pgd)
>>>
>>>  	if (debug_pagealloc_enabled())
>>>  		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>> +	if (IS_ENABLED(CONFIG_XPFO))
>>> +		flags |= NO_BLOCK_MAPPINGS;
>>>
>>
>> IIUC, XPFO carves out individual pages just like DEBUG_PAGEALLOC, so you'll
>> also need NO_CONT_MAPPINGS.
>
> Yes, thanks!
>
> Tycho

Setting NO_CONT_MAPPINGS fixes the TLB conflict aborts I was seeing
on my machine.

Thanks,
Laura
Re: [kernel-hardening] [PATCH v5 06/10] arm64/mm: Disable section mappings if XPFO is enabled
On Sat, Aug 12, 2017 at 12:17:34PM +0100, Mark Rutland wrote:
> Hi,
>
> On Fri, Aug 11, 2017 at 03:13:02PM -0600, Tycho Andersen wrote:
> > On Fri, Aug 11, 2017 at 10:25:14AM -0700, Laura Abbott wrote:
> > > On 08/09/2017 01:07 PM, Tycho Andersen wrote:
> > > > @@ -190,7 +202,7 @@ static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> > > >  		next = pmd_addr_end(addr, end);
> > > >
> > > >  		/* try section mapping first */
> > > > -		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
> > > > +		if (use_section_mapping(addr, next, phys) &&
> > > >  		    (flags & NO_BLOCK_MAPPINGS) == 0) {
> > > >  			pmd_set_huge(pmd, phys, prot);
> > >
> > > There is already similar logic to disable section mappings for
> > > debug_pagealloc at the start of map_mem, can you take advantage
> > > of that?
> >
> > You're suggesting something like this instead? Seems to work fine.
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index 38026b3ccb46..3b2c17bbbf12 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -434,6 +434,8 @@ static void __init map_mem(pgd_t *pgd)
> >
> >  	if (debug_pagealloc_enabled())
> >  		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> > +	if (IS_ENABLED(CONFIG_XPFO))
> > +		flags |= NO_BLOCK_MAPPINGS;
> >
>
> IIUC, XPFO carves out individual pages just like DEBUG_PAGEALLOC, so you'll
> also need NO_CONT_MAPPINGS.

Yes, thanks!

Tycho
Re: [kernel-hardening] [PATCH v5 06/10] arm64/mm: Disable section mappings if XPFO is enabled
Hi,

On Fri, Aug 11, 2017 at 03:13:02PM -0600, Tycho Andersen wrote:
> On Fri, Aug 11, 2017 at 10:25:14AM -0700, Laura Abbott wrote:
> > On 08/09/2017 01:07 PM, Tycho Andersen wrote:
> > > @@ -190,7 +202,7 @@ static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> > >  		next = pmd_addr_end(addr, end);
> > >
> > >  		/* try section mapping first */
> > > -		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
> > > +		if (use_section_mapping(addr, next, phys) &&
> > >  		    (flags & NO_BLOCK_MAPPINGS) == 0) {
> > >  			pmd_set_huge(pmd, phys, prot);
> >
> > There is already similar logic to disable section mappings for
> > debug_pagealloc at the start of map_mem, can you take advantage
> > of that?
>
> You're suggesting something like this instead? Seems to work fine.
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 38026b3ccb46..3b2c17bbbf12 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -434,6 +434,8 @@ static void __init map_mem(pgd_t *pgd)
>
>  	if (debug_pagealloc_enabled())
>  		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> +	if (IS_ENABLED(CONFIG_XPFO))
> +	 	flags |= NO_BLOCK_MAPPINGS;
>

IIUC, XPFO carves out individual pages just like DEBUG_PAGEALLOC, so you'll
also need NO_CONT_MAPPINGS.

Thanks,
Mark.
Re: [kernel-hardening] [PATCH v5 06/10] arm64/mm: Disable section mappings if XPFO is enabled
On Fri, Aug 11, 2017 at 03:13:02PM -0600, Tycho Andersen wrote:
> You're suggesting something like this instead? Seems to work fine.

And in fact, using this patch instead means that booting on 4k pages
works too... I guess because NO_BLOCK_MAPPINGS is looked at in a few
other places that matter too?

Anyway, I'll use this patch instead, thanks for the suggestion!

Tycho
Re: [kernel-hardening] [PATCH v5 06/10] arm64/mm: Disable section mappings if XPFO is enabled
Hi Laura,

On Fri, Aug 11, 2017 at 10:25:14AM -0700, Laura Abbott wrote:
> On 08/09/2017 01:07 PM, Tycho Andersen wrote:
> > From: Juerg Haefliger
> >
> > XPFO (eXclusive Page Frame Ownership) doesn't support section mappings
> > yet, so disable it if XPFO is turned on.
> >
> > Signed-off-by: Juerg Haefliger
> > Tested-by: Tycho Andersen
> > ---
> >  arch/arm64/mm/mmu.c | 14 +-
> >  1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index f1eb15e0e864..38026b3ccb46 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -176,6 +176,18 @@ static void alloc_init_cont_pte(pmd_t *pmd, unsigned long addr,
> >  	} while (addr = next, addr != end);
> >  }
> >
> > +static inline bool use_section_mapping(unsigned long addr, unsigned long next,
> > +				       unsigned long phys)
> > +{
> > +	if (IS_ENABLED(CONFIG_XPFO))
> > +		return false;
> > +
> > +	if (((addr | next | phys) & ~SECTION_MASK) != 0)
> > +		return false;
> > +
> > +	return true;
> > +}
> > +
> >  static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> >  		     phys_addr_t phys, pgprot_t prot,
> >  		     phys_addr_t (*pgtable_alloc)(void), int flags)
> > @@ -190,7 +202,7 @@ static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> >  		next = pmd_addr_end(addr, end);
> >
> >  		/* try section mapping first */
> > -		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
> > +		if (use_section_mapping(addr, next, phys) &&
> >  		    (flags & NO_BLOCK_MAPPINGS) == 0) {
> >  			pmd_set_huge(pmd, phys, prot);
> >
>
> There is already similar logic to disable section mappings for
> debug_pagealloc at the start of map_mem, can you take advantage
> of that?

You're suggesting something like this instead? Seems to work fine.

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 38026b3ccb46..3b2c17bbbf12 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -434,6 +434,8 @@ static void __init map_mem(pgd_t *pgd)

 	if (debug_pagealloc_enabled())
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+	if (IS_ENABLED(CONFIG_XPFO))
+		flags |= NO_BLOCK_MAPPINGS;

 	/*
 	 * Take care not to create a writable alias for the

Cheers,

Tycho
Re: [kernel-hardening] [PATCH v5 06/10] arm64/mm: Disable section mappings if XPFO is enabled
On 08/09/2017 01:07 PM, Tycho Andersen wrote:
> From: Juerg Haefliger
>
> XPFO (eXclusive Page Frame Ownership) doesn't support section mappings
> yet, so disable it if XPFO is turned on.
>
> Signed-off-by: Juerg Haefliger
> Tested-by: Tycho Andersen
> ---
>  arch/arm64/mm/mmu.c | 14 +-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index f1eb15e0e864..38026b3ccb46 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -176,6 +176,18 @@ static void alloc_init_cont_pte(pmd_t *pmd, unsigned long addr,
>  	} while (addr = next, addr != end);
>  }
>
> +static inline bool use_section_mapping(unsigned long addr, unsigned long next,
> +				       unsigned long phys)
> +{
> +	if (IS_ENABLED(CONFIG_XPFO))
> +		return false;
> +
> +	if (((addr | next | phys) & ~SECTION_MASK) != 0)
> +		return false;
> +
> +	return true;
> +}
> +
>  static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
>  		     phys_addr_t phys, pgprot_t prot,
>  		     phys_addr_t (*pgtable_alloc)(void), int flags)
> @@ -190,7 +202,7 @@ static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
>  		next = pmd_addr_end(addr, end);
>
>  		/* try section mapping first */
> -		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
> +		if (use_section_mapping(addr, next, phys) &&
>  		    (flags & NO_BLOCK_MAPPINGS) == 0) {
>  			pmd_set_huge(pmd, phys, prot);
>

There is already similar logic to disable section mappings for
debug_pagealloc at the start of map_mem, can you take advantage
of that?

Thanks,
Laura
[PATCH v5 06/10] arm64/mm: Disable section mappings if XPFO is enabled
From: Juerg Haefliger

XPFO (eXclusive Page Frame Ownership) doesn't support section mappings
yet, so disable it if XPFO is turned on.

Signed-off-by: Juerg Haefliger
Tested-by: Tycho Andersen
---
 arch/arm64/mm/mmu.c | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f1eb15e0e864..38026b3ccb46 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -176,6 +176,18 @@ static void alloc_init_cont_pte(pmd_t *pmd, unsigned long addr,
 	} while (addr = next, addr != end);
 }

+static inline bool use_section_mapping(unsigned long addr, unsigned long next,
+				       unsigned long phys)
+{
+	if (IS_ENABLED(CONFIG_XPFO))
+		return false;
+
+	if (((addr | next | phys) & ~SECTION_MASK) != 0)
+		return false;
+
+	return true;
+}
+
 static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 		     phys_addr_t phys, pgprot_t prot,
 		     phys_addr_t (*pgtable_alloc)(void), int flags)
@@ -190,7 +202,7 @@ static void init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
 		next = pmd_addr_end(addr, end);

 		/* try section mapping first */
-		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
+		if (use_section_mapping(addr, next, phys) &&
 		    (flags & NO_BLOCK_MAPPINGS) == 0) {
 			pmd_set_huge(pmd, phys, prot);

--
2.11.0