Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

2019-07-02 Thread Kees Cook
On Fri, May 10, 2019 at 01:43:36PM -0700, Kees Cook wrote:
> This feature continues to cause more problems than it solves[1]. Its
> intention was to check the bounds of page-allocator allocations by using
> __GFP_COMP, for which we would need to find all missing __GFP_COMP
> markings. This work has been on hold and there is an argument[2]
> that such markings are not even the correct signal for checking for
> same-allocation pages. Instead of depending on BROKEN, this just removes
> it entirely. It can be trivially reverted if/when a better solution for
> tracking page allocator sizes is found.
> 
> [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> [2] https://lkml.kernel.org/r/20190415022412.ga29...@bombadil.infradead.org
> 
> Suggested-by: Eric Biggers 
> Signed-off-by: Kees Cook 

So, after looking at this more, I think I'm going to keep this patch,
and we can add new sanity checks on a per-page-flag basis. (See below.)

Andrew, can you apply this to -mm please?

> ---
>  mm/usercopy.c| 67 
>  security/Kconfig | 11 
>  2 files changed, 78 deletions(-)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 14faadcedd06..15dc1bf03303 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
>   usercopy_abort("null address", NULL, to_user, ptr, n);
>  }
>  
> -/* Checks for allocs that are marked in some way as spanning multiple pages. */
> -static inline void check_page_span(const void *ptr, unsigned long n,
> -struct page *page, bool to_user)
> -{
> -#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
> - const void *end = ptr + n - 1;
> - struct page *endpage;
> - bool is_reserved, is_cma;
> -
> - /*
> -  * Sometimes the kernel data regions are not marked Reserved (see
> -  * check below). And sometimes [_sdata,_edata) does not cover
> -  * rodata and/or bss, so check each range explicitly.
> -  */
> -
> - /* Allow reads of kernel rodata region (if not marked as Reserved). */
> - if (ptr >= (const void *)__start_rodata &&
> - end <= (const void *)__end_rodata) {
> - if (!to_user)
> - usercopy_abort("rodata", NULL, to_user, 0, n);
> - return;
> - }
> -
> - /* Allow kernel data region (if not marked as Reserved). */
> - if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> - return;
> -
> - /* Allow kernel bss region (if not marked as Reserved). */
> - if (ptr >= (const void *)__bss_start &&
> - end <= (const void *)__bss_stop)
> - return;
> -
> - /* Is the object wholly within one base page? */
> - if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> -((unsigned long)end & (unsigned long)PAGE_MASK)))
> - return;
> -
> - /* Allow if fully inside the same compound (__GFP_COMP) page. */
> - endpage = virt_to_head_page(end);
> - if (likely(endpage == page))
> - return;
> -
> - /*
> -  * Reject if range is entirely either Reserved (i.e. special or
> -  * device memory), or CMA. Otherwise, reject since the object spans
> -  * several independently allocated pages.
> -  */
> - is_reserved = PageReserved(page);
> - is_cma = is_migrate_cma_page(page);
> - if (!is_reserved && !is_cma)
> - usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> -
> - for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> - page = virt_to_head_page(ptr);
> - if (is_reserved && !PageReserved(page))
> - usercopy_abort("spans Reserved and non-Reserved pages",
> -NULL, to_user, 0, n);
> - if (is_cma && !is_migrate_cma_page(page))
> - usercopy_abort("spans CMA and non-CMA pages", NULL,
> -to_user, 0, n);
> - }
> -#endif
> -}
> -
>  static inline void check_heap_object(const void *ptr, unsigned long n,
>bool to_user)
>  {
> @@ -236,9 +172,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>   if (PageSlab(page)) {
>   /* Check slab allocator for flags and size. */
>   __check_heap_object(ptr, n, page, to_user);
> - } else {
> - /* Verify object does not incorrectly span multiple pages. */
> - check_page_span(ptr, n, page, to_user);
>   }

In the future, instead of this catch-all "else", we can add things like:

	} else if (PageCompound(page)) {
		... do some check for compound pages ...
	} else if (PageReserved(page)) {
		... etc ...
	}
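
As a rough illustration (an untested sketch, not part of the patch),
such a compound-page branch could bound the copy using helpers that
already exist, compound_order() and page_address():

	} else if (PageCompound(page)) {
		/*
		 * Sketch only: reject copies that would run past the
		 * end of the compound page. "page" is already the head
		 * page from virt_to_head_page().
		 */
		const void *page_end = page_address(page) +
				       (PAGE_SIZE << compound_order(page));

		if (ptr + n > page_end)
			usercopy_abort("compound page overflow", NULL,
				       to_user, 0, n);
	}

Whether compound pages are the right signal at all is exactly the open
question above, so this shows only the shape of such a check.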

But for 5.3, I think we need to just entirely drop the PAGESPAN thing.

-Kees

>  }
>

Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

2019-06-10 Thread Kees Cook
On Mon, Jun 10, 2019 at 03:30:55PM -0700, Eric Biggers wrote:
> Any progress on this patch?

I have not had time yet; sorry. If anyone else would like to take a stab
at it, I'd appreciate it. :)

-- 
Kees Cook


Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

2019-06-10 Thread Eric Biggers
On Mon, May 13, 2019 at 02:32:43PM -0700, Kees Cook wrote:
> On Sat, May 11, 2019 at 09:11:42PM -0700, Matthew Wilcox wrote:
> > On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
> > > On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> > > > On 5/10/19 3:43 PM, Kees Cook wrote:
> > > > > This feature continues to cause more problems than it solves[1]. Its
> > > > > intention was to check the bounds of page-allocator allocations by using
> > > > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > > > > markings. This work has been on hold and there is an argument[2]
> > > > > that such markings are not even the correct signal for checking for
> > > > > same-allocation pages. Instead of depending on BROKEN, this just removes
> > > > > it entirely. It can be trivially reverted if/when a better solution for
> > > > > tracking page allocator sizes is found.
> > > > > 
> > > > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > > > > [2] https://lkml.kernel.org/r/20190415022412.ga29...@bombadil.infradead.org
> > > > 
> > > > I agree the page spanning is broken, but is it worth keeping the
> > > > checks against __rodata, __bss, etc.?
> > > 
> > > They're all just white-listing later checks (except RODATA, which is
> > > doing a cheap RO test that is redundant on any architecture with actual
> > > rodata...), so they don't have any value in staying without the rest of
> > > the page allocator logic.
> > > 
> > > > > - /* Is the object wholly within one base page? */
> > > > > - if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > > > -((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > > > - return;
> > > > > -
> > > > > - /* Allow if fully inside the same compound (__GFP_COMP) page. */
> > > > > - endpage = virt_to_head_page(end);
> > > > > - if (likely(endpage == page))
> > > > > - return;
> > > 
> > > We _could_ keep the mixed CMA/reserved/neither check if we really wanted
> > > to, but that's such a corner case of a corner case, I'm not sure it's
> > > worth doing the virt_to_head_page() across the whole span to figure
> > > it out.
> > 
> > I'd delete that first check, because it's a subset of the second check,
> 
> It seemed easier to short-circuit with a math test before doing the
> slightly more expensive virt_to_head_page(end) call. Do you think
> that's sensible?
> 
> > 
> > 	/* Is the object wholly within a single (base or compound) page? */
> > 	endpage = virt_to_head_page(end);
> > 	if (likely(endpage == page))
> > 		return;
> > 
> > 	/*
> > 	 * If the start and end are more than MAX_ORDER apart, they must
> > 	 * be from separate allocations
> > 	 */
> > 	if (n >= (PAGE_SIZE << MAX_ORDER))
> > 		usercopy_abort("spans too many pages", NULL, to_user, 0, n);
> > 
> > 	/*
> > 	 * If neither page is compound, we can't tell if the object is
> > 	 * within a single allocation or not
> > 	 */
> > 	if (!PageCompound(page) && !PageCompound(endpage))
> > 		return;
> > 
> > > I really wish we had the size of the allocation reliably held somewhere.
> > > We'll need it for doing memory tagging of the page allocator too...
> > 
> > I think we'll need to store those allocations in a separate data structure
> > on the side.  As far as the rest of the kernel is concerned, those struct
> > pages belong to them once the page allocator has given them out.
> 
> Okay, let me work up a page-type refactoring while allocation size can
> stay back-burnered.
> 
> -- 
> Kees Cook

Any progress on this patch?

- Eric


Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

2019-05-13 Thread Kees Cook
On Sat, May 11, 2019 at 09:11:42PM -0700, Matthew Wilcox wrote:
> On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
> > On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> > > On 5/10/19 3:43 PM, Kees Cook wrote:
> > > > This feature continues to cause more problems than it solves[1]. Its
> > > > intention was to check the bounds of page-allocator allocations by using
> > > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > > > markings. This work has been on hold and there is an argument[2]
> > > > that such markings are not even the correct signal for checking for
> > > > same-allocation pages. Instead of depending on BROKEN, this just removes
> > > > it entirely. It can be trivially reverted if/when a better solution for
> > > > tracking page allocator sizes is found.
> > > > 
> > > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > > > [2] https://lkml.kernel.org/r/20190415022412.ga29...@bombadil.infradead.org
> > > 
> > > I agree the page spanning is broken, but is it worth keeping the
> > > checks against __rodata, __bss, etc.?
> > 
> > They're all just white-listing later checks (except RODATA, which is
> > doing a cheap RO test that is redundant on any architecture with actual
> > rodata...), so they don't have any value in staying without the rest of
> > the page allocator logic.
> > 
> > > > -   /* Is the object wholly within one base page? */
> > > > -   if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > > -  ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > > -   return;
> > > > -
> > > > -   /* Allow if fully inside the same compound (__GFP_COMP) page. */
> > > > -   endpage = virt_to_head_page(end);
> > > > -   if (likely(endpage == page))
> > > > -   return;
> > 
> > We _could_ keep the mixed CMA/reserved/neither check if we really wanted
> > to, but that's such a corner case of a corner case, I'm not sure it's
> > worth doing the virt_to_head_page() across the whole span to figure
> > it out.
> 
> I'd delete that first check, because it's a subset of the second check,

It seemed easier to short-circuit with a math test before doing the
slightly more expensive virt_to_head_page(end) call. Do you think
that's sensible?
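
Concretely, the cheap test would sit just ahead of the lookup, e.g.
(untested sketch, mirroring the removed check_page_span() lines):

	/* Cheap short-circuit: start and end within the same base page. */
	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
		return;

	/* Only pay for the head-page lookup when the base pages differ. */
	endpage = virt_to_head_page(end);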

> 
> 	/* Is the object wholly within a single (base or compound) page? */
> 	endpage = virt_to_head_page(end);
> 	if (likely(endpage == page))
> 		return;
> 
> 	/*
> 	 * If the start and end are more than MAX_ORDER apart, they must
> 	 * be from separate allocations
> 	 */
> 	if (n >= (PAGE_SIZE << MAX_ORDER))
> 		usercopy_abort("spans too many pages", NULL, to_user, 0, n);
> 
> 	/*
> 	 * If neither page is compound, we can't tell if the object is
> 	 * within a single allocation or not
> 	 */
> 	if (!PageCompound(page) && !PageCompound(endpage))
> 		return;
> 
> > I really wish we had the size of the allocation reliably held somewhere.
> > We'll need it for doing memory tagging of the page allocator too...
> 
> I think we'll need to store those allocations in a separate data structure
> on the side.  As far as the rest of the kernel is concerned, those struct
> pages belong to them once the page allocator has given them out.

Okay, let me work up a page-type refactoring while allocation size can
stay back-burnered.

-- 
Kees Cook


Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

2019-05-11 Thread Matthew Wilcox
On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
> On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> > On 5/10/19 3:43 PM, Kees Cook wrote:
> > > This feature continues to cause more problems than it solves[1]. Its
> > > intention was to check the bounds of page-allocator allocations by using
> > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > > markings. This work has been on hold and there is an argument[2]
> > > that such markings are not even the correct signal for checking for
> > > same-allocation pages. Instead of depending on BROKEN, this just removes
> > > it entirely. It can be trivially reverted if/when a better solution for
> > > tracking page allocator sizes is found.
> > > 
> > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > > [2] https://lkml.kernel.org/r/20190415022412.ga29...@bombadil.infradead.org
> > 
> > I agree the page spanning is broken, but is it worth keeping the
> > checks against __rodata, __bss, etc.?
> 
> They're all just white-listing later checks (except RODATA, which is
> doing a cheap RO test that is redundant on any architecture with actual
> rodata...), so they don't have any value in staying without the rest of
> the page allocator logic.
> 
> > > - /* Is the object wholly within one base page? */
> > > - if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > -((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > - return;
> > > -
> > > - /* Allow if fully inside the same compound (__GFP_COMP) page. */
> > > - endpage = virt_to_head_page(end);
> > > - if (likely(endpage == page))
> > > - return;
> 
> We _could_ keep the mixed CMA/reserved/neither check if we really wanted
> to, but that's such a corner case of a corner case, I'm not sure it's
> worth doing the virt_to_head_page() across the whole span to figure
> it out.

I'd delete that first check, because it's a subset of the second check,

	/* Is the object wholly within a single (base or compound) page? */
	endpage = virt_to_head_page(end);
	if (likely(endpage == page))
		return;

	/*
	 * If the start and end are more than MAX_ORDER apart, they must
	 * be from separate allocations
	 */
	if (n >= (PAGE_SIZE << MAX_ORDER))
		usercopy_abort("spans too many pages", NULL, to_user, 0, n);

	/*
	 * If neither page is compound, we can't tell if the object is
	 * within a single allocation or not
	 */
	if (!PageCompound(page) && !PageCompound(endpage))
		return;
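
For scale: with the common x86-64 defaults of 4 KiB pages and
MAX_ORDER == 11 (an assumption, not stated in the thread), the largest
buddy allocation is order MAX_ORDER - 1, i.e. 4096 << 10 = 4 MiB, while
the bound above fires at n >= 4096 << 11 = 8 MiB, so anything it
rejects can never be a single allocation.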

> I really wish we had the size of the allocation reliably held somewhere.
> We'll need it for doing memory tagging of the page allocator too...

I think we'll need to store those allocations in a separate data structure
on the side.  As far as the rest of the kernel is concerned, those struct
pages belong to them once the page allocator has given them out.
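
One possible shape for such a side structure (an illustrative sketch;
the names and hook points are assumptions, not an existing kernel API):
an XArray keyed by the head page's PFN, recording the allocation order
at alloc time and erased at free time:

	#include <linux/xarray.h>

	static DEFINE_XARRAY(page_alloc_orders);	/* hypothetical */

	static void note_page_alloc(struct page *page, unsigned int order)
	{
		xa_store(&page_alloc_orders, page_to_pfn(page),
			 xa_mk_value(order), GFP_KERNEL);
	}

	static void note_page_free(struct page *page)
	{
		xa_erase(&page_alloc_orders, page_to_pfn(page));
	}

	/* usercopy could then bound a copy by the recorded order. */
	static unsigned int lookup_alloc_order(struct page *page)
	{
		void *entry = xa_load(&page_alloc_orders, page_to_pfn(page));

		return entry ? xa_to_value(entry) : 0;
	}

The lookup side is cheap, but the store adds bookkeeping (and a
possible failure path) to every page allocation, which is presumably
part of why this stayed back-burnered.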


Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

2019-05-11 Thread Kees Cook
On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> On 5/10/19 3:43 PM, Kees Cook wrote:
> > This feature continues to cause more problems than it solves[1]. Its
> > intention was to check the bounds of page-allocator allocations by using
> > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > markings. This work has been on hold and there is an argument[2]
> > that such markings are not even the correct signal for checking for
> > same-allocation pages. Instead of depending on BROKEN, this just removes
> > it entirely. It can be trivially reverted if/when a better solution for
> > tracking page allocator sizes is found.
> > 
> > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > [2] https://lkml.kernel.org/r/20190415022412.ga29...@bombadil.infradead.org
> > 
> > Suggested-by: Eric Biggers 
> > Signed-off-by: Kees Cook 
> > ---
> >   mm/usercopy.c| 67 
> >   security/Kconfig | 11 
> >   2 files changed, 78 deletions(-)
> > 
> > diff --git a/mm/usercopy.c b/mm/usercopy.c
> > index 14faadcedd06..15dc1bf03303 100644
> > --- a/mm/usercopy.c
> > +++ b/mm/usercopy.c
> > @@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
> > usercopy_abort("null address", NULL, to_user, ptr, n);
> >   }
> > -/* Checks for allocs that are marked in some way as spanning multiple pages. */
> > -static inline void check_page_span(const void *ptr, unsigned long n,
> > -  struct page *page, bool to_user)
> > -{
> > -#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
> > -   const void *end = ptr + n - 1;
> > -   struct page *endpage;
> > -   bool is_reserved, is_cma;
> > -
> > -   /*
> > -* Sometimes the kernel data regions are not marked Reserved (see
> > -* check below). And sometimes [_sdata,_edata) does not cover
> > -* rodata and/or bss, so check each range explicitly.
> > -*/
> > -
> > -   /* Allow reads of kernel rodata region (if not marked as Reserved). */
> > -   if (ptr >= (const void *)__start_rodata &&
> > -   end <= (const void *)__end_rodata) {
> > -   if (!to_user)
> > -   usercopy_abort("rodata", NULL, to_user, 0, n);
> > -   return;
> > -   }
> > -
> > -   /* Allow kernel data region (if not marked as Reserved). */
> > -   if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> > -   return;
> > -
> > -   /* Allow kernel bss region (if not marked as Reserved). */
> > -   if (ptr >= (const void *)__bss_start &&
> > -   end <= (const void *)__bss_stop)
> > -   return;
> > -
> 
> 
> I agree the page spanning is broken, but is it worth keeping the
> checks against __rodata, __bss, etc.?

They're all just white-listing later checks (except RODATA, which is
doing a cheap RO test that is redundant on any architecture with actual
rodata...), so they don't have any value in staying without the rest of
the page allocator logic.

> 
> > -   /* Is the object wholly within one base page? */
> > -   if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > -  ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > -   return;
> > -
> > -   /* Allow if fully inside the same compound (__GFP_COMP) page. */
> > -   endpage = virt_to_head_page(end);
> > -   if (likely(endpage == page))
> > -   return;
> > -
> > -   /*
> > -* Reject if range is entirely either Reserved (i.e. special or
> > -* device memory), or CMA. Otherwise, reject since the object spans
> > -* several independently allocated pages.
> > -*/
> > -   is_reserved = PageReserved(page);
> > -   is_cma = is_migrate_cma_page(page);
> > -   if (!is_reserved && !is_cma)
> > -   usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> > -
> > -   for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> > -   page = virt_to_head_page(ptr);
> > -   if (is_reserved && !PageReserved(page))
> > -   usercopy_abort("spans Reserved and non-Reserved pages",
> > -  NULL, to_user, 0, n);
> > -   if (is_cma && !is_migrate_cma_page(page))
> > -   usercopy_abort("spans CMA and non-CMA pages", NULL,
> > -  to_user, 0, n);
> > -   }

We _could_ keep the mixed CMA/reserved/neither check if we really wanted
to, but that's such a corner case of a corner case, I'm not sure it's
worth doing the virt_to_head_page() across the whole span to figure
it out.
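
(For a sense of that cost, simple arithmetic: with 4 KiB pages, a
64 KiB span walked at PAGE_SIZE strides is 16 virt_to_head_page()
lookups per copy.)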

I really wish we had the size of the allocation reliably held somewhere.
We'll need it for doing memory tagging of the page allocator too...

-- 
Kees Cook


Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

2019-05-10 Thread Laura Abbott

On 5/10/19 3:43 PM, Kees Cook wrote:

This feature continues to cause more problems than it solves[1]. Its
intention was to check the bounds of page-allocator allocations by using
__GFP_COMP, for which we would need to find all missing __GFP_COMP
markings. This work has been on hold and there is an argument[2]
that such markings are not even the correct signal for checking for
same-allocation pages. Instead of depending on BROKEN, this just removes
it entirely. It can be trivially reverted if/when a better solution for
tracking page allocator sizes is found.

[1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
[2] https://lkml.kernel.org/r/20190415022412.ga29...@bombadil.infradead.org

Suggested-by: Eric Biggers 
Signed-off-by: Kees Cook 
---
  mm/usercopy.c| 67 
  security/Kconfig | 11 
  2 files changed, 78 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 14faadcedd06..15dc1bf03303 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
usercopy_abort("null address", NULL, to_user, ptr, n);
  }
  
-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-  struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-   const void *end = ptr + n - 1;
-   struct page *endpage;
-   bool is_reserved, is_cma;
-
-   /*
-* Sometimes the kernel data regions are not marked Reserved (see
-* check below). And sometimes [_sdata,_edata) does not cover
-* rodata and/or bss, so check each range explicitly.
-*/
-
-   /* Allow reads of kernel rodata region (if not marked as Reserved). */
-   if (ptr >= (const void *)__start_rodata &&
-   end <= (const void *)__end_rodata) {
-   if (!to_user)
-   usercopy_abort("rodata", NULL, to_user, 0, n);
-   return;
-   }
-
-   /* Allow kernel data region (if not marked as Reserved). */
-   if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-   return;
-
-   /* Allow kernel bss region (if not marked as Reserved). */
-   if (ptr >= (const void *)__bss_start &&
-   end <= (const void *)__bss_stop)
-   return;
-



I agree the page spanning is broken, but is it worth keeping the
checks against __rodata, __bss, etc.?


-   /* Is the object wholly within one base page? */
-   if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-  ((unsigned long)end & (unsigned long)PAGE_MASK)))
-   return;
-
-   /* Allow if fully inside the same compound (__GFP_COMP) page. */
-   endpage = virt_to_head_page(end);
-   if (likely(endpage == page))
-   return;
-
-   /*
-* Reject if range is entirely either Reserved (i.e. special or
-* device memory), or CMA. Otherwise, reject since the object spans
-* several independently allocated pages.
-*/
-   is_reserved = PageReserved(page);
-   is_cma = is_migrate_cma_page(page);
-   if (!is_reserved && !is_cma)
-   usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-   for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-   page = virt_to_head_page(ptr);
-   if (is_reserved && !PageReserved(page))
-   usercopy_abort("spans Reserved and non-Reserved pages",
-  NULL, to_user, 0, n);
-   if (is_cma && !is_migrate_cma_page(page))
-   usercopy_abort("spans CMA and non-CMA pages", NULL,
-  to_user, 0, n);
-   }
-#endif
-}
-
  static inline void check_heap_object(const void *ptr, unsigned long n,
 bool to_user)
  {
@@ -236,9 +172,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
if (PageSlab(page)) {
/* Check slab allocator for flags and size. */
__check_heap_object(ptr, n, page, to_user);
-   } else {
-   /* Verify object does not incorrectly span multiple pages. */
-   check_page_span(ptr, n, page, to_user);
}
  }
  
diff --git a/security/Kconfig b/security/Kconfig
index 353cfef71d4e..8392647f5a4c 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -176,17 +176,6 @@ config HARDENED_USERCOPY_FALLBACK
  Booting with "slab_common.usercopy_fallback=Y/N" can change
  this setting.
  
-config HARDENED_USERCOPY_PAGESPAN
-   bool "Refuse to copy allocations that span multiple pages"
-   depends on HARDENED_USERCOPY
-   depends on EXPERT
-   help
- When a multi-page allocation is done with

[PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN

2019-05-10 Thread Kees Cook
This feature continues to cause more problems than it solves[1]. Its
intention was to check the bounds of page-allocator allocations by using
__GFP_COMP, for which we would need to find all missing __GFP_COMP
markings. This work has been on hold and there is an argument[2]
that such markings are not even the correct signal for checking for
same-allocation pages. Instead of depending on BROKEN, this just removes
it entirely. It can be trivially reverted if/when a better solution for
tracking page allocator sizes is found.

[1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
[2] https://lkml.kernel.org/r/20190415022412.ga29...@bombadil.infradead.org

Suggested-by: Eric Biggers 
Signed-off-by: Kees Cook 
---
 mm/usercopy.c| 67 
 security/Kconfig | 11 
 2 files changed, 78 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 14faadcedd06..15dc1bf03303 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
usercopy_abort("null address", NULL, to_user, ptr, n);
 }
 
-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-  struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-   const void *end = ptr + n - 1;
-   struct page *endpage;
-   bool is_reserved, is_cma;
-
-   /*
-* Sometimes the kernel data regions are not marked Reserved (see
-* check below). And sometimes [_sdata,_edata) does not cover
-* rodata and/or bss, so check each range explicitly.
-*/
-
-   /* Allow reads of kernel rodata region (if not marked as Reserved). */
-   if (ptr >= (const void *)__start_rodata &&
-   end <= (const void *)__end_rodata) {
-   if (!to_user)
-   usercopy_abort("rodata", NULL, to_user, 0, n);
-   return;
-   }
-
-   /* Allow kernel data region (if not marked as Reserved). */
-   if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-   return;
-
-   /* Allow kernel bss region (if not marked as Reserved). */
-   if (ptr >= (const void *)__bss_start &&
-   end <= (const void *)__bss_stop)
-   return;
-
-   /* Is the object wholly within one base page? */
-   if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-  ((unsigned long)end & (unsigned long)PAGE_MASK)))
-   return;
-
-   /* Allow if fully inside the same compound (__GFP_COMP) page. */
-   endpage = virt_to_head_page(end);
-   if (likely(endpage == page))
-   return;
-
-   /*
-* Reject if range is entirely either Reserved (i.e. special or
-* device memory), or CMA. Otherwise, reject since the object spans
-* several independently allocated pages.
-*/
-   is_reserved = PageReserved(page);
-   is_cma = is_migrate_cma_page(page);
-   if (!is_reserved && !is_cma)
-   usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-   for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-   page = virt_to_head_page(ptr);
-   if (is_reserved && !PageReserved(page))
-   usercopy_abort("spans Reserved and non-Reserved pages",
-  NULL, to_user, 0, n);
-   if (is_cma && !is_migrate_cma_page(page))
-   usercopy_abort("spans CMA and non-CMA pages", NULL,
-  to_user, 0, n);
-   }
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 bool to_user)
 {
@@ -236,9 +172,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
if (PageSlab(page)) {
/* Check slab allocator for flags and size. */
__check_heap_object(ptr, n, page, to_user);
-   } else {
-   /* Verify object does not incorrectly span multiple pages. */
-   check_page_span(ptr, n, page, to_user);
}
 }
 
diff --git a/security/Kconfig b/security/Kconfig
index 353cfef71d4e..8392647f5a4c 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -176,17 +176,6 @@ config HARDENED_USERCOPY_FALLBACK
  Booting with "slab_common.usercopy_fallback=Y/N" can change
  this setting.
 
-config HARDENED_USERCOPY_PAGESPAN
-   bool "Refuse to copy allocations that span multiple pages"
-   depends on HARDENED_USERCOPY
-   depends on EXPERT
-   help
- When a multi-page allocation is done without __GFP_COMP,
- hardened usercopy will reject attempts to copy it. There are,
- however, several cases of this in the kernel that have not