Re: [RFC][PATCH] mm/slub.c: Allow poisoning to use the fast path

2017-08-07 Thread Christopher Lameter
On Fri, 4 Aug 2017, Laura Abbott wrote:
> All slub debug features currently disable the fast path completely.
> Some features such as consistency checks require this to allow taking of
> locks. Poisoning and red zoning don't require this and can safely use
> the per-cpu fast path. Introduce a

Re: [PATCH 0/3] IPI: Avoid to use 2 cache lines for one call_single_data

2017-08-02 Thread Christopher Lameter
On Wed, 2 Aug 2017, Huang, Ying wrote:
> To allocate cache line size aligned percpu memory dynamically,
> alloc_percpu_aligned() is introduced and used in iova drivers too.

alloc_percpu() already aligns objects as specified when they are declared. Moreover the function is improperly named since

Re: [PATCH 1/3] percpu: Add alloc_percpu_aligned()

2017-08-02 Thread Christopher Lameter
On Wed, 2 Aug 2017, Huang, Ying wrote:
> --- a/include/linux/percpu.h
> +++ b/include/linux/percpu.h
> @@ -129,5 +129,8 @@ extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
> #define alloc_percpu(type) \
> (typeof(type) __percpu

Re: [v3] mm: Add SLUB free list pointer obfuscation

2017-07-27 Thread Christopher Lameter
On Fri, 28 Jul 2017, Alexander Popov wrote:
> I don't really like ignoring double-free. I think, that:
> - it will hide dangerous bugs in the kernel,
> - it can make some kernel exploits more stable.
> I would rather add BUG_ON to set_freepointer() behind SLAB_FREELIST_HARDENED.
> Is it

Re: [v3] mm: Add SLUB free list pointer obfuscation

2017-07-27 Thread Christopher Lameter
On Wed, 26 Jul 2017, Kees Cook wrote:
> > Although in either case we are adding code to the fastpath...
>
> While I'd like it unconditionally, I think Alexander's proposal was to
> put it behind CONFIG_SLAB_FREELIST_HARDENED.

Sounds good.

> BTW, while I've got your attention, can you Ack the

Re: [PATCH v4] mm: Add SLUB free list pointer obfuscation

2017-07-27 Thread Christopher Lameter
On Tue, 25 Jul 2017, Kees Cook wrote:
> +/*
> + * Returns freelist pointer (ptr). With hardening, this is obfuscated
> + * with an XOR of the address where the pointer is held and a per-cache
> + * random number.
> + */
> +static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
>
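A userspace sketch of the pointer obfuscation quoted above: the stored freelist pointer is XORed with a per-cache random value and with the address where the pointer is held. XOR is its own inverse, so one routine both hides and recovers the pointer. The struct and function names below are illustrative stand-ins, not the kernel's exact code.

```c
#include <stdint.h>

/* Minimal stand-in for struct kmem_cache: only the per-cache
 * random value matters for this demonstration. */
struct cache { uintptr_t random; };

/* Obfuscate (or, applied twice, recover) a freelist pointer by
 * XORing it with the per-cache random value and the address where
 * the pointer is stored. */
static inline void *freelist_ptr_demo(const struct cache *s, void *ptr,
                                      uintptr_t ptr_addr)
{
    return (void *)((uintptr_t)ptr ^ s->random ^ ptr_addr);
}
```

Because both the random value and the storage address enter the XOR, an attacker who leaks one obfuscated pointer cannot forge valid freelist entries at other addresses without also knowing the per-cache secret.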

Re: [PATCH 2/5] mm: slub: constify attribute_group structures.

2017-07-27 Thread Christopher Lameter
On Thu, 27 Jul 2017, Arvind Yadav wrote:
> attribute_group are not supposed to change at runtime. All functions
> working with attribute_group provided by work with
> const attribute_group. So mark the non-const structs as const.

Acked-by: Christoph Lameter
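An illustrative sketch of the change being acked: an attribute_group that is never modified at runtime is declared const, letting the compiler place it in read-only memory. The struct definitions here are simplified stand-ins for the kernel's sysfs types, not the real declarations.

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel's sysfs attribute types. */
struct attribute { const char *name; };

struct attribute_group {
    const char *name;
    struct attribute **attrs;
};

static struct attribute order_attr = { .name = "order" };
static struct attribute *slab_attrs[] = { &order_attr, NULL };

/* Before the patch this object was non-const; marking it const moves
 * it to read-only data, so stray writes fault instead of corrupting it. */
static const struct attribute_group slab_attr_group = {
    .attrs = slab_attrs,
};
```

The consumers only ever read the group, so the API can take `const struct attribute_group *` and the definition can be const without any functional change.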

Re: [RFC PATCH] mm/slub: fix a deadlock due to incomplete patching of cpusets_enabled()

2017-07-26 Thread Christopher Lameter
On Wed, 26 Jul 2017, Dima Zavin wrote:
> The fix is to cache the value that's returned by cpusets_enabled() at the
> top of the loop, and only operate on the seqlock (both begin and retry) if
> it was true.

I think the proper fix would be to ensure that the calls to
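A toy model of the fix Dima describes: cpusets_enabled() is read once at the top of each retry iteration, and that cached value gates both the seqlock begin and the retry check, so a mid-loop flip of the flag can never leave them unbalanced. Every name here is a simplified userspace stand-in for the kernel's static-key and seqcount machinery, not the actual implementation.

```c
#include <stdbool.h>

static bool cpusets_flag;   /* stand-in for the cpusets static key */
static unsigned mems_seq;   /* stand-in for the mems_allowed seqcount */

static bool cpusets_enabled(void)               { return cpusets_flag; }
static unsigned read_mems_allowed_begin(void)   { return mems_seq; }
static bool read_mems_allowed_retry(unsigned s) { return s != mems_seq; }

/* Returns the number of allocation attempts made. */
static int alloc_with_cached_flag(void)
{
    int attempts = 0;
    bool cpusets_on;
    unsigned cookie = 0;

retry:
    cpusets_on = cpusets_enabled();     /* cached once per iteration */
    if (cpusets_on)
        cookie = read_mems_allowed_begin();

    attempts++;                         /* allocation attempt would go here */

    /* Only consult the seqcount if we also took the begin above. */
    if (cpusets_on && read_mems_allowed_retry(cookie) && attempts < 3)
        goto retry;
    return attempts;
}
```

The bug being fixed is the pattern where begin and retry each call cpusets_enabled() independently: if the static key flips between the two calls, the retry path can spin on a seqcount it never sampled.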

Re: [v3] mm: Add SLUB free list pointer obfuscation

2017-07-26 Thread Christopher Lameter
On Wed, 26 Jul 2017, Kees Cook wrote:
> >> What happens if, instead of BUG_ON, we do:
> >>
> >> if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected"))
> >> return;
> >
> > This may work for the free fastpath but the set_freepointer function is
> > use in multiple other

Re: [v3] mm: Add SLUB free list pointer obfuscation

2017-07-26 Thread Christopher Lameter
On Tue, 25 Jul 2017, Kees Cook wrote:
> > @@ -290,6 +290,10 @@ static inline void set_freepointer(struct kmem_cache *s,
> > void *object, void *fp)
> > {
> > unsigned long freeptr_addr = (unsigned long)object + s->offset;
> >
> > +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> > +
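A simplified userspace model of the hardened set_freepointer() check this thread debates: pushing an object onto a singly linked freelist when it is already at the head means an immediate double free, because the object would end up pointing at itself. The kernel patch uses BUG_ON(object == fp); here a hypothetical error return stands in so the check is easy to exercise.

```c
#include <stddef.h>

struct object { struct object *next; };

/* Push an object onto the freelist. Returns 0 on success, -1 when an
 * immediate double free is detected (object already at the head). */
static int set_freepointer_demo(struct object **freelist, struct object *object)
{
    if (object == *freelist)    /* object would point to itself */
        return -1;
    object->next = *freelist;
    *freelist = object;
    return 0;
}
```

As the thread notes, the check is naive: it only catches a double free of the most recently freed object; freeing an object that sits deeper in the list still corrupts the freelist undetected.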

Re: [RFC PATCH v1 00/11] Create fast idle path for short idle periods

2017-07-19 Thread Christopher Lameter
On Wed, 19 Jul 2017, Paul E. McKenney wrote:
> > Do we have any problem if we skip RCU idle enter/exit under a fast idle
> > scenario?
> > My understanding is, if tick is not stopped, then we don't need inform RCU
> > in idle path, it can be informed in irq exit.
>
> Indeed, the problem

Re: [PATCH 1/1] mm/slub.c: add a naive detection of double free or corruption

2017-07-19 Thread Christopher Lameter
On Tue, 18 Jul 2017, Kees Cook wrote:
> I think there are two issues: first, this should likely be under
> CONFIG_FREELIST_HARDENED since Christoph hasn't wanted to make these
> changes enabled by default (if I'm understanding his earlier review
> comments to me). The second issue is what to DO

Re: [PATCH 1/1] mm/slub.c: add a naive detection of double free or corruption

2017-07-18 Thread Christopher Lameter
On Mon, 17 Jul 2017, Alexander Popov wrote:
> Christopher, if I change BUG_ON() to VM_BUG_ON(), it will be disabled by
> default again, right?

It will be enabled if the distro ships with VM debugging on by default.

Re: [PATCH 1/1] mm/slub.c: add a naive detection of double free or corruption

2017-07-17 Thread Christopher Lameter
On Mon, 17 Jul 2017, Matthew Wilcox wrote:
> On Mon, Jul 17, 2017 at 07:45:07PM +0300, Alexander Popov wrote:
> > Add an assertion similar to "fasttop" check in GNU C Library allocator:
> > an object added to a singly linked freelist should not point to itself.
> > That helps to detect some

Re: [PATCH 1/1] mm/slub.c: add a naive detection of double free or corruption

2017-07-17 Thread Christopher Lameter
On Mon, 17 Jul 2017, Alexander Popov wrote:
> Add an assertion similar to "fasttop" check in GNU C Library allocator:
> an object added to a singly linked freelist should not point to itself.
> That helps to detect some double free errors (e.g. CVE-2017-2636) without
> slub_debug and KASAN.

Re: [PATCH] slub: make sure struct kmem_cache_node is initialized before publication

2017-07-12 Thread Christopher Lameter
On Wed, 12 Jul 2017, Andrew Morton wrote:
> - free_kmem_cache_nodes() frees the cache node before nulling out a
> reference to it
>
> - init_kmem_cache_nodes() publishes the cache node before initializing it
>
> Neither of these matter at runtime because the cache nodes cannot be
> looked up by

Re: BUG: using __this_cpu_read() in preemptible [00000000] code: mm_percpu_wq/7

2017-07-12 Thread Christopher Lameter
On Wed, 7 Jun 2017, Andre Wild wrote:
> I'm currently seeing the following message running kernel version 4.11.0.
> It looks like it was introduced with the patch
> 4037d452202e34214e8a939fa5621b2b3bbb45b7.

A 2007 patch? At that point we did not have __this_cpu_read() nor refresh_cpu_vmstats

Re: [RFC][PATCH] slub: Introduce 'alternate' per cpu partial lists

2017-07-12 Thread Christopher Lameter
On Thu, 8 Jun 2017, Laura Abbott wrote:
> - Some of this code is redundant and can probably be combined.
> - The fast path is very sensitive and it was suggested I leave it alone. The
> approach I took means the fastpath cmpxchg always fails before trying the
> alternate cmpxchg. From some of my

Re: [RFC][PATCH] slub: Introduce 'alternate' per cpu partial lists

2017-07-12 Thread Christopher Lameter
On Wed, 14 Jun 2017, Joonsoo Kim wrote:
> > - Some of this code is redundant and can probably be combined.
> > - The fast path is very sensitive and it was suggested I leave it alone. The
> > approach I took means the fastpath cmpxchg always fails before trying the
> > alternate cmpxchg. From
