On Fri, 4 Aug 2017, Laura Abbott wrote:
> All slub debug features currently disable the fast path completely.
> Some features such as consistency checks require this to allow taking of
> locks. Poisoning and red zoning don't require this and can safely use
> the per-cpu fast path. Introduce a
On Wed, 2 Aug 2017, Huang, Ying wrote:
> To allocate cache line size aligned percpu memory dynamically,
> alloc_percpu_aligned() is introduced and used in iova drivers too.
alloc_percpu() already aligns objects as specified when they are declared.
Moreover the function is improperly named since
On Wed, 2 Aug 2017, Huang, Ying wrote:
> --- a/include/linux/percpu.h
> +++ b/include/linux/percpu.h
> @@ -129,5 +129,8 @@ extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
> #define alloc_percpu(type) \
> (typeof(type) __percpu
On Fri, 28 Jul 2017, Alexander Popov wrote:
> I don't really like ignoring double-free. I think, that:
> - it will hide dangerous bugs in the kernel,
> - it can make some kernel exploits more stable.
> I would rather add BUG_ON to set_freepointer() behind SLAB_FREELIST_HARDENED.
> Is it
On Wed, 26 Jul 2017, Kees Cook wrote:
> > Although in either case we are adding code to the fastpath...
>
> While I'd like it unconditionally, I think Alexander's proposal was to
> put it behind CONFIG_SLAB_FREELIST_HARDENED.
Sounds good.
> BTW, while I've got your attention, can you Ack the
On Tue, 25 Jul 2017, Kees Cook wrote:
> +/*
> + * Returns freelist pointer (ptr). With hardening, this is obfuscated
> + * with an XOR of the address where the pointer is held and a per-cache
> + * random number.
> + */
> +static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
>
On Thu, 27 Jul 2017, Arvind Yadav wrote:
> attribute_group are not supposed to change at runtime. All functions
> working with attribute_group provided by sysfs work with
> const attribute_group. So mark the non-const structs as const.
Acked-by: Christoph Lameter
On Wed, 26 Jul 2017, Dima Zavin wrote:
> The fix is to cache the value that's returned by cpusets_enabled() at the
> top of the loop, and only operate on the seqlock (both begin and retry) if
> it was true.
I think the proper fix would be to ensure that the calls to
On Wed, 26 Jul 2017, Kees Cook wrote:
> >> What happens if, instead of BUG_ON, we do:
> >>
> >> if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected")))
> >> return;
> >
> > This may work for the free fastpath but the set_freepointer function is
> > use in multiple other
On Tue, 25 Jul 2017, Kees Cook wrote:
> > @@ -290,6 +290,10 @@ static inline void set_freepointer(struct kmem_cache *s,
> >                                        void *object, void *fp)
> > {
> > unsigned long freeptr_addr = (unsigned long)object + s->offset;
> >
> > +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> > +
On Wed, 19 Jul 2017, Paul E. McKenney wrote:
> > Do we have any problem if we skip RCU idle enter/exit under a fast idle
> > scenario?
> > My understanding is, if tick is not stopped, then we don't need to inform
> > RCU in idle path, it can be informed in irq exit.
>
> Indeed, the problem
On Tue, 18 Jul 2017, Kees Cook wrote:
> I think there are two issues: first, this should likely be under
> CONFIG_FREELIST_HARDENED since Christoph hasn't wanted to make these
> changes enabled by default (if I'm understanding his earlier review
> comments to me). The second issue is what to DO
On Mon, 17 Jul 2017, Alexander Popov wrote:
> Christopher, if I change BUG_ON() to VM_BUG_ON(), it will be disabled by
> default again, right?
It will be enabled if the distro ships with VM debugging on by default.
On Mon, 17 Jul 2017, Matthew Wilcox wrote:
> On Mon, Jul 17, 2017 at 07:45:07PM +0300, Alexander Popov wrote:
> > Add an assertion similar to "fasttop" check in GNU C Library allocator:
> > an object added to a singly linked freelist should not point to itself.
> > That helps to detect some
On Mon, 17 Jul 2017, Alexander Popov wrote:
> Add an assertion similar to "fasttop" check in GNU C Library allocator:
> an object added to a singly linked freelist should not point to itself.
> That helps to detect some double free errors (e.g. CVE-2017-2636) without
> slub_debug and KASAN.
On Wed, 12 Jul 2017, Andrew Morton wrote:
> - free_kmem_cache_nodes() frees the cache node before nulling out a
> reference to it
>
> - init_kmem_cache_nodes() publishes the cache node before initializing it
>
> Neither of these matter at runtime because the cache nodes cannot be
> looked up by
On Wed, 7 Jun 2017, Andre Wild wrote:
> I'm currently seeing the following message running kernel version 4.11.0.
> It looks like it was introduced with the patch
> 4037d452202e34214e8a939fa5621b2b3bbb45b7.
A 2007 patch? At that point we did not have __this_cpu_read() nor
refresh_cpu_vmstats
On Thu, 8 Jun 2017, Laura Abbott wrote:
> - Some of this code is redundant and can probably be combined.
> - The fast path is very sensitive and it was suggested I leave it alone. The
> approach I took means the fastpath cmpxchg always fails before trying the
> alternate cmpxchg. From some of my
On Wed, 14 Jun 2017, Joonsoo Kim wrote:
> > - Some of this code is redundant and can probably be combined.
> > - The fast path is very sensitive and it was suggested I leave it alone. The
> > approach I took means the fastpath cmpxchg always fails before trying the
> > alternate cmpxchg. From