On Wed, Oct 26, 2016 at 12:08 PM, Christoph Lameter <c...@linux.com> wrote:
> Hmmm... Doesn't this belong in memcg_create_kmem_cache() or in
> kmem_cache_create() in mm/slab_common.c? Definitely not in an
> allocator-specific function, since this is an issue for all allocators.
> memcg_create_kmem_cache() simply assumes that it can pass flags from the
> kmem_cache structure to kmem_cache_create(). However, those flags may
> contain slab specific options.
> kmem_cache_create() could filter out flags that cannot be specified.
That makes sense.
> Maybe create SLAB_FLAGS_PERMITTED in linux/mm/slab.h and mask other bits
> out in kmem_cache_create()?
> Slub also has internal flags and those also should not be passed to
> kmem_cache_create(). If we define the valid ones we can mask them out.
> The cleanest approach would be if kmem_cache_create() would reject invalid
> flags and fail and if memcg_create_kmem_cache() would mask out the invalid
> flags using SLAB_FLAGS_PERMITTED or so.
Okay, I think for SLAB we can allow everything except the two internal
freelist flags (CFLGS_OBJFREELIST_SLAB and CFLGS_OFF_SLAB).
Should I deny certain flags for SLUB? I can allow everything for now.
> On Wed, 26 Oct 2016, Thomas Garnier wrote:
>> While testing OBJFREELIST_SLAB integration with pagealloc, we found a
>> bug where kmem_cache(sys) would be created with both CFLGS_OFF_SLAB &
>> CFLGS_OBJFREELIST_SLAB set.
>> The original kmem_cache is created early, making OFF_SLAB not possible.
>> When kmem_cache(sys) is created, OFF_SLAB is possible and if pagealloc
>> is enabled it will try to enable it first under certain conditions.
>> Since kmem_cache(sys) reuses the original cache's flags, you can end up
>> with both flags set at the same time, resulting in allocation failures
>> and odd behavior.
>> The proposed fix removes these flags by default at the entry of
>> __kmem_cache_create(). This way, the function decides at this stage
>> how the freelist should be handled for the new cache.
>> Fixes: b03a017bebc4 ("mm/slab: introduce new slab management type, OBJFREELIST_SLAB")
>> Signed-off-by: Thomas Garnier <thgar...@google.com>
>> Signed-off-by: Greg Thelen <gthe...@google.com>
>> Based on next-20161025
>> mm/slab.c | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>> diff --git a/mm/slab.c b/mm/slab.c
>> index 3c83c29..efe280a 100644
>> --- a/mm/slab.c
>> +++ b/mm/slab.c
>> @@ -2027,6 +2027,14 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
>> int err;
>> size_t size = cachep->size;
>> + /*
>> + * memcg re-creates caches with the flags of the originals. Remove
>> + * the freelist related flags to ensure they are re-defined at this
>> + * stage. Prevent having both flags on edge cases like with pagealloc
>> + * if the original cache was created too early to be OFF_SLAB.
>> + */
>> + flags &= ~(CFLGS_OBJFREELIST_SLAB|CFLGS_OFF_SLAB);
>> #if DEBUG
>> #if FORCED_DEBUG