Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-28 Thread Vladimir Davydov
On Wed, Mar 28, 2018 at 01:30:20PM +0300, Kirill Tkhai wrote:
> On 27.03.2018 18:48, Vladimir Davydov wrote:
> > On Tue, Mar 27, 2018 at 06:09:20PM +0300, Kirill Tkhai wrote:
> >>>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >>>>>> index 8fcd9f8d7390..91b5120b924f 100644
> >>>>>> --- a/mm/vmscan.c
> >>>>>> +++ b/mm/vmscan.c
> >>>>>> @@ -159,6 +159,56 @@ unsigned long vm_total_pages;
> >>>>>>  static LIST_HEAD(shrinker_list);
> >>>>>>  static DECLARE_RWSEM(shrinker_rwsem);
> >>>>>>  
> >>>>>> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
> >>>>>> +static DEFINE_IDA(bitmap_id_ida);
> >>>>>> +static DECLARE_RWSEM(bitmap_rwsem);
> >>>>>
> >>>>> Can't we reuse shrinker_rwsem for protecting the ida?
> >>>>
> >>>> I think it won't be better, since we allocate memory under this
> >>>> semaphore. If we used shrinker_rwsem, we would have to allocate the
> >>>> memory with GFP_ATOMIC, which does not seem good. Currently, the
> >>>> patchset takes shrinker_rwsem only for a short time, just to assign
> >>>> already allocated memory to the maps.
> >>>
> >>> AFAIR it's OK to sleep under an rwsem so GFP_ATOMIC wouldn't be
> >>> necessary. Anyway, we only need to allocate memory when we extend
> >>> shrinker bitmaps, which is rare. In fact, there can only be a limited
> >>> number of such calls, as we never shrink these bitmaps (which is fine
> >>> by me).
> >>
> >> We take bitmap_rwsem for writing to expand the shrinker maps. If we
> >> replace it with shrinker_rwsem and the memory allocation gets into
> >> reclaim, there will be a deadlock.
> > 
> > Hmm, AFAICS we use down_read_trylock() in shrink_slab() so no deadlock
> > would be possible. We wouldn't be able to reclaim slabs though, that's
> > true, but I don't think it would be a problem for small allocations.
> > 
> > That's how I see this. We use shrinker_rwsem to protect the IDR mapping
> > shrink_id => shrinker (I still insist on IDR). It may allocate, but the
> > allocation size is going to be fairly small, so it's OK that we don't
> > call shrinkers there. After we have allocated a shrinker ID, we release
> > shrinker_rwsem and call mem_cgroup_grow_shrinker_map (or whatever it
> > will be called), which checks if the per-memcg shrinker bitmaps need
> > growing; if they do, it takes its own mutex, used exclusively for
> > protecting the bitmaps, and reallocates them (we will need the mutex
> > anyway to synchronize css_online vs shrinker bitmap reallocation, as
> > shrinker_rwsem is private to vmscan.c and we don't want to export it
> > to memcontrol.c).
> 
> But what is the profit of prohibiting reclaim during shrinker id
> allocation? Even if this is an IDR, it may still require 1 page, and the
> allocation may still get into reclaim. If we prohibit reclaim, we'll
> fail to register the shrinker.
> 
> It's not a rare situation when all the memory is occupied by page cache.

shrinker_rwsem doesn't block page cache reclaim, only dcache reclaim.
I don't think that dcache can occupy all available memory.

> So, in some situations we will fail to mount something.
> 
> What advantages do we have that are more significant than this disadvantage?

The main advantage is code simplicity.


Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-28 Thread Kirill Tkhai


On 27.03.2018 18:48, Vladimir Davydov wrote:
> On Tue, Mar 27, 2018 at 06:09:20PM +0300, Kirill Tkhai wrote:
>>>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>>>> index 8fcd9f8d7390..91b5120b924f 100644
>>>>>> --- a/mm/vmscan.c
>>>>>> +++ b/mm/vmscan.c
>>>>>> @@ -159,6 +159,56 @@ unsigned long vm_total_pages;
>>>>>>  static LIST_HEAD(shrinker_list);
>>>>>>  static DECLARE_RWSEM(shrinker_rwsem);
>>>>>>  
>>>>>> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
>>>>>> +static DEFINE_IDA(bitmap_id_ida);
>>>>>> +static DECLARE_RWSEM(bitmap_rwsem);
>>>>>
>>>>> Can't we reuse shrinker_rwsem for protecting the ida?
>>>>
>>>> I think it won't be better, since we allocate memory under this
>>>> semaphore. If we used shrinker_rwsem, we would have to allocate the
>>>> memory with GFP_ATOMIC, which does not seem good. Currently, the
>>>> patchset takes shrinker_rwsem only for a short time, just to assign
>>>> already allocated memory to the maps.
>>>
>>> AFAIR it's OK to sleep under an rwsem so GFP_ATOMIC wouldn't be
>>> necessary. Anyway, we only need to allocate memory when we extend
>>> shrinker bitmaps, which is rare. In fact, there can only be a limited
>>> number of such calls, as we never shrink these bitmaps (which is fine
>>> by me).
>>
>> We take bitmap_rwsem for writing to expand the shrinker maps. If we
>> replace it with shrinker_rwsem and the memory allocation gets into
>> reclaim, there will be a deadlock.
> 
> Hmm, AFAICS we use down_read_trylock() in shrink_slab() so no deadlock
> would be possible. We wouldn't be able to reclaim slabs though, that's
> true, but I don't think it would be a problem for small allocations.
> 
> That's how I see this. We use shrinker_rwsem to protect the IDR mapping
> shrink_id => shrinker (I still insist on IDR). It may allocate, but the
> allocation size is going to be fairly small, so it's OK that we don't
> call shrinkers there. After we have allocated a shrinker ID, we release
> shrinker_rwsem and call mem_cgroup_grow_shrinker_map (or whatever it
> will be called), which checks if the per-memcg shrinker bitmaps need
> growing; if they do, it takes its own mutex, used exclusively for
> protecting the bitmaps, and reallocates them (we will need the mutex
> anyway to synchronize css_online vs shrinker bitmap reallocation, as
> shrinker_rwsem is private to vmscan.c and we don't want to export it
> to memcontrol.c).

But what is the profit of prohibiting reclaim during shrinker id allocation?
Even if this is an IDR, it may still require 1 page, and the allocation may
still get into reclaim. If we prohibit reclaim, we'll fail to register
the shrinker.

It's not a rare situation when all the memory is occupied by page cache.
So, in some situations we will fail to mount something.

What advantages do we have that are more significant than this disadvantage?

Kirill


Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-27 Thread Vladimir Davydov
On Tue, Mar 27, 2018 at 06:09:20PM +0300, Kirill Tkhai wrote:
> >>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >>>> index 8fcd9f8d7390..91b5120b924f 100644
> >>>> --- a/mm/vmscan.c
> >>>> +++ b/mm/vmscan.c
> >>>> @@ -159,6 +159,56 @@ unsigned long vm_total_pages;
> >>>>  static LIST_HEAD(shrinker_list);
> >>>>  static DECLARE_RWSEM(shrinker_rwsem);
> >>>>  
> >>>> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
> >>>> +static DEFINE_IDA(bitmap_id_ida);
> >>>> +static DECLARE_RWSEM(bitmap_rwsem);
> >>>
> >>> Can't we reuse shrinker_rwsem for protecting the ida?
> >>
> >> I think it won't be better, since we allocate memory under this
> >> semaphore. If we used shrinker_rwsem, we would have to allocate the
> >> memory with GFP_ATOMIC, which does not seem good. Currently, the
> >> patchset takes shrinker_rwsem only for a short time, just to assign
> >> already allocated memory to the maps.
> > 
> > AFAIR it's OK to sleep under an rwsem so GFP_ATOMIC wouldn't be
> > necessary. Anyway, we only need to allocate memory when we extend
> > shrinker bitmaps, which is rare. In fact, there can only be a limited
> > number of such calls, as we never shrink these bitmaps (which is fine
> > by me).
> 
> We take bitmap_rwsem for writing to expand the shrinker maps. If we
> replace it with shrinker_rwsem and the memory allocation gets into
> reclaim, there will be a deadlock.

Hmm, AFAICS we use down_read_trylock() in shrink_slab() so no deadlock
would be possible. We wouldn't be able to reclaim slabs though, that's
true, but I don't think it would be a problem for small allocations.

That's how I see this. We use shrinker_rwsem to protect the IDR mapping
shrink_id => shrinker (I still insist on IDR). It may allocate, but the
allocation size is going to be fairly small, so it's OK that we don't
call shrinkers there. After we have allocated a shrinker ID, we release
shrinker_rwsem and call mem_cgroup_grow_shrinker_map (or whatever it
will be called), which checks if the per-memcg shrinker bitmaps need
growing; if they do, it takes its own mutex, used exclusively for
protecting the bitmaps, and reallocates them (we will need the mutex
anyway to synchronize css_online vs shrinker bitmap reallocation, as
shrinker_rwsem is private to vmscan.c and we don't want to export it
to memcontrol.c).


Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-27 Thread Kirill Tkhai
On 27.03.2018 12:15, Vladimir Davydov wrote:
> On Mon, Mar 26, 2018 at 06:09:35PM +0300, Kirill Tkhai wrote:
>> Hi, Vladimir,
>>
>> thanks for your review!
>>
>> On 24.03.2018 21:40, Vladimir Davydov wrote:
>>> Hello Kirill,
>>>
>>> I don't have any objections to the idea behind this patch set.
>>> Well, at least I don't know how to better tackle the problem you
>>> describe in the cover letter. Please, see below for my comments
>>> regarding implementation details.
>>>
>>> On Wed, Mar 21, 2018 at 04:21:17PM +0300, Kirill Tkhai wrote:
>>>> The patch introduces a shrinker::id number, which is used to enumerate
>>>> memcg-aware shrinkers. The numbers start from 0, and the code tries
>>>> to keep them as small as possible.
>>>>
>>>> This will be used to represent memcg-aware shrinkers in the memcg
>>>> shrinkers map.
>>>>
>>>> Signed-off-by: Kirill Tkhai 
>>>> ---
>>>>  include/linux/shrinker.h |1 +
>>>>  mm/vmscan.c  |   59 ++
>>>>  2 files changed, 60 insertions(+)
>>>>
>>>> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
>>>> index a3894918a436..738de8ef5246 100644
>>>> --- a/include/linux/shrinker.h
>>>> +++ b/include/linux/shrinker.h
>>>> @@ -66,6 +66,7 @@ struct shrinker {
>>>>  
>>>> 	/* These are for internal use */
>>>> 	struct list_head list;
>>>> +	int id;
>>>
>>> This definition could definitely use a comment.
>>>
>>> BTW shouldn't we ifdef it?
>>
>> Ok
>>
>>>> 	/* objs pending delete, per node */
>>>> 	atomic_long_t *nr_deferred;
>>>>  };
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 8fcd9f8d7390..91b5120b924f 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -159,6 +159,56 @@ unsigned long vm_total_pages;
>>>>  static LIST_HEAD(shrinker_list);
>>>>  static DECLARE_RWSEM(shrinker_rwsem);
>>>>  
>>>> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
>>>> +static DEFINE_IDA(bitmap_id_ida);
>>>> +static DECLARE_RWSEM(bitmap_rwsem);
>>>
>>> Can't we reuse shrinker_rwsem for protecting the ida?
>>
>> I think it won't be better, since we allocate memory under this
>> semaphore. If we used shrinker_rwsem, we would have to allocate the
>> memory with GFP_ATOMIC, which does not seem good. Currently, the
>> patchset takes shrinker_rwsem only for a short time, just to assign
>> already allocated memory to the maps.
> 
> AFAIR it's OK to sleep under an rwsem so GFP_ATOMIC wouldn't be
> necessary. Anyway, we only need to allocate memory when we extend
> shrinker bitmaps, which is rare. In fact, there can only be a limited
> number of such calls, as we never shrink these bitmaps (which is fine
> by me).

We take bitmap_rwsem for writing to expand the shrinker maps. If we replace
it with shrinker_rwsem and the memory allocation gets into reclaim, there
will be a deadlock.

>>
>>>> +static int bitmap_id_start;
>>>> +
>>>> +static int alloc_shrinker_id(struct shrinker *shrinker)
>>>> +{
>>>> +	int id, ret;
>>>> +
>>>> +	if (!(shrinker->flags & SHRINKER_MEMCG_AWARE))
>>>> +		return 0;
>>>> +retry:
>>>> +	ida_pre_get(&bitmap_id_ida, GFP_KERNEL);
>>>> +	down_write(&bitmap_rwsem);
>>>> +	ret = ida_get_new_above(&bitmap_id_ida, bitmap_id_start, &id);
>>>
>>> AFAIK ida always allocates the smallest available id so you don't need
>>> to keep track of bitmap_id_start.
>>
>> I saw mnt_alloc_group_id() does the same, and that was the reason the
>> additional variable was used. Doesn't this give a good hint to the ida
>> and make it find a free id faster?
> 
> As Matthew pointed out, this is rather pointless.

Kirill


Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-27 Thread Vladimir Davydov
On Mon, Mar 26, 2018 at 06:09:35PM +0300, Kirill Tkhai wrote:
> Hi, Vladimir,
> 
> thanks for your review!
> 
> On 24.03.2018 21:40, Vladimir Davydov wrote:
> > Hello Kirill,
> > 
> > I don't have any objections to the idea behind this patch set.
> > Well, at least I don't know how to better tackle the problem you
> > describe in the cover letter. Please, see below for my comments
> > regarding implementation details.
> > 
> > On Wed, Mar 21, 2018 at 04:21:17PM +0300, Kirill Tkhai wrote:
> >> The patch introduces a shrinker::id number, which is used to enumerate
> >> memcg-aware shrinkers. The numbers start from 0, and the code tries
> >> to keep them as small as possible.
> >>
> >> This will be used to represent memcg-aware shrinkers in the memcg
> >> shrinkers map.
> >>
> >> Signed-off-by: Kirill Tkhai 
> >> ---
> >>  include/linux/shrinker.h |1 +
> >>  mm/vmscan.c  |   59 ++
> >>  2 files changed, 60 insertions(+)
> >>
> >> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> >> index a3894918a436..738de8ef5246 100644
> >> --- a/include/linux/shrinker.h
> >> +++ b/include/linux/shrinker.h
> >> @@ -66,6 +66,7 @@ struct shrinker {
> >>  
> >>/* These are for internal use */
> >>struct list_head list;
> >> +  int id;
> > 
> > This definition could definitely use a comment.
> > 
> > BTW shouldn't we ifdef it?
> 
> Ok
> 
> >>/* objs pending delete, per node */
> >>atomic_long_t *nr_deferred;
> >>  };
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index 8fcd9f8d7390..91b5120b924f 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -159,6 +159,56 @@ unsigned long vm_total_pages;
> >>  static LIST_HEAD(shrinker_list);
> >>  static DECLARE_RWSEM(shrinker_rwsem);
> >>  
> >> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
> >> +static DEFINE_IDA(bitmap_id_ida);
> >> +static DECLARE_RWSEM(bitmap_rwsem);
> > 
> > Can't we reuse shrinker_rwsem for protecting the ida?
> 
> I think it won't be better, since we allocate memory under this
> semaphore. If we used shrinker_rwsem, we would have to allocate the
> memory with GFP_ATOMIC, which does not seem good. Currently, the
> patchset takes shrinker_rwsem only for a short time, just to assign
> already allocated memory to the maps.

AFAIR it's OK to sleep under an rwsem so GFP_ATOMIC wouldn't be
necessary. Anyway, we only need to allocate memory when we extend
shrinker bitmaps, which is rare. In fact, there can only be a limited
number of such calls, as we never shrink these bitmaps (which is fine
by me).

> 
> >> +static int bitmap_id_start;
> >> +
> >> +static int alloc_shrinker_id(struct shrinker *shrinker)
> >> +{
> >> +  int id, ret;
> >> +
> >> +  if (!(shrinker->flags & SHRINKER_MEMCG_AWARE))
> >> +  return 0;
> >> +retry:
> >> +  ida_pre_get(&bitmap_id_ida, GFP_KERNEL);
> >> +  down_write(&bitmap_rwsem);
> >> +  ret = ida_get_new_above(&bitmap_id_ida, bitmap_id_start, &id);
> > 
> > AFAIK ida always allocates the smallest available id so you don't need
> > to keep track of bitmap_id_start.
> 
> I saw mnt_alloc_group_id() does the same, and that was the reason the
> additional variable was used. Doesn't this give a good hint to the ida
> and make it find a free id faster?

As Matthew pointed out, this is rather pointless.


Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-26 Thread Kirill Tkhai
On 26.03.2018 18:14, Matthew Wilcox wrote:
> On Mon, Mar 26, 2018 at 06:09:35PM +0300, Kirill Tkhai wrote:
>>> AFAIK ida always allocates the smallest available id so you don't need
>>> to keep track of bitmap_id_start.
>>
>> I saw mnt_alloc_group_id() does the same, and that was the reason the
>> additional variable was used. Doesn't this give a good hint to the ida
>> and make it find a free id faster?
> 
> No, it doesn't help the IDA in the slightest.  I have a patch in my
> tree to delete that silliness from mnt_alloc_group_id(); just haven't
> submitted it yet.

Ok, then I'll remove this trick.

Thanks,
Kirill


Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-26 Thread Matthew Wilcox
On Mon, Mar 26, 2018 at 06:09:35PM +0300, Kirill Tkhai wrote:
> > AFAIK ida always allocates the smallest available id so you don't need
> > to keep track of bitmap_id_start.
> 
> I saw mnt_alloc_group_id() does the same, and that was the reason the
> additional variable was used. Doesn't this give a good hint to the ida
> and make it find a free id faster?

No, it doesn't help the IDA in the slightest.  I have a patch in my
tree to delete that silliness from mnt_alloc_group_id(); just haven't
submitted it yet.



Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-26 Thread Kirill Tkhai
Hi, Vladimir,

thanks for your review!

On 24.03.2018 21:40, Vladimir Davydov wrote:
> Hello Kirill,
> 
> I don't have any objections to the idea behind this patch set.
> Well, at least I don't know how to better tackle the problem you
> describe in the cover letter. Please, see below for my comments
> regarding implementation details.
> 
> On Wed, Mar 21, 2018 at 04:21:17PM +0300, Kirill Tkhai wrote:
>> The patch introduces a shrinker::id number, which is used to enumerate
>> memcg-aware shrinkers. The numbers start from 0, and the code tries
>> to keep them as small as possible.
>>
>> This will be used to represent memcg-aware shrinkers in the memcg
>> shrinkers map.
>>
>> Signed-off-by: Kirill Tkhai 
>> ---
>>  include/linux/shrinker.h |1 +
>>  mm/vmscan.c  |   59 ++
>>  2 files changed, 60 insertions(+)
>>
>> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
>> index a3894918a436..738de8ef5246 100644
>> --- a/include/linux/shrinker.h
>> +++ b/include/linux/shrinker.h
>> @@ -66,6 +66,7 @@ struct shrinker {
>>  
>>  /* These are for internal use */
>>  struct list_head list;
>> +int id;
> 
> This definition could definitely use a comment.
> 
> BTW shouldn't we ifdef it?

Ok

>>  /* objs pending delete, per node */
>>  atomic_long_t *nr_deferred;
>>  };
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 8fcd9f8d7390..91b5120b924f 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -159,6 +159,56 @@ unsigned long vm_total_pages;
>>  static LIST_HEAD(shrinker_list);
>>  static DECLARE_RWSEM(shrinker_rwsem);
>>  
>> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
>> +static DEFINE_IDA(bitmap_id_ida);
>> +static DECLARE_RWSEM(bitmap_rwsem);
> 
> Can't we reuse shrinker_rwsem for protecting the ida?

I think it won't be better, since we allocate memory under this semaphore.
If we used shrinker_rwsem, we would have to allocate the memory with
GFP_ATOMIC, which does not seem good. Currently, the patchset takes
shrinker_rwsem only for a short time, just to assign already allocated
memory to the maps.

>> +static int bitmap_id_start;
>> +
>> +static int alloc_shrinker_id(struct shrinker *shrinker)
>> +{
>> +int id, ret;
>> +
>> +if (!(shrinker->flags & SHRINKER_MEMCG_AWARE))
>> +return 0;
>> +retry:
>> +ida_pre_get(&bitmap_id_ida, GFP_KERNEL);
>> +down_write(&bitmap_rwsem);
>> +ret = ida_get_new_above(&bitmap_id_ida, bitmap_id_start, &id);
> 
> AFAIK ida always allocates the smallest available id so you don't need
> to keep track of bitmap_id_start.

I saw mnt_alloc_group_id() does the same, and that was the reason the
additional variable was used. Doesn't this give a good hint to the ida and
make it find a free id faster?
 
>> +if (!ret) {
>> +shrinker->id = id;
>> +bitmap_id_start = shrinker->id + 1;
>> +}
>> +up_write(&bitmap_rwsem);
>> +if (ret == -EAGAIN)
>> +goto retry;
>> +
>> +return ret;
>> +}

Thanks,
Kirill


Re: [PATCH 01/10] mm: Assign id to every memcg-aware shrinker

2018-03-24 Thread Vladimir Davydov
Hello Kirill,

I don't have any objections to the idea behind this patch set.
Well, at least I don't know how to better tackle the problem you
describe in the cover letter. Please, see below for my comments
regarding implementation details.

On Wed, Mar 21, 2018 at 04:21:17PM +0300, Kirill Tkhai wrote:
> The patch introduces a shrinker::id number, which is used to enumerate
> memcg-aware shrinkers. The numbers start from 0, and the code tries
> to keep them as small as possible.
> 
> This will be used to represent memcg-aware shrinkers in the memcg
> shrinkers map.
> 
> Signed-off-by: Kirill Tkhai 
> ---
>  include/linux/shrinker.h |1 +
>  mm/vmscan.c  |   59 ++
>  2 files changed, 60 insertions(+)
> 
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index a3894918a436..738de8ef5246 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -66,6 +66,7 @@ struct shrinker {
>  
>   /* These are for internal use */
>   struct list_head list;
> + int id;

This definition could definitely use a comment.

BTW shouldn't we ifdef it?

>   /* objs pending delete, per node */
>   atomic_long_t *nr_deferred;
>  };
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 8fcd9f8d7390..91b5120b924f 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -159,6 +159,56 @@ unsigned long vm_total_pages;
>  static LIST_HEAD(shrinker_list);
>  static DECLARE_RWSEM(shrinker_rwsem);
>  
> +#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
> +static DEFINE_IDA(bitmap_id_ida);
> +static DECLARE_RWSEM(bitmap_rwsem);

Can't we reuse shrinker_rwsem for protecting the ida?

> +static int bitmap_id_start;
> +
> +static int alloc_shrinker_id(struct shrinker *shrinker)
> +{
> + int id, ret;
> +
> + if (!(shrinker->flags & SHRINKER_MEMCG_AWARE))
> + return 0;
> +retry:
> + ida_pre_get(&bitmap_id_ida, GFP_KERNEL);
> + down_write(&bitmap_rwsem);
> + ret = ida_get_new_above(&bitmap_id_ida, bitmap_id_start, &id);

AFAIK ida always allocates the smallest available id so you don't need
to keep track of bitmap_id_start.

> + if (!ret) {
> + shrinker->id = id;
> + bitmap_id_start = shrinker->id + 1;
> + }
> + up_write(&bitmap_rwsem);
> + if (ret == -EAGAIN)
> + goto retry;
> +
> + return ret;
> +}
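
For comparison, if the start-hint tracking is dropped as the review suggests,
the function collapses considerably. A sketch against the contemporary IDA
API (not a compiled patch; ida_simple_get() sleeps, retries and locks
internally, so the ida_pre_get()/retry dance and bitmap_id_start disappear):

```c
static DEFINE_IDA(shrinker_id_ida);

static int alloc_shrinker_id(struct shrinker *shrinker)
{
	int id;

	if (!(shrinker->flags & SHRINKER_MEMCG_AWARE))
		return 0;

	/* Always returns the smallest free id; no external hint needed. */
	id = ida_simple_get(&shrinker_id_ida, 0, 0, GFP_KERNEL);
	if (id < 0)
		return id;
	shrinker->id = id;
	return 0;
}
```

Whether to keep bitmap_rwsem around the ida or rely on its internal locking
is a separate question from the one debated above.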