On 22 Apr 2025, at 9:28, Roi Dayan wrote:

> On 17/04/2025 15:39, Eelco Chaudron wrote:
>>
>>
>> On 7 Apr 2025, at 13:12, Roi Dayan via dev wrote:
>>
>>> From: Gaetan Rivet <gaet...@nvidia.com>
>>>
>>> Do not immediately fill the per-user ID caches in the pool.
>>> Wait for a user to do its first allocation to create its local cache.
>>>
>>> This way, a large number of users can be requested with minimal cost if
>>> they are not active.
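A self-contained sketch of the lazy-creation pattern described above, in plain C with pthreads and hypothetical, simplified names (not the real lib/id-fpool.c types; the actual change is in the diff further down): the per-user cache pointer starts out NULL and is only allocated on that user's first request.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define SLAB_SIZE 64            /* IDs taken from the pool per refill. */

    struct slab {                   /* Per-user ID cache: a simple range. */
        uint32_t next;
        uint32_t end;
    };

    struct per_user {
        pthread_mutex_t lock;
        struct slab *slab;          /* NULL until this user's first request. */
    };

    struct pool {
        uint32_t next_id;           /* Shared cursor; a real allocator would
                                     * protect refills with its own lock. */
        uint32_t ceiling;
        struct per_user users[8];
    };

    static struct slab *
    slab_create(uint32_t *next_id, uint32_t ceiling)
    {
        struct slab *s = malloc(sizeof *s);

        if (s) {
            s->next = *next_id;
            s->end = (s->next + SLAB_SIZE < ceiling) ? s->next + SLAB_SIZE
                                                     : ceiling;
            *next_id = s->end;
        }
        return s;
    }

    /* Pool creation only initializes the lock: no cache memory is spent
     * on users that never allocate. */
    static void
    per_user_init(struct per_user *pu)
    {
        pthread_mutex_init(&pu->lock, NULL);
        pu->slab = NULL;
    }

    /* The cache is created lazily, under the user's own lock, on first use. */
    static bool
    pool_new_id(struct pool *p, unsigned int uid, uint32_t *id)
    {
        struct per_user *pu = &p->users[uid];
        bool ok = false;

        pthread_mutex_lock(&pu->lock);
        if (!pu->slab) {
            pu->slab = slab_create(&p->next_id, p->ceiling);
        }
        if (pu->slab && pu->slab->next < pu->slab->end) {
            *id = pu->slab->next++;
            ok = true;
        }
        pthread_mutex_unlock(&pu->lock);
        return ok;
    }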
>>
>> Hi Gaetan/Roi,
>>
>> I see that this might save 280 bytes per thread; however, is this really a 
>> problem?
>>
>> What if every thread handles a packet for each netdev? We still end up using
>> the same amount of memory, so this only delays the point at which OVS runs
>> out of memory and crashes, rather than preventing it.
>>
>> I’d like to get some more opinions on this.
>>
>> Cheers,
>>
>> Eelco
>>
>
> Hi,
>
> We are testing code that uses this library and allocates large pools, where
> some setups only ever allocate a subset of the IDs, so the lazy allocation
> helps there.
> Since it shouldn't noticeably affect normal behavior, we thought we would
> send it upstream, as it may benefit others in the future as well.
> If it's redundant for now, given the example you pointed out where it may
> only delay an out-of-memory issue, we can hold the patch and come back to it
> later with a real use case.
> That said, if a system does run out of memory, it will most likely be caused
> by larger or more frequent allocations elsewhere in the code.

Let's hold off on applying the patch until we have a concrete use case. That 
way, we can verify it brings a real benefit and doesn't introduce low-memory 
crash risks.

Cheers,

Eelco

>>> Signed-off-by: Gaetan Rivet <gaet...@nvidia.com>
>>> Acked-by: Roi Dayan <r...@nvidia.com>
>>> ---
>>>  lib/id-fpool.c | 10 ++++++----
>>>  1 file changed, 6 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/lib/id-fpool.c b/lib/id-fpool.c
>>> index 7108c104a4a3..c98bb58c0cae 100644
>>> --- a/lib/id-fpool.c
>>> +++ b/lib/id-fpool.c
>>> @@ -120,10 +120,10 @@ id_slab_remove(struct id_slab *slab, uint32_t *id)
>>>  }
>>>
>>>  static void
>>> -per_user_init(struct per_user *pu, uint32_t *next_id, uint32_t max)
>>> +per_user_init(struct per_user *pu)
>>>  {
>>>      id_fpool_lock_init(&pu->user_lock);
>>> -    pu->slab = id_slab_create(next_id, max);
>>> +    pu->slab = NULL;
>>>  }
>>>
>>>  static void
>>> @@ -151,8 +151,7 @@ id_fpool_create(unsigned int nb_user, uint32_t floor, uint32_t n_ids)
>>>      pool->ceiling = floor + n_ids;
>>>
>>>      for (i = 0; i < nb_user; i++) {
>>> -        per_user_init(&pool->per_users[i],
>>> -                      &pool->next_id, pool->ceiling);
>>> +        per_user_init(&pool->per_users[i]);
>>>      }
>>>      pool->nb_user = nb_user;
>>>
>>> @@ -194,6 +193,9 @@ id_fpool_new_id(struct id_fpool *pool, unsigned int uid, uint32_t *id)
>>>
>>>      id_fpool_lock(&pu->user_lock);
>>>
>>> +    if (pu->slab == NULL) {
>>> +        pu->slab = id_slab_create(&pool->next_id, pool->ceiling);
>>> +    }
>>
>> Nit: I would add a blank line here.
>>
>>>      if (id_slab_remove(pu->slab, id)) {
>>>          res = true;
>>>          goto unlock_and_ret;
>>> -- 
>>> 2.21.0
>>>
>>

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
