On 2026/1/5 23:43, [email protected] wrote:
>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>> index d029690246f8..8665eb6b8a7d 100644
>> --- a/kernel/bpf/hashtab.c
>> +++ b/kernel/bpf/hashtab.c
>
> [ ... ]
>
>> +static int htab_lru_map_update_elem_in_place(struct bpf_htab *htab, void *key,
>> +                                             void *value, u64 map_flags,
>> +                                             struct bucket *b,
>> +                                             struct hlist_nulls_head *head,
>> +                                             u32 hash, bool percpu, bool onallcpus)
>> +{
>> +        struct htab_elem *l_new, *l_old, *l_free;
>> +        struct bpf_map *map = &htab->map;
>> +        u32 key_size = map->key_size;
>> +        struct bpf_lru_node *node;
>> +        unsigned long flags;
>> +        void *l_val;
>> +        int ret;
>> +
>> +        node = bpf_lru_pop_extra(&htab->lru);
>> +        if (!node)
>> +                return -ENOENT;
>> +
>> +        l_new = container_of(node, struct htab_elem, lru_node);
>> +        l_new->hash = hash;
>> +        memcpy(l_new->key, key, key_size);
>> +        if (!percpu) {
>> +                l_val = htab_elem_value(l_new, map->key_size);
>> +                copy_map_value(map, l_val, value);
>> +                bpf_obj_free_fields(map->record, l_val);
>> +        }
>
> Should bpf_obj_free_fields() be called here on l_new? The value was just
> copied into l_new and l_new will be inserted into the hash table when
> l_old is found. Calling bpf_obj_free_fields() on l_new's value frees the
> special fields (kptr, bpf_list_head, etc.) that were just copied, then
> l_new is inserted with freed fields. This appears to be a use-after-free
> for maps with special fields.
>
> Looking at htab_lru_map_update_elem(), the pattern is to copy the value
> and NOT call bpf_obj_free_fields() on the new element. The fields are
> only freed when an element is removed from the hash table.
>
You're wrong here: the copied value must be cleaned by
bpf_obj_free_fields(). copy_map_value() is bpf_obj_memcpy() underneath
and copies around the special fields, so this call never frees anything
that was just copied. What it does free are the stale special fields
still sitting in the element that bpf_lru_pop_extra() recycled.
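
For reference, here is a trimmed sketch of the copy path (modeled on
bpf_obj_memcpy() in include/linux/bpf.h; the function name below is made
up for illustration). Only the byte ranges between the special fields
recorded in the btf_record are copied:

static void sketch_copy_map_value(const struct btf_record *rec,
                                  void *dst, const void *src, u32 size)
{
        u32 curr_off = 0;
        int i;

        for (i = 0; i < rec->cnt; i++) {
                u32 next_off = rec->fields[i].offset;

                /* Copy the plain bytes up to the next special field ... */
                memcpy(dst + curr_off, src + curr_off, next_off - curr_off);
                /* ... and step over the special field itself. */
                curr_off = next_off + rec->fields[i].size;
        }
        memcpy(dst + curr_off, src + curr_off, size - curr_off);
}

So the kptr/bpf_list_head slots in l_new are never overwritten by the
copy, and bpf_obj_free_fields() only ever sees what was already in the
reused element.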
>> +
>> +        ret = htab_lock_bucket(b, &flags);
>> +        if (ret)
>> +                goto err_lock_bucket;
>> +
>> +        l_old = lookup_elem_raw(head, hash, key, key_size);
>> +
>> +        ret = check_flags(htab, l_old, map_flags);
>> +        if (ret)
>> +                goto err;
>> +
>> +        if (l_old) {
>> +                bpf_lru_node_set_ref(&l_new->lru_node);
>> +                if (percpu) {
>> +                        /* per-CPU hash map can update the value in place.
>> +                         * Keep the same logic as in __htab_lru_percpu_map_update_elem().
>> +                         */
>> +                        pcpu_copy_value(htab, htab_elem_get_ptr(l_old, key_size),
>> +                                        value, onallcpus);
>> +                        l_free = l_new;
>> +                } else {
>> +                        hlist_nulls_add_head_rcu(&l_new->hash_node, head);
>> +                        hlist_nulls_del_rcu(&l_old->hash_node);
>> +                        l_free = l_old;
>> +                }
>> +        } else {
>> +                ret = -ENOENT;
>> +        }
>> +
>> +err:
>> +        htab_unlock_bucket(b, flags);
>> +
>> +err_lock_bucket:
>> +        bpf_lru_push_free(&htab->lru, ret ? node : &l_free->lru_node);
>           ^^^^^^^^^^^^^^^^^
>
> When ret == 0 and !percpu, l_free points to l_old which is being removed
> from the hash table. The code calls bpf_lru_push_free() directly instead
> of htab_lru_push_free(). Looking at htab_lru_push_free():
>
> static void htab_lru_push_free(struct bpf_htab *htab, struct htab_elem *elem)
> {
>         check_and_free_fields(htab, elem);
>         bpf_map_dec_elem_count(&htab->map);
>         bpf_lru_push_free(&htab->lru, &elem->lru_node);
> }
>
> It calls check_and_free_fields() to free special fields in the element
> before pushing to the free list. The code here bypasses that, which would
> leak special fields (kptr, etc.) in l_old when it has them.
>
> Should this use htab_lru_push_free() instead of bpf_lru_push_free() for
> the non-error path where l_old is being freed?
>
You're right here: I did not free the special fields of l_old. But
htab_lru_push_free() shouldn't be used for the non-error path, because
bpf_map_dec_elem_count() is not required here; replacing the element in
place leaves the map's element count unchanged.
I'll fix it in the next revision.
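
Roughly like this (an untested sketch of what I have in mind for the
next revision; check_and_free_fields() is the existing helper in
hashtab.c that frees the special fields of both percpu and regular
elements):

err:
        htab_unlock_bucket(b, flags);

err_lock_bucket:
        if (ret) {
                bpf_lru_push_free(&htab->lru, node);
        } else {
                /* Free the special fields of the element being recycled,
                 * but unlike htab_lru_push_free() skip
                 * bpf_map_dec_elem_count(): an in-place update does not
                 * change the number of elements in the map.
                 */
                check_and_free_fields(htab, l_free);
                bpf_lru_push_free(&htab->lru, &l_free->lru_node);
        }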
Thanks,
Leon
>> +
>> +        return ret;
>> +}
>
>
> ---
> AI reviewed your patch. Please fix the bug or reply by email explaining why it is not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/20720201621