On Mon, Feb 12, 2024 at 01:38:46PM -0800, Suren Baghdasaryan wrote:
> Low overhead [1] per-callsite memory allocation profiling. Not just for debug
> kernels, overhead low enough to be deployed in production.

What's the plan for things like devm_kmalloc() and similar relatively
simple wrappers? A while back I was thinking it would be possible to
reimplement at least devm_kmalloc() with size- and flags-changing
helpers:

https://lore.kernel.org/all/202309111428.6F36672F57@keescook/

I suspect it could be possible to adapt the alloc_hooks wrapper in this
series similarly:

#define alloc_hooks_prep(_do_alloc, _do_prepare, _do_finish,            \
                          ctx, size, flags)                             \
({                                                                      \
        typeof(_do_alloc(size, flags)) _res;                            \
        DEFINE_ALLOC_TAG(_alloc_tag, _old);                             \
        ssize_t _size = (size);                                         \
        size_t _usable = _size;                                         \
        gfp_t _flags = (flags);                                         \
                                                                        \
        _res = _do_prepare(ctx, &_size, &_flags);                       \
        if (!IS_ERR_OR_NULL(_res))                                      \
                _res = _do_alloc(_size, _flags);                        \
        if (!IS_ERR_OR_NULL(_res))                                      \
                _res = _do_finish(ctx, _usable, _size, _flags, _res);   \
        _res;                                                           \
})

#define devm_kmalloc(dev, size, flags)                                  \
        alloc_hooks_prep(kmalloc, devm_alloc_prep, devm_alloc_finish,   \
                         dev, size, flags)

devm_alloc_prep() and devm_alloc_finish() would be adapted from the
patch in the URL above.
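
Roughly, guessing at the shape (untested, and I'm hand-waving the
devres internals; these would have to live in drivers/base/devres.c
where struct devres and devm_kmalloc_release are visible):

static void *devm_alloc_prep(struct device *dev, ssize_t *size, gfp_t *flags)
{
        /* Grow the request so the devres header fits ahead of the data. */
        if (check_add_overflow(*size, (ssize_t)sizeof(struct devres), size))
                return NULL;

        /* Any non-NULL, non-ERR return tells the wrapper to do the allocation. */
        return dev;
}

static void *devm_alloc_finish(struct device *dev, size_t usable, ssize_t size,
                               gfp_t flags, void *alloc)
{
        struct devres *dr = alloc;

        /* Set up the devres node and register it so the allocation is
         * released automatically on driver detach. */
        memset(&dr->node, 0, sizeof(dr->node));
        INIT_LIST_HEAD(&dr->node.entry);
        dr->node.release = devm_kmalloc_release;
        devres_add(dev, dr->data);

        return dr->data;
}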

And the _do_finish instances could be marked with __realloc_size(2).
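
With the signature sketched above, that would look something like this,
so the compiler treats the "usable" argument as the size of the
returned buffer:

static void *devm_alloc_finish(struct device *dev, size_t usable, ssize_t size,
                               gfp_t flags, void *alloc) __realloc_size(2);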

-Kees

-- 
Kees Cook
