> On May 31, 2023, at 1:54 PM, Andrew Doran <a...@netbsd.org> wrote:

>> - What would the cost of restoring attribution be, other than the
>>  obvious O(ntag*nsizebuckets) memory cost to record it and the effort
>>  to annotate allocations?
> 
> Related to this, in my experiments it turns out that using dedicated pools
> for objects for no other reason than to get attribution or be space
> efficient seems to exert a lot of pressure on the CPU cache to the point
> that reverting to the general purpose allocator where possible yields a
> small but measurable and repeatable reduction in system time during builds.
> I plan to do this for some objects.

My big beef with specific pools for random objects is that it reduces 
utilization of memory.  Let’s say you have a pool for “foo” structures, and 
they’re sized correctly to fit into a 128-byte kmem pool.  But they have their 
own pool.  And there are 7 of them allocated.  That’s 896 bytes in use out of a 
4096-byte pool page, so 3200 bytes are being wasted, when those objects could 
be allocated from a partially-fragmented kmem128 pool.

Back in the day, a lot of subsystems were converted to use pools directly 
because that was how access to a direct-mapped super-page was achieved.  Now 
that kmem backs malloc(), there’s less pressure there.  But for fixed-size 
allocations, malloc() is an inefficient API, because of the need to store the 
size and then round up to the minimum allocation alignment for the architecture 
(16 bytes on some systems).

Many of the direct pool consumers should switch to kmem, IMO…  and pool_cache 
should probably have a kmem_cache counterpart (pool_cache ctor/dtor support, 
but with a generic kmem bucket, rather than a specific pool).  This would 
likely lead to a general reduction in memory fragmentation in the system.

If we really want to have an “attribution tag” system, we can do that *outside* 
of the allocator… It wouldn’t be all that different from evcnt.

-- thorpej
