On Tue, 20 Mar 2018, Matthew Wilcox wrote:

> On Tue, Mar 20, 2018 at 01:25:09PM -0400, Mikulas Patocka wrote:
> > The reason why we need this is that we are going to merge code that does
> > block device deduplication (it was developed separately and sold as a
> > commercial product), and the code uses block sizes that are not a power of
> > two (block sizes 192K, 448K, 640K, 832K are used in the wild). The slab
> > allocator rounds up the allocation to the nearest power of two, but that
> > wastes a lot of memory. Performance of the solution depends on efficient
> > memory usage, so we should minimize waste as much as possible.
>
> The SLUB allocator also falls back to using the page (buddy) allocator
> for allocations above 8kB, so this patch is going to have no effect on
> slub.  You'd be better off using alloc_pages_exact() for this kind of
> size, or managing your own pool of pages by using something like five
> 192k blocks in a 1MB allocation.
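
A minimal sketch of the alloc_pages_exact() approach (the dedup_*
wrapper names are made up for illustration, not from the patch):

	#include <linux/gfp.h>

	/*
	 * alloc_pages_exact() allocates whole pages only: it takes a
	 * power-of-two chunk from the buddy allocator and gives the
	 * unused tail pages back, so a 640K request costs 640K of
	 * memory, not 1M.
	 */
	static void *dedup_alloc_block(size_t size)
	{
		return alloc_pages_exact(size, GFP_KERNEL);
	}

	static void dedup_free_block(void *block, size_t size)
	{
		free_pages_exact(block, size);
	}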

The fallback is only effective for kmalloc caches. Caches created
explicitly with kmem_cache_create() do not follow this rule.
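
For example (an illustrative sketch, not code from the patch), a
cache created like this stays within SLUB, and with power-of-two
slab page orders a 640K object ends up in an order-8 (1MB) slab,
wasting roughly 384K per object:

	#include <linux/slab.h>

	/* Hypothetical cache for 640K deduplication blocks. */
	static struct kmem_cache *dedup_cache;

	static int __init dedup_init(void)
	{
		dedup_cache = kmem_cache_create("dedup_block",
						640 * 1024, 0, 0, NULL);
		if (!dedup_cache)
			return -ENOMEM;
		return 0;
	}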

Note that you can already control the page order used for slab
allocations and the number of objects per slab using the following
boot parameters:

        slub_min_order
        slub_max_order
        slub_min_objects

This is documented in linux/Documentation/vm/slub.txt
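
For example, booting with something like (values purely
illustrative)

        slub_min_order=2 slub_max_order=5 slub_min_objects=4

on the kernel command line makes SLUB use slabs of at least order 2
and at most order 5 (128K with 4K pages), and try to fit at least 4
objects into each slab.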

Maybe do the same thing for SLAB?


