Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-27 Thread Mikulas Patocka


On Fri, 27 Apr 2018, Christopher Lameter wrote:

> On Thu, 26 Apr 2018, Mikulas Patocka wrote:
> 
> > > Hmmm... order 4 for these caches may cause some concern. These should stay
> > > under costly order I think. Otherwise allocations are no longer
> > > guaranteed.
> >
> > You said that slub has fallback to smaller order allocations.
> 
> Yes it does...
> 
> > The whole purpose of this "minimize waste" approach is to use higher-order
> > allocations to use memory more efficiently, so it is just doing its job.
> > (for these 3 caches, order-4 really wastes less memory than order-3 - on
> > my system TCPv6 and sighand_cache have size 2112, task_struct 2752).
> 
> Hmmm... Ok if the others are fine with this as well. I got some pushback
> there in the past.
> 
> > We could improve the fallback code, so that if order-4 allocation fails,
> > it tries order-3 allocation, and then falls back to order-0. But I think
> > that these failures are rare enough that it is not a problem.
> 
> I also think that would be too many fallbacks.

You are right - it's better to fall back directly to the minimum possible 
size, so that the allocation is faster.

> The old code uses the concept of a "fraction" to calculate overhead. The
> code here uses absolute counts of bytes. Fraction looks better to me.

OK - I reworked the patch using the same "fraction" calculation as before.  
The existing logic targets 1/16 wasted space, so I used this target in 
this patch too.

This patch increases only the order of task_struct (from 3 to 4); all the 
other caches keep the same order as before.
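
To illustrate with task_struct (size 2752; assuming 4KB pages and no
reserved bytes - the arithmetic below is mine, not from the patch):

   order 3: 32768 / 2752 = 11 objects, 32768 - 11*2752 = 2496 bytes wasted;
            2496 > 32768/16 = 2048, so order 3 misses the 1/16 target
   order 4: 65536 / 2752 = 23 objects, 65536 - 23*2752 = 2240 bytes wasted;
            2240 <= 65536/16 = 4096, so order 4 meets the target

(For TCPv6 and sighand_cache at 2112 bytes, order 3 already wastes only
32768 - 15*2112 = 1088 <= 2048 bytes, which is why they keep order 3 in
this version of the patch.)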

Mikulas



From: Mikulas Patocka 
Subject: [PATCH] slub: use higher order to reduce wasted space

If we create a slub cache with a large object size (larger than what
slub_max_order allows), the slub subsystem currently rounds the slab size
up to the next power of two.

This is inefficient because it wastes too much space. We use the slab
cache as a buffer cache in dm-bufio; in order to use the memory
efficiently, we need to reduce wasted space.

This patch reworks the slub order calculation algorithm, so that it uses
higher-order allocations if they would reduce wasted space. The slub
subsystem has a fallback if a higher-order allocation fails, so using an
order higher than PAGE_ALLOC_COSTLY_ORDER is OK.

The new algorithm first calculates the minimum order that can be used for
a given object size and then increases the order according to these
conditions:
* if we would overshoot MAX_OBJS_PER_PAGE, don't increase
* if we are below slub_min_order, increase
* if we are below slub_max_order and below min_objects, increase
* we increase above slub_max_order only if it reduces wasted space and if
  we already waste at least 1/16 of the compound page

The new algorithm gives very similar results to the old one: all the
caches on my system have the same order as before; only the order of
task_struct (size 2752) is increased from 3 to 4.

Signed-off-by: Mikulas Patocka 

---
 mm/slub.c |   82 +++---
 1 file changed, 31 insertions(+), 51 deletions(-)

Index: linux-2.6/mm/slub.c
===
--- linux-2.6.orig/mm/slub.c  2018-04-27 19:30:34.0 +0200
+++ linux-2.6/mm/slub.c 2018-04-27 21:05:53.0 +0200
@@ -3224,34 +3224,10 @@ static unsigned int slub_min_objects;
  * requested a higher mininum order then we start with that one instead of
  * the smallest order which will fit the object.
  */
-static inline unsigned int slab_order(unsigned int size,
-   unsigned int min_objects, unsigned int max_order,
-   unsigned int fract_leftover, unsigned int reserved)
+static int calculate_order(unsigned int size, unsigned int reserved)
 {
-   unsigned int min_order = slub_min_order;
-   unsigned int order;
-
-   if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
-   return get_order(size * MAX_OBJS_PER_PAGE) - 1;
-
-   for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
-   order <= max_order; order++) {
-
-   unsigned int slab_size = (unsigned int)PAGE_SIZE << order;
-   unsigned int rem;
-
-   rem = (slab_size - reserved) % size;
-
-   if (rem <= slab_size / fract_leftover)
-   break;
-   }
-
-   return order;
-}
-
-static inline int calculate_order(unsigned int size, unsigned int reserved)
-{
-   unsigned int order;
+   unsigned int best_order;
+   unsigned int test_order;
unsigned int min_objects;
unsigned int max_objects;
 
@@ -3269,34 +3245,38 @@ static inline int calculate_order(unsign
max_objects = order_objects(slub_max_order, size, reserved);
min_objects = min(min_objects, max_objects);
 
-   while (min_objects > 1) {
-   unsigned int fraction;
+   /* Get the minimum acceptable order for one object */
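
The diff is cut off here in the archive. Given the four conditions in the
commit message, the new loop could continue roughly like this sketch - an
approximation reconstructed from the description, not the author's exact
code (order_objects() and get_order() are the existing slub helpers):

   best_order = get_order(size + reserved);

   for (test_order = best_order + 1; test_order < MAX_ORDER; test_order++) {
           unsigned int best_objs = order_objects(best_order, size, reserved);
           unsigned int test_objs = order_objects(test_order, size, reserved);
           unsigned int best_size = (unsigned int)PAGE_SIZE << best_order;
           unsigned int best_waste = best_size - reserved - best_objs * size;

           /* if we would overshoot MAX_OBJS_PER_PAGE, don't increase */
           if (test_objs > MAX_OBJS_PER_PAGE)
                   break;

           /* if we are below slub_min_order, increase */
           if (test_order <= slub_min_order)
                   best_order = test_order;

           /* if we are below slub_max_order and below min_objects, increase */
           else if (test_order <= slub_max_order && best_objs < min_objects)
                   best_order = test_order;

           /* above slub_max_order, increase only if it reduces waste and
            * we already waste at least 1/16 of the compound page
            */
           else if (best_waste > best_size / 16 &&
                           test_objs > (best_objs << (test_order - best_order)))
                   best_order = test_order;
   }
   return best_order;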

Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-27 Thread Christopher Lameter
On Thu, 26 Apr 2018, Mikulas Patocka wrote:

> > Hmmm... order 4 for these caches may cause some concern. These should stay
> > under costly order I think. Otherwise allocations are no longer
> > guaranteed.
>
> You said that slub has fallback to smaller order allocations.

Yes it does...

> The whole purpose of this "minimize waste" approach is to use higher-order
> allocations to use memory more efficiently, so it is just doing its job.
> (for these 3 caches, order-4 really wastes less memory than order-3 - on
> my system TCPv6 and sighand_cache have size 2112, task_struct 2752).

Hmmm... Ok if the others are fine with this as well. I got some pushback
there in the past.

> We could improve the fallback code, so that if order-4 allocation fails,
> it tries order-3 allocation, and then falls back to order-0. But I think
> that these failures are rare enough that it is not a problem.

I also think that would be too many fallbacks.

> > > + /* Increase order even more, but only if it reduces waste */
> > > + if (test_order_obj <= 32 &&
> >
> > Where does the 32 come from?
>
> It is to avoid extremely high order for extremely small slabs.
>
> For example, see kmalloc-96.
> 10922 96-byte objects would fit into 1MiB
> 21845 96-byte objects would fit into 2MiB

That is the result of considering absolute byte wastage.

> The algorithm would recognize the one extra object that fits into the 2MiB
> slab as "waste reduction" and increase the order to 2MiB - and we don't
> want this.
>
> So, the general reasoning is - if we have 32 objects in a slab, then it is
> already considered that wasted space is reasonably low and we don't want
> to increase the order more.
>
> Currently, kmalloc-96 uses order-0 - that is reasonable (we already have
> 42 objects in a 4k page, so we don't need to use a higher order, even if
> a higher order would waste one object less).


The old code uses the concept of a "fraction" to calculate overhead. The
code here uses absolute counts of bytes. Fraction looks better to me.


Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-26 Thread Mikulas Patocka


On Thu, 26 Apr 2018, Christopher Lameter wrote:

> On Wed, 25 Apr 2018, Mikulas Patocka wrote:
> 
> > Do you want this? It deletes slab_order and replaces it with the
> > "minimize_waste" logic directly.
> 
> Well yes that looks better. Now we need to make it easy to read and less
> complicated. Maybe try to keep as much as possible of the old code
> and also the names of variables to make it easier to review?
> 
> > It simplifies the code and it is very similar to the old algorithm; most
> > slab caches keep the same order, so it shouldn't cause any regressions.
> >
> > This patch changes the order of these slabs:
> > TCPv6: 3 -> 4
> > sighand_cache: 3 -> 4
> > task_struct: 3 -> 4
> 
> Hmmm... order 4 for these caches may cause some concern. These should stay
> under costly order I think. Otherwise allocations are no longer
> guaranteed.

You said that slub has fallback to smaller order allocations.

The whole purpose of this "minimize waste" approach is to use higher-order 
allocations to use memory more efficiently, so it is just doing its job. 
(for these 3 caches, order-4 really wastes less memory than order-3 - on 
my system TCPv6 and sighand_cache have size 2112, task_struct 2752).

We could improve the fallback code, so that if order-4 allocation fails, 
it tries order-3 allocation, and then falls back to order-0. But I think 
that these failures are rare enough that it is not a problem.
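
For context, the existing fallback already behaves roughly like this
minimal sketch (the function name alloc_slab_pages and the exact flag
handling are illustrative assumptions, not the real mm/slub.c code): the
high-order attempt is opportunistic, and a failure drops straight to the
minimum order:

   static struct page *alloc_slab_pages(gfp_t flags, unsigned int order,
                   unsigned int min_order)
   {
           struct page *page;

           /* try the preferred order, but fail fast and quietly */
           page = alloc_pages(flags | __GFP_NOWARN | __GFP_NORETRY, order);
           if (page || order == min_order)
                   return page;

           /* fall back directly to the smallest order that still fits */
           return alloc_pages(flags, min_order);
   }

Falling back directly to the minimum order, instead of stepping down one
order at a time, keeps the failure path cheap.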

> > @@ -3269,35 +3245,35 @@ static inline int calculate_order(unsign
> > max_objects = order_objects(slub_max_order, size, reserved);
> > min_objects = min(min_objects, max_objects);
> >
> > -   while (min_objects > 1) {
> > -   unsigned int fraction;
> > +   /* Get the minimum acceptable order for one object */
> > +   order = get_order(size + reserved);
> > +
> > +   for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
> > +   unsigned order_obj = order_objects(order, size, reserved);
> > +   unsigned test_order_obj = order_objects(test_order, size, reserved);
> > +
> > +   /* If there are too many objects, stop searching */
> > +   if (test_order_obj > MAX_OBJS_PER_PAGE)
> > +   break;
> >
> > -   fraction = 16;
> > -   while (fraction >= 4) {
> > -   order = slab_order(size, min_objects,
> > -   slub_max_order, fraction, reserved);
> > -   if (order <= slub_max_order)
> > -   return order;
> > -   fraction /= 2;
> > -   }
> > -   min_objects--;
> > +   /* Always increase up to slub_min_order */
> > +   if (test_order <= slub_min_order)
> > +   order = test_order;
> 
> Well that is a significant change. In our current scheme the order
> boundary wins.

I think it's not a change. The existing function slab_order() starts with 
min_order (unless it overshoots MAX_OBJS_PER_PAGE) and then goes upwards. 
My code does the same - it tests for MAX_OBJS_PER_PAGE (and bails out 
if we would overshoot it) and increases the order until it reaches 
slub_min_order (and then increases it even more if it satisfies the other 
conditions).

If you believe that it behaves differently, please describe the situation 
in detail.

> > +
> > +   /* If we are below min_objects and slub_max_order, increase 
> > order */
> > +   if (order_obj < min_objects && test_order <= slub_max_order)
> > +   order = test_order;
> > +
> > +   /* Increase order even more, but only if it reduces waste */
> > +   if (test_order_obj <= 32 &&
> 
> Where does the 32 come from?

It is to avoid extremely high order for extremely small slabs.

For example, see kmalloc-96.
10922 96-byte objects would fit into 1MiB
21845 96-byte objects would fit into 2MiB

The algorithm would recognize the one extra object that fits into the 2MiB 
slab as "waste reduction" and increase the order to 2MiB - and we don't 
want this.

So, the general reasoning is - if we have 32 objects in a slab, then it is 
already considered that wasted space is reasonably low and we don't want 
to increase the order more.

Currently, kmalloc-96 uses order-0 - that is reasonable (we already have 
42 objects in a 4k page, so we don't need to use a higher order, even if 
a higher order would waste one object less).
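
Spelled out (assuming 4KB pages and reserved = 0; the arithmetic is mine):

   order 8 (1MiB): 1048576 / 96 = 10922 objects, 64 bytes wasted
   order 9 (2MiB): 2097152 / 96 = 21845 objects, 32 bytes wasted

Since 21845 > (10922 << 1) = 21844, the "reduces waste" test alone would
keep doubling the slab just to fit one more object; the 32-object cutoff
is what stops that.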

> > +   test_order_obj > order_obj << (test_order - order))
> 
> Add more () to make the condition more readable.
> 
> > +   order = test_order;
> 
> Can we just call test_order order and avoid using the long variable names
> here? Variable names in functions are typically short.

You need two variables - "order" and "test_order".

"order" is the best order found so far and "test_order" is the order that 
we are now testing. If "test_order" wastes less space than "order", we 
assign order = test_order.

Mikulas


Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-26 Thread Christopher Lameter
On Wed, 25 Apr 2018, Mikulas Patocka wrote:

> Do you want this? It deletes slab_order and replaces it with the
> "minimize_waste" logic directly.

Well yes that looks better. Now we need to make it easy to read and less
complicated. Maybe try to keep as much as possible of the old code
and also the names of variables to make it easier to review?

> It simplifies the code and it is very similar to the old algorithm; most
> slab caches keep the same order, so it shouldn't cause any regressions.
>
> This patch changes the order of these slabs:
> TCPv6: 3 -> 4
> sighand_cache: 3 -> 4
> task_struct: 3 -> 4

Hmmm... order 4 for these caches may cause some concern. These should stay
under costly order I think. Otherwise allocations are no longer
guaranteed.

> @@ -3269,35 +3245,35 @@ static inline int calculate_order(unsign
>   max_objects = order_objects(slub_max_order, size, reserved);
>   min_objects = min(min_objects, max_objects);
>
> - while (min_objects > 1) {
> - unsigned int fraction;
> + /* Get the minimum acceptable order for one object */
> + order = get_order(size + reserved);
> +
> + for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
> + unsigned order_obj = order_objects(order, size, reserved);
> + unsigned test_order_obj = order_objects(test_order, size, reserved);
> +
> + /* If there are too many objects, stop searching */
> + if (test_order_obj > MAX_OBJS_PER_PAGE)
> + break;
>
> - fraction = 16;
> - while (fraction >= 4) {
> - order = slab_order(size, min_objects,
> - slub_max_order, fraction, reserved);
> - if (order <= slub_max_order)
> - return order;
> - fraction /= 2;
> - }
> - min_objects--;
> + /* Always increase up to slub_min_order */
> + if (test_order <= slub_min_order)
> + order = test_order;

Well that is a significant change. In our current scheme the order
boundary wins.


> +
> + /* If we are below min_objects and slub_max_order, increase 
> order */
> + if (order_obj < min_objects && test_order <= slub_max_order)
> + order = test_order;
> +
> + /* Increase order even more, but only if it reduces waste */
> + if (test_order_obj <= 32 &&

Where does the 32 come from?

> + test_order_obj > order_obj << (test_order - order))

Add more () to make the condition more readable.

> + order = test_order;

Can we just call test_order order and avoid using the long variable names
here? Variable names in functions are typically short.


Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-26 Thread Christopher Lameter
On Wed, 25 Apr 2018, Mikulas Patocka wrote:

> >
> > Could yo move that logic into slab_order()? It does something awfully
> > similar.
>
> But slab_order (and its caller) limits the order to "max_order" and we
> want more.
>
> Perhaps slab_order should be dropped and calculate_order totally
> rewritten?

Yes you likely need to do something creative with max_order if not with
more stuff.



Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-25 Thread Mikulas Patocka


On Wed, 25 Apr 2018, Mikulas Patocka wrote:

> 
> 
> On Wed, 18 Apr 2018, Christopher Lameter wrote:
> 
> > On Tue, 17 Apr 2018, Mikulas Patocka wrote:
> > 
> > > I can make a slub-only patch with no extra flag (on a freshly booted
> > > system it increases only the order of caches "TCPv6" and "sighand_cache"
> > > by one - so it should not have unexpected effects):
> > >
> > > Doing a generic solution for slab would be more complicated because slab
> > > assumes that all slabs have the same order, so it can't fall back to
> > > lower-order allocations.
> > 
> > Well again SLAB uses compound pages and thus would be able to detect the
> > size of the page. It may be some work but it could be done.
> > 
> > >
> > > Index: linux-2.6/mm/slub.c
> > > ===
> > > --- linux-2.6.orig/mm/slub.c  2018-04-17 19:59:49.0 +0200
> > > +++ linux-2.6/mm/slub.c   2018-04-17 20:58:23.0 +0200
> > > @@ -3252,6 +3252,7 @@ static inline unsigned int slab_order(un
> > >  static inline int calculate_order(unsigned int size, unsigned int 
> > > reserved)
> > >  {
> > >   unsigned int order;
> > > + unsigned int test_order;
> > >   unsigned int min_objects;
> > >   unsigned int max_objects;
> > >
> > > @@ -3277,7 +3278,7 @@ static inline int calculate_order(unsign
> > >   order = slab_order(size, min_objects,
> > >   slub_max_order, fraction, reserved);
> > >   if (order <= slub_max_order)
> > > - return order;
> > > + goto ret_order;
> > >   fraction /= 2;
> > >   }
> > >   min_objects--;
> > > @@ -3289,15 +3290,25 @@ static inline int calculate_order(unsign
> > >*/
> > >   order = slab_order(size, 1, slub_max_order, 1, reserved);
> > 
> > The slab order is determined in slab_order()
> > 
> > >   if (order <= slub_max_order)
> > > - return order;
> > > + goto ret_order;
> > >
> > >   /*
> > >* Doh this slab cannot be placed using slub_max_order.
> > >*/
> > >   order = slab_order(size, 1, MAX_ORDER, 1, reserved);
> > > - if (order < MAX_ORDER)
> > > - return order;
> > > - return -ENOSYS;
> > > + if (order >= MAX_ORDER)
> > > + return -ENOSYS;
> > > +
> > > +ret_order:
> > > + for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
> > > + unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
> > > + unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
> > > + if (test_order_objects > min(32, MAX_OBJS_PER_PAGE))
> > > + break;
> > > + if (test_order_objects > order_objects << (test_order - order))
> > > + order = test_order;
> > > + }
> > > + return order;
> > 
> > Could you move that logic into slab_order()? It does something awfully
> > similar.
> 
> But slab_order (and its caller) limits the order to "max_order" and we 
> want more.
> 
> Perhaps slab_order should be dropped and calculate_order totally 
> rewritten?
> 
> Mikulas

Do you want this? It deletes slab_order and replaces it with the 
"minimize_waste" logic directly.

The patch starts with a minimal order for a given size and increases the 
order if one of these conditions is met:
* we are below slub_min_order
* we are below min_objects and slub_max_order
* we go above slub_max_order only if it minimizes waste and if we don't 
  increase the object count above 32

It simplifies the code and it is very similar to the old algorithm; most 
slab caches keep the same order, so it shouldn't cause any regressions.

This patch changes the order of these slabs:
TCPv6: 3 -> 4
sighand_cache: 3 -> 4
task_struct: 3 -> 4

---
 mm/slub.c |   76 +-
 1 file changed, 26 insertions(+), 50 deletions(-)

Index: linux-2.6/mm/slub.c
===
--- linux-2.6.orig/mm/slub.c  2018-04-26 00:07:30.0 +0200
+++ linux-2.6/mm/slub.c 2018-04-26 00:21:37.0 +0200
@@ -3224,34 +3224,10 @@ static unsigned int slub_min_objects;
  * requested a higher mininum order then we start with that one instead of
  * the smallest order which will fit the object.
  */
-static inline unsigned int slab_order(unsigned int size,
-   unsigned int min_objects, unsigned int max_order,
-   unsigned int fract_leftover, unsigned int reserved)
-{
-   unsigned int min_order = slub_min_order;
-   unsigned int order;
-
-   if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
-   return get_order(size * MAX_OBJS_PER_PAGE) - 1;
-
-   for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
-   order <= max_order; order++) {
-
-   unsigned int slab_size = (unsigned int)PAGE_SIZE << order;
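
The diff is cut off in the archive at this point. The body of the new
loop can be reassembled from the hunks quoted in the replies in this
thread (only the trailing return is inferred):

   /* Get the minimum acceptable order for one object */
   order = get_order(size + reserved);

   for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
           unsigned order_obj = order_objects(order, size, reserved);
           unsigned test_order_obj = order_objects(test_order, size, reserved);

           /* If there are too many objects, stop searching */
           if (test_order_obj > MAX_OBJS_PER_PAGE)
                   break;

           /* Always increase up to slub_min_order */
           if (test_order <= slub_min_order)
                   order = test_order;

           /* If we are below min_objects and slub_max_order, increase order */
           if (order_obj < min_objects && test_order <= slub_max_order)
                   order = test_order;

           /* Increase order even more, but only if it reduces waste */
           if (test_order_obj <= 32 &&
               test_order_obj > order_obj << (test_order - order))
                   order = test_order;
   }
   return order;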

Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-25 Thread Mikulas Patocka


On Wed, 18 Apr 2018, Christopher Lameter wrote:

> On Tue, 17 Apr 2018, Mikulas Patocka wrote:
> 
> > I can make a slub-only patch with no extra flag (on a freshly booted
> > system it increases only the order of caches "TCPv6" and "sighand_cache"
> > by one - so it should not have unexpected effects):
> >
> > Doing a generic solution for slab would be more complicated because slab
> > assumes that all slabs have the same order, so it can't fall back to
> > lower-order allocations.
> 
> Well again SLAB uses compound pages and thus would be able to detect the
> size of the page. It may be some work but it could be done.
> 
> >
> > Index: linux-2.6/mm/slub.c
> > ===
> > --- linux-2.6.orig/mm/slub.c  2018-04-17 19:59:49.0 +0200
> > +++ linux-2.6/mm/slub.c 2018-04-17 20:58:23.0 +0200
> > @@ -3252,6 +3252,7 @@ static inline unsigned int slab_order(un
> >  static inline int calculate_order(unsigned int size, unsigned int reserved)
> >  {
> > unsigned int order;
> > +   unsigned int test_order;
> > unsigned int min_objects;
> > unsigned int max_objects;
> >
> > @@ -3277,7 +3278,7 @@ static inline int calculate_order(unsign
> > order = slab_order(size, min_objects,
> > slub_max_order, fraction, reserved);
> > if (order <= slub_max_order)
> > -   return order;
> > +   goto ret_order;
> > fraction /= 2;
> > }
> > min_objects--;
> > @@ -3289,15 +3290,25 @@ static inline int calculate_order(unsign
> >  */
> > order = slab_order(size, 1, slub_max_order, 1, reserved);
> 
> The slab order is determined in slab_order()
> 
> > if (order <= slub_max_order)
> > -   return order;
> > +   goto ret_order;
> >
> > /*
> >  * Doh this slab cannot be placed using slub_max_order.
> >  */
> > order = slab_order(size, 1, MAX_ORDER, 1, reserved);
> > -   if (order < MAX_ORDER)
> > -   return order;
> > -   return -ENOSYS;
> > +   if (order >= MAX_ORDER)
> > +   return -ENOSYS;
> > +
> > +ret_order:
> > +   for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
> > +   unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
> > +   unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
> > +   if (test_order_objects > min(32, MAX_OBJS_PER_PAGE))
> > +   break;
> > +   if (test_order_objects > order_objects << (test_order - order))
> > +   order = test_order;
> > +   }
> > +   return order;
> 
> Could you move that logic into slab_order()? It does something awfully
> similar.

But slab_order (and its caller) limits the order to "max_order" and we 
want more.

Perhaps slab_order should be dropped and calculate_order totally 
rewritten?

Mikulas


Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-18 Thread Christopher Lameter
On Tue, 17 Apr 2018, Mikulas Patocka wrote:

> I can make a slub-only patch with no extra flag (on a freshly booted
> system it increases only the order of caches "TCPv6" and "sighand_cache"
> by one - so it should not have unexpected effects):
>
> Doing a generic solution for slab would be more complicated because slab
> assumes that all slabs have the same order, so it can't fall back to
> lower-order allocations.

Well again SLAB uses compound pages and thus would be able to detect the
size of the page. It may be some work but it could be done.

>
> Index: linux-2.6/mm/slub.c
> ===
> --- linux-2.6.orig/mm/slub.c  2018-04-17 19:59:49.0 +0200
> +++ linux-2.6/mm/slub.c   2018-04-17 20:58:23.0 +0200
> @@ -3252,6 +3252,7 @@ static inline unsigned int slab_order(un
>  static inline int calculate_order(unsigned int size, unsigned int reserved)
>  {
>   unsigned int order;
> + unsigned int test_order;
>   unsigned int min_objects;
>   unsigned int max_objects;
>
> @@ -3277,7 +3278,7 @@ static inline int calculate_order(unsign
>   order = slab_order(size, min_objects,
>   slub_max_order, fraction, reserved);
>   if (order <= slub_max_order)
> - return order;
> + goto ret_order;
>   fraction /= 2;
>   }
>   min_objects--;
> @@ -3289,15 +3290,25 @@ static inline int calculate_order(unsign
>*/
>   order = slab_order(size, 1, slub_max_order, 1, reserved);

The slab order is determined in slab_order()

>   if (order <= slub_max_order)
> - return order;
> + goto ret_order;
>
>   /*
>* Doh this slab cannot be placed using slub_max_order.
>*/
>   order = slab_order(size, 1, MAX_ORDER, 1, reserved);
> - if (order < MAX_ORDER)
> - return order;
> - return -ENOSYS;
> + if (order >= MAX_ORDER)
> + return -ENOSYS;
> +
> +ret_order:
> + for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
> + unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
> + unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
> + if (test_order_objects > min(32, MAX_OBJS_PER_PAGE))
> + break;
> + if (test_order_objects > order_objects << (test_order - order))
> + order = test_order;
> + }
> + return order;

Could you move that logic into slab_order()? It does something awfully
similar.



Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-17 Thread Vlastimil Babka
On 04/17/2018 07:26 PM, Mikulas Patocka wrote:
> 
> 
> On Tue, 17 Apr 2018, Vlastimil Babka wrote:
> 
>> On 04/17/2018 04:45 PM, Christopher Lameter wrote:
>>> On Mon, 16 Apr 2018, Mikulas Patocka wrote:
>>>
 This patch introduces a flag SLAB_MINIMIZE_WASTE for slab and slub. This
 flag causes allocation of larger slab caches in order to minimize wasted
 space.

 This is needed because we want to use dm-bufio for deduplication index and
 there are existing installations with non-power-of-two block sizes (such
 as 640KB). The performance of the whole solution depends on efficient
 memory use, so we must waste as little memory as possible.
>>>
>>> Hmmm. Can we come up with a generic solution instead?
>>
>> Yes please.
>>
>>> This may mean relaxing the enforcement of the allocation max order a bit
>>> so that we can get dense allocation through higher order allocs.
>>>
>>> But then higher order allocs are generally seen as problematic.
>>
>> I think in this case they are better than wasting/fragmenting 384kB for
>> a 640kB object.
> 
> Wasting 37% of memory is still better than the kernel randomly returning 
> -ENOMEM when higher-order allocation fails.

Of course, see below.

>>> That
>>> means that callers need to be able to tolerate failures.
>>
>> Is it any different from now? I suppose there would still be
>> smallest-order fallback involved in sl*b itself? And if your allocation

^ There: "I suppose there would still be smallest-order fallback
involved in sl*b itself?"

If SLAB doesn't currently support fallback to a different order, it either
learns to do that, or keeps wasting memory and more people will migrate
to SLUB. Simple.


Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-17 Thread Mikulas Patocka


On Tue, 17 Apr 2018, Christopher Lameter wrote:

> On Tue, 17 Apr 2018, Vlastimil Babka wrote:
> 
> > On 04/17/2018 04:45 PM, Christopher Lameter wrote:
> 
> > > But then higher order allocs are generally seen as problematic.
> >
> > I think in this case they are better than wasting/fragmenting 384kB for
> > a 640kB object.
> 
> Well typically we have suggested that people use vmalloc in the past.

vmalloc is slow - it is unusable for a buffer cache.

> > > That
> > > means that callers need to be able to tolerate failures.
> >
> > Is it any different from now? I suppose there would still be
> > smallest-order fallback involved in sl*b itself? And if your allocation
> > is so large it can fail even with the fallback (i.e. >= costly order),
> > you need to tolerate failures anyway?
> 
> Failures can occur even with < costly order as far as I can tell. Order 0
> is the only safe one.

The alloc_pages function seems to retry indefinitely for order <= 
PAGE_ALLOC_COSTLY_ORDER. Do you have some explanation of why it should fail?

> > One corner case I see is if there is anyone who would rather use their
> > own fallback instead of the space-wasting smallest-order fallback.
> > Maybe we could map some GFP flag to indicate that.
> 
> Well if you have a fallback then maybe the slab allocator should not fall
> back on its own but let the caller deal with it.

Mikulas


Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-17 Thread Mikulas Patocka


On Tue, 17 Apr 2018, Christopher Lameter wrote:

> On Mon, 16 Apr 2018, Mikulas Patocka wrote:
> 
> > This patch introduces a flag SLAB_MINIMIZE_WASTE for slab and slub. This
> > flag causes allocation of larger slab caches in order to minimize wasted
> > space.
> >
> > This is needed because we want to use dm-bufio for deduplication index and
> > there are existing installations with non-power-of-two block sizes (such
> > as 640KB). The performance of the whole solution depends on efficient
> > memory use, so we must waste as little memory as possible.
> 
> Hmmm. Can we come up with a generic solution instead?
> 
> This may mean relaxing the enforcement of the allocation max order a bit
> so that we can get dense allocation through higher order allocs.
> 
> But then higher order allocs are generally seen as problematic.
> 
> Note that SLUB will fall back to smallest order already if a failure
> occurs so increasing slub_max_order may not be that much of an issue.
> 
> Maybe drop the max order limit completely and use MAX_ORDER instead? That
> means that callers need to be able to tolerate failures.

I can make a slub-only patch with no extra flag (on a freshly booted 
system it increases only the order of caches "TCPv6" and "sighand_cache" 
by one - so it should not have unexpected effects):

Doing a generic solution for slab would be more complicated because slab 
assumes that all slabs have the same order, so it can't fall back to 
lower-order allocations.


From: Mikulas Patocka 
Subject: [PATCH] slub: minimize wasted space

When the object size is greater than what slub_max_order allows, the slub
subsystem rounds the slab size up to the next power of two. This causes a
lot of wasted space - e.g. a 640KB block consumes 1MB of memory.

This patch makes the slub subsystem increase the order if it is beneficial.
The order is increased as long as it reduces wasted space. There is a cutoff
at 32 objects per slab.

Signed-off-by: Mikulas Patocka 

---
 mm/slub.c |   21 -
 1 file changed, 16 insertions(+), 5 deletions(-)

Index: linux-2.6/mm/slub.c
===
--- linux-2.6.orig/mm/slub.c	2018-04-17 19:59:49.0 +0200
+++ linux-2.6/mm/slub.c 2018-04-17 20:58:23.0 +0200
@@ -3252,6 +3252,7 @@ static inline unsigned int slab_order(un
 static inline int calculate_order(unsigned int size, unsigned int reserved)
 {
unsigned int order;
+   unsigned int test_order;
unsigned int min_objects;
unsigned int max_objects;
 
@@ -3277,7 +3278,7 @@ static inline int calculate_order(unsign
order = slab_order(size, min_objects,
slub_max_order, fraction, reserved);
if (order <= slub_max_order)
-   return order;
+   goto ret_order;
fraction /= 2;
}
min_objects--;
@@ -3289,15 +3290,25 @@ static inline int calculate_order(unsign
 */
order = slab_order(size, 1, slub_max_order, 1, reserved);
if (order <= slub_max_order)
-   return order;
+   goto ret_order;
 
/*
 * Doh this slab cannot be placed using slub_max_order.
 */
order = slab_order(size, 1, MAX_ORDER, 1, reserved);
-   if (order < MAX_ORDER)
-   return order;
-   return -ENOSYS;
+   if (order >= MAX_ORDER)
+   return -ENOSYS;
+
+ret_order:
+   for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
+   unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
+   unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
+   if (test_order_objects > min(32, MAX_OBJS_PER_PAGE))
+   break;
+   if (test_order_objects > order_objects << (test_order - order))
+   order = test_order;
+   }
+   return order;
 }
 
 static void


Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-17 Thread Mikulas Patocka


On Tue, 17 Apr 2018, Vlastimil Babka wrote:

> On 04/17/2018 04:45 PM, Christopher Lameter wrote:
> > On Mon, 16 Apr 2018, Mikulas Patocka wrote:
> > 
> >> This patch introduces a flag SLAB_MINIMIZE_WASTE for slab and slub. This
> >> flag causes allocation of larger slab caches in order to minimize wasted
> >> space.
> >>
> >> This is needed because we want to use dm-bufio for the deduplication index and
> >> there are existing installations with non-power-of-two block sizes (such
> >> as 640KB). The performance of the whole solution depends on efficient
> >> memory use, so we must waste as little memory as possible.
> > 
> > Hmmm. Can we come up with a generic solution instead?
> 
> Yes please.
> 
> > This may mean relaxing the enforcement of the allocation max order a bit
> > so that we can get dense allocation through higher order allocs.
> > 
> > But then higher order allocs are generally seen as problematic.
> 
> I think in this case they are better than wasting/fragmenting 384kB for
> a 640kB object.

Wasting 37% of memory (384kB out of every 1MB slab) is still better than 
the kernel randomly returning -ENOMEM when a higher-order allocation fails.

> > That
> > means that callers need to be able to tolerate failures.
> 
> Is it any different from now? I suppose there would still be
> smallest-order fallback involved in sl*b itself? And if your allocation
> is so large it can fail even with the fallback (i.e. >= costly order),
> you need to tolerate failures anyway?
> 
> One corner case I see is if there is anyone who would rather use their
> own fallback instead of the space-wasting smallest-order fallback.
> Maybe we could map some GFP flag to indicate that.

For example, if you create a cache with 17KB objects, the slab subsystem 
will pad them up to 32KB. You waste almost 1/2 of the memory, but the 
allocation is reliable and it won't fail.
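
Concretely (a trivial sketch of that power-of-two rounding, assuming 4KB 
pages):

#include <stdio.h>

int main(void)
{
	unsigned long page = 4096, size = 17 * 1024, slab = page;

	while (slab < size)
		slab <<= 1;	/* 17KB rounds up to a 32KB slab */
	printf("waste: %lu bytes (%.1f%%)\n",
	       slab - size, 100.0 * (slab - size) / slab);	/* 15360 (46.9%) */
	return 0;
}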

If you use an order higher than 32KB (i.e. above PAGE_ALLOC_COSTLY_ORDER), 
you get less wasted memory, but you also get random -ENOMEMs (yes, we had 
a problem in dm-thin where it randomly failed during initialization due to 
a 64KB allocation).

Mikulas


Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-17 Thread Christopher Lameter
On Tue, 17 Apr 2018, Vlastimil Babka wrote:

> On 04/17/2018 04:45 PM, Christopher Lameter wrote:

> > But then higher order allocs are generally seen as problematic.
>
> I think in this case they are better than wasting/fragmenting 384kB for
> a 640kB object.

Well typically we have suggested that people use vmalloc in the past.


> > Note that SLUB will fall back to smallest order already if a failure
> > occurs so increasing slub_max_order may not be that much of an issue.
> >
> > Maybe drop the max order limit completely and use MAX_ORDER instead?
>
> For packing, sure. For performance, please no (i.e. don't try to
> allocate MAX_ORDER for each and every cache).

No of course not. We would have to modify the order selection on kmem
cache creation.

> > That
> > means that callers need to be able to tolerate failures.
>
> Is it any different from now? I suppose there would still be
> smallest-order fallback involved in sl*b itself? And if your allocation
> is so large it can fail even with the fallback (i.e. >= costly order),
> you need to tolerate failures anyway?

Failures can occur even with < costly order as far as I can tell. Order 0
is the only safe one.

> One corner case I see is if there is anyone who would rather use their
> own fallback instead of the space-wasting smallest-order fallback.
> Maybe we could map some GFP flag to indicate that.

Well if you have a fallback then maybe the slab allocator should not fall
back on its own but let the caller deal with it.
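
Such a caller-side fallback could look like this sketch (hypothetical 
variables, not dm-bufio's actual code):

buf = kmem_cache_alloc(cache, GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
if (!buf)
	buf = vmalloc(buf_size);	/* the caller's own fallback */
/* freeing must then check is_vmalloc_addr() to pick vfree() or
   kmem_cache_free() */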



Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-17 Thread Vlastimil Babka
On 04/17/2018 04:45 PM, Christopher Lameter wrote:
> On Mon, 16 Apr 2018, Mikulas Patocka wrote:
> 
>> This patch introduces a flag SLAB_MINIMIZE_WASTE for slab and slub. This
>> flag causes allocation of larger slab caches in order to minimize wasted
>> space.
>>
>> This is needed because we want to use dm-bufio for the deduplication index and
>> there are existing installations with non-power-of-two block sizes (such
>> as 640KB). The performance of the whole solution depends on efficient
>> memory use, so we must waste as little memory as possible.
> 
> Hmmm. Can we come up with a generic solution instead?

Yes please.

> This may mean relaxing the enforcement of the allocation max order a bit
> so that we can get dense allocation through higher order allocs.
> 
> But then higher order allocs are generally seen as problematic.

I think in this case they are better than wasting/fragmenting 384kB for
a 640kB object.

> Note that SLUB will fall back to smallest order already if a failure
> occurs so increasing slub_max_order may not be that much of an issue.
> 
> Maybe drop the max order limit completely and use MAX_ORDER instead?

For packing, sure. For performance, please no (i.e. don't try to
allocate MAX_ORDER for each and every cache).

> That
> means that callers need to be able to tolerate failures.

Is it any different from now? I suppose there would still be
smallest-order fallback involved in sl*b itself? And if your allocation
is so large it can fail even with the fallback (i.e. >= costly order),
you need to tolerate failures anyway?

One corner case I see is if there is anyone who would rather use their
own fallback instead of the space-wasting smallest-order fallback.
Maybe we could map some GFP flag to indicate that.

> 



Re: [PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-17 Thread Christopher Lameter
On Mon, 16 Apr 2018, Mikulas Patocka wrote:

> This patch introduces a flag SLAB_MINIMIZE_WASTE for slab and slub. This
> flag causes allocation of larger slab caches in order to minimize wasted
> space.
>
> This is needed because we want to use dm-bufio for the deduplication index and
> there are existing installations with non-power-of-two block sizes (such
> as 640KB). The performance of the whole solution depends on efficient
> memory use, so we must waste as little memory as possible.

Hmmm. Can we come up with a generic solution instead?

This may mean relaxing the enforcement of the allocation max order a bit
so that we can get dense allocation through higher order allocs.

But then higher order allocs are generally seen as problematic.

Note that SLUB will fall back to smallest order already if a failure
occurs so increasing slub_max_order may not be that much of an issue.

Maybe drop the max order limit completely and use MAX_ORDER instead? That
means that callers need to be able to tolerate failures.




[PATCH RESEND] slab: introduce the flag SLAB_MINIMIZE_WASTE

2018-04-16 Thread Mikulas Patocka
This patch introduces a flag SLAB_MINIMIZE_WASTE for slab and slub. This
flag causes allocation of larger slab caches in order to minimize wasted
space.

This is needed because we want to use dm-bufio for the deduplication index and
there are existing installations with non-power-of-two block sizes (such
as 640KB). The performance of the whole solution depends on efficient
memory use, so we must waste as little memory as possible.
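
Caller-side usage would look like this (hypothetical cache name and size; 
the actual dm-bufio change is in the hunk at the end of this patch):

struct kmem_cache *cache;

cache = kmem_cache_create("dm_bufio_cache-640K", 640 * 1024, 0,
			  SLAB_MINIMIZE_WASTE, NULL);
if (!cache)
	return -ENOMEM;
/* with this flag the cache may use high-order pages, so
   kmem_cache_alloc() can fail more often and callers must handle NULL */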

Signed-off-by: Mikulas Patocka 

---
 drivers/md/dm-bufio.c |2 +-
 include/linux/slab.h  |7 +++
 mm/slab.c |4 ++--
 mm/slab.h |7 ---
 mm/slab_common.c  |2 +-
 mm/slub.c |   25 -
 6 files changed, 35 insertions(+), 12 deletions(-)

Index: linux-2.6/include/linux/slab.h
===
--- linux-2.6.orig/include/linux/slab.h 2018-04-16 21:10:45.0 +0200
+++ linux-2.6/include/linux/slab.h  2018-04-16 21:10:45.0 +0200
@@ -108,6 +108,13 @@
 #define SLAB_KASAN 0
 #endif
 
+/*
+ * Use higher order allocations to minimize wasted space.
+ * Note: the allocation is unreliable if this flag is used, the caller
+ * must handle allocation failures gracefully.
+ */
+#define SLAB_MINIMIZE_WASTE	((slab_flags_t __force)0x1000U)
+
 /* The following flags affect the page allocator grouping pages by mobility */
 /* Objects are reclaimable */
 #define SLAB_RECLAIM_ACCOUNT   ((slab_flags_t __force)0x0002U)
Index: linux-2.6/mm/slab_common.c
===
--- linux-2.6.orig/mm/slab_common.c 2018-04-16 21:10:45.0 +0200
+++ linux-2.6/mm/slab_common.c  2018-04-16 21:10:45.0 +0200
@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_d
SLAB_FAILSLAB | SLAB_KASAN)
 
 #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
-SLAB_ACCOUNT)
+SLAB_ACCOUNT | SLAB_MINIMIZE_WASTE)
 
 /*
  * Merge control. If this is set then no merging of slab caches will occur.
Index: linux-2.6/mm/slub.c
===
--- linux-2.6.orig/mm/slub.c	2018-04-16 21:10:45.0 +0200
+++ linux-2.6/mm/slub.c 2018-04-16 21:12:41.0 +0200
@@ -3249,7 +3249,7 @@ static inline unsigned int slab_order(un
return order;
 }
 
-static inline int calculate_order(unsigned int size, unsigned int reserved)
+static inline int calculate_order(unsigned int size, unsigned int reserved, slab_flags_t flags)
 {
unsigned int order;
unsigned int min_objects;
@@ -3277,7 +3277,7 @@ static inline int calculate_order(unsign
order = slab_order(size, min_objects,
slub_max_order, fraction, reserved);
if (order <= slub_max_order)
-   return order;
+   goto ret_order;
fraction /= 2;
}
min_objects--;
@@ -3289,15 +3289,30 @@ static inline int calculate_order(unsign
 */
order = slab_order(size, 1, slub_max_order, 1, reserved);
if (order <= slub_max_order)
-   return order;
+   goto ret_order;
 
/*
 * Doh this slab cannot be placed using slub_max_order.
 */
order = slab_order(size, 1, MAX_ORDER, 1, reserved);
if (order < MAX_ORDER)
-   return order;
+   goto ret_order;
return -ENOSYS;
+
+ret_order:
+   if (flags & SLAB_MINIMIZE_WASTE) {
+   /* Increase the order if it decreases waste */
+   int test_order;
+   for (test_order = order + 1; test_order < MAX_ORDER; test_order++) {
+   unsigned long order_objects = ((PAGE_SIZE << order) - reserved) / size;
+   unsigned long test_order_objects = ((PAGE_SIZE << test_order) - reserved) / size;
+   if (test_order_objects >= min(32, MAX_OBJS_PER_PAGE))
+   break;
+   if (test_order_objects > order_objects << (test_order - order))
+   order = test_order;
+   }
+   }
+   return order;
 }
 
 static void
@@ -3562,7 +3577,7 @@ static int calculate_sizes(struct kmem_c
if (forced_order >= 0)
order = forced_order;
else
-   order = calculate_order(size, s->reserved);
+   order = calculate_order(size, s->reserved, flags);
 
if ((int)order < 0)
return 0;
Index: linux-2.6/drivers/md/dm-bufio.c
===
--- linux-2.6.orig/drivers/md/dm-bufio.c	2018-04-16 21:10:45.0 +0200
+++ linux-2.6/drivers/md/dm-bufio.c 2018-04-16 21:11:23.0 +0200
@@ -1683,7 +1683,7