2015-06-16 17:57 GMT+09:00 Jesper Dangaard Brouer bro...@redhat.com:
On Tue, 16 Jun 2015 10:21:10 +0200
Jesper Dangaard Brouer bro...@redhat.com wrote:
On Tue, 16 Jun 2015 16:28:06 +0900 Joonsoo Kim iamjoonsoo@lge.com
wrote:
Is this really better than just calling
2015-06-16 18:20 GMT+09:00 Jesper Dangaard Brouer bro...@redhat.com:
On Tue, 16 Jun 2015 16:23:28 +0900
Joonsoo Kim iamjoonsoo@lge.com wrote:
On Mon, Jun 15, 2015 at 05:52:56PM +0200, Jesper Dangaard Brouer wrote:
This implements SLUB specific kmem_cache_free_bulk(). SLUB allocator
On Tue, 16 Jun 2015, Joonsoo Kim wrote:
So, in your test, most of the objects may come from one or two slabs, and your
algorithm is well optimized for that case. But is this workload the normal case?
It is normal if the objects were bulk allocated because SLUB ensures that
all objects are first
On Tue, 16 Jun 2015 10:10:25 -0500 (CDT)
Christoph Lameter c...@linux.com wrote:
On Tue, 16 Jun 2015, Joonsoo Kim wrote:
So, in your test, most of the objects may come from one or two slabs, and your
algorithm is well optimized for that case. But is this workload the normal case?
It is
On Tue, 16 Jun 2015, Joonsoo Kim wrote:
If I add these, then I would also need to add them on the alloc path...
Yes, please.
Let's fall back to the generic implementation for any of these things. We
need to focus on maximum performance in these functions. The more special
cases we have to
On Tue, 16 Jun 2015, Jesper Dangaard Brouer wrote:
It is very important that everybody realizes that the save+restore
variant is very expensive; this is key:
CPU: i7-4790K CPU @ 4.00GHz
* local_irq_{disable,enable}: 7 cycles(tsc) - 1.821 ns
* local_irq_{save,restore} : 37 cycles(tsc) -
On Mon, Jun 15, 2015 at 05:52:56PM +0200, Jesper Dangaard Brouer wrote:
This implements a SLUB specific kmem_cache_free_bulk(). The SLUB allocator
now has both bulk alloc and free implemented.
Play nice and re-enable local IRQs while calling the slowpath.
Signed-off-by: Jesper Dangaard Brouer
On Mon, Jun 15, 2015 at 05:52:56PM +0200, Jesper Dangaard Brouer wrote:
This implements a SLUB specific kmem_cache_free_bulk(). The SLUB allocator
now has both bulk alloc and free implemented.
Play nice and re-enable local IRQs while calling the slowpath.
Signed-off-by: Jesper Dangaard Brouer
On Tue, 16 Jun 2015 16:23:28 +0900
Joonsoo Kim iamjoonsoo@lge.com wrote:
On Mon, Jun 15, 2015 at 05:52:56PM +0200, Jesper Dangaard Brouer wrote:
This implements a SLUB specific kmem_cache_free_bulk(). The SLUB allocator
now has both bulk alloc and free implemented.
Play nice and
On Tue, 16 Jun 2015 16:28:06 +0900 Joonsoo Kim iamjoonsoo@lge.com wrote:
Is this really better than just calling __kmem_cache_free_bulk()?
Yes, as can be seen in the cover letter, but my cover letter does not seem
to have reached the mm list.
Measurements for the entire patchset:
Bulk - Fallback
On Tue, 16 Jun 2015 10:21:10 +0200
Jesper Dangaard Brouer bro...@redhat.com wrote:
On Tue, 16 Jun 2015 16:28:06 +0900 Joonsoo Kim iamjoonsoo@lge.com wrote:
Is this really better than just calling __kmem_cache_free_bulk()?
Yes, as can be seen by cover-letter, but my cover-letter does
On Mon, 15 Jun 2015 11:34:44 -0500 (CDT)
Christoph Lameter c...@linux.com wrote:
On Mon, 15 Jun 2015, Jesper Dangaard Brouer wrote:
+ for (i = 0; i < size; i++) {
+ void *object = p[i];
+
+ if (unlikely(!object))
+ continue; // HOW ABOUT
On Mon, 15 Jun 2015, Jesper Dangaard Brouer wrote:
+ for (i = 0; i < size; i++) {
+ void *object = p[i];
+
+ if (unlikely(!object))
+ continue; // HOW ABOUT BUG_ON()???
Sure BUG_ON would be fitting here.
+
+ page =
On 06/15/2015 08:52 AM, Jesper Dangaard Brouer wrote:
This implements a SLUB specific kmem_cache_free_bulk(). The SLUB allocator
now has both bulk alloc and free implemented.
Play nice and re-enable local IRQs while calling the slowpath.
Signed-off-by: Jesper Dangaard Brouer bro...@redhat.com
---
14 matches