> -----Original Message-----
> From: [email protected] [mailto:ceph-devel-
> [email protected]] On Behalf Of Sage Weil
> Sent: Tuesday, August 11, 2015 10:11 PM
>
> I went ahead and implemented both of these pieces. See
>
> https://github.com/ceph/ceph/pull/5534
>
> My benchmark numbers are highly suspect, but the approximate takeaway is
> that it's 2x faster for the simple microbenchmarks and does 1/3rd the
> allocations. But there is some weird interaction with the allocator going on
> for 16k allocations that I saw, so it needs some more careful benchmarking.
16k allocations aren't that common, actually.
Some time ago I took an allocation profile for raw_char and posix_aligned
buffers, and got this:
[root@storage1 /]# sort buffer::raw_char-2143984.dat | uniq -c | sort -g
1 12
1 33
1 393
1 41
2 473
2 66447
3 190
3 20
3 64
4 16
36 206
88 174
88 48
89 272
89 36
90 34
312 207
3238 208
32403 209
196300 210
360164 45
[root@storage1 /]# sort buffer::posix_aligned-2081990.dat | uniq -c | sort -g
36 36864
433635 4096
So the most common allocations are very small ones (<255 bytes) and CEPH_PAGE_SIZE ones (4096 bytes here).
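(For reference, a minimal sketch of one way such a per-size histogram can be
collected; not necessarily how the profile above was taken, and the hook name,
file path and call site below are illustrative only, not actual Ceph code.)

#include <cstdio>
#include <cstdlib>
#include <unistd.h>

// Append one allocation size per line to a per-process .dat file; the file
// can then be post-processed with `sort <file> | uniq -c | sort -g` to get
// a count-per-size histogram like the ones above.
static void log_alloc_size(const char *tag, size_t len)
{
  static thread_local FILE *f = nullptr;
  if (!f) {
    char path[128];
    snprintf(path, sizeof(path), "/buffer::%s-%d.dat", tag, (int)getpid());
    f = fopen(path, "a");
  }
  if (f)
    fprintf(f, "%zu\n", len);
}

// hypothetical call site, e.g. where raw_char's backing memory is allocated
char *profiled_raw_char_alloc(size_t len)
{
  log_alloc_size("raw_char", len);
  return (char *)malloc(len);
}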
> The other interesting thing is that either of these pieces in isolation seems
> to
> have a pretty decent benefit, but when combined the benefits are fully
> additive.
>
> It seems to be reasonably stable, though!
I'm going to test them in my environment, because allocations and deallocations
alone, when done in a "best-case" pattern (a series of allocations followed by a
series of frees, with no interleaving), aren't a good benchmark for memory
allocators; in fact, most allocators are specifically optimized for exactly that
case. See the rough sketch below.
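To illustrate the two patterns (a rough sketch, not Ceph code: malloc/free stand
in for the buffer classes, and N and SZ are arbitrary):

#include <cstdlib>
#include <vector>

const int N = 1000000;
const size_t SZ = 210;   // a typical small-buffer size from the profile above

// "Best-case" pattern: a long run of allocations followed by a long run of
// frees. Most allocators serve such runs from fast per-size bins, so a
// microbenchmark built only on this pattern can overstate the win.
void best_case_pattern()
{
  std::vector<void*> v(N);
  for (int i = 0; i < N; ++i)
    v[i] = malloc(SZ);
  for (int i = 0; i < N; ++i)
    free(v[i]);
}

// Interleaved pattern: allocations and frees mixed together, closer to how
// buffers actually churn under a real workload.
void interleaved_pattern()
{
  std::vector<void*> v(64);         // slots start out as nullptr
  for (int i = 0; i < N; ++i) {
    size_t slot = (size_t)rand() % v.size();
    free(v[slot]);                  // free(nullptr) is a no-op
    v[slot] = malloc(SZ);
  }
  for (void *p : v)
    free(p);
}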
With best regards / Pozdrawiam
Piotr Dałek