> -----Original Message-----
> From: Haomai Wang [mailto:[email protected]]
> Sent: Wednesday, August 12, 2015 4:56 AM
> To: Dałek, Piotr
> 
> On Wed, Aug 12, 2015 at 5:48 AM, Dałek, Piotr <[email protected]>
> wrote:
> >> -----Original Message-----
> >> From: [email protected] [mailto:ceph-devel-
> >> [email protected]] On Behalf Of Sage Weil
> >> Sent: Tuesday, August 11, 2015 10:11 PM
> >>
> >> I went ahead and implemented both of these pieces.  See
> >>
> >>       https://github.com/ceph/ceph/pull/5534
> >>
> >> My benchmark numbers are highly suspect, but the approximate takeaway
> >> is that it's 2x faster for the simple microbenchmarks and does 1/3rd
> >> the allocations.  But there is some weird interaction with the
> >> allocator going on for 16k allocations that I saw, so it needs some
> >> more careful benchmarking.
> >
> > 16k allocations aren't that common, actually.
> > Some time ago I took an allocation profile for raw_char and posix_aligned
> > buffers (first column below is the occurrence count, second the allocation
> > size in bytes), and...
> >
> > [root@storage1 /]# sort buffer::raw_char-2143984.dat | uniq -c | sort -g
> >       1 12
> >       1 33
> >       1 393
> >       1 41
> >       2 473
> >       2 66447
> >       3 190
> >       3 20
> >       3 64
> >       4 16
> >      36 206
> >      88 174
> >      88 48
> >      89 272
> >      89 36
> >      90 34
> >     312 207
> >    3238 208
> >   32403 209
> >  196300 210
> >  360164 45
> 
> Since the sizes are concentrated around a few values, we could use a
> fixed-size buffer pool to optimize this. The performance was outstanding
> when I profiled it with perf.

The idea is great, but the execution is tricky, especially in the case of
SimpleMessenger -- we have a lot of threads allocating and freeing memory, so
the pool must be aware of that and not become a bottleneck itself.
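
For illustration only, here is a rough sketch of what such a pool could look
like (class name, buffer size and cache depth are made up, this is not Ceph
code): each thread keeps a small cache of freed buffers, so the common
alloc/free path takes no lock and only overflow traffic touches the shared
list.

#include <cstddef>
#include <cstdlib>
#include <mutex>
#include <vector>

// Sketch of a fixed-size buffer pool with per-thread caching.
class FixedBufferPool {
  static constexpr std::size_t BUF_SIZE  = 4096; // hypothetical fixed size
  static constexpr std::size_t CACHE_MAX = 64;   // per-thread cache depth

  std::mutex lock;            // guards only the shared overflow list
  std::vector<void*> shared;  // buffers spilled from thread caches

  // Per-thread stack of free buffers (shared by all pool instances in this
  // sketch); alloc/free on this path take no lock at all.
  static thread_local std::vector<void*> cache;

public:
  void* alloc() {
    if (!cache.empty()) {           // fast path: no lock
      void* p = cache.back();
      cache.pop_back();
      return p;
    }
    {
      std::lock_guard<std::mutex> g(lock);
      if (!shared.empty()) {        // slow path: refill from shared list
        void* p = shared.back();
        shared.pop_back();
        return p;
      }
    }
    return ::malloc(BUF_SIZE);      // pool empty: fall back to the allocator
  }

  void free(void* p) {
    if (cache.size() < CACHE_MAX) { // fast path: no lock
      cache.push_back(p);
      return;
    }
    std::lock_guard<std::mutex> g(lock);
    shared.push_back(p);            // spill to the shared list
  }

  ~FixedBufferPool() {
    for (void* p : shared)
      ::free(p);
    // NOTE: buffers still sitting in per-thread caches are not reclaimed
    // here; a real implementation would drain them on thread exit.
  }
};

thread_local std::vector<void*> FixedBufferPool::cache;

With many messenger threads the shared lock then only sees spill traffic,
which is exactly the property we would need so the pool doesn't become the
new bottleneck.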


With best regards / Pozdrawiam
Piotr Dałek
