----- Original Message -----
> I'd like your opinion on two features implemented in an attempt to
> greatly reduce the number of memory allocations without major surgery in
> the code.
> The features are:
> 1. Custom STL allocator, which allocates the first N items from the STL
> container itself. This is a semi-transparent replacement for the standard
> allocator; just replace std::map with ceph_map, for example.
> Limitations: a) Breaks move semantics. b) No deallocation implemented, so
> not good for big, long-lived containers.
> 2. Placement allocator, which allows chained allocation of shorter-lived
> objects from longer-lived ones. An example would be allocating finish
> contexts from an aio completion context.
> Limitations: a) May require some code rearrangement to avoid concurrent
> deallocations; otherwise the deallocation code uses synchronization,
> which limits performance. b) Same as b) above.
> Performance results for 32 threads in a synthetic test, expressed as the
> ratio of std allocator time to custom allocator time:
>               stlalloc                      stl + placement alloc
> block   jemalloc  tcmalloc  ptmalloc    jemalloc  tcmalloc  ptmalloc
> 1M      1298.01   650.66    137.64      735.49    824.45    9.62
> 64K     514.84    2.82      304.62      570.74    4.85      12.21
> 32K     838.89    2.17      5.03        1600.5    7.43      8.28
> 4K      2.76      1.99      4.98        4.36      5.3       8.23
> 32B     2.67      5.09      3.69        4.41      8.48      6.4
> (100M test iterations for 32B and 4K, 2M for 32K and 64K, 200K for 1M)
> I didn't see any performance improvement in a 100% write fio test, but
> these could still shine in other workloads or with more suitable classes
> converted.
> Let me know whether they are worth submitting as PRs.
> STL allocator:
> https://github.com/efirs/ceph/commit/4eed0d63dbcbd00ee3aa325355bfbe56acbb7b05
> STL allocator usage example:
> https://github.com/efirs/ceph/commit/362c5c4e10563785cc89370d28511e0493f1b211
> https://github.com/efirs/ceph/commit/e2df67f7570c68e53775bc55cda12c6253e66d2f
> Placement allocator:
> https://github.com/efirs/ceph/commit/8df5cd7d753fd09e79a24f2fc781cf3af02e6d3e
> Placement allocator usage example:
> https://github.com/efirs/ceph/commit/70db18d9c1b39190bde68548b57c2aa7a9e455e0
> Evgeniy
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Hi Evgeniy,

I did some investigation into custom allocators recently as well. The use
cases that Sage was interested in were a) inline allocation for std::vector
that avoids the heap allocation from reserve(), and b) preallocation of
elements for std::map containers that are long-lived but mostly small.

I made a branch[1] that explored this, and ran into some of the same
limitations that you did. Deallocation was a big one, and there's no simple
solution there that's both space- and time-efficient. So while it's less
useful for containers like list and map that mix insertions and deletions,
it's a great fit for vector because it never needs to reclaim that memory.

And while you mention that moves can't be supported, I found that copies were
a problem as well. Because the standard library assumes that allocators are
stateless (until C++14, at least), it expects a copied allocator to be able
to deallocate entries obtained from the original allocator.

The boost container library includes a small_vector[2] class that does this
preallocation, but also supports move and copy. It wasn't added until v1.58,
however, so we can't depend on it yet without doing some boost header surgery.

For list/map containers, there may be a benefit to using a custom allocator
if all we're doing is constructing, inserting some elements, then destructing.
But I would argue that choosing a more appropriate data structure (whether it's
just std::vector, or something like boost::small_vector, boost::flat_map, etc)
would provide more wins overall with less code to maintain.


[1] https://github.com/cbodley/ceph/commits/wip-preallocator
