This is an automated email from the ASF dual-hosted git repository.

amc pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
     new 60e778e  MemArena: Add make method to construct objects in the arena. Update documentation.
60e778e is described below

commit 60e778e177d36b6d4cb26689da2a9782821c8aa8
Author: Alan M. Carroll <a...@apache.org>
AuthorDate: Thu Jun 28 21:07:48 2018 -0500

    MemArena: Add make method to construct objects in the arena.
    Update documentation.
---
 doc/conf.py                                        |   3 +-
 .../internal-libraries/MemArena.en.rst             | 309 ++++++++-------------
 lib/ts/MemArena.cc                                 |  99 +++----
 lib/ts/MemArena.h                                  | 161 ++++++-----
 lib/ts/unit-tests/test_MemArena.cc                 |  99 +++++--
 5 files changed, 334 insertions(+), 337 deletions(-)

diff --git a/doc/conf.py b/doc/conf.py
index 1f09e42..54e7b10 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -167,7 +167,8 @@ pygments_style = 'sphinx'
 #modindex_common_prefix = []
 
 nitpicky = True
-nitpick_ignore = [
+nitpick_ignore = [ ('cpp:typeOrConcept', 'T')
+                 , ('cpp:typeOrConcept', 'Args')
                  ]
 
 # Autolink issue references.
diff --git a/doc/developer-guide/internal-libraries/MemArena.en.rst b/doc/developer-guide/internal-libraries/MemArena.en.rst
index 76ffdb4..6b2bc02 100644
--- a/doc/developer-guide/internal-libraries/MemArena.en.rst
+++ b/doc/developer-guide/internal-libraries/MemArena.en.rst
@@ -1,22 +1,17 @@
 .. Licensed to the Apache Software Foundation (ASF) under one
-   or more contributor license agreements.  See the NOTICE file
-   distributed with this work for additional information
-   regarding copyright ownership.  The ASF licenses this file
-   to you under the Apache License, Version 2.0 (the
-   "License"); you may not use this file except in compliance
-   with the License.  You may obtain a copy of the License at
+   or more contributor license agreements. See the NOTICE file distributed with this work for
+   additional information regarding copyright ownership. The ASF licenses this file to you under the
+   Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+   the License. You may obtain a copy of the License at
 
    http://www.apache.org/licenses/LICENSE-2.0
 
-   Unless required by applicable law or agreed to in writing,
-   software distributed under the License is distributed on an
-   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-   KIND, either express or implied.  See the License for the
-   specific language governing permissions and limitations
-   under the License.
-   
-.. include:: ../../common.defs
+   Unless required by applicable law or agreed to in writing, software distributed under the License
+   is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+   or implied. See the License for the specific language governing permissions and limitations under
+   the License.
 
+.. include:: ../../common.defs
 .. highlight:: cpp
 .. default-domain:: cpp
 .. |MemArena| replace:: :class:`MemArena`
@@ -26,211 +21,133 @@
 MemArena
 *************
 
-|MemArena| provides a memory arena or pool for allocating memory. The intended use is for allocating many small chunks of memory - few, large allocations are best handled independently. The purpose is to amortize the cost of allocation of each chunk across larger allocations in a heap style. In addition the allocated memory is presumed to have similar lifetimes so that all of the memory in the arena can be de-allocatred en masse. This is a memory allocation style used by many cotainers - [...]
+|MemArena| provides a memory arena or pool for allocating memory. Internally |MemArena| reserves
+memory in large blocks and allocates pieces of those blocks when memory is requested. Upon
+destruction all of the reserved memory is released, which also destroys all of the allocated memory.
+This is useful when the goal is any (or all) of trying to
+
+*  amortize allocation costs for many small allocations.
+*  create better memory locality for containers.
+*  de-allocate memory in bulk.
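+
+As a minimal sketch of that use (the 64 byte hint and the sizes are illustrative, not defaults)::
+
+   ts::MemArena arena{64};             // Hint: first reservation should provide at least 64 bytes.
+   ts::MemSpan span = arena.alloc(16); // Carve a small chunk out of the reserved block.
+   // ... use span.data() and span.size() ...
+   // All reserved memory, and with it every allocated chunk, is released when arena is destroyed.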
 
 Description
 +++++++++++
 
-|MemArena| manages an internal list of memory blocks, out of which it provides allocated
-blocks of memory. When an instance is destructed all the internal blocks are also freed. The
-expected use of this class is as an embedded memory manager for a container class.
-
-To support coalescence and compaction of memory, the methods :func:`MemArena::freeze` and
-:func:`MemArena::thaw` are provided. These create in effect generations of memory allocation.
-Calling :func:`MemArena::freeze` marks a generation. After this call any further allocations will
-be in new internal memory blocks. The corresponding call to :func:`MemArena::thaw` cause older
-generations of internal memory to be freed. The general logic for the container would be to freeze,
-re-allocate and copy the container elements, then thaw. This would result in compacted memory
-allocation in a single internal block. The uses cases would be either a process static data
-structure after initialization (coalescing for locality performence) or a container that naturally
-re-allocates (such as a hash table during a bucket expansion). A container could also provide its
-own API for its clients to cause a coalesence.
-
-Other than freeze / thaw, this class does not offer any mechanism to release memory beyond its destruction. This is not an issue for either process globals or transient arenas.
-
-Internals
-+++++++++
-
-|MemArena| opperates in *generations* of internal blocks of memory. Each generation marks a series internal block of memory. Allocations always occur from the most recent block within a generation, as it is always the largest and has the most unallocated space. The most recent block (current) is also the head of the linked list of memory blocks. Allocations are given in the form of a :class:`MemSpan`. Once an internal block of memory has exhausted it's avaliable space, a new, larger, int [...]
-
-.. uml::
-   :align: center
-
-   component [block] as b1
-   component [block] as b2
-   component [block] as b3
-   component [block] as b4
-   component [block] as b5
-   component [block] as b6
-
-   b1 -> b2 
-   b2 -> b3
-   b3 -> b4
-   b4 -> b5
-   b5 -> b6
-
-   generation -u- b3
-   current -u- b1
-
-A call to :func:`MemArena::thaw` will deallocate any generation that is not the current generation. Thus, currently it is impossible to deallocate ie. just the third generation. Everything after the generation pointer is in previous generations and everything before, and including, the generation pointer is in the current generation. Since blocks are reference counted, thawing is just a single assignment to drop everything after the generation pointer. After a :func:`MemArena::thaw`:
-
-.. uml::
-   :align: center
-
-   component [block] as b3
-   component [block] as b4
-   component [block] as b5
-   component [block] as b6
-
-
-   b3 -> b4
-   b4 -> b5
-   b5 -> b6
-
-   current -u- b3
-   generation -u- b6
-
-A generation can only be updated with an explicit call to :func:`MemArena::freeze`. The next generation is not actually allocated until a call to :func:`MemArena::alloc` happens. On the :func:`MemArena::alloc` following a :func:`MemArena::freeze` the next internal block of memory is the larger of the sum of all current allocations or the number of bytes requested. The reason for this is that the caller could :func:`MemArena::alloc` a size larger than all current allocations at which poin [...]
-
-.. uml::
-   :align: center
-
-   component [block] as b3
-   component [block] as b4
-   component [block] as b5
-   component [block] as b6
-
-
-   b3 -> b4
-   b4 -> b5
-   b5 -> b6
-
-   current -u- b3
-
-After the next :func:`MemArena::alloc`:
-
-.. uml::
-   :align: center
-
-   component [block\nnew generation] as b3
-   component [block] as b4
-   component [block] as b5
-   component [block] as b6
-   component [block] as b7
-
-
-   b3 -> b4
-   b4 -> b5
-   b5 -> b6
-   b6 -> b7
-
-   generation -u- b3
-   current -u- b3
-
-A caller can actually :func:`MemArena::alloc` **any** number of bytes. Internally, if the arena is unable to allocate enough memory for the allocation, it will create a new internal block of memory large enough and allocate from that. So if the arena is allocated like:
-
-.. code-block:: cpp
-   
-   ts::MemArena *arena = new ts::MemArena(64);
-
-The caller can actually allocate more than 64 bytes. 
-
-.. code-block:: cpp
-
-   ts::MemSpan span1 = arena->alloc(16);
-   ts::MemSpan span1 = arena->alloc(256);
-
-Now, span1 and span2 are in the same generation and can both be safely used. 
After:
-
-.. code-block:: cpp
-
-   arena->freeze();
-   ts::MemSpan span3 = arena->alloc(512);
-   arena->thaw();
-
-span3 can still be used but span1 and span2 have been deallocated and usage is undefined.
-
-Internal blocks are adjusted for optimization. Each :class:`MemArena::Block` is just a header for the underlying memory it manages. The header and memory are allocated together for locality such that each :class:`MemArena::Block` is immediately followed with the memory it manages. If a :class:`MemArena::Block` is larger than a page (defaulted at 4KB), it is aligned to a power of two. The actual memory that a :class:`MemArena::Block` can allocate out is slightly smaller. This is because a [...]
+When a |MemArena| instance is constructed no memory is reserved. A hint can be provided so that the
+first internal reservation of memory will have close to, but at least, that amount of free space
+available to be allocated.
+
+In normal use memory is allocated from |MemArena| using :func:`MemArena::alloc` to get chunks
+of memory, or :func:`MemArena::make` to get constructed class instances. :func:`MemArena::make`
+takes an arbitrary set of arguments which it attempts to pass to a constructor for the type
+:code:`T` after allocating memory (:code:`sizeof(T)` bytes) for the object. If there isn't enough
+free reserved memory, a new internal block is reserved. The size of the new reserved memory will be
+at least the size of the currently reserved memory, making each reservation larger than the last.
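+
+For example (:code:`Thing` is a hypothetical type, used here only for illustration)::
+
+   ts::MemArena arena;
+   ts::MemSpan raw = arena.alloc(128); // 128 raw bytes out of the reserved block.
+   Thing * t = arena.make<Thing>();    // sizeof(Thing) bytes, default constructor invoked.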
+
+The arena can be **frozen** using :func:`MemArena::freeze`, which locks down the currently
+reserved memory and forces the internal reservation of memory for the next allocation. By default
+this internal reservation will be the size of the frozen allocated memory. If this isn't the best
+value, a hint can be provided to the :func:`MemArena::freeze` method to specify a different value,
+in the same manner as the hint to the constructor. When the arena is thawed (unfrozen) using
+:func:`MemArena::thaw` the frozen memory is released, which also destroys the frozen allocated
+memory. Doing this can be useful after a series of allocations, which can leave the allocated
+memory spread across multiple internal blocks, possibly along with memory that is no longer in
+use. The result is to coalesce (or garbage collect) all of the in-use memory in the arena into a
+single bulk internal reserved block, improving memory efficiency and memory locality. This
+coalescence is done by:
+
+#. Freezing the arena.
+#. Copying all objects back in to the arena.
+#. Thawing the arena.
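+
+A sketch of that cycle, assuming :code:`Thing` is copy constructible and :code:`things` is a
+container of pointers to the live objects (both names are illustrative)::
+
+   arena.freeze();               // Freeze current memory; the next allocation reserves a new block.
+   for (Thing *& p : things) {
+      p = arena.make<Thing>(*p); // Copy each live object into the new block.
+   }
+   arena.thaw();                 // Release the frozen blocks holding the old copies.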
+
+Because the default reservation hint is large enough for all of the previously allocated memory,
+all of the copied objects will be put in the same new internal block. If for some reason this
+sizing isn't correct, a hint can be passed to :func:`MemArena::freeze` to specify a different value
+(if, for instance, there is a lot of unused memory of known size). Generally this is most useful
+for data that is initialized on process start and not changed afterwards: once that startup
+initialization is done, the data can be coalesced for better performance. Alternatively, a
+container that allocates and de-allocates same sized objects (such as a :code:`std::map`) can use a
+free list to re-use objects before going to the |MemArena| for more memory, thereby avoiding the
+accumulation of unused memory in the arena.
+
+Other than a freeze / thaw cycle, there is no mechanism to release memory except for the
+destruction of the |MemArena|. In such use cases either wasted memory must be small enough or
+temporary enough to not be an issue, or there must be a provision for some sort of garbage
+collection.
+
+Generally |MemArena| is not as useful for classes that allocate their own internal memory
+(such as :code:`std::string` or :code:`std::vector`), which includes most container classes. One
+container class that can be easily used is :class:`IntrusiveDList`, because the links are in the
+instance and therefore also in the arena.
+
+Objects created in the arena must not have :code:`delete` called on them, as this will corrupt
+memory, usually leading to an immediate crash. The memory for the instance will be released when
+the arena is destroyed. The destructor can be called explicitly if needed, but in general if a
+destructor is needed it is probably not a class that should be constructed in the arena. Looking
+at :class:`IntrusiveDList` again as an example: if it is used to link objects in the arena, there
+is no need for a destructor to clean up the links - all of the objects will be de-allocated when
+the arena is destroyed. Whether this kind of situation can be arranged with reasonable effort is a
+good heuristic for whether |MemArena| is an appropriate choice.
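+
+If a destructor really must run, invoke it explicitly rather than using :code:`delete` (a sketch,
+with the same illustrative :code:`Thing` type)::
+
+   Thing * t = arena.make<Thing>();
+   // ... use t ...
+   t->~Thing(); // Run the destructor only - never "delete t", the arena owns the memory.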
+
+While |MemArena| will normally allocate memory in successive chunks from an internal block, if the
+allocation request is large (more than a memory page) and there is not enough space in the current
+internal block, a block just for that allocation will be created. This is useful if the purpose of
+|MemArena| is to track blocks of memory more than to reduce the number of system level allocations.
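+
+For instance (a sketch - the sizes are illustrative)::
+
+   ts::MemArena arena{256};               // Small reservation hint.
+   ts::MemSpan big = arena.alloc(100000); // Larger than a page - served from a dedicated block.
+   // big is fully usable; the arena simply tracks one extra internal block for it.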
 
 Reference
 +++++++++
 
 .. class:: MemArena
 
-   .. class:: Block
-      
-      Underlying memory allocated is owned by the :class:`Block`. A linked list.
-
-      .. member:: size_t size
-      .. member:: size_t allocated
-      .. member:: std::shared_ptr<Block> next
-      .. function:: Block(size_t n)
-      .. function:: char* data()
-
-   .. function:: MemArena()
+   .. function:: MemArena(size_t n)
 
-      Construct an empty arena.
-
-   .. function:: explicit MemArena(size_t n)
-
-      Construct an arena with :arg:`n` bytes. 
+      Construct a memory arena. :arg:`n` is optional. Initially no memory is reserved. If :arg:`n`
+      is provided, it is a hint that the first internal memory reservation should provide roughly,
+      and at least, :arg:`n` bytes of free space. Otherwise the internal default hint is used. A
+      call to :code:`alloc(0)` will not allocate memory but will force the reservation of internal
+      memory, if this should be done immediately rather than lazily.
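+
+      For example, to reserve memory eagerly rather than lazily (this mirrors the example in the
+      header documentation; the 512 is illustrative)::
+
+         ts::MemArena arena(512); // First reservation will provide at least 512 bytes.
+         arena.alloc(0);          // Force the reservation now.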
 
    .. function:: MemSpan alloc(size_t n)
 
-      Allocate an :arg:`n` byte chunk of memory in the arena.
-
-   .. function:: MemArena& freeze(size_t n = 0)
+      Allocate memory of size :arg:`n` bytes in the arena. If :arg:`n` is zero then internal memory
+      will be reserved if there is currently none; otherwise it is a no-op.
 
-      Block all further allocation from any existing internal blocks. If :arg:`n` is zero then on the next allocation request a block twice as large as the current generation, otherwise the next internal block will be large enough to hold :arg:`n` bytes.
+   .. function:: template < typename T, typename ... Args > T * make(Args&& ... args)
 
-   .. function:: MemArena& thaw()
-
-      Unallocate all internal blocks that were allocated before the current generation.
-    
-   .. function:: MemArena& empty()
-     
-      Empties the entire arena and deallocates all underlying memory. Next block size will be equal to the sum of all allocations before the call to empty.
-
-   .. function:: size_t size() const 
-
-      Get the current generation size. The default size of the arena is 32KB unless otherwise specified.
-
-   .. function:: size_t remaining() const 
-
-      Amount of space left in the generation. 
-
-   .. function:: size_t allocated_size() const
-
-      Total number of bytes allocated in the arena.
+      Create an instance of :arg:`T`. :code:`sizeof(T)` bytes of memory are allocated from the arena
+      and the constructor invoked. This method takes any set of arguments, which are passed to
+      the constructor. A pointer to the newly constructed instance of :arg:`T` is returned. Note
+      that if the instance allocates other memory, that memory will not be in the arena. An
+      example, constructing a :code:`std::string_view`::
 
-   .. function:: size_t unallocated_size() const
+         std::string_view * sv = arena.make<std::string_view>(pointer, n);
 
-      Total number of bytes unallocated in the arena. Can be used to see the internal fragmentation.
+   .. function:: MemArena& freeze(size_t n)
 
-   .. function:: bool contains (void *ptr) const
+      Stop allocating from existing internal memory blocks. These blocks are now "frozen". Further
+      allocation calls will cause new memory to be reserved.
 
-      Returns whether or not a pointer is in the arena.
-       
-   .. function:: Block* newInternalBlock(size_t n, bool custom)
+      :arg:`n` is optional. If it is not provided, the hint for the next internal memory
+      reservation will be large enough to hold all current (now frozen) memory allocations. If
+      :arg:`n` is provided it is used as the reservation hint.
 
-      Create a new internal block and returns a pointer to the block. 
-
-   .. member:: size_t arena_size
-
-      Current generation size. 
-  
-   .. member:: size_t total_alloc
-
-      Number of bytes allocated out. 
-
-   .. member:: size_t next_block_size
+   .. function:: MemArena& thaw()
 
-      Size of next generation.
+      Release all frozen internal memory blocks, destroying all frozen allocations.
 
-   .. member:: std::shared_ptr<Block> generation
+   .. function:: MemArena& clear(size_t n)
 
-      Pointer to the current generation.
+      Release all memory, destroying all allocations. :arg:`n` is optional. If it is provided it
+      is used as the hint for the next reserved block; otherwise the hint is the size of all
+      allocated memory (frozen and not) at the time of the call to :func:`MemArena::clear`.
 
-   .. member:: std::shared_ptr<Block> current
+Internals
++++++++++
 
-      Pointer to most recent internal block of memory.
+Allocated memory is tracked by two linked lists, one for current memory and the other for frozen
+memory. The latter is used only while the arena is frozen. Because a shared pointer is used for the
+link, the list can be de-allocated by clearing the head pointer in |MemArena|. This pattern is
+similar to that used by the :code:`IOBuffer` data blocks, and so those were considered for use as
+the internal memory allocation blocks. However, that would have required some non-trivial tweaks
+and, with the move away from internal allocation pools to memory support from libraries like
+"jemalloc", was judged unlikely to provide any benefit.
diff --git a/lib/ts/MemArena.cc b/lib/ts/MemArena.cc
index 9b3db24..1646282 100644
--- a/lib/ts/MemArena.cc
+++ b/lib/ts/MemArena.cc
@@ -39,9 +39,18 @@ MemArena::Block::operator delete(void *ptr)
 MemArena::BlockPtr
 MemArena::make_block(size_t n)
 {
+  // If there's no reservation hint, use the extent. This is transient because the hint is cleared.
+  if (_reserve_hint == 0) {
+    if (_active_reserved) {
+      _reserve_hint = _active_reserved;
+    } else if (_prev_allocated) {
+      _reserve_hint = _prev_allocated;
+    }
+  }
+
   // If post-freeze or reserved, allocate at least that much.
-  n               = std::max<size_t>(n, next_block_size);
-  next_block_size = 0; // did this, clear for next time.
+  n             = std::max<size_t>(n, _reserve_hint);
+  _reserve_hint = 0; // did this, clear for next time.
   // Add in overhead and round up to paragraph units.
   n = Paragraph{round_up(n + ALLOC_HEADER_SIZE + sizeof(Block))};
   // If a page or more, round up to page unit size and clip back to account for alloc header.
@@ -51,50 +60,43 @@ MemArena::make_block(size_t n)
 
   // Allocate space for the Block instance and the request memory and construct a Block at the front.
   // In theory this could use ::operator new(n) but this causes a size mismatch during ::operator delete.
-  // Easier to use malloc and not carry a memory block size value around.
-  return BlockPtr(new (::malloc(n)) Block(n - sizeof(Block)));
-}
-
-MemArena::MemArena(size_t n)
-{
-  next_block_size = 0; // Don't use default size.
-  current         = this->make_block(n);
+  // Easier to use malloc and override @c delete.
+  auto free_space = n - sizeof(Block);
+  _active_reserved += free_space;
+  return BlockPtr(new (::malloc(n)) Block(free_space));
 }
 
 MemSpan
 MemArena::alloc(size_t n)
 {
   MemSpan zret;
-  current_alloc += n;
-
-  if (!current) {
-    current = this->make_block(n);
-    zret    = current->alloc(n);
-  } else if (n > current->remaining()) { // too big, need another block
-    if (next_block_size < n) {
-      next_block_size = 2 * current->size;
-    }
+  _active_allocated += n;
+
+  if (!_active) {
+    _active = this->make_block(n);
+    zret    = _active->alloc(n);
+  } else if (n > _active->remaining()) { // too big, need another block
     BlockPtr block = this->make_block(n);
     // For the new @a current, pick the block which will have the most free space after taking
     // the request space out of the new block.
     zret = block->alloc(n);
-    if (block->remaining() > current->remaining()) {
-      block->next = current;
-      current     = block;
+    if (block->remaining() > _active->remaining()) {
+      block->next = _active;
+      _active     = block;
 #if defined(__clang_analyzer__)
       // Defeat another clang analyzer false positive. Unit tests validate the code is correct.
-      ink_assert(current.use_count() > 1);
+      ink_assert(_active.use_count() > 1);
 #endif
     } else {
-      block->next   = current->next;
-      current->next = block;
+      block->next   = _active->next;
+      _active->next = block;
 #if defined(__clang_analyzer__)
       // Defeat another clang analyzer false positive. Unit tests validate the code is correct.
       ink_assert(block.use_count() > 1);
 #endif
     }
   } else {
-    zret = current->alloc(n);
+    zret = _active->alloc(n);
   }
   return zret;
 }
@@ -102,11 +104,15 @@ MemArena::alloc(size_t n)
 MemArena &
 MemArena::freeze(size_t n)
 {
-  prev       = current;
-  prev_alloc = current_alloc;
-  current.reset();
-  next_block_size = n ? n : current_alloc;
-  current_alloc   = 0;
+  _prev = _active;
+  _active.reset(); // it's in _prev now, start fresh.
+  // Update the meta data.
+  _prev_allocated   = _active_allocated;
+  _active_allocated = 0;
+  _prev_reserved    = _active_reserved;
+  _active_reserved  = 0;
+
+  _reserve_hint = n;
 
   return *this;
 }
@@ -114,20 +120,20 @@ MemArena::freeze(size_t n)
 MemArena &
 MemArena::thaw()
 {
-  prev_alloc = 0;
-  prev.reset();
+  _prev.reset();
+  _prev_reserved = _prev_allocated = 0;
   return *this;
 }
 
 bool
 MemArena::contains(const void *ptr) const
 {
-  for (Block *b = current.get(); b; b = b->next.get()) {
+  for (Block *b = _active.get(); b; b = b->next.get()) {
     if (b->contains(ptr)) {
       return true;
     }
   }
-  for (Block *b = prev.get(); b; b = b->next.get()) {
+  for (Block *b = _prev.get(); b; b = b->next.get()) {
     if (b->contains(ptr)) {
       return true;
     }
@@ -137,26 +143,13 @@ MemArena::contains(const void *ptr) const
 }
 
 MemArena &
-MemArena::clear()
+MemArena::clear(size_t n)
 {
-  prev.reset();
-  prev_alloc = 0;
-  current.reset();
-  current_alloc = 0;
+  _reserve_hint = n ? n : _prev_allocated + _active_allocated;
+  _prev.reset();
+  _prev_reserved = _prev_allocated = 0;
+  _active.reset();
+  _active_reserved = _active_allocated = 0;
 
   return *this;
 }
-
-size_t
-MemArena::extent() const
-{
-  size_t zret{0};
-  Block *b;
-  for (b = current.get(); b; b = b->next.get()) {
-    zret += b->size;
-  }
-  for (b = prev.get(); b; b = b->next.get()) {
-    zret += b->size;
-  }
-  return zret;
-};
diff --git a/lib/ts/MemArena.h b/lib/ts/MemArena.h
index ad10cee..fe00eaf 100644
--- a/lib/ts/MemArena.h
+++ b/lib/ts/MemArena.h
@@ -26,6 +26,7 @@
 #include <new>
 #include <mutex>
 #include <memory>
+#include <utility>
 #include <ts/MemSpan.h>
 #include <ts/Scalar.h>
 #include <tsconfig/IntrusivePtr.h>
@@ -33,29 +34,31 @@
 /// Apache Traffic Server commons.
 namespace ts
 {
-/** MemArena is a memory arena for allocations.
+/** A memory arena.
 
-    The intended use is for allocating many small chunks of memory - few, large allocations are best handled independently.
-    The purpose is to amortize the cost of allocation of each chunk across larger allocations in a heap style. In addition the
-    allocated memory is presumed to have similar lifetimes so that all of the memory in the arena can be de-allocatred en masse.
-
-    A generation is essentially a block of memory. The normal workflow is to freeze() the current generation, alloc() a larger and
-    newer generation, copy the contents of the previous generation to the new generation, and then thaw() the previous generation.
-    Note that coalescence must be done by the caller because MemSpan will only give a reference to the underlying memory.
+    The intended use is for allocating many small chunks of memory - few, large allocations are best
+    handled through other mechanisms. The purpose is to amortize the cost of allocation of each
+    chunk across larger internal allocations ("reserving memory"). In addition the allocated memory
+    chunks are presumed to have similar lifetimes so all of the memory in the arena can be released
+    when the arena is destroyed.
  */
 class MemArena
 {
   using self_type = MemArena; ///< Self reference type.
 protected:
-  struct Block;
+  struct Block; // Forward declare
   using BlockPtr = ts::IntrusivePtr<Block>;
   friend struct IntrusivePtrPolicy<Block>;
   /** Simple internal arena block of memory. Maintains the underlying memory.
+   *
+   * Intrusive pointer is used to keep all of the memory in this single block. This struct is just
+   * the header on the full memory block, allowing the raw memory and the meta data to be obtained
+   * in a single memory allocation.
    */
   struct Block : public ts::IntrusivePtrCounter {
     size_t size;         ///< Actual block size.
     size_t allocated{0}; ///< Current allocated (in use) bytes.
-    BlockPtr next;       ///< Previously allocated block list.
+    BlockPtr next;       ///< List of previous blocks.
 
     /** Construct to have @a n bytes of available storage.
      *
@@ -64,14 +67,19 @@ protected:
      * @param n The amount of storage.
      */
     Block(size_t n);
+
     /// Get the start of the data in this block.
     char *data();
+
     /// Get the start of the data in this block.
     const char *data() const;
+
     /// Amount of unallocated storage.
     size_t remaining() const;
+
     /// Span of unallocated storage.
     MemSpan remnant();
+
     /** Allocate @a n bytes from this block.
      *
      * @param n Number of bytes to allocate.
@@ -89,7 +97,7 @@ protected:
     /** Override standard delete.
      *
     * This is required because the allocated memory size is larger than the class size which requires
-     * passing different parameters to de-allocate the memory.
+     * calling @c free differently.
      *
      * @param ptr Memory to be de-allocated.
      */
@@ -97,66 +105,76 @@ protected:
   };
 
 public:
-  /** Default constructor.
-   * Construct with no memory.
-   */
-  MemArena();
-  /** Construct with @a n bytes of storage.
+  /** Construct with reservation hint.
    *
-   * @param n Number of bytes in the initial block.
+   * No memory is initially reserved, but when memory is needed this will be done so that at least
+   * @a n bytes of available memory is reserved.
+   *
+   * To pre-reserve call @c alloc(0), e.g.
+   * @code
+   * MemArena arena(512); // Make sure at least 512 bytes available in first block.
+   * arena.alloc(0); // Force allocation of first block.
+   * @endcode
+   *
+   * @param n Minimum number of available bytes in the first internally reserved block.
    */
-  explicit MemArena(size_t n);
+  explicit MemArena(size_t n = DEFAULT_BLOCK_SIZE);
 
   /** Allocate @a n bytes of storage.
 
-      Returns a span of memory within the arena. alloc() is self expanding but DOES NOT self coalesce. This means
-      that no matter the arena size, the caller will always be able to alloc() @a n bytes.
+      Returns a span of memory within the arena. alloc() is self expanding but DOES NOT self
+      coalesce. This means that no matter the arena size, the caller will always be able to alloc()
+      @a n bytes.
 
       @param n number of bytes to allocate.
       @return a MemSpan of the allocated memory.
    */
   MemSpan alloc(size_t n);
 
-  /** Adjust future block allocation size.
-      This does not cause allocation, but instead makes a note of the size @a n and when a new block
-      is needed, it will be at least @a n bytes. This is most useful for default constructed instances
-      where the initial allocation should be delayed until use.
-      @param n Minimum size of next allocated block.
-      @return @a this
-   */
-  self_type &reserve(size_t n);
+  /** Allocate and initialize a block of memory.
+
+      The template type specifies the type to create and any arguments are forwarded to the constructor. Example:
+      @code
+      struct Thing { ... };
+      Thing* thing = arena.make<Thing>(...constructor args...);
+      @endcode
 
-  /** Freeze memory allocation.
+      Do @b not call @c delete on an object created this way - that will attempt to free the memory
+      and break. A destructor may be invoked explicitly, but the point of this class is that no
+      object in it needs to be deleted; the memory will all be reclaimed when the Arena is
+      destroyed. In general it is a bad idea to make objects in the Arena that own memory that is
+      not also in the Arena.
+  */
+  template <typename T, typename... Args> T *make(Args &&... args);
 
-      Will "freeze" a generation of memory. Any memory previously allocated 
can still be used. This is an
-      important distinction as freeze does not mean that the memory is 
immutable, only that subsequent allocations
-      will be in a new generation.
+  /** Freeze reserved memory.
 
-      If @a n == 0, the first block of next generation will be large enough to hold all existing allocations.
-      This enables coalescence for locality of reference.
+      All internal memory blocks are frozen and will not be involved in future allocations.
+      Subsequent allocations will reserve new internal blocks. By default the first reserved block
+      will be large enough to contain all frozen memory. If this is not correct a different target
+      can be specified as @a n.
 
-      @param n Number of bytes for new generation.
+      @param n Target number of available bytes in the next reserved internal block.
       @return @c *this
    */
   MemArena &freeze(size_t n = 0);
 
-  /** Unfreeze memory allocation, discard previously frozen memory.
-
-      Will "thaw" away any previously frozen generations. Any generation that 
is not the current generation is considered
-      frozen because there is no way to allocate in any of those memory 
blocks. thaw() is the only mechanism for deallocating
-      memory in the arena (other than destroying the arena itself). Thawing 
away previous generations means that all spans
-      of memory allocated in those generations are no longer safe to use.
-
-      @return @c *this
+  /** Unfreeze arena.
+   *
+   * Frozen memory is released.
+   *
+   * @return @c *this
    */
-  MemArena &thaw();
+  self_type &thaw();
 
   /** Release all memory.
 
-      Empties the entire arena and deallocates all underlying memory. Next block size will be equal to the sum of all
-      allocations before the call to empty.
+      Empties the entire arena and deallocates all underlying memory. The hint for the next
+      reserved block size will be @a n if @a n is not zero, otherwise it will be the sum of all
+      allocations when this method was called.
+
+      @return @c *this
+
    */
-  MemArena &clear();
+  MemArena &clear(size_t n = 0);
 
   /// @returns the memory allocated in the generation.
   size_t size() const;
@@ -180,11 +198,9 @@ public:
   /** Total memory footprint, including wasted space.
    * @return Total memory footprint.
    */
-  size_t extent() const;
+  size_t reserved_size() const;
 
 protected:
-  /// creates a new @c Block with at least @n free space.
-
   /** Internally allocates a new block of memory of size @a n bytes.
    *
    * @param n Size of block to allocate.
@@ -199,14 +215,19 @@ protected:
   /// Initial block size to allocate if not specified via API.
   static constexpr size_t DEFAULT_BLOCK_SIZE = Page::SCALE - Paragraph{round_up(ALLOC_HEADER_SIZE + sizeof(Block))};
 
-  size_t current_alloc = 0; ///< Total allocations in the active generation.
+  size_t _active_allocated = 0; ///< Total allocations in the active generation.
+  size_t _active_reserved  = 0; ///< Total current reserved memory.
   /// Total allocations in the previous generation. This is only non-zero while the arena is frozen.
-  size_t prev_alloc = 0;
+  size_t _prev_allocated = 0;
+  /// Total frozen reserved memory.
+  size_t _prev_reserved = 0;
 
-  size_t next_block_size = DEFAULT_BLOCK_SIZE; ///< Next internal block size
+  /// Minimum free space needed in the next allocated block.
+  /// This is not zero iff @c reserve was called.
+  size_t _reserve_hint = 0;
 
-  BlockPtr prev;    ///< Previous generation.
-  BlockPtr current; ///< Head of allocations list. Allocate from this.
+  BlockPtr _prev;   ///< Previous generation, frozen memory.
+  BlockPtr _active; ///< Current generation. Allocate here.
 };
 
 // Implementation
@@ -247,7 +268,14 @@ MemArena::Block::alloc(size_t n)
   return zret;
 }
 
-inline MemArena::MemArena() {}
+template <typename T, typename... Args>
+T *
+MemArena::make(Args &&... args)
+{
+  return new (this->alloc(sizeof(T)).data()) T(std::forward<Args>(args)...);
+}
+
+inline MemArena::MemArena(size_t n) : _reserve_hint(n) {}
 
 inline MemSpan
 MemArena::Block::remnant()
@@ -258,32 +286,31 @@ MemArena::Block::remnant()
 inline size_t
 MemArena::size() const
 {
-  return current_alloc;
+  return _active_allocated;
 }
 
 inline size_t
 MemArena::allocated_size() const
 {
-  return prev_alloc + current_alloc;
-}
-
-inline MemArena &
-MemArena::reserve(size_t n)
-{
-  next_block_size = n;
-  return *this;
+  return _prev_allocated + _active_allocated;
 }
 
 inline size_t
 MemArena::remaining() const
 {
-  return current ? current->remaining() : 0;
+  return _active ? _active->remaining() : 0;
 }
 
 inline MemSpan
 MemArena::remnant() const
 {
-  return current ? current->remnant() : MemSpan{};
+  return _active ? _active->remnant() : MemSpan{};
+}
+
+inline size_t
+MemArena::reserved_size() const
+{
+  return _active_reserved + _prev_reserved;
 }
 
 } // namespace ts
diff --git a/lib/ts/unit-tests/test_MemArena.cc b/lib/ts/unit-tests/test_MemArena.cc
index 6e358b4..18d2dad 100644
--- a/lib/ts/unit-tests/test_MemArena.cc
+++ b/lib/ts/unit-tests/test_MemArena.cc
@@ -23,15 +23,20 @@
 
 #include <catch.hpp>
 
+#include <string_view>
 #include <ts/MemArena.h>
 using ts::MemSpan;
 using ts::MemArena;
+using namespace std::literals;
 
 TEST_CASE("MemArena generic", "[libts][MemArena]")
 {
   ts::MemArena arena{64};
   REQUIRE(arena.size() == 0);
-  REQUIRE(arena.extent() >= 64);
+  REQUIRE(arena.reserved_size() == 0);
+  arena.alloc(0);
+  REQUIRE(arena.size() == 0);
+  REQUIRE(arena.reserved_size() >= 64);
 
   ts::MemSpan span1 = arena.alloc(32);
   REQUIRE(span1.size() == 32);
@@ -42,9 +47,9 @@ TEST_CASE("MemArena generic", "[libts][MemArena]")
   REQUIRE(span1.data() != span2.data());
   REQUIRE(arena.size() == 64);
 
-  auto extent{arena.extent()};
+  auto extent{arena.reserved_size()};
   span1 = arena.alloc(128);
-  REQUIRE(extent < arena.extent());
+  REQUIRE(extent < arena.reserved_size());
 }
 
 TEST_CASE("MemArena freeze and thaw", "[libts][MemArena]")
@@ -53,45 +58,75 @@ TEST_CASE("MemArena freeze and thaw", "[libts][MemArena]")
   MemSpan span1{arena.alloc(1024)};
   REQUIRE(span1.size() == 1024);
   REQUIRE(arena.size() == 1024);
+  REQUIRE(arena.reserved_size() >= 1024);
 
   arena.freeze();
 
   REQUIRE(arena.size() == 0);
   REQUIRE(arena.allocated_size() == 1024);
-  REQUIRE(arena.extent() >= 1024);
+  REQUIRE(arena.reserved_size() >= 1024);
 
   arena.thaw();
   REQUIRE(arena.size() == 0);
-  REQUIRE(arena.extent() == 0);
+  REQUIRE(arena.allocated_size() == 0);
+  REQUIRE(arena.reserved_size() == 0);
 
-  arena.reserve(2000);
+  span1 = arena.alloc(1024);
+  arena.freeze();
+  auto extent{arena.reserved_size()};
   arena.alloc(512);
-  arena.alloc(1024);
-  REQUIRE(arena.extent() >= 1536);
-  REQUIRE(arena.extent() < 3000);
-  auto extent = arena.extent();
+  REQUIRE(arena.reserved_size() > extent); // new extent should be bigger.
+  arena.thaw();
+  REQUIRE(arena.size() == 512);
+  REQUIRE(arena.reserved_size() >= 1024);
+
+  arena.clear();
+  REQUIRE(arena.size() == 0);
+  REQUIRE(arena.reserved_size() == 0);
 
+  span1 = arena.alloc(262144);
   arena.freeze();
+  extent = arena.reserved_size();
   arena.alloc(512);
-  REQUIRE(arena.extent() > extent); // new extent should be bigger.
+  REQUIRE(arena.reserved_size() > extent); // new extent should be bigger.
   arena.thaw();
   REQUIRE(arena.size() == 512);
-  REQUIRE(arena.extent() > 1536);
+  REQUIRE(arena.reserved_size() >= 262144);
 
   arena.clear();
-  REQUIRE(arena.size() == 0);
-  REQUIRE(arena.extent() == 0);
+
+  span1  = arena.alloc(262144);
+  extent = arena.reserved_size();
+  arena.freeze();
+  for (int i = 0; i < 262144 / 512; ++i)
+    arena.alloc(512);
+  REQUIRE(arena.reserved_size() > extent); // Bigger while frozen memory is still around.
+  arena.thaw();
+  REQUIRE(arena.size() == 262144);
+  REQUIRE(arena.reserved_size() == extent); // should be identical to before freeze.
 
   arena.alloc(512);
   arena.alloc(768);
   arena.freeze(32000);
   arena.thaw();
-  arena.alloc(1);
-  REQUIRE(arena.extent() >= 32000);
+  arena.alloc(0);
+  REQUIRE(arena.reserved_size() >= 32000);
+  REQUIRE(arena.reserved_size() < 2 * 32000);
 }
 
 TEST_CASE("MemArena helper", "[libts][MemArena]")
 {
+  struct Thing {
+    int ten{10};
+    std::string name{"name"};
+
+    Thing() {}
+    Thing(int x) : ten(x) {}
+    Thing(std::string const &s) : name(s) {}
+    Thing(int x, std::string_view s) : ten(x), name(s) {}
+    Thing(std::string const &s, int x) : ten(x), name(s) {}
+  };
+
   ts::MemArena arena{256};
   REQUIRE(arena.size() == 0);
   ts::MemSpan s = arena.alloc(56);
@@ -115,6 +150,31 @@ TEST_CASE("MemArena helper", "[libts][MemArena]")
   arena.thaw();
   REQUIRE(!arena.contains((char *)ptr));
   REQUIRE(arena.contains((char *)ptr2));
+
+  Thing *thing_one{arena.make<Thing>()};
+
+  REQUIRE(thing_one->ten == 10);
+  REQUIRE(thing_one->name == "name");
+
+  thing_one = arena.make<Thing>(17, "bob"sv);
+
+  REQUIRE(thing_one->name == "bob");
+  REQUIRE(thing_one->ten == 17);
+
+  thing_one = arena.make<Thing>("Dave", 137);
+
+  REQUIRE(thing_one->name == "Dave");
+  REQUIRE(thing_one->ten == 137);
+
+  thing_one = arena.make<Thing>(9999);
+
+  REQUIRE(thing_one->ten == 9999);
+  REQUIRE(thing_one->name == "name");
+
+  thing_one = arena.make<Thing>("Persia");
+
+  REQUIRE(thing_one->ten == 10);
+  REQUIRE(thing_one->name == "Persia");
 }
 
 TEST_CASE("MemArena large alloc", "[libts][MemArena]")
@@ -170,17 +230,16 @@ TEST_CASE("MemArena block allocation", "[libts][MemArena]")
 TEST_CASE("MemArena full blocks", "[libts][MemArena]")
 {
   // couple of large allocations - should be exactly sized in the generation.
-  ts::MemArena arena;
   size_t init_size = 32000;
+  ts::MemArena arena(init_size);
 
-  arena.reserve(init_size);
   MemSpan m1{arena.alloc(init_size - 64)};
   MemSpan m2{arena.alloc(32000)};
   MemSpan m3{arena.alloc(64000)};
 
   REQUIRE(arena.remaining() >= 64);
-  REQUIRE(arena.extent() > 32000 + 64000 + init_size);
-  REQUIRE(arena.extent() < 2 * (32000 + 64000 + init_size));
+  REQUIRE(arena.reserved_size() > 32000 + 64000 + init_size);
+  REQUIRE(arena.reserved_size() < 2 * (32000 + 64000 + init_size));
 
   // Let's see if that memory is really there.
   memset(m1.data(), 0xa5, m1.size());
