The branch, master, has been updated
       via  8f4069c tevent: Use talloc_pooled_object for tevent_req_create
       via  7f9bdab smbd: Use talloc_pooled_object in cp_smb_filename
       via  256d10f talloc: Test the pooled object
       via  e82320e talloc: Add talloc_pooled_object
       via  20ad6d7 talloc: Allow nested pools.
       via  a3d9099 talloc: Add a separate pool size
       via  b87c8fd talloc: Put pool-specific data before the chunk
       via  9887f38 talloc: Introduce __talloc_with_prefix
       via  1334c74 talloc: Decouple the dual use of chunk->pool
      from  81f8b9c s3: include/smb : changing smb server version

http://gitweb.samba.org/?p=samba.git;a=shortlog;h=master


- Log -----------------------------------------------------------------
commit 8f4069c7cd10a143286c7a32c1b612380afd7c72
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 15:37:56 2013 -0700

    tevent: Use talloc_pooled_object for tevent_req_create
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Reviewed-by: Jeremy Allison <[email protected]>
    
    Autobuild-User(master): Volker Lendecke <[email protected]>
    Autobuild-Date(master): Sun Sep  8 13:39:25 CEST 2013 on sn-devel-104

commit 7f9bdabda53b63497d67d844198a28bf3ba04693
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 15:34:44 2013 -0700

    smbd: Use talloc_pooled_object in cp_smb_filename
    
    Requires new talloc
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Reviewed-by: Jeremy Allison <[email protected]>

commit 256d10f5792a37d20cbb45f2af3f8578bd354110
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 15:30:38 2013 -0700

    talloc: Test the pooled object
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Reviewed-by: Jeremy Allison <[email protected]>
    Reviewed-by: Stefan Metzmacher <[email protected]>

commit e82320e5197bcdd0330bc829c0963ad09854a36c
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 15:15:32 2013 -0700

    talloc: Add talloc_pooled_object
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Reviewed-by: Jeremy Allison <[email protected]>

commit 20ad6d7aa3dc5e7db4d886202f757ac1f68287d4
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 14:52:28 2013 -0700

    talloc: Allow nested pools.
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Signed-off-by: Jeremy Allison <[email protected]>

commit a3d9099d9a96b36df21ee0733adc5210438fe9dc
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 14:20:20 2013 -0700

    talloc: Add a separate pool size
    
    This is necessary to allow talloc pools to be objects on their own. It
    is an incompatible change in the sense that talloc_get_size(pool) now
    returns 0 instead of the pool size. When the talloc_pooled_object()
    call is added, this will start to make sense again.
    
    Maybe we should add a talloc_pool_size call? Or is that overkill?
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Reviewed-by: Jeremy Allison <[email protected]>
    Reviewed-by: Stefan Metzmacher <[email protected]>

commit b87c8fd435d1863d6efcec03830ecd85ddfcd7fb
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 14:08:43 2013 -0700

    talloc: Put pool-specific data before the chunk
    
    This is a preparation to make talloc pool real objects themselves.
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Signed-off-by: Jeremy Allison <[email protected]>

commit 9887f387a10e94f71790c0c3c7dc5f8cde7e4eb2
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 12:18:26 2013 -0700

    talloc: Introduce __talloc_with_prefix
    
    This will allow exchanging the extra talloc pool header with the
    talloc_chunk structure.
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Signed-off-by: Jeremy Allison <[email protected]>

commit 1334c745e1f2157b66e14f9d8b4f6f7750238717
Author: Volker Lendecke <[email protected]>
Date:   Fri Sep 6 10:54:43 2013 -0700

    talloc: Decouple the dual use of chunk->pool
    
    If we want nested pools, we will have pools that are pool members, so
    we will need a separate "next object" pointer for pools. As we now
    have struct talloc_pool_chunk, this additional pointer does not
    affect normal talloc objects.
    
    Signed-off-by: Volker Lendecke <[email protected]>
    Reviewed-by: Jeremy Allison <[email protected]>
    Reviewed-by: Stefan Metzmacher <[email protected]>

-----------------------------------------------------------------------

Summary of changes:
 ...oc-util-2.0.6.sigs => pytalloc-util-2.1.0.sigs} |    0
 .../ABI/{talloc-2.0.8.sigs => talloc-2.1.0.sigs}   |    1 +
 lib/talloc/talloc.c                                |  346 ++++++++++++++------
 lib/talloc/talloc.h                                |   40 +++-
 lib/talloc/testsuite.c                             |   62 ++++
 lib/talloc/wscript                                 |    2 +-
 lib/tevent/tevent_req.c                            |    5 +-
 source3/lib/filename_util.c                        |   48 ++-
 8 files changed, 379 insertions(+), 125 deletions(-)
 copy lib/talloc/ABI/{pytalloc-util-2.0.6.sigs => pytalloc-util-2.1.0.sigs} (100%)
 copy lib/talloc/ABI/{talloc-2.0.8.sigs => talloc-2.1.0.sigs} (97%)


Changeset truncated at 500 lines:

diff --git a/lib/talloc/ABI/pytalloc-util-2.0.6.sigs b/lib/talloc/ABI/pytalloc-util-2.1.0.sigs
similarity index 100%
copy from lib/talloc/ABI/pytalloc-util-2.0.6.sigs
copy to lib/talloc/ABI/pytalloc-util-2.1.0.sigs
diff --git a/lib/talloc/ABI/talloc-2.0.8.sigs b/lib/talloc/ABI/talloc-2.1.0.sigs
similarity index 97%
copy from lib/talloc/ABI/talloc-2.0.8.sigs
copy to lib/talloc/ABI/talloc-2.1.0.sigs
index 15a9e95..eae12cc 100644
--- a/lib/talloc/ABI/talloc-2.0.8.sigs
+++ b/lib/talloc/ABI/talloc-2.1.0.sigs
@@ -4,6 +4,7 @@ _talloc_free: int (void *, const char *)
 _talloc_get_type_abort: void *(const void *, const char *, const char *)
 _talloc_memdup: void *(const void *, const void *, size_t, const char *)
 _talloc_move: void *(const void *, const void *)
+_talloc_pooled_object: void *(const void *, size_t, const char *, unsigned int, size_t)
 _talloc_realloc: void *(const void *, void *, size_t, const char *)
 _talloc_realloc_array: void *(const void *, void *, size_t, unsigned int, const char *)
 _talloc_reference_loc: void *(const void *, const void *, const char *)
diff --git a/lib/talloc/talloc.c b/lib/talloc/talloc.c
index 69d5a16..1cb4d7d 100644
--- a/lib/talloc/talloc.c
+++ b/lib/talloc/talloc.c
@@ -244,6 +244,8 @@ static void talloc_memlimit_update_on_free(struct talloc_chunk *tc);
 
 typedef int (*talloc_destructor_t)(void *);
 
+struct talloc_pool_hdr;
+
 struct talloc_chunk {
        struct talloc_chunk *next, *prev;
        struct talloc_chunk *parent, *child;
@@ -263,17 +265,12 @@ struct talloc_chunk {
        struct talloc_memlimit *limit;
 
        /*
-        * "pool" has dual use:
-        *
-        * For the talloc pool itself (i.e. TALLOC_FLAG_POOL is set), "pool"
-        * marks the end of the currently allocated area.
-        *
-        * For members of the pool (i.e. TALLOC_FLAG_POOLMEM is set), "pool"
+        * For members of a pool (i.e. TALLOC_FLAG_POOLMEM is set), "pool"
         * is a pointer to the struct talloc_chunk of the pool that it was
         * allocated from. This way children can quickly find the pool to chew
         * from.
         */
-       void *pool;
+       struct talloc_pool_hdr *pool;
 };
 
 /* 16 byte alignment seems to keep everyone happy */
@@ -461,30 +458,33 @@ _PUBLIC_ const char *talloc_parent_name(const void *ptr)
   memory footprint of each talloc chunk by those 16 bytes.
 */
 
-union talloc_pool_chunk {
-       /* This lets object_count nestle into 16-byte padding of talloc_chunk,
-        * on 32-bit platforms. */
-       struct tc_pool_hdr {
-               struct talloc_chunk c;
-               unsigned int object_count;
-       } hdr;
-       /* This makes it always 16 byte aligned. */
-       char pad[TC_ALIGN16(sizeof(struct tc_pool_hdr))];
+struct talloc_pool_hdr {
+       void *end;
+       unsigned int object_count;
+       size_t poolsize;
 };
 
-static void *tc_pool_end(union talloc_pool_chunk *pool_tc)
+#define TP_HDR_SIZE TC_ALIGN16(sizeof(struct talloc_pool_hdr))
+
+static struct talloc_pool_hdr *talloc_pool_from_chunk(struct talloc_chunk *c)
+{
+       return (struct talloc_pool_hdr *)((char *)c - TP_HDR_SIZE);
+}
+
+static struct talloc_chunk *talloc_chunk_from_pool(struct talloc_pool_hdr *h)
 {
-       return (char *)pool_tc + TC_HDR_SIZE + pool_tc->hdr.c.size;
+       return (struct talloc_chunk *)((char *)h + TP_HDR_SIZE);
 }
 
-static size_t tc_pool_space_left(union talloc_pool_chunk *pool_tc)
+static void *tc_pool_end(struct talloc_pool_hdr *pool_hdr)
 {
-       return (char *)tc_pool_end(pool_tc) - (char *)pool_tc->hdr.c.pool;
+       struct talloc_chunk *tc = talloc_chunk_from_pool(pool_hdr);
+       return (char *)tc + TC_HDR_SIZE + pool_hdr->poolsize;
 }
 
-static void *tc_pool_first_chunk(union talloc_pool_chunk *pool_tc)
+static size_t tc_pool_space_left(struct talloc_pool_hdr *pool_hdr)
 {
-       return pool_tc + 1;
+       return (char *)tc_pool_end(pool_hdr) - (char *)pool_hdr->end;
 }
 
 /* If tc is inside a pool, this gives the next neighbour. */
@@ -493,17 +493,23 @@ static void *tc_next_chunk(struct talloc_chunk *tc)
        return (char *)tc + TC_ALIGN16(TC_HDR_SIZE + tc->size);
 }
 
+static void *tc_pool_first_chunk(struct talloc_pool_hdr *pool_hdr)
+{
+       struct talloc_chunk *tc = talloc_chunk_from_pool(pool_hdr);
+       return tc_next_chunk(tc);
+}
+
 /* Mark the whole remaining pool as not accessable */
-static void tc_invalidate_pool(union talloc_pool_chunk *pool_tc)
+static void tc_invalidate_pool(struct talloc_pool_hdr *pool_hdr)
 {
-       size_t flen = tc_pool_space_left(pool_tc);
+       size_t flen = tc_pool_space_left(pool_hdr);
 
        if (unlikely(talloc_fill.enabled)) {
-               memset(pool_tc->hdr.c.pool, talloc_fill.fill_value, flen);
+               memset(pool_hdr->end, talloc_fill.fill_value, flen);
        }
 
 #if defined(DEVELOPER) && defined(VALGRIND_MAKE_MEM_NOACCESS)
-       VALGRIND_MAKE_MEM_NOACCESS(pool_tc->hdr.c.pool, flen);
+       VALGRIND_MAKE_MEM_NOACCESS(pool_hdr->end, flen);
 #endif
 }
 
@@ -512,9 +518,9 @@ static void tc_invalidate_pool(union talloc_pool_chunk *pool_tc)
 */
 
 static struct talloc_chunk *talloc_alloc_pool(struct talloc_chunk *parent,
-                                             size_t size)
+                                             size_t size, size_t prefix_len)
 {
-       union talloc_pool_chunk *pool_ctx = NULL;
+       struct talloc_pool_hdr *pool_hdr = NULL;
        size_t space_left;
        struct talloc_chunk *result;
        size_t chunk_size;
@@ -524,39 +530,39 @@ static struct talloc_chunk *talloc_alloc_pool(struct talloc_chunk *parent,
        }
 
        if (parent->flags & TALLOC_FLAG_POOL) {
-               pool_ctx = (union talloc_pool_chunk *)parent;
+               pool_hdr = talloc_pool_from_chunk(parent);
        }
        else if (parent->flags & TALLOC_FLAG_POOLMEM) {
-               pool_ctx = (union talloc_pool_chunk *)parent->pool;
+               pool_hdr = parent->pool;
        }
 
-       if (pool_ctx == NULL) {
+       if (pool_hdr == NULL) {
                return NULL;
        }
 
-       space_left = tc_pool_space_left(pool_ctx);
+       space_left = tc_pool_space_left(pool_hdr);
 
        /*
         * Align size to 16 bytes
         */
-       chunk_size = TC_ALIGN16(size);
+       chunk_size = TC_ALIGN16(size + prefix_len);
 
        if (space_left < chunk_size) {
                return NULL;
        }
 
-       result = (struct talloc_chunk *)pool_ctx->hdr.c.pool;
+       result = (struct talloc_chunk *)((char *)pool_hdr->end + prefix_len);
 
 #if defined(DEVELOPER) && defined(VALGRIND_MAKE_MEM_UNDEFINED)
-       VALGRIND_MAKE_MEM_UNDEFINED(result, size);
+       VALGRIND_MAKE_MEM_UNDEFINED(pool_hdr->end, chunk_size);
 #endif
 
-       pool_ctx->hdr.c.pool = (void *)((char *)result + chunk_size);
+       pool_hdr->end = (void *)((char *)pool_hdr->end + chunk_size);
 
        result->flags = TALLOC_MAGIC | TALLOC_FLAG_POOLMEM;
-       result->pool = pool_ctx;
+       result->pool = pool_hdr;
 
-       pool_ctx->hdr.object_count++;
+       pool_hdr->object_count++;
 
        return result;
 }
@@ -564,10 +570,12 @@ static struct talloc_chunk *talloc_alloc_pool(struct talloc_chunk *parent,
 /*
    Allocate a bit of memory as a child of an existing pointer
 */
-static inline void *__talloc(const void *context, size_t size)
+static inline void *__talloc_with_prefix(const void *context, size_t size,
+                                       size_t prefix_len)
 {
        struct talloc_chunk *tc = NULL;
        struct talloc_memlimit *limit = NULL;
+       size_t total_len = TC_HDR_SIZE + size + prefix_len;
 
        if (unlikely(context == NULL)) {
                context = null_context;
@@ -577,6 +585,10 @@ static inline void *__talloc(const void *context, size_t size)
                return NULL;
        }
 
+       if (unlikely(total_len < TC_HDR_SIZE)) {
+               return NULL;
+       }
+
        if (context != NULL) {
                struct talloc_chunk *ptc = talloc_chunk_from_ptr(context);
 
@@ -584,24 +596,29 @@ static inline void *__talloc(const void *context, size_t size)
                        limit = ptc->limit;
                }
 
-               tc = talloc_alloc_pool(ptc, TC_HDR_SIZE+size);
+               tc = talloc_alloc_pool(ptc, TC_HDR_SIZE+size, prefix_len);
        }
 
        if (tc == NULL) {
+               char *ptr;
+
                /*
                 * Only do the memlimit check/update on actual allocation.
                 */
-               if (!talloc_memlimit_check(limit, TC_HDR_SIZE + size)) {
+               if (!talloc_memlimit_check(limit, total_len)) {
                        errno = ENOMEM;
                        return NULL;
                }
 
-               tc = (struct talloc_chunk *)malloc(TC_HDR_SIZE+size);
-               if (unlikely(tc == NULL)) return NULL;
+               ptr = malloc(total_len);
+               if (unlikely(ptr == NULL)) {
+                       return NULL;
+               }
+               tc = (struct talloc_chunk *)(ptr + prefix_len);
                tc->flags = TALLOC_MAGIC;
                tc->pool  = NULL;
 
-               talloc_memlimit_grow(limit, TC_HDR_SIZE + size);
+               talloc_memlimit_grow(limit, total_len);
        }
 
        tc->limit = limit;
@@ -631,35 +648,106 @@ static inline void *__talloc(const void *context, size_t size)
        return TC_PTR_FROM_CHUNK(tc);
 }
 
+static inline void *__talloc(const void *context, size_t size)
+{
+       return __talloc_with_prefix(context, size, 0);
+}
+
 /*
  * Create a talloc pool
  */
 
 _PUBLIC_ void *talloc_pool(const void *context, size_t size)
 {
-       union talloc_pool_chunk *pool_tc;
-       void *result = __talloc(context, sizeof(*pool_tc) - TC_HDR_SIZE + size);
+       struct talloc_chunk *tc;
+       struct talloc_pool_hdr *pool_hdr;
+       void *result;
+
+       result = __talloc_with_prefix(context, size, TP_HDR_SIZE);
 
        if (unlikely(result == NULL)) {
                return NULL;
        }
 
-       pool_tc = (union talloc_pool_chunk *)talloc_chunk_from_ptr(result);
-       if (unlikely(pool_tc->hdr.c.flags & TALLOC_FLAG_POOLMEM)) {
-               /* We don't handle this correctly, so fail. */
-               talloc_log("talloc: cannot allocate pool off another pool %s\n",
-                          talloc_get_name(context));
-               talloc_free(result);
+       tc = talloc_chunk_from_ptr(result);
+       pool_hdr = talloc_pool_from_chunk(tc);
+
+       tc->flags |= TALLOC_FLAG_POOL;
+       tc->size = 0;
+
+       pool_hdr->object_count = 1;
+       pool_hdr->end = result;
+       pool_hdr->poolsize = size;
+
+       tc_invalidate_pool(pool_hdr);
+
+       return result;
+}
+
+/*
+ * Create a talloc pool correctly sized for a basic size plus
+ * a number of subobjects whose total size is given. Essentially
+ * a custom allocator for talloc to reduce fragmentation.
+ */
+
+_PUBLIC_ void *_talloc_pooled_object(const void *ctx,
+                                    size_t type_size,
+                                    const char *type_name,
+                                    unsigned num_subobjects,
+                                    size_t total_subobjects_size)
+{
+       size_t poolsize, subobjects_slack, tmp;
+       struct talloc_chunk *tc;
+       struct talloc_pool_hdr *pool_hdr;
+       void *ret;
+
+       poolsize = type_size + total_subobjects_size;
+
+       if ((poolsize < type_size) || (poolsize < total_subobjects_size)) {
+               goto overflow;
+       }
+
+       if (num_subobjects == UINT_MAX) {
+               goto overflow;
+       }
+       num_subobjects += 1;       /* the object body itself */
+
+       /*
+        * Alignment can increase the pool size by at most 15 bytes per object
+        * plus alignment for the object itself
+        */
+       subobjects_slack = (TC_HDR_SIZE + TP_HDR_SIZE + 15) * num_subobjects;
+       if (subobjects_slack < num_subobjects) {
+               goto overflow;
+       }
+
+       tmp = poolsize + subobjects_slack;
+       if ((tmp < poolsize) || (tmp < subobjects_slack)) {
+               goto overflow;
+       }
+       poolsize = tmp;
+
+       ret = talloc_pool(ctx, poolsize);
+       if (ret == NULL) {
                return NULL;
        }
-       pool_tc->hdr.c.flags |= TALLOC_FLAG_POOL;
-       pool_tc->hdr.c.pool = tc_pool_first_chunk(pool_tc);
 
-       pool_tc->hdr.object_count = 1;
+       tc = talloc_chunk_from_ptr(ret);
+       tc->size = type_size;
 
-       tc_invalidate_pool(pool_tc);
+       pool_hdr = talloc_pool_from_chunk(tc);
 
-       return result;
+#if defined(DEVELOPER) && defined(VALGRIND_MAKE_MEM_UNDEFINED)
+       VALGRIND_MAKE_MEM_UNDEFINED(pool_hdr->end, type_size);
+#endif
+
+       pool_hdr->end = ((char *)pool_hdr->end + TC_ALIGN16(type_size));
+
+       talloc_set_name_const(ret, type_name);
+       return ret;
+
+overflow:
+       return NULL;
 }
 
 /*
@@ -760,10 +848,12 @@ static void *_talloc_steal_internal(const void *new_ctx, const void *ptr);
 static inline void _talloc_free_poolmem(struct talloc_chunk *tc,
                                        const char *location)
 {
-       union talloc_pool_chunk *pool;
+       struct talloc_pool_hdr *pool;
+       struct talloc_chunk *pool_tc;
        void *next_tc;
 
-       pool = (union talloc_pool_chunk *)tc->pool;
+       pool = tc->pool;
+       pool_tc = talloc_chunk_from_pool(pool);
        next_tc = tc_next_chunk(tc);
 
        tc->flags |= TALLOC_FLAG_FREE;
@@ -776,15 +866,15 @@ static inline void _talloc_free_poolmem(struct talloc_chunk *tc,
 
        TC_INVALIDATE_FULL_CHUNK(tc);
 
-       if (unlikely(pool->hdr.object_count == 0)) {
+       if (unlikely(pool->object_count == 0)) {
                talloc_abort("Pool object count zero!");
                return;
        }
 
-       pool->hdr.object_count--;
+       pool->object_count--;
 
-       if (unlikely(pool->hdr.object_count == 1
-                    && !(pool->hdr.c.flags & TALLOC_FLAG_FREE))) {
+       if (unlikely(pool->object_count == 1
+                    && !(pool_tc->flags & TALLOC_FLAG_FREE))) {
                /*
                 * if there is just one object left in the pool
                 * and pool->flags does not have TALLOC_FLAG_FREE,
@@ -792,33 +882,42 @@ static inline void _talloc_free_poolmem(struct talloc_chunk *tc,
                 * the rest is available for new objects
                 * again.
                 */
-               pool->hdr.c.pool = tc_pool_first_chunk(pool);
+               pool->end = tc_pool_first_chunk(pool);
                tc_invalidate_pool(pool);
                return;
        }
 
-       if (unlikely(pool->hdr.object_count == 0)) {
+       if (unlikely(pool->object_count == 0)) {
                /*
                 * we mark the freed memory with where we called the free
                 * from. This means on a double free error we can report where
                 * the first free came from
                 */
-               pool->hdr.c.name = location;
-
-               talloc_memlimit_update_on_free(&pool->hdr.c);
+               pool_tc->name = location;
 
-               TC_INVALIDATE_FULL_CHUNK(&pool->hdr.c);
-               free(pool);
+               if (pool_tc->flags & TALLOC_FLAG_POOLMEM) {
+                       _talloc_free_poolmem(pool_tc, location);
+               } else {
+                       /*
+                        * The talloc_memlimit_update_on_free()
+                        * call takes into account the
+                        * prefix TP_HDR_SIZE allocated before
+                        * the pool talloc_chunk.
+                        */
+                       talloc_memlimit_update_on_free(pool_tc);
+                       TC_INVALIDATE_FULL_CHUNK(pool_tc);
+                       free(pool);
+               }
                return;
        }
 
-       if (pool->hdr.c.pool == next_tc) {
+       if (pool->end == next_tc) {
                /*
                 * if pool->pool still points to end of
                 * 'tc' (which is stored in the 'next_tc' variable),
                 * we can reclaim the memory of 'tc'.
                 */
-               pool->hdr.c.pool = tc;
+               pool->end = tc;
                return;
        }
 
@@ -838,6 +937,7 @@ static inline void _talloc_free_children_internal(struct talloc_chunk *tc,
 static inline int _talloc_free_internal(void *ptr, const char *location)
 {
        struct talloc_chunk *tc;
+       void *ptr_to_free;
 
        if (unlikely(ptr == NULL)) {
                return -1;
@@ -914,24 +1014,29 @@ static inline int _talloc_free_internal(void *ptr, const char *location)
        tc->name = location;
 
        if (tc->flags & TALLOC_FLAG_POOL) {
-               union talloc_pool_chunk *pool = (union talloc_pool_chunk *)tc;
+               struct talloc_pool_hdr *pool;
+
+               pool = talloc_pool_from_chunk(tc);
 
-               if (unlikely(pool->hdr.object_count == 0)) {
+               if (unlikely(pool->object_count == 0)) {
                        talloc_abort("Pool object count zero!");
                        return 0;
                }
 
-               pool->hdr.object_count--;
+               pool->object_count--;
 
-               if (likely(pool->hdr.object_count != 0)) {
+               if (likely(pool->object_count != 0)) {
                        return 0;
                }
 
-               talloc_memlimit_update_on_free(tc);
-
-               TC_INVALIDATE_FULL_CHUNK(tc);
-               free(tc);
-               return 0;
+               /*
+                * With object_count==0, a pool becomes a normal piece of
+                * memory to free. If it's allocated inside a pool, it needs
+                * to be freed as poolmem, else it needs to be just freed.
+               */
+               ptr_to_free = pool;


-- 
Samba Shared Repository
