--- Comment #15 from Ruurd Beerstra <> ---

Thank you for reviewing the patch.

>--- Comment #14 from Philippe Waroquiers <> --- 
>Quickly read the last
>version of the patch, sorry for entering in the game so late
>Some comments:
>* Typo in the xml documentation:  alocator

Oops. Fixed. 

>* lines like below:  opening brace should be on the same line
>+         if (MC_(is_mempool_block)(ch1))
>+         {

I've written code in OTB (One True Brace) style for so many years that it is
hard to stop doing it :-)
Done, though.

>* for detecting/reporting/asserting the overlap condition
>   in case ch1_is_meta != ch2_is_meta, I am wondering if we should not check
>   that the non meta block is (fully) inside the meta block.
>   It looks to be an error if the non meta block is not fully inside the meta 
> block.

Yes, that would be a serious error in the custom allocator.
Of course our allocator does not do that, so I didn't think of that :-)
I've added an extra check for that. Ran all the regression tests, all is well.
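For clarity, the geometry of that extra check, in simplified form: a non-meta (MALLOCLIKE) block that overlaps a meta (pool) block is only legitimate if it lies entirely inside it. This sketch uses a hypothetical Block struct holding just start and size, not memcheck's actual MC_Chunk:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical simplified view of a tracked block: start address and size.
 * (Memcheck's real chunk type carries more state; only the geometry
 * matters for the containment check.) */
typedef struct { size_t start; size_t size; } Block;

/* Returns non-zero iff 'inner' lies fully within 'outer'.  When a
 * non-meta block overlaps a meta block but is NOT fully inside it,
 * that is an error in the custom allocator. */
static int fully_inside(Block inner, Block outer)
{
    return inner.start >= outer.start
        && inner.start + inner.size <= outer.start + outer.size;
}
```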

>*  free_mallocs_in_mempool_block : this looks to be an algorithm that will be
>    O (n * m)   when n is the nr of malloc-ed blocks, and m is the nr of blocks
>    by Start/End address. That might be very slow for big applications, that 
> allocates millions
>    of blocks, e.g. 1 million normal block, and one million blocks in meta 
> blocks
>    will take a lot of time to cleanup ?

Short answer: Yes.
Long answer:
Part of the inefficiency is that the scan has to restart after modifying the
list; I can't help that.
Also, I can't find any other way in valgrind to find the chunks in a
particular address range other than a brute-force scan.
But if the big application you describe were not using auto-free pools, and it
wanted to prevent memory leaks, it would have to explicitly free all those
items, which would take a comparable amount of time plus the extra overhead of
passing those calls to valgrind. I can't see any way around that, either.
The overhead is only incurred by custom allocators using the auto-free feature,
not by any existing applications or allocators.
Also, if you run memcheck on an application that does many millions of allocs
and frees, you're prepared to wait a while anyway.
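To illustrate the restart-after-modify pattern (a generic sketch over a plain linked list, not the actual memcheck code, which walks its own chunk structures): every time a chunk in the target range is removed, the scan starts over from the head, which is what makes the cleanup O(n * m).

```c
#include <assert.h>
#include <stdlib.h>

typedef struct Node { size_t start, size; struct Node *next; } Node;

/* Free every chunk whose start address lies in [lo, lo+len).  After each
 * removal the scan restarts from the head, mirroring the restart-after-
 * modify behaviour described above: O(n * m) when m chunks fall inside
 * the range.  Returns the number of chunks freed. */
static size_t free_in_range(Node **head, size_t lo, size_t len)
{
    size_t freed = 0;
    int modified = 1;
    while (modified) {
        modified = 0;
        for (Node **p = head; *p; p = &(*p)->next) {
            Node *n = *p;
            if (n->start >= lo && n->start < lo + len) {
                *p = n->next;
                free(n);
                freed++;
                modified = 1;
                break;          /* list changed: restart the scan */
            }
        }
    }
    return freed;
}
```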

Our custom allocator has a clever feature where it doles out a chunk of a meta
block to the application without keeping track of it.
It simply advances a "used" pointer in the pool block.
Those chunks are non-freeable and the application knows this, of course.
It is a very efficient way to, for example, store a temporary XML tree in a
separate pool.
When the XML tree is discarded, the auto-free pool is destroyed and the
application does not have to traverse the tree to free it.
Our allocator simply marks all the pool blocks as free for re-use.
The problem was that valgrind would not allow that: when a re-use happened, it
saw it as an internal error, because the MALLOCLIKE blocks had never been
freed as far as valgrind was concerned, and handing out the same address twice
is a Bad Thing.
This patch of mine makes valgrind usable for our environment.
I now use the modified valgrind in our regression test environment and we're
very happy with it.
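A minimal sketch of the bump-pointer scheme described above (an assumed layout, not our actual allocator): a pool block hands out chunks by advancing a "used" pointer, the chunks are never individually freed, and recycling the pool releases everything at once. In the instrumented version, pool creation, allocation, and destruction would be announced to valgrind with the mempool client requests, so destroying the pool auto-frees the chunks:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical pool block: a buffer plus a "used" high-water mark. */
typedef struct {
    uint8_t *base;
    size_t   used;
    size_t   cap;
} Pool;

/* Dole out a chunk by advancing the "used" pointer; no per-chunk
 * bookkeeping.  Chunks from this pool are non-freeable by design. */
static void *pool_alloc(Pool *p, size_t n)
{
    if (p->used + n > p->cap)
        return NULL;   /* a real allocator would grab a new pool block */
    void *chunk = p->base + p->used;
    p->used += n;
    return chunk;
}

/* Discarding the pool (e.g. when the XML tree is no longer needed)
 * releases all chunks at once; with the auto-free feature, valgrind
 * would mark every outstanding chunk as freed here. */
static void pool_reset(Pool *p)
{
    p->used = 0;
}
```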

Does that answer the questions?

    Attached is a revised version of the patch,
    Ruurd Beerstra.
