Joe Orton wrote:
> On Thu, Mar 26, 2009 at 03:10:56PM +0100, Mladen Turk wrote:
>> What's the point?
>
> The null hypothesis is: modern malloc implementations do exactly the
> same optimisation work (e.g. maintaining freelists) that we duplicate in
> APR pools.  By avoiding that duplication, and relying on malloc
> optimisation, we might get better/equivalent performance whilst reducing
> the complexity of APR.
>

That's all true, but the pool's purpose is not to be a malloc
replacement. This new concept has to remember every allocated
chunk of data, which makes its performance much harder to predict.
Many small chunks inflate memory usage, because you have to
remember many tiny pointers. With something like concatenating
an 8-char string, you have actually doubled the memory usage
(8+ bytes for the data, depending on the malloc implementation,
plus 4 or 8 bytes for storing the pointer).
So memory usage depends not only on the total size allocated,
but on the number of allocations as well.
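To put that in code, here is a minimal sketch (not the actual
patch, just the bookkeeping a malloc-backed pool implies): the
pool must keep a growing array of pointers so it can free
everything at once, so each tiny allocation pays for malloc's
own block overhead *plus* a slot in that array.

    #include <stdlib.h>

    /* Hypothetical tracked pool: every allocation, however
     * small, costs a malloc block AND a pointer slot here. */
    struct tracked_pool {
        void   **ptrs;      /* remembered allocations */
        size_t   count;
        size_t   capacity;
    };

    static void *tracked_alloc(struct tracked_pool *p, size_t size)
    {
        if (p->count == p->capacity) {
            size_t cap = p->capacity ? p->capacity * 2 : 16;
            void **np = realloc(p->ptrs, cap * sizeof(*np));
            if (np == NULL)
                return NULL;
            p->ptrs = np;
            p->capacity = cap;
        }
        void *mem = malloc(size);
        if (mem != NULL)
            p->ptrs[p->count++] = mem;
        return mem;
    }

    /* Destroying the pool walks the list and frees each chunk. */
    static void tracked_destroy(struct tracked_pool *p)
    {
        for (size_t i = 0; i < p->count; i++)
            free(p->ptrs[i]);
        free(p->ptrs);
        p->ptrs = NULL;
        p->count = p->capacity = 0;
    }

For an 8-byte string that is malloc's rounded-up block plus 4 or
8 bytes in ptrs[], versus a plain pointer bump inside a large
block with the current allocator.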

That's why I think this new concept doesn't entirely fit
apr_pool usage, at least not for the string, table and hash
operations that are used heavily across httpd.
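For instance, a typical request-processing path builds strings
with helpers like apr_pstrcat() and apr_psprintf(); each call is
one small pool allocation, so the per-allocation overhead is
exactly what gets multiplied. A made-up example in the usual
httpd style:

    #include "apr_pools.h"
    #include "apr_strings.h"

    /* Each apr_psprintf()/apr_pstrcat() call below is a
     * separate small allocation from the pool. */
    static const char *make_header(apr_pool_t *p,
                                   const char *host, int port)
    {
        const char *hp = apr_psprintf(p, "%s:%d", host, port);
        return apr_pstrcat(p, "Host: ", hp, "\r\n", NULL);
    }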

However, for non-string operations, like managing system
objects, it probably makes no difference.
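As a rough sketch of what I mean by system objects: something
like apr_file_open() makes one allocation per object and
registers a cleanup so the descriptor is closed when the pool is
destroyed, so the allocator behind the pool hardly matters there
(the path below is just illustrative):

    #include "apr_file_io.h"

    /* One pool allocation per descriptor; closed automatically
     * on pool destruction, allocator overhead is negligible. */
    static apr_status_t open_conf(apr_file_t **f, apr_pool_t *p)
    {
        return apr_file_open(f, "/etc/example.conf", APR_READ,
                             APR_OS_DEFAULT, p);
    }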


> Also, I think it would be more useful to benchmark something
> like Subversion's "make check", or an httpd load test.


Probably.


Regards
--
^(TM)
