On 3/17/10 3:35 PM, "Neil Conway" <[email protected]> wrote:

> Note that I'd be hesitant to use apr_hash for large tables, unless you
> can accurately pre-size it: see
> http://markmail.org/message/ljylkgde37xf3wdm and related threads (the
> referenced patch ultimately had to be reverted, because it broke code
> that accessed hash tables from a cleanup function in the same pool).

Regarding the amount of allocation of memory for a large hashtable:

It seems the wasted memory is bounded by roughly 2x.  For example, growing a 
table from 1 entry to 1024 entries would allocate arrays of 16, 32, 64, 128, 
256, 512 and 1024 slots, i.e. about 2032 slots in total instead of just the 
final 1024.
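
For concreteness, here is a tiny sketch of that arithmetic (assuming, as a 
simplification, that the table starts at 16 buckets, doubles on each 
expansion, and that old bucket arrays are never handed back to the pool -- 
these numbers are illustrative, not measured from apr_hash):

#include <stdio.h>

int main(void)
{
    unsigned long size, total = 0;

    /* each expansion allocates a fresh bucket array from the pool;
     * the previous array stays allocated until the pool is destroyed */
    for (size = 16; size <= 1024; size *= 2)
        total += size;

    /* prints "2032 slots allocated for a final array of 1024",
     * i.e. the growth overhead stays under 2x the final array size */
    printf("%lu slots allocated for a final array of %lu\n", total, 1024UL);
    return 0;
}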

Also, this overhead is only for the pointers.  If each item in the table is 
4 KB, for example, the data structure adds an extra 100-200 bytes of overhead 
per entry, whereas the data itself is much larger than that.
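
Spelled out with those numbers (taking ~200 bytes per entry as the worst 
case -- my own rough estimate, not a measurement):

    1024 entries x 4 KB of payload       ~= 4 MB
    1024 entries x ~200 B of overhead    ~= 200 KB   (roughly 5% on top)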

Am I missing something that makes the situation much worse?

Cheers,
Ivan Novick
