On Wed, Mar 17, 2010 at 3:18 PM, Ivan Novick <[email protected]> wrote:
> When adding to a hash table using apr_hash_set ... If memory cannot be
> allocated, what is the expected behavior?
>
> I am seeing apr_hash_set calls expand_array.
>
> expand_array core dumps if memory cannot be allocated.
>
> Is this expected?  Is there a way to get an error code for a failed insert
> into a table rather than a core dump?

Seems easy enough to just make expand_array() give up when alloc_array()
fails: the next call to apr_hash_set() will try to expand the array
again (if warranted). See the attached patch; a rough sketch of the idea
is below.
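
For reference, here is roughly the shape of the change (the attached
patch is the authoritative version; the internals below are from
apr_hash.c as I remember them, so treat the details as approximate):

static void expand_array(apr_hash_t *ht)
{
    apr_hash_index_t *hi;
    apr_hash_entry_t **new_array;
    unsigned int new_max;

    new_max = ht->max * 2 + 1;
    /* Assumes alloc_array() (a wrapper around pool allocation) returns
     * NULL on failure instead of aborting or crashing. */
    new_array = alloc_array(ht, new_max);
    if (new_array == NULL)
        return;  /* OOM: keep the old, smaller bucket array; a later
                  * apr_hash_set() will retry the expansion */

    /* Rehash all existing entries into the larger bucket array. */
    for (hi = apr_hash_first(NULL, ht); hi; hi = apr_hash_next(hi)) {
        unsigned int i = hi->this->hash & new_max;
        hi->this->next = new_array[i];
        new_array[i] = hi->this;
    }
    ht->array = new_array;
    ht->max = new_max;
}

If I'm reading apr_hash_set() right, the new entry has already been
allocated and linked by the time expand_array() runs, so an insert that
hits a failed expansion still goes through; the table just keeps longer
collision chains until a later expansion succeeds. Also note that
apr_palloc()/apr_pcalloc() call the pool's abort function (if one was
installed with apr_pool_abort_set()) before returning NULL, so with an
abort function set you would see that callback instead of a crash
inside expand_array().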

Note that I'd be hesitant to use apr_hash for large tables, unless you
can accurately pre-size it: see
http://markmail.org/message/ljylkgde37xf3wdm and related threads (the
referenced patch ultimately had to be reverted, because it broke code
that accessed hash tables from a cleanup function in the same pool).

Neil

Attachment: apr_hash_expand_oom-1.patch
