> What if the best solution is to abort the operation requesting the big
> chunk of unavailable memory? We don't have any significant cache in
> this process to dump, and it wouldn't have helped for long anyway.

That should be handled in the code that requests the big chunks of
memory. Hopefully you don't request big chunks in too many places, so
it's not too painful.

In my own code, I use 'malloc' for an allocator that never returns NULL and
'softmalloc' for one that can return NULL. In debug builds, using 'malloc'
for large allocations triggers a debug warning.
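
A minimal sketch of that split; 'hard_malloc' stands in for my wrapped
'malloc', and the 64 KB cutoff and the wait-and-retry policy are purely
illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define LARGE_ALLOC_THRESHOLD (64 * 1024)   /* illustrative cutoff */

    /* The "can fail" allocator: returns NULL, and every call site
       is audited to make sure that NULL is handled safely. */
    void *softmalloc(size_t n)
    {
        return malloc(n);
    }

    /* The "never returns NULL" allocator. In debug builds, large
       requests warn, since they should have used softmalloc. */
    void *hard_malloc(size_t n)
    {
        void *p;
    #ifndef NDEBUG
        if (n >= LARGE_ALLOC_THRESHOLD)
            fprintf(stderr, "debug: large hard_malloc(%zu)\n", n);
    #endif
        while ((p = malloc(n)) == NULL && n != 0)
            sleep(1);   /* one possible never-fail policy: wait and retry */
        return p;
    }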

Ideally, all large allocations use 'softmalloc', so a NULL return aborts
the large operation rather than the whole process. All calls to
'softmalloc' are audited during code inspection to make sure NULL returns
are safely handled.
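
A call site might look like this (the operation and its error convention
are made up for illustration):

    /* Hypothetical large operation: a NULL from softmalloc aborts
       just this operation; the caller can retry later. */
    int compress_file(const char *path)
    {
        size_t bufsize = 16u * 1024 * 1024;
        unsigned char *buf = softmalloc(bufsize);
        if (buf == NULL)
            return -1;              /* "not now", not a crash */
        (void)path; /* elided: open the file and do the real work */
        free(buf);
        return 0;
    }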

I also have various booleans that indicate the system's memory state, so
you can checkpoint operations that are easily aborted. Something like 'if
(LOW_MEMORY) return NOPE_NOT_NOW;'.
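
Fleshed out slightly (the flag, how it gets updated, and the return codes
are all placeholders):

    #include <stdbool.h>

    /* Placeholder flag; in practice something (a monitor thread, the
       allocators themselves) updates it as memory pressure changes. */
    volatile bool LOW_MEMORY = false;

    enum { OK = 0, NOPE_NOT_NOW = -1 };

    int start_expensive_operation(void)
    {
        /* Checkpoint: cheap to refuse here, painful to fail later. */
        if (LOW_MEMORY)
            return NOPE_NOT_NOW;
        /* ... proceed with the operation ... */
        return OK;
    }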

There is really no place in OpenSSL that allocates large enough chunks of
memory to be worth aborting. So my own code that uses OpenSSL gives OpenSSL
an allocator that never fails. Before establishing a new SSL connection, I
check my own memory status indicators and refuse the connection if I have
insufficient memory for it.
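
The hookup is roughly this. One caveat: the hook signatures below are the
OpenSSL 1.1.0-and-later ones (they carry file/line arguments), older
releases take just the size, and CRYPTO_set_mem_functions has to be called
before OpenSSL makes its first allocation:

    #include <openssl/crypto.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void *ssl_malloc(size_t n, const char *file, int line)
    {
        (void)file; (void)line;
        void *p;
        while ((p = malloc(n)) == NULL && n != 0)
            sleep(1);               /* never-fail policy: wait and retry */
        return p;
    }

    static void *ssl_realloc(void *old, size_t n, const char *file, int line)
    {
        (void)file; (void)line;
        void *p;
        while ((p = realloc(old, n)) == NULL && n != 0)
            sleep(1);
        return p;
    }

    static void ssl_free(void *p, const char *file, int line)
    {
        (void)file; (void)line;
        free(p);
    }

    /* Call once, very early. Returns 1 on success, 0 if OpenSSL
       has already allocated something. */
    int install_ssl_allocator(void)
    {
        return CRYPTO_set_mem_functions(ssl_malloc, ssl_realloc, ssl_free);
    }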

Plus, I think it's rude to bail out of an SSL connection once you've made
it. If at all possible, the decision is best made at connection
establishment, which application code moderates anyway.
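
On the accept side, that decision point looks something like this
(LOW_MEMORY is the status boolean sketched above; the rest is standard
OpenSSL):

    #include <openssl/ssl.h>
    #include <stdbool.h>
    #include <unistd.h>

    extern volatile bool LOW_MEMORY;    /* the status flag from above */

    /* Decide before the handshake, not after: refuse up front when
       memory is tight, so no live connection ever gets bailed. */
    SSL *accept_tls_client(SSL_CTX *ctx, int fd)
    {
        if (LOW_MEMORY) {
            close(fd);
            return NULL;
        }
        SSL *ssl = SSL_new(ctx);
        if (ssl == NULL) {
            close(fd);
            return NULL;
        }
        SSL_set_fd(ssl, fd);
        if (SSL_accept(ssl) <= 0) {
            SSL_free(ssl);
            close(fd);
            return NULL;
        }
        return ssl;
    }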

DS

