> Setting aside the memory leak, I would like to know, by example, how a
> perfect cleanup can cause performance problems?

One common case goes like this:

1) You have an object you create very early in the library initialization.

2) The object is accessed a lot, and having to check whether it's available
or initialized would be tricky to do at every place it might be used.

3) So long as you never clean up the object, you know it must exist in any
code the library could call, since the first thing the library does is
create it.

4) If you destroy it in cleanup, any other cleanup code that runs after that
has to deal with the fact that the object doesn't exist. This could in some
cases be most of the library.

5) You could destroy it last, but you can't destroy everything last.
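
Here is a minimal C sketch of the pattern described in 1) through 5). The
lib_ctx/lib_init/lib_log names are invented for illustration; this is not
OpenSSL's actual code:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical library-wide context, created once at init and (by
     * design) never freed.  Because it always exists, no caller has to
     * test it for NULL. */
    struct lib_ctx {
        FILE *log;
    };

    static struct lib_ctx *ctx;

    void lib_init(void)
    {
        ctx = malloc(sizeof(*ctx));     /* first thing the library does */
        if (ctx == NULL)
            abort();
        ctx->log = stderr;
    }

    /* Called from all over the library, including from other modules'
     * cleanup code.  It can skip the NULL check only because
     * lib_cleanup() never destroys ctx. */
    void lib_log(const char *msg)
    {
        fprintf(ctx->log, "%s\n", msg); /* no "if (ctx == NULL)" anywhere */
    }

    void lib_cleanup(void)
    {
        /* "Perfect" cleanup would free ctx here -- but then every cleanup
         * routine that runs after this one and wants to log must either
         * check for NULL or be ordered before this point: a new rule for
         * the whole library. */
        /* free(ctx); ctx = NULL; */
    }

    int main(void)
    {
        lib_init();
        lib_log("doing work");
        lib_cleanup();
        lib_log("still safe, because ctx was deliberately never freed");
        return 0;
    }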

Consider a linked list of algorithms with sentinels to save pointer compares
to NULL. For perfect cleanup, you have to get rid of those dummy objects.
But that means any code that runs after that cleanup will blow up if it
accesses the list (or all code that accesses the list will have to make sure
it hasn't been cleaned up, defeating the whole point of the sentinels). So
you have a new ordering rule that affects any code called from cleanup code
that might run after those objects are cleaned up -- it must never touch
that list, and no code it calls can ever touch the list. If you get too many
of these rules, it's easier to change to a less-efficient list algorithm
that doesn't require sentinels.
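
A rough C sketch of such a list follows. The alg_* names and layout are made
up for illustration (not OpenSSL's actual registry); the point is that the
search loop never compares a pointer to NULL because the sentinel is always
there:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical algorithm registry: a circular list with a sentinel
     * node allocated at init, so insert and search need no NULL checks. */
    struct alg {
        struct alg *next, *prev;
        const char *name;               /* non-NULL for registered algs */
    };

    static struct alg *sentinel;        /* created at init, ideally never freed */

    void alg_list_init(void)
    {
        sentinel = malloc(sizeof(*sentinel));
        if (sentinel == NULL)
            abort();
        sentinel->next = sentinel->prev = sentinel;  /* empty list points at itself */
        sentinel->name = NULL;
    }

    void alg_register(struct alg *a)
    {
        a->next = sentinel;             /* splice in just before the sentinel */
        a->prev = sentinel->prev;
        sentinel->prev->next = a;       /* no NULL checks anywhere */
        sentinel->prev = a;
    }

    struct alg *alg_find(const char *name)
    {
        struct alg *p;

        sentinel->name = name;          /* sentinel guarantees the loop terminates */
        for (p = sentinel->next; strcmp(p->name, name) != 0; p = p->next)
            ;
        return p == sentinel ? NULL : p;
    }

    void alg_list_cleanup(void)
    {
        /* "Perfect" cleanup frees the sentinel too -- but then alg_find()
         * called from any later cleanup code dereferences freed memory.
         * The alternative is a NULL check in every caller, which is
         * exactly the cost the sentinel was meant to avoid. */
        free(sentinel);
        sentinel = NULL;
    }

    int main(void)
    {
        static struct alg aes = { NULL, NULL, "AES" };

        alg_list_init();
        alg_register(&aes);
        printf("%s\n", alg_find("AES") ? "found AES" : "missing AES");
        alg_list_cleanup();
        /* Any alg_find() call made after this point -- e.g. from another
         * module's cleanup routine -- would dereference freed memory. */
        return 0;
    }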

Again, you can't put everything last. And anything that is cleaned up after
something else cannot touch what has already been cleaned up, unless you add
code to handle that at the point where you touch the thing that might already
be gone. And that point is usually *not* the cleanup code.

The perfect solution involves a lot of very complex dependency work to make
absolutely sure that nothing can be used after it's cleaned up. What if the
cleanup code for A might need B, and the cleanup code for B might need C,
and the cleanup code for C might need A? Do you un-cleanup A? Then when do
you clean it back up again? Might you never terminate?
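
A contrived C sketch of that cycle (all module names invented; the point is
only that whichever cleanup runs last calls into something already gone):

    #include <stdio.h>

    static int log_up = 1;

    static void log_write(const char *msg)
    {
        if (!log_up)                    /* the check perfect cleanup forces on us */
            printf("(log already cleaned up, dropping: %s)\n", msg);
        else
            printf("log: %s\n", msg);
    }

    static void stats_record(const char *what) { printf("stats: %s\n", what); }
    static void pool_release(const char *what) { printf("pool: release %s\n", what); }

    /* A's cleanup needs B, B's needs C, C's needs A. */
    static void log_cleanup(void)   { stats_record("log shutdown"); log_up = 0; }
    static void stats_cleanup(void) { pool_release("stats buffers"); }
    static void pool_cleanup(void)  { log_write("pool drained"); }

    int main(void)
    {
        /* Whatever order you choose, the last cleanup calls into a module
         * that is already gone; making that safe means either "un-cleaning"
         * modules or sprinkling is-it-still-up checks (like the one in
         * log_write) through code that never needed them before. */
        log_cleanup();
        stats_cleanup();
        pool_cleanup();                 /* calls log_write() after log_up == 0 */
        return 0;
    }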

There are many examples. Engineering a library for perfect cleanup is a
*major* design requirement, and if you insist on it, things will have to give
in other areas.

> >Since nobody really cares whether you can free every single byte before
> >you terminate a process, nobody bothers to code it.

> Not just for a single byte, but in embedded systems and in devices with
> resource constraints, wouldn't we care about even a small amount of
> memory not being freed after it is no longer required?

Sure, and in those special cases you have totally different design
requirements and thus totally different designs. Unless you have embedded
systems that have enough resources that they can run commodity code, in
which case you don't care -- that's why you put the resources there.

If you want to do the most with the very least, you have a very special case
that bears no resemblance to typical desktop/server code (and there exist
SSL/crypto libraries for these unusual cases).

If you want to use mainstream code, you have to make the mainstream
tradeoffs. Otherwise, you were wrong to want to use mainstream code.

One of the reasons people put lots of CPU and lots of memory in modern
embedded systems is because it's cheap and allows you to use more mainstream
engineering tradeoffs. This is one of those tradeoffs.

Welcome to the real world.

DS

