On Thu, 12 Aug 2010 16:08:17 -0400, Don <[email protected]> wrote:

Steven Schveighoffer wrote:
On Thu, 12 Aug 2010 13:05:53 -0400, Joe Greer <[email protected]> wrote:

"Steven Schveighoffer" <[email protected]> wrote in
news:[email protected]:

Logically speaking, if an object isn't destructed, then it lives
forever, and if it continues to hold its resource, then we have a
programming error.  The GC is for reclaiming memory, not files.  It
can take a long time for a GC to reclaim an object, and you surely
don't want a file locked for that long any more than you want it held
open forever.  My point is that it is a programming error to expect
the GC to be involved in reclaiming anything but memory.  IMO, the best
use of a finalizer is to error out if an object holding a resource
hasn't been destructed, because there is obviously a programming
error here and something has leaked. GCs aren't there to support
sloppy programming.  They are there to make your life easier and
safer.

An open file, maybe, but why should the compiler decide the severity of
not closing the resource?  What if the resource is just some
C-malloc'd memory?

That's the only example of a nearly unlimited resource which I've heard thus far, but that raises the question: why the hell would you be using malloc if you're not going to free it, when you have a language with a GC? Effectively, the GC is freeing your malloc'd memory.

malloc/free is a lot more efficient than the GC. Partly on account of it being so mature, and partly because it doesn't have to deal with GC-like problems.

Kris of Tango wrote an extremely fast allocator for tango.container based on malloc/free, but you are required to use it only for pure value types (no references).

Another reason is to be able to use it in the finalizer, since malloc'd memory will be valid. If, for instance, you registered some object with a name, you need to have a malloc'd copy of the name to be able to unregister it on finalization.
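A minimal D sketch of that pattern, assuming a hypothetical registry API (registerByName/unregisterByName are made up for illustration, not a real library):

```d
import core.stdc.stdlib : malloc, free;
import core.stdc.string : memcpy;

// Hypothetical registry, taking a C pointer/length pair.
extern (C) void registerByName(const(char)* name, size_t len);
extern (C) void unregisterByName(const(char)* name, size_t len);

class Registered
{
    private char* cname;  // malloc'd copy -- still valid during finalization
    private size_t len;

    this(string name)
    {
        len = name.length;
        cname = cast(char*) malloc(len);
        memcpy(cname, name.ptr, len);
        registerByName(cname, len);
    }

    ~this()
    {
        // A GC-managed string member might already have been collected
        // by the time the finalizer runs, but the malloc'd copy is safe.
        unregisterByName(cname, len);
        free(cname);
    }
}
```

The malloc'd copy is what makes the unregistration safe: the finalizer cannot assume any GC-managed members are still alive.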

Finally, if the 3rd-party library you're using gives you malloc'd data, what do you do then?

That's a possible solution. I just don't like the blanket assumptions being made.

Actually it's the absence of a use case.

Hypothesis: if a finalizer is run, where it actually DOES something (as opposed to, for example, running a pile of asserts), there's always a bug.

It's an extreme hypothesis, so it should be really easy to disprove.
Can you come up with a counterexample?

Isn't it a bug to rely on finalization to tell you something is wrong with your program? Essentially, if an object is finalized and it must close resources, you have two options: close the resources and continue silently, or loudly alert the user that there is a programming bug. Considering that finalization is not guaranteed, that significantly raises the potential of bugs escaping into shipping code, because the "bad" case just didn't get triggered. I'd say in most cases, the bad case isn't so bad.

What if a finalizer asserted things were closed, and then closed them if they weren't? That way, you get asserts when not compiling in release mode, and in release mode the program soldiers on as best it can without throwing seemingly random errors. It's just a thought. I don't like the idea of throwing errors in the finalizer, because the error report can be completely decoupled from the source.
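Something like this sketch, in D (the class and names are illustrative, not real library code):

```d
class LogFile
{
    private bool isOpen = true;

    void close()
    {
        // ... release the underlying OS handle here ...
        isOpen = false;
    }

    ~this()
    {
        // Debug builds flag the leak loudly; compiled with -release,
        // the assert goes away and we just clean up quietly.
        assert(!isOpen, "LogFile finalized without close()");
        if (isOpen)
            close();
    }
}
```

So the "programming error" view and the "just clean it up" view both get served, depending on the build flags.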

For example, Tango manages all I/O with classes, and I've written code that could run indefinitely, opening and closing files continuously. I never ran into resource problems. If all of a sudden it started complaining that I wasn't closing the files, I'd question why someone put in that change, since it was working fine before. I agree that the GC is not the best place to rely on closing files, but it doesn't *hurt* to close resources when you realize the last reference to that resource is about to be eliminated.

-Steve
