Darren New wrote:
Christopher Smith wrote:
leave scope. While it accomplishes much the same thing, it doesn't allow this cleanup work to be encapsulated in the object (you have to write it out each time).

Yeah, that's one of the nice things C# cleaned up:
For the record, using doesn't invoke the finalizer, but rather the IDisposable.Dispose(). One of those nice little subtleties that unfortunately is important.
using (X x = new X())
using (Y y = new Y())
{
  x.DoThis();
  y.DoThat();
}

x and y's finalizers will get called at the end of the block.

It messes up in that if the "new" throws an exception, the finalizer doesn't necessarily get called right there, but it's still better than Java or C++.
It's still lousy encapsulation. You've just exchanged a try/finally pair for a "using" statement, *and* now you can't have exceptions during construction, so you have to add a test after object construction. Ick.
Destructors in C++ don't necessarily get called unless you can actually instantiate the object on the stack anyway,
No, they always get called exactly when the object dies. If you want lexically scoped destruction of a heap-allocated object, you can use auto_ptr<> in much the same manner as a "using" statement, although unlike "using" it doesn't require a common base type or the invocation of a virtual method.
while finalizers do eventually get called.
Actually... they don't. In fact, if you don't have two GCs during the execution of your program, your only hope is really Dispose methods.
And most programs where you're worried about this stuff have an obvious place where you can invoke finalizers, like the start of each frame in a video game, or the end of each transaction in a database server, so you can just do a GC at that point and force the issue. Or do a GC at each top-level catch of an exception, and you have a pretty well-defined place for finalizers to run.
Note that a GC doesn't ensure that all the finalizers that can run will run, not by a long shot. This can be highly problematic if you have resource contention that can occur independently of memory contention. It is also a *very* expensive operation as compared to a method invocation. I'm kind of surprised that anyone would use it primarily as a way to trigger finalizers.
The "Connection" object's destructor cleans up the connection as best it can when conn drops out of scope.
Assuming it doesn't throw an exception in the constructor.
Fair enough... although in C++ parlance, that would mean the object never acquired the connection's resources in the first place, and a sanely written constructor would do the right thing with minimal effort anyway.
If the stack is already unwinding because of another exception, the program will terminate if conn's destructor gets invoked as part of the unwind *and* it throws an exception.

       log.error("Error when closing connection......");

Better hope log never throws an exception...
Well, generally you know this from the spec, and you can even declare it with an empty exception specification (throw()). If it can throw an exception, then you have to wrap it in a try/catch, but at least you are only doing it once per class, rather than once per use of instances of the class.
the parent destructors will get invoked after you've finished your work, and there's not much you can do to stop it beyond crashing.
Which is not always what you want either.
If you don't want that, I'd argue you are misusing/abusing inheritance anyway, and you have bigger fish to fry.
The solution is to make A::~A() virtual.
The thing that always kills me about C++ is the advice "always do X", and then X isn't enforced in the language, even though it's always wrong not to do X.
Yes, C# is far superior in this regard, which is why you see articles about never invoking Finalize directly and why you get compiler errors when an IDisposable is created outside of a using statement. ;-)

Sure though, C++ is complex and messy as compared to most any language I can think of (with the possible exception of Perl). That said, all the rules I can think of in C++ actually can be broken if you understand why the rule is there and why it doesn't apply. The "always do X" is there because it's a damn complex language, so it's easier to just say "always do X" knowing that when people learn the language well enough they'll understand when it is appropriate to use the exceptions to the rules.
Anyway, in practice, finalizers prove not to be that useful for most cases, and also a source of much additional complexity (though thankfully in the common case most of the complexity is in the hands of whomever has to write the memory manager) and destructors prove to be quite useful, particularly for managing non-memory related resources, but are also a source of much additional complexity.
I think it depends what you're used to. I never had a problem with finalizers. Also, when you're using an OS that has support for these kinds of things, it becomes pretty transparent. When you have an OS that (for example) will run all finalizers when you try to open a file and it's busy or you're out of handles, things work much more smoothly. Just like when you have an OS that'll run a GC when you get low on memory.
Ick... that's going to have some lovely interactions. Does Windows really have that many hooks into the CLR, or is it more that the various classes are written such that a bad return code will trigger a "GC and then retry"?

The OS will at best only know about OS-level resource limitations, rather than application-level ones. Nor will it necessarily be aware of the timing needs. My observation is that in C# you end up writing a *lot* of "using" statements, as well as plenty of uses of the IDisposable interface, and even then you unfortunately don't have clean enough resource management to get the job done without some extra work.
If you think of finalizers as GCing stuff that the OS isn't GCing for you, it makes more sense. With the appropriate syntax, it's neither harder nor easier than destructors to get right.
I think the difference, though, is that unlike with memory, there can be huge benefits to having determinism for releasing other resources. Indeed, almost everywhere you find a write-up of finalizers you'll find advice to avoid using them in most cases and instead use the dispose design pattern (wait, did I just mention a design pattern for languages with automatic memory management? ;-).

--Chris

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
