Christopher Smith wrote:
It's still lousy encapsulation. You've just exchanged a try/finally pair for a "using" statement,

Which is much cleaner, yes.

*and* now you can't have exceptions during construction,

You can. It just doesn't immediately call Dispose on that particular object.

so you have to add a test after object construction. Ick.

Err, no. What kind of test?
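The reason no post-construction test is needed: if the constructor throws, control never reaches the block body at all, so there is nothing half-built to check. The same logic holds in C++; a minimal sketch (the `Resource` type and its failure mode are hypothetical):

```cpp
#include <cassert>
#include <stdexcept>

static int dispose_calls = 0;

struct Resource {
    explicit Resource(bool fail) {
        if (fail) throw std::runtime_error("acquisition failed");
    }
    ~Resource() { ++dispose_calls; }   // plays the role of Dispose
};

bool use_resource(bool fail) {
    try {
        Resource r(fail);   // if the constructor throws, the lines below never run
        // ... work with r ...
        return true;        // r's destructor runs here, at scope exit
    } catch (const std::runtime_error&) {
        return false;       // no "test after construction": we simply never got here
    }
}
```

Note that when the constructor throws, the destructor is never invoked for that object, which is exactly the C# behavior described above: Dispose isn't called on an object that was never successfully constructed.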

Destructors in C++ don't necessarily get called unless you can actually instantiate the object on the stack anyway,

No, they always get called exactly when an object dies.

There's no guarantee an object dies. If you allocate it on the heap and don't call destroy, it doesn't get disposed. Ding! Memory leak.
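The stack-vs-heap distinction is easy to demonstrate; a minimal sketch (`Conn` is a stand-in type for illustration):

```cpp
#include <cassert>
#include <memory>

static int live = 0;

struct Conn {
    Conn()  { ++live; }
    ~Conn() { --live; }
};

void stack_and_heap_demo() {
    {
        Conn c;                       // automatic storage: destroyed at end of block
    }
    assert(live == 0);

    Conn* leaked = new Conn;          // heap, never deleted: destructor never runs
    (void)leaked;
    assert(live == 1);                // still "alive" -- this is the leak

    {
        auto owned = std::make_unique<Conn>();  // RAII wrapper restores the guarantee
        assert(live == 2);
    }
    assert(live == 1);                // unique_ptr deleted its Conn; the raw one remains
}
```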

while finalizers do eventually get called.

Actually... they don't. In fact, if you don't happen to get two GC passes during the execution of your program, your only hope is really Dispose methods.

I *think* C# guarantees to call them during clean program exit. But of course if you call exit() from C++ you don't get your destructors called either.
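The exit() point can be shown directly: `std::exit` runs atexit handlers and static-object destructors, but skips destructors of automatic objects entirely. A minimal sketch:

```cpp
#include <cassert>
#include <cstdlib>

static bool destroyed = false;

struct Tracer {
    ~Tracer() { destroyed = true; }
};

// Registered with std::atexit; runs during std::exit processing, at which
// point we can observe that the automatic Tracer below was never destroyed.
static void check_no_unwinding() {
    assert(!destroyed && "std::exit skips destructors of automatic objects");
}

void exit_without_unwinding() {
    std::atexit(check_no_unwinding);
    Tracer t;         // automatic storage: destructor would run at scope exit
    std::exit(0);     // terminates immediately; t's destructor is NOT called
}
```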

Note that a GC doesn't ensure that all the finalizers that can run will run, not by a long shot. This can be highly problematic if you have resource contention that can occur independently of memory contention.

Right.

It is also a *very* expensive operation as compared to a method invocation. I'm kind of surprised that anyone would use it primarily as a way to trigger finalizers.

One wouldn't. I'd use it to trigger finalizers in the event that I forgot to use "using" or I otherwise failed to explicitly invoke a disposal. Something one can't do in C++.

The "Connection" object's destructor cleans up the connection as best it can when conn drops out of scope.

Assuming it doesn't throw an exception in the constructor.

Fair enough... although in C++ parlance, that would mean the object never acquired the connection's resources in the first place, and a sanely written constructor would do the right thing with minimal effort anyway.

As would the C# constructor throwing the exception you're complaining about. :-) Just like in C++, in C# if you acquire a time-sensitive resource in a constructor that can be "lost" if you throw an exception before the constructor finishes, you'd best catch the possible exceptions and take care of them inside the constructor. It's the same pattern, really.
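That "catch inside the constructor" pattern looks the same in both languages. A C++ sketch, with hypothetical open_socket/close_socket helpers standing in for the time-sensitive resource:

```cpp
#include <cassert>
#include <stdexcept>

static int sockets_open = 0;

// Hypothetical low-level handles for illustration.
void open_socket()  { ++sockets_open; }
void close_socket() { --sockets_open; }

struct Connection {
    Connection() {
        open_socket();                 // time-sensitive resource acquired first
        try {
            // Simulate a later step of construction failing.
            throw std::runtime_error("handshake failed");
        } catch (...) {
            close_socket();            // release before re-throwing, because
            throw;                     // the destructor will NOT run for this object
        }
    }
    ~Connection() { close_socket(); }  // only runs for fully constructed objects
};
```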

       log.error("Error when closing connection......");

Better hope log never throws an exception...

Well, generally you know this from the spec, and you can even declare it with an empty exception specification (throw()). If it can throw an exception then you have to wrap it in a try/catch, but at least you are only doing it once per class, rather than once per use of instances of the class.

Um, not the way you've written it. If log.error() can throw an exception, then your destructor needs to catch it in C++.

If you don't want that, I'd argue you are misusing/abusing inheritance anyway, and you have bigger fish to fry.

Oh, and *that* never happens in C++ ;-)

Yes, C# is far superior in this regard, which is why you see articles about never invoking Finalize directly

That's more a library thing than a language thing. There's nothing wrong with invoking Finalize directly, except that most of the built-in classes use a different pattern than you might expect.

and why you get compiler errors when a IDisposable is created outside of a using statement. ;-)

You do? I never encountered that.

Ick... that's going to have some lovely interactions.

Not really.

Does Windows really have that many hooks into the CLR, or is it more the various classes being written such that a bad return code will trigger a "GC and then retry"?

I'm not talking about Windows or C# here. Systems that work this way don't have finalizers or destructors. GC will GC files as well as memory, because files are in memory. Processes get GCed when they get deadlocked or when all their channels of communication are closed and can't be reopened. Etc.

The OS will at best only know about OS-level resource limitations, rather than application-level ones. Nor will it necessarily be aware of the timing needs. My observation is that in C# you end up writing a *lot* of "using" statements,

I never had that problem. Most of the places I used "using" was to trigger block-scoped transactions. YMMV.
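The C++ analogue of that block-scoped "using" transaction is an ordinary RAII guard. A hedged sketch, with a hypothetical Transaction type that rolls back in its destructor unless explicitly committed:

```cpp
#include <cassert>

static int commits = 0, rollbacks = 0;

// Hypothetical transaction guard for illustration.
class Transaction {
    bool committed_ = false;
public:
    void commit() { committed_ = true; }
    ~Transaction() { committed_ ? ++commits : ++rollbacks; }  // rollback by default
};

void do_work(bool fail_early) {
    Transaction tx;        // plays the role of C#'s "using (var tx = ...)"
    if (fail_early) return;  // early exit: destructor rolls back
    // ... actual work ...
    tx.commit();           // reached only on success
}
```

The destructor fires on every path out of the scope, including early returns and exceptions, which is exactly the determinism the "using" statement buys in C#.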

If you think of finalizers as GCing stuff that the OS isn't GCing for you, it makes more sense. With the appropriate syntax, it's neither harder nor easier than destructors to get right.

I think the difference, though, is that unlike with memory, there can be huge benefits to having determinism for releasing other resources.

Like what? Consider an OS where all files are kept in memory by actively-running processes that you communicate to via RPC. All disk space is essentially swap. What's the benefit to having deterministic reclamation of space there? Or what other resource would you be speaking of?

BTW, GC doesn't necessarily mean "wait until I run out of resource to do this." There are systems with deterministic automatic GC, wherein (for example) the liveness of values is tracked, and the compiler essentially generates "destroy" statements for values as soon as the code will no longer reference them.

You wind up with stuff that doesn't look anything at all like C++ or C#, mind.


--
  Darren New / San Diego, CA, USA (PST)
    It's not feature creep if you put it
    at the end and adjust the release date.

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
