Darren New wrote:
Christopher Smith wrote:
It's still lousy encapsulation. You've just exchanged a try/finally
pair for a "using" statement,
Which is much cleaner, yes.
using (foo = blah)
{
}

vs.

foo = blah;
try {
} finally {
    foo.dispose();
}

vs.

{
    foo = blah;
}
You still lack the ability to encapsulate knowledge about whether an
object owns resources. The using case requires that "foo" implement
IDisposable. Sorry, but I'd say it is at best debatable which of the
first two is cleaner, and neither is "much cleaner". The third one
clearly carries the day. If you can't see that, I've sadly been wasting
my breath.
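In C++ terms, the third form is just RAII. A minimal sketch (all names
here are hypothetical; a string trace stands in for a real resource) of
why the caller needs no cleanup knowledge at all:

```cpp
#include <cassert>
#include <string>

// Toy trace so the effect is observable; a real Foo would hold a file,
// socket, lock, etc. All names in this sketch are made up.
std::string g_events;

// Foo owns a resource; callers never see (or need to know about) cleanup.
struct Foo {
    Foo()  { g_events += "acquire;"; }   // resource acquired in constructor
    ~Foo() { g_events += "release;"; }   // released when foo leaves scope
};

void use_foo() {
    Foo foo;             // the "{ foo = blah; }" case from above
    g_events += "use;";  // normal work
}                        // release runs here, return or throw alike
```

After one call to use_foo() the trace reads acquire;use;release; -- and
the release step would still run even if the body threw partway through.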
*and* now you can't have exceptions during construction,
You can. It just doesn't immediately call Dispose on that particular
object.
Which means you can have a resource that leaks until the GC gets its
job done (if ever).
so you have to add a test after object construction. Ick.
Err, no. What kind of test?
Well, you have one of two idioms you can go with in C#. You can either
a) have your constructor not throw exceptions, and instead it returns an
"invalid" object, so you then have to invoke some test method to
determine if you actually got something valid or b) let the constructor
throw exceptions, and then the caller has to figure out where the
exception occurred and what resources need to be freed up (lovely
encapsulation either way).
Destructors in C++ don't necessarily get called unless you can
actually instantiate the object on the stack anyway,
No, they always get called, exactly when the object dies.
There's no guarantee an object dies. If you allocate it on the heap
and never call delete, its destructor never runs. Ding! Memory leak.
I guess if you call exit(), objects on the stack don't get destroyed
either. ;-)
However, if you follow the RAII paradigm, you shouldn't have either
problem unless you have an object whose lifecycle isn't deterministic by
design.
while finalizers do eventually get called.
Actually... they don't. In fact, if you don't have two GCs during the
execution of your program, your only real hope is Dispose methods.
I *think* C# guarantees to call them during clean program exit. But of
course if you call exit() from C++ you don't get your destructors
called either.
It gives it the ol' college try, but unfortunately some *other* random
object's finalizer can prevent yours from getting the call.
It is also a *very* expensive operation as compared to a method
invocation. I'm kind of surprised that anyone would use it primarily
as a way to trigger finalizers.
One wouldn't. I'd use it to trigger finalizers in the event that I
forgot to use "using" or I otherwise failed to explicitly invoke a
disposal. Something one can't do in C++.
Lots of ways to do that in C++ actually. The semantics are different of
course, and most people primarily use it to identify bugs in their
program, rather than leave them in there.
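One common C++ way to "identify bugs rather than leave them in there": a
debug-mode destructor that flags a forgotten explicit close. A sketch
with hypothetical names (a real version might assert or log instead of
counting):

```cpp
#include <cassert>

// Counts destructions that happened without an explicit close().
int g_forgotten_closes = 0;

struct Connection {
    bool closed = false;
    void close() { closed = true; }        // the "explicit dispose" step
    ~Connection() {
        if (!closed) ++g_forgotten_closes; // flag the bug, don't hide it
    }
};

void disciplined() { Connection c; c.close(); }  // no bug recorded
void forgetful()   { Connection c; }             // oops: close() skipped
```

The forgetful() caller gets caught the first time the code path runs,
rather than whenever a finalizer happens to fire.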
The "Connection" object's destructor cleans up the connection as
best it can when conn drops out of scope.
Assuming it doesn't throw an exception in the constructor.
Fair enough... although in C++ parlance, that would mean the object
never acquired the connection's resources in the first place, and a
sanely written constructor would do the right thing with minimal
effort anyway.
As would the C# constructor throwing the exception you're complaining
about. :-) Just like in C++, in C# if you acquire a time-sensitive
resource in a constructor that can be "lost" if you throw an exception
before the constructor finishes, you'd best catch the possible
exceptions and take care of them inside the constructor. It's the same
pattern, really.
Foo::Foo(....)
    : resource1(..), resource2(..), resource3(..)
{
    something_that_might_throw();
}
now you can construct Foo and have exceptions thrown while initializing
any of those members or in something_that_might_throw(), and all the
resources are cleaned up by the time the exception bubbles up to the
caller, all without Foo having to be aware of the particular workings of
its member variables.
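A runnable version of that sketch (hypothetical Resource/Foo names, with
a string trace standing in for real resources), showing that the fully
constructed members are released, newest first, before the exception
ever reaches the caller:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

std::string g_log;  // trace of acquisitions (+) and releases (-)

struct Resource {
    const char* name;
    explicit Resource(const char* n) : name(n) { g_log += n; g_log += "+;"; }
    ~Resource() { g_log += name; g_log += "-;"; }
};

struct Foo {
    Resource r1, r2;  // stand-ins for resource1/resource2 above
    Foo() : r1("r1"), r2("r2") {
        // the something_that_might_throw() step:
        throw std::runtime_error("constructor body failed");
    }
};

void demo() {
    try {
        Foo f;
    } catch (const std::runtime_error&) {
        // Both members were already destroyed (r2 first, then r1)
        // before the exception got here; Foo contains no cleanup code.
    }
}
```

After demo() the trace is r1+;r2+;r2-;r1-; -- and note that Foo's own
destructor never runs, because the object was never fully constructed.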
That is *not* the same pattern.
log.error("Error when closing connection......");
Better hope log never throws an exception...
Well, generally you know this from the spec, and you can even declare
it non-throwing (throw() in older C++, noexcept today). If it can throw
an exception then you have to wrap it in a try/catch, but at least you
are only doing it once per class, rather than once per use of instances
of the class.
Um, not the way you've written it. If log.error() can throw an
exception, then your destructor needs to catch it in C++.
When I said "generally you know this from the spec", I meant that
generally you know that your logging won't throw an exception, which is
why I didn't bother to smother it. Some would argue that if it does fail
then the best behavior is the ensuing core dump anyway. ;-)
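For the record, the once-per-class wrapping would look something like
this (log_error and the counter are hypothetical stand-ins for a real
logger); since destructors are noexcept by default in modern C++ and may
run during stack unwinding, letting the exception escape would call
std::terminate:

```cpp
#include <cassert>
#include <stdexcept>

int g_swallowed = 0;  // counts logging failures; name is made up

// Stand-in for a logger that can fail by throwing.
void log_error(const char*) { throw std::runtime_error("log backend down"); }

struct Connection {
    ~Connection() {
        // The throwing call is fenced off here, once per class,
        // rather than at every use site of Connection.
        try {
            log_error("Error when closing connection......");
        } catch (...) {
            ++g_swallowed;  // best effort: note the failure and move on
        }
    }
};

void use_connection() { Connection c; }  // destructor survives the throw
```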
If you don't want that, I'd argue you are misusing/abusing
inheritance anyway, and you have bigger fish to fry.
Oh, and *that* never happens in C++ ;-)
Honestly, I've never seen it. I've sure seen the reverse in Java though,
particularly since Generics showed up.
and why you get compiler errors when an IDisposable is created outside
of a using statement. ;-)
You do? I never encountered that.
Obviously I forgot to insert my <sarcasm></sarcasm> tags.
It sure would be nice if "always do X" were enforced by the language...
I'm not talking about Windows or C# here. Systems that work this way
don't have finalizers or destructors. GC will GC files as well as
memory, because files are in memory. Processes get GCed when they get
deadlocked or when all their channels of communication are closed and
can't be reopened. Etc.
Ah, a platform where all resources are automatically managed. Yes, that
is a nice thing, so long as you can have user defined resource
management heuristics as well.
If you think of finalizers as GCing stuff that the OS isn't GCing
for you, it makes more sense. With the appropriate syntax, it's
neither harder nor easier than destructors to get right.
I think the difference though is that unlike with memory, there can
be huge benefits to having determinism for releasing other resources.
Like what? Consider an OS where all files are kept in memory by
actively-running processes that you communicate to via RPC. All disk
space is essentially swap. What's the benefit to having deterministic
reclamation of space there?
Actually, if you imagine a Turing machine, there isn't much need there
either. Unfortunately, such beasties are hard to come by.
But let's take your RPC example. Let's say that I'm using an RPC
service, and it is holding on to 400 petabytes of information for me.
Perhaps it'd be good to tell it to free that up before I ask it to
allocate a new 400 petabyte dataset for me, yes?
Or what other resource would you be speaking of?
Sockets, file descriptors, semaphores, mutexes, database connections,
MAC addresses, IP addresses, mlocked memory, disk space, ports,
requests, etc.
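To make the determinism point concrete for one item on that list: with a
scoped guard, a mutex is released at a known line, not whenever a
collector or finalizer gets around to it. A minimal sketch (names
hypothetical):

```cpp
#include <cassert>
#include <mutex>

std::mutex g_m;

void worker() {
    std::lock_guard<std::mutex> hold(g_m);  // acquired exactly here
    // ... critical section ...
}                                           // released exactly here

// Because release is deterministic, code running right after worker()
// returns can rely on the mutex being free again.
bool lock_is_free() {
    if (g_m.try_lock()) { g_m.unlock(); return true; }
    return false;
}
```

With finalizer-based release, the equivalent check could fail for an
arbitrarily long window after worker() returned.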
BTW, GC doesn't necessarily mean "wait until I run out of resource to
do this."
Of course not. Most JVMs don't do that... of course it makes them less
deterministic...
There are systems with deterministic automatic GC, wherein (for
example) the liveness of values is tracked, and the compiler
essentially generates "destroy" statements for values as soon as the
code will no longer reference them.
I've seen such systems employed only in a research context. The joy of
I/O and preemptive scheduling (both of which aren't *always* needed, but
are often there) tends to make such techniques difficult to employ
across the board (although good memory managers will do escape analysis
to determine some of the good cases).
--Chris
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg