Christopher Smith wrote:
The third one clearly carries the day.
Sure, until you need the lifetime to not match the scope.
Which means you can have a resource that leaks until the GC gets its
job done (if ever).
... which is easy to fix, and really wouldn't even need a change in the
source if the runtime invoked the GC whenever you caught an exception.
so you have to add a test after object construction. Ick.
Err, no. What kind of test?
Well, you have one of two idioms you can go with in C#. You can either
a) have your constructor not throw exceptions, and instead it returns an
"invalid" object, so you then have to invoke some test method to
determine if you actually got something valid or b) let the constructor
throw exceptions, and then the caller has to figure out where the
exception occurred and what resources need to be freed up (lovely
encapsulation either way).
b) isn't any worse than C++. If a C++ constructor throws an exception,
you have to clean it up. Don't let the constructor throw exceptions
after it has created resources that it needs to clean up promptly. I.e.,
just like C++.
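To make the parallel concrete, here is a minimal C++ sketch (the `RawHolder` name and the counters are hypothetical, for illustration only): if a constructor throws after acquiring a resource in bare form, the object's own destructor never runs, so the constructor itself must release anything it grabbed before throwing.

```cpp
#include <stdexcept>

// Hypothetical counters standing in for a real resource, to show that a
// C++ destructor does not run when the constructor throws.
static int acquired = 0;
static int released = 0;

struct RawHolder {
    RawHolder() {
        ++acquired;                        // "resource" acquired bare
        throw std::runtime_error("boom");  // ~RawHolder() will NOT run
    }
    ~RawHolder() { ++released; }
};
```

Catching the exception at the call site leaves the acquire count at one and the release count at zero; holding the resource in an RAII member instead of acquiring it bare avoids the leak, which is the point being made.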
However, if you follow the RAII paradigm, you shouldn't have either
problem unless you have an object whose lifecycle isn't deterministic by
design.
Exactly. I.e., RAII is only guaranteed to clean up resources promptly
if you allocate such objects on the stack. My programs tend to be a
bit more complicated than that.
while finalizers do eventually get called.
Actually... they don't. In fact, if you don't have two GCs during the
execution of your program, your only hope is really Dispose methods.
I *think* C# guarantees to call them during clean program exit. But of
course if you call exit() from C++ you don't get your destructors
called either.
It gives it the ol' college try, but unfortunately some *other* random
object's finalizer can prevent it from getting the call.
I'm not sure what that means. Sure, it's *possible* to prevent a
finalizer from running in C#. But it's *possible* to prevent a
destructor from running in C++ too, so I'm not sure what your point is.
Don't do that if you don't want it.
It is also a *very* expensive operation as compared to a method
invocation. I'm kind of surprised that anyone would use it primarily
as a way to trigger finalizers.
One wouldn't. I'd use it to trigger finalizers in the event that I
forgot to use "using" or I otherwise failed to explicitly invoke a
disposal. Something one can't do in C++.
Lots of ways to do that in C++ actually.
Tell me how you ensure the destructor runs on an object you dynamically
allocated and forgot to call destroy on?
now you can construct Foo and have exceptions thrown while initializing
any of those members or in something_that_might_throw(), and all the
resources are cleaned up by the time the exception bubbles up to the
caller, all without Foo having to be aware of the particular workings of
its member variables.
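For reference, a sketch of the pattern being described (`Foo`, `Member`, and `something_that_might_throw()` are stand-ins for the earlier example): members that own their own resources are destroyed automatically when the constructor body throws, before the exception reaches the caller.

```cpp
#include <memory>
#include <stdexcept>

// Hypothetical member type whose destructor we can observe.
static int member_dtors = 0;
struct Member {
    ~Member() { ++member_dtors; }
};

void something_that_might_throw() { throw std::runtime_error("boom"); }

struct Foo {
    // Each member owns its own resource via RAII.
    std::unique_ptr<Member> a = std::make_unique<Member>();
    std::unique_ptr<Member> b = std::make_unique<Member>();

    // If this throws, the already-constructed members a and b are
    // destroyed automatically; Foo needs no knowledge of their workings.
    Foo() { something_that_might_throw(); }
};
```

Both members are torn down by the time the exception bubbles up, without `Foo` knowing how either one manages its resource.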
Yes. And you can do essentially the same thing in C# - put all the
resource allocation in objects whose constructors won't throw an
exception. It's mildly clunkier syntax (involving new and assignments),
but it's essentially the same. Certainly not different enough to make a
difference.
And since 95%+ of "resources" in C++ are memory allocations which *are*
handled automatically in C#, it's really not that big a deal.
Granted, people don't tend to do this, opting instead to just catch any
exception in the constructor and clean up, but you see it all through
the professional libraries.
When I said "generally you know this from the spec", I meant that
generally you know that your logging won't throw an exception, which is
why I didn't bother to smother it. Some would argue that if it does fail
then the best behavior is the ensuing core dump anyway. ;-)
Well, uh, *you* might think so. I write servers that are expected to run
indefinitely in the face of all kinds of underlying failures of service
providers.
If you don't want that, I'd argue you are misusing/abusing
inheritance anyway, and you have bigger fish to fry.
Oh, and *that* never happens in C++ ;-)
Honestly, I've never seen it.
OK. I've seen it, but admittedly primarily in tutorial-type programs.
and why you get compiler errors when an IDisposable is created outside
of a using statement. ;-)
You do? I never encountered that.
Obviously I forgot to insert my <sarcasm></sarcasm> tags.
Oh! No, you don't. But just because it has an IDisposable interface
doesn't mean you want LIFO semantics on the lifetime.
That would be like saying you couldn't heap-allocate any C++ object that
has a destructor.
Ah, a platform where all resources are automatically managed. Yes, that
is a nice thing, so long as you can have user defined resource
management heuristics as well.
Yes. They work very nicely, and surprisingly efficiently. Generally, the
optimization/management stuff is at a different conceptual level than
the code you write. (Think SQL statements vs specifying on which disk
various table indexes live.)
Like what? Consider an OS where all files are kept in memory by
actively-running processes that you communicate to via RPC. All disk
space is essentially swap. What's the benefit to having deterministic
reclamation of space there?
Actually, if you imagine a Turing machine, there isn't much need there
either. Unfortunately, such beasties are hard to come by.
Not *that* difficult. IBM SNA switches used to work that way, for
example. There are a number of current OSes that work that way, like
Eros, Hermes, and a couple others I'd have to look up the names of.
The problem with such a system is the same problem that APL and
Smalltalk and such have with popularity: they don't play well with
others, being their own little world. It's difficult to put a UNIX
emulation layer on top of a system that's capabilities-based and
massively parallelized.
But let's take your RPC example. Let's say that I'm using an RPC
service, and it is holding on to 400 petabytes of information for me.
Perhaps it'd be good to tell it to free that up before I ask it to
allocate a new 400 petabyte dataset for me, yes?
No, why? If it knows you've stopped using it when you stop having a
reference to it, why do you care when it gets freed?
Or what other resource would you be speaking of?
Sockets, file descriptors, semaphores, mutexes, database connections,
MAC addresses, IP addresses, mlocked memory, disk space, ports,
requests, etc.
Sockets (in these types of OSes) get reclaimed immediately when the last
reference goes away, unless they're sockets to other processes in the
same type of system (i.e., not TCP/IP sockets to "external" hosts). This
is because the system knows they're going off-host. I.e., for the same
reason that sockets get closed for you when your process exits in UNIX.
The rest are all concepts that are unnecessary in these systems,
since you don't do boundaries between users and processes in the same
way. Disk space, ports (not sure what kind of port you mean), file
descriptors, semaphores, etc are all things used to coordinate multiple
processes competing over some shared resource that is external to all
user-visible processes, and thus needs OS support to coordinate. I.e.,
in the systems I'm talking about, you no more have problems with
semaphores and mutexes than in UNIX you worry over locking the disk
space you're going to write to when you create a file.
I'm not 100% sure how you go about freeing a MAC address. ;-)
I've seen such systems employed only in a research context.
IBM SNA switches used to (maybe still do) use a system called NIL that
does just this. There's a research paper describing it if you hunt it
down, but it took me two or three days to track it down when I was in
grad school, so I can't imagine it's gotten any easier. :-)
--
Darren New / San Diego, CA, USA (PST)
It's not feature creep if you put it
at the end and adjust the release date.
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg