Mike Small wrote on Thu, Mar 18, 2010 at 01:07:52PM -0400:
> On Thu, Mar 18, 2010 at 11:40:40AM -0400, Martin Cracauer wrote:
> ...
> > > This is the reason why coding standards for systems with very
> > > high uptime requirements often disallow throwing exceptions.
> > > This can extend to disallowing use of libraries that throw (or
> > > taking pains to configure libs so that they do not).
> > 
> > Wouldn't that mean "don't let exceptions escape out of the
> > library"?  Not that I like either.
> > 
> > In my mind, not using exceptions is almost a guarantee that there
> > are resource leaks.  The only reason why this can halfway work
> > today is that we have humongous amounts of virtual memory and can
> > have thousands of file descriptors.
> 
> Heh, and using exceptions is also a guarantee that there are
> resource leaks unless the programmer is well disciplined with RAII,
> has garbage collection (which only helps with memory resources), or
> reference-counted smart pointers.
Resources are more than memory.  In the age of 64-bit virtual memory,
memory leaks are actually a comparatively minor issue.  File
descriptors not closed, files not synced, TCP connections left
hanging, and messing up the list of forked children that you expect
to return are much more severe issues.  Those resources are subject
to the same limits as ever.

Fighting those issues in long-uptime programs is much easier with
exceptions than without.  Forbidding them makes no sense, except
specifically if you think your programmers are too dumb.

Ironically, C++ of all languages is better off here since it has the
same mechanism for guaranteed cleanup whether you use exceptions or
not: in both cases you use destructors in stack-based objects.  That
is the reason why C++ doesn't have a finally statement, BTW.
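To make this concrete, here is a minimal sketch (the FdGuard name and
the open()/close() pairing are just an example I made up):

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdexcept>

    // RAII guard: owns a file descriptor, closes it in the destructor.
    class FdGuard {
        int fd_;
        FdGuard(const FdGuard &);             // not copyable:
        FdGuard &operator=(const FdGuard &);  // avoids a double-close
    public:
        explicit FdGuard(const char *path) : fd_(open(path, O_RDONLY)) {
            if (fd_ < 0)
                throw std::runtime_error("open failed");
        }
        ~FdGuard() { close(fd_); }  // runs on return *and* on throw
        int fd() const { return fd_; }
    };

    void process(const char *path) {
        FdGuard in(path);
        // ... use in.fd(); however this scope is left, by normal
        // return or by an exception, ~FdGuard() closes the descriptor.
    }

The same destructor gives you the cleanup in exception-free code,
too, which is exactly why a finally statement would be redundant.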
> And even then they have to have designed well enough not to forget
> that they have some collection at top level or some global singleton
> that never releases a reference to some huge tree of objects.  Too
> often it seems enough to say only, "there are resource leaks."  I
> guess you folks are probably reading better code than I am.  Still,
> tell me you haven't talked to people who think it's perfectly
> normal, acceptable practice to run servers with a wrapper of some
> kind that kills them and restarts them periodically.

Then you are not talking about a long-uptime program.

> But for safety critical projects, the no-rtti, no-exceptions rule
> is pretty common, isn't it?  Not sure if it's justified or not, but
> very common from what I hear.  Something to do with code
> verification, provability?  Not that that gives any lessons for
> application code in other domains, necessarily.

No rtti means casting things around without language support.

No exceptions means deeper nesting of if statements and/or lots of
if-return pairs.  And it means infecting functions that sit somewhere
in the call tree between what would be the thrower and what would be
the catcher with additional logic to deal with early returns, when
they actually have no interest in either raising or dealing with that
error.  This screws up code readability even more than jumping
through those functions.  It is also not change-tolerant, since
people might insert new function calls and might never actually see
the raiser or catcher, and so have no clue about the set of possible
exceptional cases that loom below.
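To illustrate the infection (a made-up three-function chain; the
names and config paths are invented for the example):

    #include <cstdio>

    struct Config { int verbosity; };

    // The would-be thrower: with error codes it can only return a status.
    int read_config(const char *path, Config *out) {
        std::FILE *f = std::fopen(path, "r");
        if (!f)
            return -1;
        out->verbosity = 1;  // pretend to parse something
        std::fclose(f);
        return 0;
    }

    // The innocent middleman: has no interest in the error itself,
    // yet every call grows a check-and-forward pair.
    int load_profile(Config *cfg) {
        int err = read_config("/etc/app.conf", cfg);
        if (err != 0)
            return err;  // forwarding logic this function never wanted
        err = read_config("/home/user/.apprc", cfg);
        if (err != 0)
            return err;  // ...and again for every call added later
        return 0;
    }

    // The would-be catcher: the only place that actually cares.
    int main() {
        Config cfg;
        if (load_profile(&cfg) != 0)
            std::fprintf(stderr, "config error\n");
        return 0;
    }

With exceptions, load_profile() would be two plain calls, and only
read_config() and main() would mention the error at all.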
The rule should be to only let programmers who know what they are
doing write safety-critical code.

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <[email protected]>   http://www.cons.org/cracauer/
FreeBSD - where you want to go, today.   http://www.freebsd.org/

_______________________________________________
Boston-pm mailing list
[email protected]
http://mail.pm.org/mailman/listinfo/boston-pm