On May 13, 2011, at 10:26 AM, Jed Brown wrote:

> On Fri, May 13, 2011 at 17:03, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > BTW, writing a REALLY good destructor in C++ is a
> > pain. What happens if an exception is thrown in the middle of object
> > destruction and you still have things to delete[]?
> 
>   Then I would argue that something is wrong with C++ in this regard.
> 
> Perhaps, but the larger problem is independent of language and independent of 
> parallelism. Say you have several lock files open. The locks have to be 
> released in a specific order. When you try to close one, an exception/error 
> is returned. What do you do? If you exit early, then locks remain that will 
> cause problems in the future. If you ignore the error, perhaps something 
> really bad could happen (corruption?). There isn't any general-purpose rule 
> that can be applied when there are errors in destructors. PETSc has chosen 
> that no integrity is guaranteed if a function returns an error. If you want 
> to maintain integrity, then you basically have no choice but to say that 
> exceptions in destructors are BAD. You can do this by having the caller not 
> check error codes (or try: destroy(); except: pass) or by preventing 
> destructors from returning errors.
> 
> It's a similar issue to why PETSc does not check for errors in its own error 
> handlers.

   I agree it is an exceedingly hard problem to solve in general; I am not 
claiming I know how to solve it or even what a good solution is. My comment is 
only that they punted on the problem (just like PETSc punts on the problem).
