Darren New wrote:
Christopher Smith wrote:
What he means (well, I'm not sure that's what he means, but it's what it *should* mean) is that it matters if the derived class has any memory allocated with new()/malloc(), or really any resources at all that need to be cleaned up in the subclass's destructor.
My understanding of the problem is that if you create a subclass that has member variables, you [new] an instance of that subclass, assign that pointer to a pointer-to-parent, and then [destroy] the parent, you will wind up calling the parent's destructor, which (a) won't clean up child memory and (b) won't necessarily even deallocate all the memory allocated.
So, neither (a) nor (b) is certain. Given:

class Foo {
public:
  virtual void baz();
  ~Foo();              // note: not virtual
private:
  // stuff
};

class Bar : public Foo {
  int var;
public:
  void baz();
};

Foo* ptr = new Bar();
delete ptr; //cleans things up just fine.

The key thing the compiler *doesn't* do in this case is invoke Bar::~Bar(); it instead invokes Foo::~Foo() (the idea being that if Foo::~Foo() *were* virtual, then the vtable lookup would make you actually invoke Bar::~Bar()). Fortunately, Bar::~Bar() doesn't do anything, so it doesn't matter. You might think that delete ptr fails to free up the memory allocated for Bar correctly, but in point of fact, on implementations where new is built on malloc(), you could cast ptr to a void* and invoke free() and it would do fine, because C++'s heap is essentially untyped memory. It's just blobs of void*'s of certain sizes.

But wait, that's not all! :-) It's also possible that you know that you will *never* invoke Foo::~Foo(). Maybe you've done the visitor pattern and implemented a visitor that actually invokes the delete. Maybe you've templated everything to death so that at destruction time you always know the exact type of the object you are destroying. In those cases, even if Bar::~Bar() is defined *and* does something important, it still doesn't matter that Foo::~Foo() isn't virtual. Then there is the truly weird case where, for whatever reason, even though Bar::~Bar() does something, you know you want to invoke Foo::~Foo() but not Bar::~Bar() (weird, and should raise a red flag, but still possible).

Having this flexibility is helpful, but it is also helpful to have the compiler warn you when you are starting to wander off into dangerous territory, so a human can check and make sure that it all makes sense.
Hence, the compiler warning would seem to be "Hey, you've said you've written code I don't know about, and if it uses any memory, you're screwed." Wouldn't it be better to make that an error, really?

It's not like there's anything you can do to *not* make it an error, given that it's code you haven't written yet, and perhaps even code you aren't even the author of.
Classes can and do have rules for their subclasses. Not having a virtual destructor implies that the subclass can't really do any resource management of its own (or at least not resource management tied to its own lifetime). That isn't necessarily an error and might even be a good idea.

--Chris

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
