Thanks for all the links. I knew there were people wanting this, but I didn't quite get how big an issue it was.
Brian Dessent wrote:
> You're essentially trusting that all exception specifiers for every
> function in the program and *all* library code are always present and
> always correct which is a huge leap of faith that I don't think is
> supported by reality.

I agree that it won't be very useful initially, since lots of third-party code like Boost neither defines nor adheres to exception specifications 100% of the time (the STL may be guilty too). However, this is a catch-22: why not provide the mechanism for verifying exception specifications, so that these libraries can become fully compliant in future? It won't take much work, especially when you get a warning telling you exactly what to change, and 99% of the time the only thing needed is to add throw() clauses to declarations and definitions (a sketch of what that looks like is below). I bet you could get Boost and the STL compliant within a week even if they had zero throw() clauses to begin with.

Brian Dessent wrote:
> The general consensus is that doing this at compile time cannot give you
> anything useful, because the compiler can only see one compilation unit
> at a time and so the only thing it can know of all the downstream
> functions that are in other CUs is what is provided in the exception
> specifiers of the prototypes.

So, once the other CUs adhere to the exception specifiers in their prototypes, this becomes OK. I definitely intend all of my own CUs to adhere. Besides, you already rely on CUs doing what they say in every other respect (e.g. not modifying const-passed references), so why not rely on their exception specifiers as long as they say you can? And if only some CUs adhere (i.e. your own), that's by no means worse than none adhering, as long as you are aware that some don't.

> You should certainly look at EDoc++, mentioned in the above threads.

http://edoc.sourceforge.net/

This is definitely interesting, but our goals are a bit different. EDoc++ checks that no throw that actually exists in the code can propagate to somewhere it would terminate the app by going unhandled. It is not about enforcing what the code declares. For example, foo() throw() can call bar() throw(zug) as long as it doesn't actually throw a zug, or call anything that does (sketched in code below). I would like an environment where the above example is considered poor practice.

One thing I read there that concerned me was the claim that some compilers apply "pessimizations", such as not inlining functions with throw clauses, or placing try/catch blocks around calls to them. I hope those compilers aren't mainstream. Also, I don't think consideration of current bad implementations should factor into the decision: if people decide to use exception specifications more, these compilers will either get fixed or become slower.
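To make that pessimization concrete, here is a rough sketch of the conceptual transformation (f_as_emitted is just an illustrative name, not any particular compiler's output): C++03 requires a violated specifier to end in std::unexpected(), and a naive implementation gets that by wrapping the body.

#include <exception>  // std::unexpected

void f() throw();  // promises to throw nothing

// ...is conceptually compiled as if it were written:
void f_as_emitted()
{
    try {
        // original body of f()
    } catch (...) {
        std::unexpected();  // mandated when the specifier is violated
    }
}

It's that implied try/catch frame which makes some compilers refuse to inline such functions.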
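Returning to the foo()/bar() example, here is a minimal sketch of the distinction (the empty zug type and the fail parameter are filled in by me so the example compiles). This passes an EDoc++-style analysis, since no zug ever propagates at runtime, yet it is exactly what I'd want a strict specifier check to flag:

struct zug {};

int bar(bool fail) throw(zug)
{
    if (fail)
        throw zug();
    return 0;
}

int foo() throw()        // promises to throw nothing
{
    return bar(false);   // never throws in practice, but bar's specifier
                         // says it may throw zug, so a strict check
                         // complains here
}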
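And as for what "adding throw() clauses" amounts to in practice, a minimal sketch (my_strlen is a made-up function for illustration). The point is that the declaration and the definition both carry the specifier, so the checker can verify callers in other CUs against the header alone:

// before: unsigned my_strlen(const char *s);

// after, in the header:
unsigned my_strlen(const char *s) throw();

// and in the implementation file, matching the declaration:
unsigned my_strlen(const char *s) throw()
{
    unsigned n = 0;
    while (s[n])
        ++n;
    return n;
}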
So overall:

- Lots of people request it (for good reasons or bad). Even the bad reasons mean there's an advantage in providing this as an option, i.e. showing them that it doesn't do what they want.
- Nobody has yet shown me any fundamental problem with implementing this, apart from typedefs of function pointers not being allowed exception specifications, which, while annoying, can be worked around (see the last sketch below).
- It helps code comply with what it says (if you read the throw() clause in what seems to be its spirit: as an indication of what may be thrown).
- It's fully optional, and off by default.
- The only real issue is that it will complain about third-party code that doesn't yet "comply". But by the same token it helps third-party code become "compliant" if the authors desire it to. There's also the possibility of ignoring calls to functions declared inside a whitelist of headers like <vector> to reduce this further.
- GCC probably doesn't do the above pessimizations (I guess I'll have to look into this).

Where's the downside?
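For reference, the typedef limitation looks like this (handler and cb are illustrative names). C++03 forbids an exception-specification in a typedef declaration (15.4p1, if I read it right), so the specifier has to live on a pointer declarator instead:

typedef void (*callback)() throw();   // ill-formed: an exception-
                                      // specification may not appear in
                                      // a typedef declaration

// workaround: carry the specifier on the pointer declaration itself
void handler() throw();
void (*cb)() throw() = &handler;      // OK: cb may only point at
                                      // functions that throw nothing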