In article <53bae1db$0$29995$c3e8da3$54964...@news.astraweb.com>, Steven D'Aprano <steve+comp.lang.pyt...@pearwood.info> wrote:
> While I agree with the general idea that try blocks should be as narrow
> *as reasonable*, they shouldn't be as narrow *as possible* since one can
> start guarding against unreasonable things.

I'm more willing to accept multi-statement try blocks with multiple
except clauses when the things you're catching are very specific:

    try:
        foo.quack()
        bar.roar()
    except Foo.QuackError:
        print "OMG, can't quack"
    except Bar.RoarError:
        print "Yowza"

If you're catching generic things like IOError or ValueError, it's more
likely that there's some code path you didn't expect.

> The thing is, even if you catch these bizarre things, what are you going
> to do with them? If you can't do anything about it, there's no point
> catching the exception -- never catch anything you can't recover from, or
> otherwise handle. Just treat it as a fatal error and let it cause a
> traceback.

That I agree with. Of course, sometimes you do want to catch
*everything*. For example, in something like a web server, you want to
have something like

    try:
        do_request()
    except Exception:
        handle_exception()
    except:
        # WTF?  This should never happen, but deal with it anyway
        handle_exception()

way up at the top. Our handle_exception() logs a stack trace and
returns a 500-something HTTP response. The alternative is to have the
web server exit, which would be a Bad Thing. Well, actually, if that
happened, the gunicorn master process would catch that a worker exited
and restart it, but that would be slow. And if gunicorn exited, then
upstart would catch that, and restart gunicorn :-)
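For what it's worth, here's a rough sketch of what that kind of
top-level handler might look like in a bare WSGI app. The names here
(application, do_request, the log message) are just stand-ins, not our
actual code -- the point is only the shape: catch everything at the
top, log the traceback, hand back a 500 instead of letting the worker
die.

    import logging
    import sys
    import traceback

    log = logging.getLogger("webapp")

    def do_request(environ, start_response):
        # Placeholder for the real request dispatcher.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok']

    def application(environ, start_response):
        # Top-level catch-all: log a stack trace and return a 500
        # rather than letting the exception propagate and kill the
        # worker.
        try:
            return do_request(environ, start_response)
        except Exception:
            log.error("unhandled exception in request:\n%s",
                      traceback.format_exc())
            # Pass exc_info so WSGI lets us replace the status/headers
            # if they haven't been sent yet.
            start_response('500 Internal Server Error',
                           [('Content-Type', 'text/plain')],
                           sys.exc_info())
            return [b'Internal Server Error']

You'd run something like this under gunicorn as usual (e.g.
"gunicorn mymodule:application"); gunicorn and upstart still sit above
it as the slower, last-resort safety nets described above.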