On Thu, 06 May 2010 21:56:10 -0700, Chris Rebert wrote:

[...]

> Output from example:
> Traceback (most recent call last):
>   File "tmp.py", line 35, in <module>
>     multiple_err.do_raise()
>   File "tmp.py", line 25, in do_raise
>     raise self
> __main__.MultipleValidationErrors: See the following exception
> tracebacks:
> ==============================================================================
> Traceback (most recent call last):
>   File "tmp.py", line 32, in <module>
>     int(c)  # obviously fails
> ValueError: invalid literal for int() with base 10: 'h'
>
>
> Traceback (most recent call last):
>   File "tmp.py", line 32, in <module>
>     int(c)  # obviously fails
> ValueError: invalid literal for int() with base 10: 'e'

[...]
That's a nice trick, but I'd really hate to see it in real-life code, especially when each traceback is deep rather than shallow. Imagine having fifty errors, each one ten or twenty levels deep!

I also question how useful this would be in practice. Errors can cascade, so you would very likely get spurious errors caused by the first error. You see the same thing in doctests, e.g. if you do this:

>>> x = 42/0  # oops, I meant 10
>>> y = x + 1

not only do you get a legitimate ZeroDivisionError, but then you also get a spurious NameError, because x was never bound. (This is, I believe, a weakness of doctests.)

Also, unless I'm badly mistaken, don't traceback objects interfere with garbage collection? So keeping all those tracebacks around for some indefinite time could be very expensive.

-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list
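The cascading-failure point above can be sketched in runnable form. This is a hypothetical harness, not code from the original example: the `statements` list and the error-collecting loop are illustrative. The second statement fails only because the first one never ran to completion, producing exactly the kind of spurious follow-on error described.

```python
# Sketch of cascading failures: the first statement has the real bug,
# the second fails spuriously as a consequence of the first.
errors = []

statements = [
    "x = 42/0",   # the real bug: ZeroDivisionError
    "y = x + 1",  # spurious follow-on: NameError, since x was never bound
]

namespace = {}
for stmt in statements:
    try:
        exec(stmt, namespace)
    except Exception as e:
        errors.append(type(e).__name__)

print(errors)  # ['ZeroDivisionError', 'NameError']
```

An aggregating exception that collects both would report two errors where only the first is worth reading, which is the objection raised above.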
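A minimal sketch of the garbage-collection concern, assuming Python 3, where an exception's `__traceback__` attribute pins every frame in the call chain, together with each frame's local variables. The `leaky` function, `big_local`, and the `saved` list are all illustrative names, not from the original code:

```python
import traceback

def leaky():
    # Held alive for as long as the stored traceback exists.
    big_local = list(range(100_000))
    raise ValueError("boom")

saved = []
try:
    leaky()
except ValueError as err:
    saved.append(err)  # err.__traceback__ pins leaky()'s frame

# The frame for leaky() is still reachable through the traceback chain,
# so its locals (including big_local) cannot be reclaimed.
tb = saved[0].__traceback__
frame_locals = tb.tb_next.tb_frame.f_locals
print("big_local still alive:", "big_local" in frame_locals)

# One mitigation: keep the formatted text, then release the frames.
text = "".join(traceback.format_exception(
    type(saved[0]), saved[0], saved[0].__traceback__))
traceback.clear_frames(saved[0].__traceback__)
```

Keeping fifty such exceptions around, each with a ten- or twenty-frame traceback, multiplies this cost accordingly, which is why storing formatted text instead of live traceback objects is the usual workaround.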