OK, I sort of get it, at a very high level, although I still feel this is wildly out of my league.
I guess I should try it first. ;-)
It's not unlike David Mertz's articles on implementing coroutines and multitasking using generators, except that I'm adding more "debugging sugar", if you will, by making the tracebacks look normal. It's just that the *how* requires me to pass the traceback into the generator. At the moment, I accomplish that by doing a 3-argument raise inside 'events.resume()', but it would be really nice to be able to get rid of 'events.resume()' in a future version of Python.
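Here's a rough sketch of the shape of the thing (not the actual events.resume() code: the 'task' and 'resume' names are made up, and generator.throw() is standing in for the 3-argument raise). The point is just that the error surfaces *inside* the paused generator, so the traceback points at the yield where the task was waiting, as if the failing call had been made inline:

    def task():
        try:
            yield "wait for I/O"            # suspend until the driver resumes us
        except IOError as exc:
            # The traceback points at the yield above, as if the failing
            # call had been made right here.
            print("I/O failed where we were waiting:", exc)

    def resume(gen, error=None):
        """Resume 'gen' normally, or deliver 'error' into it at the yield."""
        if error is None:
            return next(gen)
        return gen.throw(error)

    g = task()
    next(g)                                 # run the task up to its first yield
    try:
        resume(g, IOError("simulated failure"))
    except StopIteration:
        pass                                # the task handled the error and finished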
> Of course, it seems to me that you also have the problem of adding to the
> traceback when the same error is reraised...
I think when it is re-raised, no traceback entry should be added; the place that re-raises it should not show up in the traceback, only the place that raised it in the first place. To me that's the essence of re-raising (and I think that's how it works when you use raise without arguments).
I think maybe I misspoke. I meant adding to the traceback *so that* when the same error is reraised, the intervening frames are included rather than lost.
In other words, IIRC, the traceback chain is normally extended by one entry for each frame the exception escapes. However, if you start hiding the traceback inside the exception instance, you'll have to modify the exception itself instead of just modifying the thread state. Does that make sense, or am I missing something?
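A tiny self-contained illustration of what I mean about the chain growing (nothing generator-specific here): the chain picks up one entry per frame the exception escapes, so the intervening frames are recorded even though only the innermost function raised.

    import traceback

    def inner():
        raise ValueError("boom")         # the original raise site

    def middle():
        try:
            inner()
        except ValueError:
            raise                        # bare re-raise: same exception, same origin

    def outer():
        middle()                         # just another frame the exception escapes

    try:
        outer()
    except ValueError:
        traceback.print_exc()            # entries for inner, middle and outer all appear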
> For that matter, I don't see a lot of value in
> hand-writing new objects with resume/error, instead of just using a generator.
Not a lot, but I expect that there may be a few, like an optimized version of lock synchronization.
My point was mainly that we can err on the side of caller convenience rather than callee convenience if there are fewer implementations. So, e.g., requiring multiple methods isn't a big deal if it makes the 'block' implementation simpler, since only generators and a handful of special template objects are going to need to implement the block API.
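To make the lock example concrete, here is roughly what such a hand-written template might look like. This is purely a sketch: the __resume__/__error__ spelling is the one floated below, and ending the block via StopIteration just mirrors what a generator would do.

    import threading

    class locking:
        """Hypothetical hand-written block template: acquire a lock around
        the block body, without going through a generator."""

        def __init__(self, lock):
            self.lock = lock
            self.entered = False

        def __resume__(self, value=None):
            if not self.entered:
                self.entered = True
                self.lock.acquire()      # entering the block
                return None              # value bound by 'block ... as x', if any
            self.lock.release()          # second resume: the body finished normally
            raise StopIteration          # same exhaustion convention a generator would use

        def __error__(self, exc):
            if self.entered:
                self.lock.release()      # body raised: clean up, then let it propagate
            raise exc

    # Driving it by hand, the way the block statement presumably would:
    my_lock = threading.Lock()
    tmpl = locking(my_lock)
    tmpl.__resume__()                    # enter: lock acquired
    try:
        print("held inside the block?", my_lock.locked())   # True
    except Exception as exc:
        tmpl.__error__(exc)
    else:
        try:
            tmpl.__resume__()            # leave: lock released
        except StopIteration:
            pass
    print("held after the block?", my_lock.locked())        # False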
> So, I guess I'm thinking you'd have something like tp_block_resume and
> tp_block_error type slots, and generators' tp_iter_next would just be the
> same as tp_block_resume(None).
I hadn't thought much about the C-level slots yet, but this is a reasonable proposal.
Note that it also doesn't require a 'next()' builtin, or a next vs. __next__ distinction, if you don't try to overload iteration and templating. The fact that a generator can be used for templating doesn't have to imply that any iterator should be usable as a template, or that the iteration protocol is involved in any way. You could just have __resume__/__error__ matching the tp_block_* slots.
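For instance, a generator would only need a trivial adapter to speak that protocol, with __resume__(None) being nothing more than "advance the generator", mirroring the tp_block_resume(None) == tp_iter_next observation. This is only a sketch; send() and throw() stand in here for whatever the tp_block_* slots would actually do, and none of the names are settled.

    import threading

    class block_template:
        """Hypothetical adapter giving any generator the __resume__/__error__
        protocol."""

        def __init__(self, gen):
            self.gen = gen

        def __resume__(self, value=None):
            return self.gen.send(value)      # __resume__(None) == "advance the generator"

        def __error__(self, exc):
            return self.gen.throw(exc)       # deliver the error at the yield

    def locking(lock):
        # The same locking template as above, this time written as a generator.
        lock.acquire()
        try:
            yield                            # the block body runs here
        finally:
            lock.release()

    my_lock = threading.Lock()
    tmpl = block_template(locking(my_lock))
    tmpl.__resume__()                        # enter the block: lock acquired
    print("held inside the block?", my_lock.locked())        # True
    try:
        tmpl.__resume__()                    # leave the block: lock released
    except StopIteration:
        pass
    print("held after the block?", my_lock.locked())         # False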
Keeping templating separate from iteration also has the benefit of making the delineation between template blocks and for loops more concrete. For example, this:
    block open("filename") as f:
        ...
could be an immediate TypeError (due to the lack of a __resume__) instead of biting you later on in the block when you try to do something with f, or because the block is repeating for each line of the file, etc.
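Roughly the kind of up-front check I have in mind (begin_block and null_template are made-up names, and the real check would live in the interpreter; the failure mode is the point):

    def begin_block(template):
        # Reject anything that doesn't speak the (hypothetical) block
        # protocol before the body ever runs.
        if not hasattr(template, '__resume__'):
            raise TypeError("%r is not a block template (no __resume__)"
                            % (template,))
        return template.__resume__()         # first resume: enter the block

    class null_template:
        """Trivial template that does nothing on entry."""
        def __resume__(self, value=None):
            return None

    begin_block(null_template())             # accepted

    f = open(__file__)
    try:
        begin_block(f)                       # a file iterates line by line, but it
    except TypeError as exc:                 # is *not* a block template
        print(exc)
    finally:
        f.close()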