[Guido]
> > I'm not sure what the relevance of including a stack trace would be,
> > and why that feature would be necessary to call them coroutines.
[Phillip]
> Well, you need that feature in order to retain traceback information when
> you're simulating threads with a stack of generators. Although you can't
> return from a generator inside a nested generator, you can simulate this by
> keeping a stack of generators and having a wrapper that passes control
> between generators, such that:
>
>     def somegen():
>         result = yield othergen()
>
> causes the wrapper to push othergen() on the generator stack and execute
> it. If othergen() raises an error, the wrapper resumes somegen() and
> passes in the error. If you can only specify the value but not the
> traceback, you lose the information about where the error occurred in
> othergen().
>
> So, the feature is necessary for anything other than "simple" (i.e.
> single-frame) coroutines, at least if you want to retain any possibility of
> debugging. :)
OK. I think you must be describing continuations there, because my brain just exploded. :-)
Probably my attempt at a *brief* explanation backfired. No, they're not continuations or anything nearly that complicated. I'm "just" simulating threads using generators that yield a nested generator when they need to do something that might block waiting for I/O. The pseudothread object pushes the yielded generator-iterator and resumes it. If that generator-iterator raises an error, the pseudothread catches it, pops the previous generator-iterator, and passes the error into it, traceback and all.
The net result is that as long as you use a "yield expression" for any function/method call that might do blocking I/O, and those functions or methods are written as generators, you get the benefits of Twisted (async I/O without threading headaches) without having to "twist" your code into the callback-registration patterns of Twisted. And, by passing in errors with tracebacks, the normal process of exception call-stack unwinding combined with pseudothread stack popping results in a traceback that looks just as if you had called the functions or methods normally, rather than via the pseudothreading mechanism. Without that, you would only get the error context of 'async_readline()', because the traceback wouldn't be able to show who *called* async_readline.
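The pseudothread loop described above can be sketched roughly as follows. This is not the original code; it is a minimal illustration (with hypothetical names) written using the send()/throw() generator methods that PEP 342 later standardized, where throw() carries the traceback along with the exception:

```python
import types

def run_pseudothread(gen):
    """Drive a stack of nested generators, routing values and errors
    between them so tracebacks read as if they were ordinary calls."""
    stack = [gen]
    value = None
    exc = None
    while stack:
        current = stack[-1]
        try:
            if exc is not None:
                pending, exc = exc, None
                yielded = current.throw(pending)  # traceback rides along
            else:
                yielded = current.send(value)
                value = None
        except StopIteration as stop:
            # Generator finished: pop it, pass its result to the caller.
            stack.pop()
            value = stop.value
            continue
        except BaseException as e:
            # Generator failed: pop it and rethrow into the caller below.
            stack.pop()
            if not stack:
                raise  # no caller left; let the error escape
            exc = e
            continue
        if isinstance(yielded, types.GeneratorType):
            stack.append(yielded)  # nested "call": push and run it
            value = None
        else:
            value = yielded        # plain value: resume with it
    return value

# Demo: an error raised two frames deep is caught by the outer generator.
def inner():
    raise ValueError("boom")
    yield  # unreachable; makes this a generator

def outer():
    try:
        yield inner()
    except ValueError as e:
        return "caught " + str(e)

result = run_pseudothread(outer())
```

Because the error is thrown (not merely passed as a value), the traceback seen inside outer() still points at the raise site in inner().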
In Python 3000 I want to make the traceback a standard attribute of Exception instances; would that suffice?
That would suffice if you're planning to make 'raise' reraise it, such that 'raise exc' is equivalent to 'raise type(exc), exc, exc.traceback'. Is that what you mean? (i.e., just making it easier to pass the darn things around)
If so, then I could probably do what I need as long as there exist no error types whose instances disallow setting a 'traceback' attribute on them after the fact. Of course, if Exception provides a slot (or dictionary) for this, then it shouldn't be a problem.
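For reference, this is essentially what Python 3 ended up doing: every exception instance carries its traceback as the `__traceback__` attribute, and a bare `raise exc` re-raises with that traceback attached. A small demonstration:

```python
import traceback

def fail():
    raise ValueError("original failure")

def capture():
    try:
        fail()
    except ValueError as exc:
        return exc  # the traceback rides along on the instance

exc = capture()
assert exc.__traceback__ is not None  # set automatically on raise

# Re-raising later still shows where the error originated:
try:
    raise exc
except ValueError as again:
    frames = traceback.extract_tb(again.__traceback__)
    assert any(f.name == "fail" for f in frames)
```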
Of course, it seems to me that you also have the problem of adding to the traceback when the same error is reraised...
All in all it seems more complex than just allowing an exception and a traceback to be passed.
I really don't want to pass the whole (type, value, traceback) triple that currently represents an exception through __next__().
The point of passing it in is so that the traceback can be preserved without special action in the body of generators the exception is passing through.
I could be wrong, but it seems to me you need this even for PEP 340, if you're going to support error management templates, and want tracebacks to include the line in the block where the error originated. Just reraising the error inside the generator doesn't seem like it would be enough.
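To make the point concrete, here is a small illustration (hypothetical template, written with the `throw()` method PEP 342 later added) showing that throwing the exception into the generator preserves the original raise site in the traceback, with no special action in the generator body:

```python
import traceback

def template():
    # An "error management template": it sees the exception raised in
    # the block, complete with the frame where the error originated.
    try:
        yield
    except ZeroDivisionError as e:
        tb_names = [f.name for f in traceback.extract_tb(e.__traceback__)]
        yield tb_names

gen = template()
next(gen)              # advance to the first yield
try:
    1 / 0              # the error originates in the "block"
except ZeroDivisionError as e:
    names = gen.throw(e)  # throw() carries e's traceback into template()
```

The traceback seen inside template() includes both the module-level raise site and the template's own frame.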
> > An alternative that solves this would be to give __next__() a second
> > argument, which is a bool that should be true when the first argument
> > is an exception that should be raised. What do people think?
>
> I think it'd be simpler just to have two methods, conceptually
> "resume(value=None)" and "error(value, tb=None)", whatever the actual
> method names are.
Part of me likes this suggestion, but part of me worries that it complicates the iterator API too much.
I was thinking that maybe these would be a "coroutine API" or "generator API" instead. That is, something not usable except with generator-iterators and with *new* objects written to conform to it. I don't really see a lot of value in making template blocks work with existing iterators. For that matter, I don't see a lot of value in hand-writing new objects with resume/error, instead of just using a generator.
So, I guess I'm thinking you'd have something like tp_block_resume and tp_block_error type slots, and generators' tp_iter_next would just be the same as tp_block_resume(None).
But maybe this is the part you're thinking is complicated. :)
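The two-method API being discussed can be sketched as a thin wrapper (hypothetical names) over what generators later exposed as send() and throw() under PEP 342 — resume(None) behaves exactly like plain iteration, so the iterator protocol stays untouched:

```python
class Coroutine:
    """Sketch of a 'coroutine API' object: resume() and error()."""

    def __init__(self, gen):
        self._gen = gen

    def resume(self, value=None):
        # resume(None) on a fresh generator is equivalent to next(gen)
        return self._gen.send(value)

    def error(self, exc):
        # raise exc at the suspended yield, traceback and all
        return self._gen.throw(exc)

def averager():
    total, count = 0, 0
    while True:
        value = yield (total / count if count else None)
        total += value
        count += 1

co = Coroutine(averager())
co.resume()            # prime: advance to the first yield
co.resume(10)
avg = co.resume(20)

# error() raises inside the suspended frame; averager() doesn't catch
# ValueError, so it propagates back out to us here.
try:
    co.error(ValueError("stop"))
except ValueError:
    pass
```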
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev