On Thu, Jan 5, 2017 at 2:31 AM, Stefan Behnel <[email protected]> wrote:
> Hi Robert!
>
> Robert Bradshaw wrote on 04.01.2017 at 07:46:
>> By default a cdef (or cpdef) function returning a C type suppresses all
>> exceptions, which is quite surprising to new (and old) users, and
>> makes debugging difficult. I would propose that we make exception
>> propagation the default.
>>
>> The primary motivation not to do this would be fear of a performance
>> regression, especially for external functions that can't possibly
>> raise exceptions. While the catch-all "except *" does seem to have
>> non-trivial overhead, the "except? value" version looks
>> indistinguishable from no exception checking at all (in tight loops,
>> yay branch prediction; I'd be interested in what other people see
>> [1]).
>>
>> Does this sound like a reasonable proposal?
>
> Sounds good to me.
>
>> There are still some open questions, e.g.
>>
>> - What value should we pick? We want something unlikely, but
>> recognizable to external code (e.g. 0xBadBad).
>
> Depends on the value type, obviously. Seems like there should be one
> error value per C type, e.g.
>
> pointer: NULL? (or maybe 1?)
> char: 0xEF
> short: 0xBad
> int/long/etc.: 0xBadBad

It'd be nice if it were consistent, e.g. (type)0xBadBad when possible,
but I think we'll need to do this per type.
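For concreteness, here is roughly what the two behaviours look like side
by side today (the function names are just for illustration, and the
concrete sentinel value is exactly the open question):

    # Status quo: an exception raised here is printed as "unraisable"
    # and swallowed; the caller just gets a default value back.
    cdef int f(int x):
        if x < 0:
            raise ValueError("negative input")
        return 2 * x

    # Explicit opt-in as it exists today; the proposal is to make
    # something like this the default, with a per-type sentinel
    # instead of a hand-picked -1.
    cdef int g(int x) except? -1:
        if x < 0:
            raise ValueError("negative input")  # propagates to the caller
        return 2 * x

As far as I can tell, the "except?" form only costs the caller a compare
against the sentinel, plus a PyErr_Occurred() check in the (rare) case
where the sentinel is also a legitimate return value--presumably why it
doesn't show up in the tight-loop timings.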
> What about C++ in general? Should we also default to catching C++
> exceptions? (Separate proposal, but IIRC that's also still an open
> issue?)

I think the primary surprise is from Python users who make (or convert) a
couple of c[p]def functions and are surprised by the change in behavior.
I hadn't thought about wrapping every single C function call in a
try...catch statement. I wonder what the overhead (time and space) of
that would be.
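For comparison, this is a rough sketch of what opting in per function
already looks like for declared C++ functions (the header and function
names here are made up, just to illustrate the mechanics):

    # distutils: language = c++

    cdef extern from "mylib.h":
        # "except +" makes Cython wrap each call in try/catch and
        # re-raise the C++ exception as a Python one
        # (e.g. std::invalid_argument -> ValueError).
        int compute(int x) except +

    def call_compute(int x):
        return compute(x)

Defaulting to that in C++ mode would mean emitting that try/catch (plus
the C++-to-Python error translation) around every external call, which
is the overhead I'm wondering about.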
>> - What about non-numeric/pointer/reference types; are there other
>> sentinel values we could use?
>
> I can't see a way to handle struct/union. Well, we could handle a union
> that involves an integer, but returning a plain union is fairly unlikely
> to occur in practice, so the main thing here is structs (and
> pass-by-value C++ objects, I guess).
>
>> - Should we default to except * in that case? Presumably these are
>> likely to be more expensive functions anyway (making the relative
>> cost lower).
>
> Depends on how the C compiler implements them. If it inlines a function
> that returns a struct (e.g. because the function is only called in one
> place), the overhead of "returning" that struct would become negligible
> and the exception check overhead could easily show up.
>
> But I agree with your intuition that returning something non-trivial
> from a function suggests a non-trivial implementation of the function.
>
>> - Or could we use, essentially, reinterpret_cast to get a possible
>> error value? This could work for PODs (including C structs), but
>> destructors could wreak havoc here.
>
> A solution for C would already help, regardless of anything we could or
> could not do for C++.

We need to consider C++ though, especially as things declared as structs
may actually be C++ classes, or contain C++ class members.

>> - Should we add syntax (e.g. "except -") to manually obtain the old
>> behavior?
>
> Question is: when would we want that? Should we really trade the weird
> WriteUnraisable() behaviour for speed here? I think an unraisable
> exception is always a bug, even if it doesn't appear in practice. If
> there's not enough memory, exceptions can occur in any place, and
> shadowing them in unexpected places can do arbitrary harm to your
> system, especially if it doesn't crash but continues running in an
> inconsistent state.

There is the case where the function in question is a callback with no
way to report errors--one might prefer to at least print something.

>> Or should that be done by surrounding the entire block with a
>> try/bare-except statement? This wouldn't catch argument-parsing
>> errors.
>
> Argument parsing errors cannot occur for cdef functions and C-level
> calls to cpdef functions, as they would appear on the caller side,
> before even calling the function. For Python-level calls, I don't think
> we need to care.
>
>> (Another crazy idea, though I'm not sold on it, is lifting an initial
>> try or with block that spans the full function body to above the
>> argument parsing--this could simplify the gil syntax as well.)
>
> This is unprecedented in Python. If you pass positional arguments into a
> function that only accepts keyword arguments, for example, you will get
> a TypeError regardless of what your function body looks like.
>
> We could restrict it to argument type conversion, but that would still
> make things less understandable.

Good point about argument parsing not being needed for cdef functions.
Given the other issues, I agree it's best not to do this.

> My preference (for now) would be to switch and wait for real-world
> legacy use cases to show up.

Sounds good.

_______________________________________________
cython-devel mailing list
[email protected]
https://mail.python.org/mailman/listinfo/cython-devel
