Antoine Pitrou wrote on 23.08.2018 at 09:04:
> On Thu, 23 Aug 2018 08:07:08 +0200
> Jeroen Demeyer wrote:
>>> - the maintenance problem (how do we ensure we can change small things
>>>   in the C API, especially semi-private ones, without having to submit
>>>   PRs to Cython as well)
>>
>> Why don't you want to submit PRs to Cython?
>
> Because it forces a much longer cycle time when we want to introduce a
> change in the C API: first prototype the C API change, then notice it
> breaks Cython, then try to make a PR (which isn't trivial, given
> Cython's size), then wait for the PR to be merged and a new Cython to
> be released.
I think you can put that argument back into the attic. When CPython 3.6
and 3.7 came out, I swear I had already forgotten which new features they
provided, because we had implemented and released most of the major
features 6-12 months earlier in Cython (backported to Py2.6). And it has
happened more than once that we pushed out a point release within a few
days to fix a real need on the user side.

What I would rather like to see instead is that both of our sides try to
jointly discuss ideas for C-API changes, so that we don't even run into
the problem that changes made on one side surprisingly break the other.

Don't forget that the spark for this whole discussion was to make it
easier to change the C-API at all. Being able to change Cython in one
place and then adapt a whole bunch of real-world extensions out there by
simply regenerating their C code with it is a really cool feature.
Basically, it puts the ability to do that back into your own hands.

>> If you're saying "I don't want to wait for the next stable release of
>> Cython", you could use development versions of Cython for development
>> versions of CPython.
>
> But depending on the development version of a compiler isn't very
> enticing, IMHO.

In case that need arises, feel free to ask which git revision we
recommend for use in CPython. In the worst case, we can always create a
stable branch for you that makes sure we don't break your productivity
while we're doing our thing.

>>> - the debugging problem (Cython's generated C code is unreadable,
>>>   unlike Argument Clinic's, which can make debugging annoying)
>>
>> Luckily, you don't need to read the C code most of the time. And it's
>> also a matter of experience: I can read Cython-generated C code just
>> fine.
>
> Let's be serious here. Regardless of the experience, nobody enjoys
> reading / stepping through code like the following:

Ok, you posted generated C code, let's read it together.
> __Pyx_TraceLine(206,0,__PYX_ERR(1, 206, __pyx_L1_error))

This shows that you have enabled the generation of line tracing code
with the directive "linetrace=True", and that Cython is translating
line 206 of one of your source modules here.

> __Pyx_XDECREF(__pyx_r);
> __pyx_t_2 = __Pyx_GetModuleGlobalName(__pyx_n_s_datetime); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 206, __pyx_L1_error)
> __Pyx_GOTREF(__pyx_t_2);
> __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_datetime); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 206, __pyx_L1_error)
> __Pyx_GOTREF(__pyx_t_3);
> __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
> __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_utcfromtimestamp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 206, __pyx_L1_error)
> __Pyx_GOTREF(__pyx_t_2);
> __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;

This implements the lookup of "datetime.datetime.utcfromtimestamp",
probably used in a "return" statement.

> __pyx_t_3 = __Pyx_PyFloat_DivideObjC(__pyx_v_x, __pyx_float_1e3, 1e3, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 206, __pyx_L1_error)
> __Pyx_GOTREF(__pyx_t_3);

This is "x/1e3", optimised for fast computation in the case that "x"
turns out to be a number, especially a float object.
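For what it's worth, those snippets read back into a single line of
source code. The following is only a guess at what line 206 looked like
(the function and parameter names are made up; just the
"utcfromtimestamp(x / 1e3)" part is taken from the generated code):

```python
import datetime

def from_millis(x):
    # Hypothetical reconstruction of the traced source line:
    # "x" is presumably a timestamp in milliseconds, so dividing by
    # 1e3 yields seconds, which utcfromtimestamp() expects.
    return datetime.datetime.utcfromtimestamp(x / 1e3)

from_millis(0)     # → datetime.datetime(1970, 1, 1, 0, 0)
from_millis(1500)  # → datetime.datetime(1970, 1, 1, 0, 0, 1, 500000)
```

One plain Python line, which Cython expands into the lookup, the
optimised division, and the call sequence discussed next.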
> __pyx_t_4 = NULL;
> if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
>     __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);
>     if (likely(__pyx_t_4)) {
>         PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
>         __Pyx_INCREF(__pyx_t_4);
>         __Pyx_INCREF(function);
>         __Pyx_DECREF_SET(__pyx_t_2, function);
>     }
> }
> if (!__pyx_t_4) {
>     __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 206, __pyx_L1_error)
>     __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
>     __Pyx_GOTREF(__pyx_t_1);
> } else {
>     #if CYTHON_FAST_PYCALL
>     if (PyFunction_Check(__pyx_t_2)) {
>         PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_3};
>         __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 206, __pyx_L1_error)
>         __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
>         __Pyx_GOTREF(__pyx_t_1);
>         __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
>     } else
>     #endif
>     #if CYTHON_FAST_PYCCALL
>     if (__Pyx_PyFastCFunction_Check(__pyx_t_2)) {
>         PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_3};
>         __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_2, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 206, __pyx_L1_error)
>         __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
>         __Pyx_GOTREF(__pyx_t_1);
>         __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
>     } else
>     #endif
>     {
>         __pyx_t_5 = PyTuple_New(1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 206, __pyx_L1_error)
>         __Pyx_GOTREF(__pyx_t_5);
>         __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); __pyx_t_4 = NULL;
>         __Pyx_GIVEREF(__pyx_t_3);
>         PyTuple_SET_ITEM(__pyx_t_5, 0+1, __pyx_t_3);
>         __pyx_t_3 = 0;
>         __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_2, __pyx_t_5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 206, __pyx_L1_error)
>         __Pyx_GOTREF(__pyx_t_1);
>         __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
>     }
> }

And this is an optimised, inlined version of the method call
"tmp2(tmp3)", using the FASTCALL protocol if available, where tmp2 is
the "datetime.datetime.utcfromtimestamp" from above and tmp3 is
"x/1e3". You probably have your reasons for calculating that. ;)

This code would probably be simplified if PEP 580 were accepted,
although we would want to keep it around for a while in order to avoid
a performance regression in older Python versions.

The C compile time feature switches like "CYTHON_FAST_PYCALL" or
"CYTHON_UNPACK_METHODS" are one of the reasons why Cython is so
versatile and adaptive, also for end users. If any of these features
breaks, or if some CPython implementation detail isn't available on a
certain Python implementation (that tries to implement the C-API), you
can switch off the use of that feature in the C code that Cython has
generated for you by simply defining a C macro at C compile time. We do
that automatically for PyPy and Pyston, but also for older CPython
versions that lack certain C-API features. Sometimes there is an actual
reason behind what looks like complexity at first sight. :)

What you stripped from your example is the fact that Cython generates a
C code comment with the exact original source code line right above
this C code section, which helps a lot in understanding what is going
on. And then there is "cython -a", which gives you the whole thing
nicely visualised as an HTML file. On top of that, you get Cython
source level profiling and code coverage reporting, which is a huge
gain in comfort compared to hand-written C code.

Jeroen is right: in almost all cases, you really do not care what
exactly the C code looks like, even as an expert. It's perfectly enough
to let Cython do its thing. If you want to improve or otherwise modify
the C code that Cython generates, then yes, you certainly want to look
at the code it generates first. But in all other cases, you want to use
Cython *because* you then don't have to look at the C code. That's a
huge feature, too.
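To make the macro mechanism concrete, here is a sketch (with made-up
module and file names; only the macro name comes from the code above) of
how a user could flip one of these feature switches through the normal
setuptools machinery, without touching the generated C file itself:

```python
from setuptools import Extension

# Hypothetical extension built from a C file that Cython generated
# earlier; "mymodule" and "mymodule.c" are made-up names.
ext = Extension(
    "mymodule",
    sources=["mymodule.c"],
    # Define the feature macro to 0 at C compile time to switch off
    # the inlined method-call optimisation in the generated code.
    define_macros=[("CYTHON_UNPACK_METHODS", "0")],
)

# This Extension would then be passed to setup(ext_modules=[ext]).
```

The same pattern applies to any of the switches: the generated C code
checks the macro, so defining it in the build configuration is enough.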
Stefan

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com