Hi everyone,

As many of you know, speed has long been a point of contention in scikit-image. We made a very deliberate decision to focus on writing high-level, understandable code (via Python and Cython), both to lower the barrier to entry for newcomers and to lessen the burden on maintainers. But in execution-time comparisons, against OpenCV for example, we left much to be desired.
I think we have reached a turning point in the road. Binary wheels for Numba (or, more precisely, llvmlite) were recently uploaded to PyPI, making this technology available to users of both pip and conda installations. The importance of this release on PyPI should not be underestimated, and I am grateful to the numba team and Continuum for making that decision.

So, how does that impact scikit-image? Well, imagine we choose to optimize various procedures via numba (see Juan's blog post for exactly how impactful this can be: https://ilovesymposia.com/2017/03/15/prettier-lowlevelcallables-with-numba-jit-and-decorators/). The main question we have to answer (from a survival point of view) is: if, somehow, something happens to numba, will an alternative be available at that time? Looking at the Python JIT landscape (which is very active), and the current state of numba development, I think this is likely. And, if we choose to use numba, we will of course help to keep it healthy, as far as we can.

I'd love to hear your thoughts. I, for one, am excited about the prospect of writing kernels as simply as:

>>> @jit_filter_function
... def fmin(values):
...     result = np.inf
...     for v in values:
...         if v < result:
...             result = v
...     return result

>>> ndi.generic_filter(image, fmin, footprint=fp)

Best regards
Stéfan

_______________________________________________
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image
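[As an aside on the kernel example above: the same `fmin` kernel also runs, without numba, as a plain Python callback to `scipy.ndimage.generic_filter`, which is a handy way to check its semantics before JIT-compiling it. A minimal self-contained sketch (the test image and 3x3 footprint are illustrative choices, not from the original post):]

```python
import numpy as np
from scipy import ndimage as ndi

def fmin(values):
    # Plain-Python version of the kernel above: minimum over the footprint.
    result = np.inf
    for v in values:
        if v < result:
            result = v
    return result

# Illustrative inputs: a small test image and a full 3x3 footprint.
image = np.arange(25, dtype=float).reshape(5, 5)
fp = np.ones((3, 3), dtype=bool)

# Slow, interpreted callback -- numba's @jit would accelerate this same kernel.
slow = ndi.generic_filter(image, fmin, footprint=fp)

# Sanity check against ndimage's built-in, compiled minimum filter.
fast = ndi.minimum_filter(image, footprint=fp)
assert np.array_equal(slow, fast)
```

The pure-Python path is orders of magnitude slower per pixel, which is exactly the gap a JIT-compiled `LowLevelCallable` (as in Juan's post) closes.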