Awesome effort! ;)

Would it be worth evaluating, during the refactor, how ndimage could support 
auto-parallelization?
I'm imagining it could somehow split the work into n embarrassingly parallel 
tasks when run on an n-core machine.

I'm sorry if I'm talking nonsense; I'm just struggling to make GLCM faster 
when running it via "generic_filter" over large-ish images, so far without 
success, even trying numba. But since much of the code already runs in Cython, 
I guess I can't expect much automatic improvement. ;)

So I anticipate having to move to a cluster for the hundreds of images I want 
to run multiple/many GLCM analyses on.
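In case it helps illustrate what I mean by "embarrassingly parallel": here is 
a rough sketch of how one can split a generic_filter run into overlapping 
horizontal strips and farm them out across processes. The per-window function 
here is just a stand-in (window variance), not a real GLCM, and the strip 
padding assumes the default boundary handling; treat it as an idea sketch, not 
a tested recipe.

```python
# Sketch: run scipy.ndimage.generic_filter on overlapping image strips in
# parallel, then crop the overlaps and stitch the results back together.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy import ndimage

SIZE = 5          # filter footprint: SIZE x SIZE window
PAD = SIZE // 2   # overlap rows needed so strip edges match the serial result

def filter_strip(strip):
    # Stand-in per-window statistic; a real use case would compute GLCM
    # features here instead of np.var.
    return ndimage.generic_filter(strip, np.var, size=SIZE)

def parallel_filter(image, n_jobs=4):
    # Split the row range into n_jobs contiguous chunks.
    bounds = np.linspace(0, image.shape[0], n_jobs + 1).astype(int)
    # Each strip carries PAD extra rows on each side (clipped at the image
    # borders) so interior windows see the same neighbours as a serial run.
    strips = [image[max(lo - PAD, 0):min(hi + PAD, image.shape[0])]
              for lo, hi in zip(bounds[:-1], bounds[1:])]
    with ProcessPoolExecutor(max_workers=n_jobs) as ex:
        results = list(ex.map(filter_strip, strips))
    # Crop the padding rows back off before stitching.
    pieces = []
    for (lo, hi), res in zip(zip(bounds[:-1], bounds[1:]), results):
        top = lo - max(lo - PAD, 0)
        pieces.append(res[top:top + (hi - lo)])
    return np.vstack(pieces)

if __name__ == "__main__":
    img = np.random.rand(200, 200)
    serial = ndimage.generic_filter(img, np.var, size=SIZE)
    print(np.allclose(parallel_filter(img), serial))
```

The interior strips carry exactly PAD rows of context on each side, so their 
kept rows should agree with the serial result; only the outermost strips hit 
the boundary mode, the same way the serial call does.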

Thanks again for the developer docs, they are great!

Regards,
Michael


> On Jun 27, 2017, at 09:06, Stefan van der Walt <stef...@berkeley.edu> wrote:
> 
> On Tue, Jun 27, 2017, at 07:49, Egor Panfilov wrote:
>> - Not sure that I'll find enough time to contribute to the code by myself, 
>> but it won't harm if you could provide to the public some directions on 
>> where to start with `scipy.ndimage` C-code.
> 
> Making ndimage developer notes along the way would be extremely helpful!
> 
> Stéfan
> 
> _______________________________________________
> scikit-image mailing list
> scikit-image@python.org
> https://mail.python.org/mailman/listinfo/scikit-image

