[scikit-image] Image intensity measure.regionprops

2017-07-13 Thread Cedric Espenel
Hi scikit-image users/devs,

I'm using skimage.measure.regionprops and I'm a little bit confused about a
result I get from it.

I'm working with a z-stack of a microscopy image that I have
segmented/labeled, and I'm trying to get the mean intensity of the labeled
regions:

region_props = measure.regionprops(label_image, intensity_image)

If I now do:

for prop in region_props:
    print('x', prop.mean_intensity)
    print('y', np.mean(prop.intensity_image))

prop.mean_intensity and np.mean(prop.intensity_image) give me different
values, which confuses me. Can someone help me understand why I'm getting
something different?
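
For what it's worth, the discrepancy is most likely the zero padding:
prop.intensity_image is the intensity image cropped to the region's bounding
box, with pixels outside the label set to zero, while prop.mean_intensity
averages only the labeled pixels. A numpy-only sketch with made-up values:

```python
import numpy as np

# An irregular region whose bounding box necessarily contains background.
intensity = np.array([[10., 10.,  0.],
                      [10.,  0.,  0.],
                      [10.,  0., 10.]])
mask = intensity > 0                       # the labeled region: 5 pixels

# What prop.mean_intensity computes: mean over labeled pixels only.
mean_intensity = intensity[mask].mean()    # 10.0

# What np.mean(prop.intensity_image) computes: the bounding-box crop with
# pixels outside the label zeroed, so the zeros drag the mean down.
bbox_crop = intensity * mask               # the bbox spans the whole array here
naive_mean = bbox_crop.mean()              # 50 / 9, roughly 5.56
```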

Thank you in advance for your help.

Sincerely,

Cedric
___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] Numba on pypi

2017-07-13 Thread Stefan van der Walt

On 13 July 2017 22:30:03 Nadav Horesh  wrote:


> I'd make the numba dependency optional, since keeping llvm and llvmlite
> versions in sync requires special attention (at least with the Linux
> distros I used), which makes numba availability below 100%.


I don't think we need to keep these things in sync (if llvmlite is 
available at all, we're fine). And for pip installs we can consider 
precompiled binary modules, I guess?
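
The "optional dependency" idea can be sketched as follows (a hypothetical
pattern, not actual scikit-image code): try to import numba, and fall back to
a no-op decorator when it is missing, so the same bare-loop kernel runs
either way.

```python
import numpy as np

try:
    from numba import njit              # real JIT when numba is installed
except ImportError:                     # numba/llvmlite unavailable
    def njit(func=None, **kwargs):
        """No-op fallback: return the plain Python function unchanged."""
        if func is None:                # used as @njit(...) with options
            return lambda f: f
        return func                     # used bare as @njit

@njit
def count_above(values, threshold):
    # a bare-bones loop: trivially JIT-able, but still correct without numba
    n = 0
    for v in values:
        if v > threshold:
            n += 1
    return n

count_above(np.array([0.5, 1.5, 2.5]), 1.0)   # -> 2
```

With numba present the kernel is compiled on first call; without it, the
code still runs, just slower.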


Best regards
Stéfan
___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] Numba on pypi

2017-07-13 Thread Nadav Horesh
I'd make the numba dependency optional, since keeping llvm and llvmlite
versions in sync requires special attention (at least with the Linux
distros I used), which makes numba availability below 100%.

  Nadav.

On Jul 14, 2017 5:23 AM, "Gregory Lee"  wrote:

> I am also +1 on allowing numba code in scikit-image.  I have tended to
> prefer Cython in the past, but it has been a while since I looked at numba
> and it seems it has come a long way in recent years.  Juan's blog post and
> the simple example you provided are great examples of relevant use cases.
>
>
> ___
> scikit-image mailing list
> scikit-image@python.org
> https://mail.python.org/mailman/listinfo/scikit-image
>
>
___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] Numba on pypi

2017-07-13 Thread Gregory Lee
I am also +1 on allowing numba code in scikit-image.  I have tended to
prefer Cython in the past, but it has been a while since I looked at numba
and it seems it has come a long way in recent years.  Juan's blog post and
the simple example you provided are great examples of relevant use cases.
___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] 回复:Re: 回复: Numba on pypi

2017-07-13 Thread Josh Warner
What I find numba brings to the table is a significantly more expressive
and easier-to-maintain code base.  Essentially, basic bare-bones loops are
often the easiest to JIT.  They are then easier to debug, and faster to
iterate on, since the separate compilation step goes away.

I haven't checked recently whether numba is happy with certain very useful
features like np.nditer - if so, a large chunk of our Cython could actually
be replaced by simpler JITted code while making it n-D.
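
np.nditer is what would make such loops dimension-agnostic; a small
illustrative sketch (plain numpy, no JIT involved):

```python
import numpy as np

def count_nonzero_nd(arr):
    # np.nditer walks the elements of an array of any dimensionality,
    # so the same loop body works for 1-D, 2-D, or n-D input.
    n = 0
    for x in np.nditer(arr):
        if x != 0:
            n += 1
    return n

count_nonzero_nd(np.ones((2, 3, 4)))   # -> 24
```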

That said, Cython isn't going anywhere, for the reasons Stéfan mentions
above... but I believe that adding numba, as soon as it is reasonable to do
so, would yield large benefits. Probably even more so than we fully
appreciate.

Looks like we're nearing that point.

Josh


On Jul 13, 2017 11:23 AM, "Stefan van der Walt" 
wrote:

On Thu, Jul 13, 2017, at 03:04, imag...@sina.com wrote:

May I start a new topic thread? with the test image, and my numba code.


Yes, please.

Stéfan


___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] Memory consumption of measure.label (compared to matlab)

2017-07-13 Thread Stefan van der Walt
On Thu, Jul 13, 2017, at 04:21, Martin Fleck wrote:
> Indeed, this could be the complete problem already! For the analysis I use a 
> binary image - so only one bit per pixel.
FWIW, binary images are stored as ubyte, so 1 *byte* per pixel.
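
For example, with numpy's bool dtype (illustrative shape):

```python
import numpy as np

binary = np.zeros((1024, 1024), dtype=bool)   # a "binary" image
binary.nbytes   # 1048576: one byte per pixel, not one bit
```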

> Greg: Regarding your PR and my analysis: My analysis using a 1.2GB file
> stops due to memory problems already in
> skimage.morphology.remove_small_objects() even if the major memory blowup
> happens with skimage.morphology.label().
> So there are problems at multiple steps that hopefully can be improved.
Thanks for helping us uncover and trace these!

Stéfan

___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] 回复:Re: 回复: Numba on pypi

2017-07-13 Thread Stefan van der Walt
On Thu, Jul 13, 2017, at 03:04, imag...@sina.com wrote:
> May I start a new topic thread? with the test image, and my numba code.
Yes, please.

Stéfan

___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] Memory consumption of measure.label (compared to matlab)

2017-07-13 Thread Martin Fleck
Hi again,

Attached is a file "matlab_memory_info" and again the same
"skiamge_memory_profiler.out" that I showed before.
In the matlab_memory_info file, I added for every Matlab call the
equivalent that I do in skimage.

I don't think it will be needed - the attached files should be enough -
but if someone wants to see the full memory report of matlab, you can
download it here:
https://drive.google.com/open?id=0BzmlODsuIIz0dVRsMk9sT3RuU0E
(it's an html file)

Cheers,
Martin


On 07/13/2017 02:09 PM, Martin Fleck wrote:
>
> Hi again,
>
> here you can download a minimal example:
>
> https://drive.google.com/open?id=0BzmlODsuIIz0elpIcU1kdmpNTlE
> (download button is the arrow on the top right)
>
> In order to run it and get the memory_profiler output you have to
> install memory_profiler
> e.g. with
>
> pip3 install memory_profiler
>
> and run the file with
>
> python3 -m memory_profiler minimal_test.py
>
> If you just want to run the example without memory profiling and
> installing memory_profiler, you have to comment out or remove line 8
> "@profile"
>
> Cheers,
> Martin
>
>
>
> On 07/13/2017 01:21 PM, Martin Fleck wrote:
>>
>> Hi Juan, hi Greg,
>>
>> quoting Greg:
>> > I think the main reason for the increased memory usage is that the
>> > output type of the label function is int64 while your input is most
>> > likely uint8.
>>
>> Indeed, this could be the complete problem already! For the analysis
>> I use a binary image - so only one bit per pixel.
>>
>> Greg: Regarding your PR and my analysis: My analysis using a 1.2GB
>> file stops due to memory problems already in
>> skimage.morphology.remove_small_objects() even if the major memory
>> blowup happens with skimage.morphology.label().
>> So there are problems at multiple steps that hopefully can be improved.
>>
>> Quoting Juan:
>> > For example, what are the data types of the outputs in Matlab?
>>
>> the first steps of my analysis are to convert the 8 bit input image
>> to a meaningful binary image. The whole analysis is done on binary
>> images. So all inputs and outputs in Matlab are of Matlab Class
>> "logical".
>>
>> I will provide you with a minimal example script and data for the
>> skimage case.
>> I will try to create equivalent memory information in Matlab.
>>
>> I'll post both here as soon as I'm done with that.
>>
>> Thanks so far!
>>
>> Martin
>>
>> On 07/13/2017 03:05 AM, Juan Nunez-Iglesias wrote:
>>> Hi Martin,
>>>
>>> No one on this list wants to push you to more Matlab usage, believe
>>> me. ;)
>>>
>>> Do you think you could provide a script and sample data that we can
>>> use for troubleshooting? As Greg pointed out, the optimization
>>> approach *might* have to be data-type dependent. We could, for
>>> example, provide a dtype= keyword argument that would force the
>>> output to be of a particular, more memory-efficient type, if you
>>> know in advance how many objects you expect.
>>>
>>> If you can provide something similar to a memory profile, and
>>> diagnostic information, for your equivalent Matlab script, that
>>> would be really useful, so we know what we are aiming for. For
>>> example, what are the data types of the outputs in Matlab?
>>>
>>> Juan.
>>>
>>> On 13 Jul 2017, 9:59 AM +1000, Gregory Lee , wrote:
 Hi Martin,

 My problem is that my analysis uses much more memory than I expect.
 I attached output from the memory_profiler package, with which
 I tried
 to keep track of the memory consumption of my analysis.
 You can see that for an ~8MiB file that I used for testing,
 skimage.measure.label needs to use 56MiB of memory, which
 surprised me.


 I haven't looked at it in much detail, but I did find what appear
 to be some unnecessary copies in the top-level Cython routine
 called by skimage.morphology.label.  I opened a PR to try and avoid
 this here:
 https://github.com/scikit-image/scikit-image/pull/2701
 

 However, I think that PR is going to give a minor performance
 improvement, but not help with memory use much if at all.  I think
 the main reason for the increased memory usage is that the output
 type of the label function is int64 while your input is most likely
 uint8.  This means that the labels array requires 8 times the
 memory usage of the uint8 input.  I don't think there is much way
 around that without making a version of the routines that allows
 specifying a smaller integer dtype.

 - Greg
 ___
 scikit-image mailing list
 scikit-image@python.org
 https://mail.python.org/mailman/listinfo/scikit-image
>>>
>>>
>>> ___
>>> scikit-image mailing list
>>> scikit-image@python.org
>>> https://mail.python.org/mailman/listinfo/scikit-image
>>
>>
>>

Re: [scikit-image] Memory consumption of measure.label (compared to matlab)

2017-07-13 Thread Martin Fleck
Hi again,

here you can download a minimal example:

https://drive.google.com/open?id=0BzmlODsuIIz0elpIcU1kdmpNTlE
(download button is the arrow on the top right)

In order to run it and get the memory_profiler output you have to
install memory_profiler
e.g. with

pip3 install memory_profiler

and run the file with

python3 -m memory_profiler minimal_test.py

If you just want to run the example without memory profiling and
installing memory_profiler, you have to comment out or remove line 8
"@profile"

Cheers,
Martin



On 07/13/2017 01:21 PM, Martin Fleck wrote:
>
> Hi Juan, hi Greg,
>
> quoting Greg:
> > I think the main reason for the increased memory usage is that the
> > output type of the label function is int64 while your input is most
> > likely uint8.
>
> Indeed, this could be the complete problem already! For the analysis I
> use a binary image - so only one bit per pixel.
>
> Greg: Regarding your PR and my analysis: My analysis using a 1.2GB
> file stops due to memory problems already in
> skimage.morphology.remove_small_objects() even if the major memory
> blowup happens with skimage.morphology.label().
> So there are problems at multiple steps that hopefully can be improved.
>
> Quoting Juan:
> > For example, what are the data types of the outputs in Matlab?
>
> the first steps of my analysis are to convert the 8 bit input image to
> a meaningful binary image. The whole analysis is done on binary
> images. So all inputs and outputs in Matlab are of Matlab Class "logical".
>
> I will provide you with a minimal example script and data for the
> skimage case.
> I will try to create equivalent memory information in Matlab.
>
> I'll post both here as soon as I'm done with that.
>
> Thanks so far!
>
> Martin
>
> On 07/13/2017 03:05 AM, Juan Nunez-Iglesias wrote:
>> Hi Martin,
>>
>> No one on this list wants to push you to more Matlab usage, believe
>> me. ;)
>>
>> Do you think you could provide a script and sample data that we can
>> use for troubleshooting? As Greg pointed out, the optimization
>> approach *might* have to be data-type dependent. We could, for
>> example, provide a dtype= keyword argument that would force the
>> output to be of a particular, more memory-efficient type, if you know
>> in advance how many objects you expect.
>>
>> If you can provide something similar to a memory profile, and
>> diagnostic information, for your equivalent Matlab script, that would
>> be really useful, so we know what we are aiming for. For example,
>> what are the data types of the outputs in Matlab?
>>
>> Juan.
>>
>> On 13 Jul 2017, 9:59 AM +1000, Gregory Lee , wrote:
>>> Hi Martin,
>>>
>>> My problem is that my analysis uses much more memory than I expect.
>>> I attached output from the memory_profiler package, with which I
>>> tried
>>> to keep track of the memory consumption of my analysis.
>>> You can see that for an ~8MiB file that I used for testing,
>>> skimage.measure.label needs to use 56MiB of memory, which
>>> surprised me.
>>>
>>>
>>> I haven't looked at it in much detail, but I did find what appear to
>>> be some unnecessary copies in the top-level Cython routine called by
>>> skimage.morphology.label.  I opened a PR to try and avoid this here:
>>> https://github.com/scikit-image/scikit-image/pull/2701
>>> 
>>>
>>> However, I think that PR is going to give a minor performance
>>> improvement, but not help with memory use much if at all.  I think
>>> the main reason for the increased memory usage is that the output
>>> type of the label function is int64 while your input is most likely
>>> uint8.  This means that the labels array requires 8 times the memory
>>> usage of the uint8 input.  I don't think there is much way around
>>> that without making a version of the routines that allows specifying
>>> a smaller integer dtype.
>>>
>>> - Greg
>>> ___
>>> scikit-image mailing list
>>> scikit-image@python.org
>>> https://mail.python.org/mailman/listinfo/scikit-image
>>
>>
>> ___
>> scikit-image mailing list
>> scikit-image@python.org
>> https://mail.python.org/mailman/listinfo/scikit-image
>
>
>
> ___
> scikit-image mailing list
> scikit-image@python.org
> https://mail.python.org/mailman/listinfo/scikit-image

___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] Memory consumption of measure.label (compared to matlab)

2017-07-13 Thread Martin Fleck
Hi Juan, hi Greg,

quoting Greg:
> I think the main reason for the increased memory usage is that the
> output type of the label function is int64 while your input is most
> likely uint8.

Indeed, this could be the complete problem already! For the analysis I
use a binary image - so only one bit per pixel.
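
(For reference, the 8x blow-up Greg describes is easy to see with plain
numpy; the shape below is illustrative:)

```python
import numpy as np

binary = np.zeros((2048, 2048), dtype=np.uint8)   # 4 MiB input mask
labels = np.zeros(binary.shape, dtype=np.int64)   # int64 output, as Greg notes
labels.nbytes // binary.nbytes   # -> 8: the labels array is 8x larger
```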

Greg: Regarding your PR and my analysis: My analysis using a 1.2GB file
stops due to memory problems already in
skimage.morphology.remove_small_objects() even if the major memory
blowup happens with skimage.morphology.label().
So there are problems at multiple steps that hopefully can be improved.

Quoting Juan:
> For example, what are the data types of the outputs in Matlab?

The first steps of my analysis convert the 8-bit input image to a
meaningful binary image. The whole analysis is done on binary images, so
all inputs and outputs in Matlab are of Matlab class "logical".

I will provide you with a minimal example script and data for the
skimage case.
I will try to create equivalent memory information in Matlab.

I'll post both here as soon as I'm done with that.

Thanks so far!

Martin

On 07/13/2017 03:05 AM, Juan Nunez-Iglesias wrote:
> Hi Martin,
>
> No one on this list wants to push you to more Matlab usage, believe me. ;)
>
> Do you think you could provide a script and sample data that we can
> use for troubleshooting? As Greg pointed out, the optimization
> approach *might* have to be data-type dependent. We could, for
> example, provide a dtype= keyword argument that would force the output
> to be of a particular, more memory-efficient type, if you know in
> advance how many objects you expect.
>
> If you can provide something similar to a memory profile, and
> diagnostic information, for your equivalent Matlab script, that would
> be really useful, so we know what we are aiming for. For example, what
> are the data types of the outputs in Matlab?
>
> Juan.
>
> On 13 Jul 2017, 9:59 AM +1000, Gregory Lee , wrote:
>> Hi Martin,
>>
>> My problem is that my analysis uses much more memory than I expect.
>> I attached output from the memory_profiler package, with which I
>> tried
>> to keep track of the memory consumption of my analysis.
>> You can see that for an ~8MiB file that I used for testing,
>> skimage.measure.label needs to use 56MiB of memory, which
>> surprised me.
>>
>>
>> I haven't looked at it in much detail, but I did find what appear to
>> be some unnecessary copies in the top-level Cython routine called by
>> skimage.morphology.label.  I opened a PR to try and avoid this here:
>> https://github.com/scikit-image/scikit-image/pull/2701
>> 
>>
>> However, I think that PR is going to give a minor performance
>> improvement, but not help with memory use much if at all.  I think
>> the main reason for the increased memory usage is that the output
>> type of the label function is int64 while your input is most likely
>> uint8.  This means that the labels array requires 8 times the memory
>> usage of the uint8 input.  I don't think there is much way around
>> that without making a version of the routines that allows specifying
>> a smaller integer dtype.
>>
>> - Greg
>> ___
>> scikit-image mailing list
>> scikit-image@python.org
>> https://mail.python.org/mailman/listinfo/scikit-image
>
>
> ___
> scikit-image mailing list
> scikit-image@python.org
> https://mail.python.org/mailman/listinfo/scikit-image

___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] 回复: Numba on pypi

2017-07-13 Thread Stefan van der Walt
Hi Thomas

On Thu, Jul 13, 2017, at 01:04, Thomas Walter via scikit-image wrote:
>  A question to Stéfan: would this mean that you would remove all cython code 
> from scikit-image or would numba just be another option? 
I think we'd probably keep both around; one of the advantages of this
approach is that we can gradually explore new functionality with numba,
while retaining all existing code until there's a strong need to modify it
for speed/simplicity. Cython and numba also do not fulfill exactly the same
roles: e.g., Cython is great for wrapping C and C++ libraries.

Best regards
Stéfan

___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] 回复: Numba on pypi

2017-07-13 Thread Thomas Walter via scikit-image

Hi YXDragon,

just a word on some aspect you mention:

> the local_max has not a tolerance, so the result is too messy, then
> do a watershed end with too many fragments...,


The definition that underlies this function is: "A local maximum is a
region of constant grey level strictly greater than the grey levels of
all pixels in the direct neighborhood of the region." There is no
tolerance associated with this definition, and I am perfectly fine with
this. There are definitely many cases where you want to extract all
local maxima. Nevertheless, there are other functions for the detection
of maxima that allow one to also impose certain criteria (such as
h_maxima or peak_local_max).
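
The no-tolerance behavior is easy to see with a small sketch (scipy.ndimage,
made-up data): every strict peak is reported, however small, whereas an
h_maxima-style criterion would additionally suppress peaks that rise less
than h above their surroundings.

```python
import numpy as np
from scipy import ndimage as ndi

# Two isolated peaks of different heights (illustrative array).
image = np.array([[0., 0., 0., 0., 0.],
                  [0., 5., 0., 0., 0.],
                  [0., 0., 0., 3., 0.],
                  [0., 0., 0., 0., 0.]])

# A pixel is a local maximum if it equals the max of its 3x3 neighborhood;
# there is no height tolerance, so the small peak is reported too.
maxima = (image == ndi.maximum_filter(image, size=3)) & (image > 0)
maxima.sum()   # -> 2: both the 5-peak and the 3-peak
```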


A question to Stéfan: would this mean that you would remove all cython 
code from scikit-image or would numba just be another option?


Best,

Thomas.

On 7/13/17 8:49 AM, imag...@sina.com wrote:

Hi Stéfan:

I appreciate Numba. Sometimes we must write a 'for' loop in our Python
code, even just a 'for' with an 'if', and it is fussy to compile a so/dll
or write Cython for that. Numba is very portable and can run anywhere; we
just need to install numba and llvmlite. (That means our package could be
a light-weight library, not depending on any native so/dll.)

As I mentioned before, many of scikit-image's algorithms are not exquisite
enough (just my own opinion):
  the mid_axi function results in too many branches and sometimes holes,
  the local_max has no tolerance, so the result is too messy, and a
watershed then ends with too many fragments...,
  and how to build a graph from the skeleton, then do a network analysis?

I wanted to contribute to scikit-image, but after some effort I gave up;
I prefer to write a dynamic lib rather than Cython. In the end, I wrote
them in Numba. So I appreciate being able to use Numba.

Best
YXDragon

- Original Message -
From: Stefan van der Walt
To: scikit-image@python.org
Subject: [scikit-image] Numba on pypi
Date: 2017-07-13 14:17

Hi everyone,
As many of you know, speed has been a point of contention in
scikit-image for a long time. We've made a very deliberate decision to
focus on writing high-level, understandable code (via Python and
Cython): both to lower the barrier to entry for newcomers, and to lessen
the burden on maintainers. But execution time comparisons, vs OpenCV
e.g., left much to be desired.
I think we have hit a turning point in the road. Binary wheels for
Numba (actually, llvmlite) were recently uploaded to PyPi, making this
technology available to users on both pip and conda installations. The
importance of this release on pypi should not be dismissed, and I am
grateful to the numba team and Continuum for making that decision.
So, how does that impact scikit-image? Well, imagine we choose to
optimize various procedures via numba (see Juan's blog post for exactly
how impactful this can be:
https://ilovesymposia.com/2017/03/15/prettier-lowlevelcallables-with-numba-jit-and-decorators/).
The only question we have to answer (from a survival point of view)
needs to be: if, somehow, something happens to numba, will an
alternative be available at that time? Looking at the Python JIT
landscape (which is very active), and the current state of numba
development, I think this is likely. And, if we choose to use numba, of
course we'll help to keep it healthy, as far as we can.
I'd love to hear your thoughts. I, for one, am excited about the
prospect of writing kernels as simply as:
>>> @jit_filter_function
... def fmin(values):
...     result = np.inf
...     for v in values:
...         if v < result:
...             result = v
...     return result
>>> ndi.generic_filter(image, fmin, footprint=fp)
Best regards
Stéfan
___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image



--
Thomas Walter
27 rue des Acacias
75017 Paris

___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


Re: [scikit-image] Numba on pypi

2017-07-13 Thread Ralf Gommers
On Thu, Jul 13, 2017 at 6:16 PM, Stefan van der Walt 
wrote:

> Hi everyone,
>
> As many of you know, speed has been a point of contention in
> scikit-image for a long time.  We've made a very deliberate decision to
> focus on writing high-level, understandable code (via Python and
> Cython): both to lower the barrier to entry for newcomers, and to lessen
> the burden on maintainers.  But execution time comparisons, vs OpenCV
> e.g., left much to be desired.
>
> I think we have hit a turning point in the road.  Binary wheels for
> Numba (actually, llvmlite) were recently uploaded to PyPi, making this
> technology available to users on both pip and conda installations.  The
> importance of this release on pypi should not be dismissed, and I am
> grateful to the numba team and Continuum for making that decision.
>

Agreed. Note that there are no Windows wheels up on PyPI (yet, or not
coming?). Given that there are no SciPy wheels for Windows either, I don't
think that changes your argument much - people should just use a binary
distribution on Windows - but I thought I'd point it out anyway.

>
> So, how does that impact scikit-image?  Well, imagine we choose to
> optimize various procedures via numba (see Juan's blog post for exactly
> how impactful this can be:
> https://ilovesymposia.com/2017/03/15/prettier-
> lowlevelcallables-with-numba-jit-and-decorators/).
>

That's a great post. @Juan: get yourself on Planet Python!


> The only question we have to answer (from a survival point of view)
> needs to be: if, somehow, something happens to numba, will an
> alternative be available at that time?  Looking at the Python JIT
> landscape (which is very active), and the current state of numba
> development, I think this is likely.  And, if we choose to use numba, of
> course we'll help to keep it healthy, as far as we can.
>
> I'd love to hear your thoughts.


I'm only an occasional user at the moment, so won't express an opinion
either way. But will be following this thread with interest.

Cheers,
Ralf


> I, for one, am excited about the
> prospect of writing kernels as simply as:
>
> >>> @jit_filter_function
> ... def fmin(values):
> ...     result = np.inf
> ...     for v in values:
> ...         if v < result:
> ...             result = v
> ...     return result
>
> >>> ndi.generic_filter(image, fmin, footprint=fp)
>
> Best regards
> Stéfan
> ___
> scikit-image mailing list
> scikit-image@python.org
> https://mail.python.org/mailman/listinfo/scikit-image
>
___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image


[scikit-image] 回复: Numba on pypi

2017-07-13 Thread imagepy
Hi Stéfan:

I appreciate Numba. Sometimes we must write a 'for' loop in our Python
code, even just a 'for' with an 'if', and it is fussy to compile a so/dll
or write Cython for that. Numba is very portable and can run anywhere; we
just need to install numba and llvmlite. (That means our package could be
a light-weight library, not depending on any native so/dll.)

As I mentioned before, many of scikit-image's algorithms are not exquisite
enough (just my own opinion):
  the mid_axi function results in too many branches and sometimes holes,
  the local_max has no tolerance, so the result is too messy, and a
watershed then ends with too many fragments...,
  and how to build a graph from the skeleton, then do a network analysis?

I wanted to contribute to scikit-image, but after some effort I gave up;
I prefer to write a dynamic lib rather than Cython. In the end, I wrote
them in Numba. So I appreciate being able to use Numba.

Best
YXDragon
- Original Message -
From: Stefan van der Walt
To: scikit-image@python.org
Subject: [scikit-image] Numba on pypi
Date: 2017-07-13 14:17

Hi everyone,
As many of you know, speed has been a point of contention in
scikit-image for a long time.  We've made a very deliberate decision to
focus on writing high-level, understandable code (via Python and
Cython): both to lower the barrier to entry for newcomers, and to lessen
the burden on maintainers.  But execution time comparisons, vs OpenCV
e.g., left much to be desired.
I think we have hit a turning point in the road.  Binary wheels for
Numba (actually, llvmlite) were recently uploaded to PyPi, making this
technology available to users on both pip and conda installations.  The
importance of this release on pypi should not be dismissed, and I am
grateful to the numba team and Continuum for making that decision.
So, how does that impact scikit-image?  Well, imagine we choose to
optimize various procedures via numba (see Juan's blog post for exactly
how impactful this can be:
https://ilovesymposia.com/2017/03/15/prettier-lowlevelcallables-with-numba-jit-and-decorators/).
The only question we have to answer (from a survival point of view)
needs to be: if, somehow, something happens to numba, will an
alternative be available at that time?  Looking at the Python JIT
landscape (which is very active), and the current state of numba
development, I think this is likely.  And, if we choose to use numba, of
course we'll help to keep it healthy, as far as we can.
I'd love to hear your thoughts.  I, for one, am excited about the
prospect of writing kernels as simply as:
>>> @jit_filter_function
... def fmin(values):
...     result = np.inf
...     for v in values:
...         if v < result:
...             result = v
...     return result
>>> ndi.generic_filter(image, fmin, footprint=fp)
Best regards
Stéfan
___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image
___
scikit-image mailing list
scikit-image@python.org
https://mail.python.org/mailman/listinfo/scikit-image