cp wrote:
> Following your advice, I wrote three different scripts and profiled them.
>
> # Script 1 - indexing
> for i in range(10):
>     imarr[:, :, 0].mean()
>     imarr[:, :, 1].mean()
>     imarr[:, :, 2].mean()
>
> # Script 2 - slicing
> for i in range(10):
>     imarr[:, :, 0:1].mean()
>     imarr[:, :, 1:2].mean()
>     imarr[:, :, 2:3].mean()
>
> # Script 3 - reshape
> for i in range(10):
>     imarr.reshape(-1, 3).mean(axis=0)
>
> For a 2000x2000 RGB image of ~11 MB, the times were:
>
> Script 1: 5.432 sec
> Script 2: 10.234 sec
> Script 3: 4.980 sec
>
> I tried the same without the mean(), but for 1000 loops, and the results were:
>
> Script 1: 0.463 sec (~6 MB of RAM)
> Script 2: 0.465 sec (~3 MB of RAM)
> Script 3: 0.462 sec (~2 MB of RAM)
>
> Script 3, which you proposed, has the best performance, while Script 2 is very slow.
I think this is because with slicing, the resulting array is non-contiguous
in memory -- this may slow down the mean() computation. The fact that all
three run at the same speed when the mean is not calculated supports that
explanation.
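
If you want to see what each expression actually hands to mean(), numpy will
show you the shape, the strides, and whether the result is one contiguous
block. A quick sketch -- the zeros array here is just a stand-in for the
real image:

import numpy as np

# Stand-in for the 2000x2000 RGB image (an assumption -- substitute your own array).
imarr = np.zeros((2000, 2000, 3), dtype=np.uint8)

for label, a in [("index   [:,:,0]  ", imarr[:, :, 0]),
                 ("slice   [:,:,0:1]", imarr[:, :, 0:1]),
                 ("reshape (-1, 3)  ", imarr.reshape(-1, 3))]:
    # Shape, strides in bytes, and the C-contiguity flag of each result.
    print(label, a.shape, a.strides, a.flags['C_CONTIGUOUS'])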
Shows you that you need to profile.
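
If anyone wants to re-run the profiling, something along these lines should
reproduce the comparison (the random 2000x2000 uint8 array and number=10 are
stand-ins chosen to mirror the figures above):

import timeit
import numpy as np

# Stand-in for the image: 2000x2000 RGB, uint8 -- load your own data here instead.
imarr = np.random.randint(0, 256, (2000, 2000, 3)).astype(np.uint8)

def indexing():
    # Script 1: integer index per channel (2-D strided views).
    return [imarr[:, :, c].mean() for c in range(3)]

def slicing():
    # Script 2: length-1 slice per channel (3-D strided views).
    return [imarr[:, :, c:c + 1].mean() for c in range(3)]

def reshaping():
    # Script 3: flatten the pixels to (N, 3) and reduce over the pixel axis.
    return imarr.reshape(-1, 3).mean(axis=0)

for fn in (indexing, slicing, reshaping):
    print("%-10s %.3f sec for 10 runs" % (fn.__name__, timeit.timeit(fn, number=10)))

Note that timeit disables garbage collection while timing, which keeps the
numbers a little more repeatable from run to run.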
Script 3 is probably as fast as you can get with numpy -- it's likely slower
than the PIL version because it does need to copy memory one way or the
other.
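
For what it's worth, numpy 1.7 and later accept a tuple of axes in the
reductions, so the same per-channel means can be spelled without the reshape.
A small sketch, again with a stand-in array:

import numpy as np

imarr = np.zeros((2000, 2000, 3), dtype=np.uint8)  # stand-in for the real image

means_reshape = imarr.reshape(-1, 3).mean(axis=0)  # Script 3's spelling
means_axes = imarr.mean(axis=(0, 1))               # tuple of axes, numpy >= 1.7

print(np.allclose(means_reshape, means_axes))      # True -- both average over every pixel

Whether that form buys any speed over the reshape version is something to
time on the real data.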
-CHB
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
chris.bar...@noaa.gov