Hello PIL experts,

I am writing a Python script to analyse multipage TIFF image stacks coming from a microscope. A typical file is several gigabytes in size and contains a few thousand grayscale frames at a resolution of roughly 640x480.
For this I need to apply temporal filtering to individual pixels ("temporal" meaning that I build a "line" from the values of one pixel across the pages). Currently I call Image.getpixel() in a loop, but this turns out to be prohibitively slow, much slower than the filtering itself: I estimate that on my 2 GHz laptop it would take around 400 hours to analyse one stack (all the pixels).

Therefore I would like to ask whether this is an issue with PIL or with the underlying TIFF reader, or whether there is a built-in method for quickly reading a Z-axis cut that I didn't consider (I'm a total PIL n00b).

I also considered that reading each page as a whole with a built-in method such as Image.getdata() and stepping through the pages might be faster, but I cannot keep the whole stack in memory (the stacks we have now would take around ~4 GB, and this will quickly grow...). Fortunately, I can apply the filters to a window of frames, but that would complicate the algorithm considerably, so I'm really looking forward to hearing from you whether it is possible to speed up my code as it is. Many thanks!

-- 
Sincerely yours,
Yury V. Zaytsev

P.S. Snippet attached:

import numpy as np
from PIL import Image


def getvalue(im, position):
    """Return the intensity at `position`, collapsing multi-band pixels to a magnitude."""
    pixel = im.getpixel(position)
    if isinstance(pixel, int):
        return pixel
    else:
        return np.sqrt(np.sum(np.array(pixel) ** 2))


im_name = 'bleaching.tiff'
im = Image.open(im_name)

page = 0
line = []                                   # temporal trace of one pixel across all pages
try:
    while True:
        im.seek(page)                       # seek to the next page of the multipage TIFF
        line.append(getvalue(im, (x, y)))   # (x, y) is the pixel currently being analysed
        page += 1
except EOFError:                            # seek() past the last frame raises EOFError
    pass

<do something with line and take another pixel>
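P.P.S. For completeness, here is a rough, untested sketch of the windowed variant I mentioned above. The idea is to read each page once as a whole frame via np.asarray(im) and to keep only a small rolling buffer of frames in memory, so that memory use stays bounded no matter how large the stack grows. The window size of 16 and the name my_temporal_filter are just placeholders, and I haven't verified that np.asarray() copes with every TIFF mode our microscope produces:

import numpy as np
from PIL import Image
from collections import deque

WINDOW = 16                     # number of frames held in memory at once (placeholder)

im = Image.open('bleaching.tiff')
frames = deque(maxlen=WINDOW)   # rolling buffer of 2-D frame arrays

page = 0
try:
    while True:
        im.seek(page)
        # Read the whole 640x480 page in one call instead of one getpixel() per pixel.
        frames.append(np.asarray(im, dtype=np.float64))
        if len(frames) == WINDOW:
            # Stack the window into a (height, width, WINDOW) array; the temporal
            # axis is axis=2, so the per-pixel "line" is stack[y, x, :].
            stack = np.dstack(list(frames))
            # filtered = my_temporal_filter(stack, axis=2)   # placeholder filter call
        page += 1
except EOFError:                # seek() past the last frame raises EOFError
    pass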