Hi,
I'm trying to perform pixel assignments with the load() method in order
to create 16-bit TIFF files. From the documentation this looks
straightforward using the internal I;16 image mode, but for some reason
even a simple pixel assignment in that mode fails.
For instance, I cannot get this to work:
>>> import Image
>>> test_image = Image.new('I;16', (300, 300))
>>> test_image_pix = test_image.load()
>>> test_image_pix[3,3] = 5000
>>> test_image_pix[3,3]
0
>>>
As you can see, rather than pixel [3,3] taking the value 5000, it reads
back as 0. The same thing happens with the putdata() method, and it
makes no difference what integer value I put in: everything comes back
as 0.
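For reference, here is roughly what the putdata() attempt looks like,
boiled down to a minimal case (any integer values give the same
all-zero result):

>>> test_image = Image.new('I;16', (300, 300))
>>> test_image.putdata([5000] * (300 * 300))
>>> test_image.load()[3, 3]
0
>>>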
The same approach works fine for RGB, float ('F'), 32-bit integer
('I'), and other modes, though. For instance:
>>> test_image2 = Image.new('RGB', (300, 300))
>>> test_image2_pix = test_image2.load()
>>> test_image2_pix[3,3] = 40,40,40
>>> test_image2_pix[3,3]
(40, 40, 40)
>>>
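And likewise for 32-bit integers (mode 'I'):

>>> test_image3 = Image.new('I', (300, 300))
>>> test_image3_pix = test_image3.load()
>>> test_image3_pix[3, 3] = 5000
>>> test_image3_pix[3, 3]
5000
>>>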
Is there a bug I'm unaware of in pixel assignment for I;16 images? Or
is there a specific conversion I should apply to the integers before
assigning them in I;16 mode?
If it is a bug, what is the workaround for creating a 16-bit TIFF?
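In case it helps, the workaround I was going to try next is packing the
raw 16-bit buffer myself and handing it to fromstring() with the raw
decoder. This is untested on my end, and it assumes the raw codec and
the TIFF writer both handle I;16 (little-endian 16-bit, as far as I can
tell):

>>> import Image, struct
>>> width, height = 300, 300
>>> values = [0] * (width * height)
>>> # set pixel (3, 3) in the flat, row-major list
>>> values[3 * width + 3] = 5000
>>> # pack as little-endian unsigned 16-bit to match I;16
>>> data = struct.pack('<%dH' % len(values), *values)
>>> im = Image.fromstring('I;16', (width, height), data, 'raw', 'I;16')
>>> im.save('test.tif')

If anyone can confirm whether that's a reasonable route, or whether
there's a proper fix, I'd appreciate it.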
Thanks,
Jason Rodriguez
Virginia Beach, VA