Script 'mail_helper' called by obssrc
Hello community,
here is the log from the commit of package python-tifffile for openSUSE:Factory
checked in at 2021-02-10 21:30:25
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-tifffile (Old)
and /work/SRC/openSUSE:Factory/.python-tifffile.new.28504 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "python-tifffile"
Wed Feb 10 21:30:25 2021 rev:7 rq:870287 version:2021.1.14
Changes:
--------
--- /work/SRC/openSUSE:Factory/python-tifffile/python-tifffile.changes 2020-12-23 14:19:59.185651580 +0100
+++ /work/SRC/openSUSE:Factory/.python-tifffile.new.28504/python-tifffile.changes 2021-02-10 21:30:34.638300972 +0100
@@ -1,0 +2,29 @@
+Mon Feb 8 14:28:00 UTC 2021 - Markéta Machová <[email protected]>
+
+- Skip python36 because of imagecodecs
+
+-------------------------------------------------------------------
+Sun Jan 31 12:08:11 UTC 2021 - andy great <[email protected]>
+
+- Add missing python_alternative.
+
+-------------------------------------------------------------------
+Mon Jan 25 09:05:41 UTC 2021 - andy great <[email protected]>
+
+- Update to version 2021.1.14.
+ * Try ImageJ series if OME series fails
+ * Add option to use pages as chunks in ZarrFileStore (experimental).
+ * Fix reading from file objects with no readinto function.
+- Updates for version 2021.1.11
+ * Fix test errors on PyPy.
+ * Fix decoding bitorder with imagecodecs >= 2021.1.11.
+- Updates for version 2021.1.8
+ * Decode float24 using imagecodecs >= 2021.1.8.
+ * Consolidate reading of segments if possible.
+
+-------------------------------------------------------------------
+Mon Dec 28 19:54:11 UTC 2020 - andy great <[email protected]>
+
+- Add zarr dependency.
+
+-------------------------------------------------------------------
Old:
----
tifffile-2020.12.8.tar.gz
New:
----
tifffile-2021.1.14.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ python-tifffile.spec ++++++
--- /var/tmp/diff_new_pack.1emgTR/_old 2021-02-10 21:30:35.250301857 +0100
+++ /var/tmp/diff_new_pack.1emgTR/_new 2021-02-10 21:30:35.250301857 +0100
@@ -1,7 +1,7 @@
#
# spec file for package python-tifffile
#
-# Copyright (c) 2020 SUSE LLC
+# Copyright (c) 2021 SUSE LLC
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
@@ -18,9 +18,10 @@
%define packagename tifffile
%define skip_python2 1
+%define skip_python36 1
%{?!python_module:%define python_module() python-%{**} python3-%{**}}
Name: python-tifffile
-Version: 2020.12.8
+Version: 2021.1.14
Release: 0
Summary: Read and write TIFF(r) files
License: BSD-2-Clause
@@ -32,12 +33,16 @@
BuildRequires: %{python_module numpy >= 1.15.1}
BuildRequires: %{python_module pytest}
BuildRequires: %{python_module setuptools}
+BuildRequires: %{python_module zarr >= 2.5.0}
BuildRequires: fdupes
BuildRequires: python-rpm-macros
Requires: python-imagecodecs >= 2020.5.30
Requires: python-lxml
Requires: python-matplotlib >= 3.2
Requires: python-numpy >= 1.15.1
+Requires: python-zarr >= 2.5.0
+Requires(post): update-alternatives
+Requires(postun): update-alternatives
BuildArch: noarch
%python_subpackages
++++++ _constraints ++++++
--- /var/tmp/diff_new_pack.1emgTR/_old 2021-02-10 21:30:35.282301904 +0100
+++ /var/tmp/diff_new_pack.1emgTR/_new 2021-02-10 21:30:35.282301904 +0100
@@ -7,4 +7,4 @@
<size unit="G">5</size>
</physicalmemory>
</hardware>
-</constraints>
+</constraints>
++++++ tifffile-2020.12.8.tar.gz -> tifffile-2021.1.14.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/CHANGES.rst new/tifffile-2021.1.14/CHANGES.rst
--- old/tifffile-2020.12.8/CHANGES.rst 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/CHANGES.rst 2021-01-16 04:01:59.000000000 +0100
@@ -1,7 +1,17 @@
Revisions
---------
+2021.1.14
+ Pass 4378 tests.
+ Try ImageJ series if OME series fails (#54)
+ Add option to use pages as chunks in ZarrFileStore (experimental).
+ Fix reading from file objects with no readinto function.
+2021.1.11
+ Fix test errors on PyPy.
+ Fix decoding bitorder with imagecodecs >= 2021.1.11.
+2021.1.8
+ Decode float24 using imagecodecs >= 2021.1.8.
+ Consolidate reading of segments if possible.
2020.12.8
- Pass 4376 tests.
Fix corrupted ImageDescription in multi shaped series if buffer too small.
Fix libtiff warning that ImageDescription contains null byte in value.
Fix reading invalid files using JPEG compression with palette colorspace.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/LICENSE new/tifffile-2021.1.14/LICENSE
--- old/tifffile-2020.12.8/LICENSE 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/LICENSE 2021-01-16 04:01:59.000000000 +0100
@@ -1,6 +1,6 @@
BSD 3-Clause License
-Copyright (c) 2008-2020, Christoph Gohlke
+Copyright (c) 2008-2021, Christoph Gohlke
All rights reserved.
Redistribution and use in source and binary forms, with or without
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/README.rst new/tifffile-2021.1.14/README.rst
--- old/tifffile-2020.12.8/README.rst 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/README.rst 2021-01-16 04:01:59.000000000 +0100
@@ -41,16 +41,16 @@
:License: BSD 3-Clause
-:Version: 2020.12.8
+:Version: 2021.1.14
Requirements
------------
This release has been tested with the following requirements and dependencies
(other versions may work):
-* `CPython 3.7.9, 3.8.6, 3.9.1 64-bit <https://www.python.org>`_
-* `Numpy 1.19.4 <https://pypi.org/project/numpy/>`_
-* `Imagecodecs 2020.5.30 <https://pypi.org/project/imagecodecs/>`_
+* `CPython 3.7.9, 3.8.7, 3.9.1 64-bit <https://www.python.org>`_
+* `Numpy 1.19.5 <https://pypi.org/project/numpy/>`_
+* `Imagecodecs 2021.1.11 <https://pypi.org/project/imagecodecs/>`_
(required only for encoding or decoding LZW, JPEG, etc.)
* `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
(required only for plotting)
@@ -61,8 +61,18 @@
Revisions
---------
+2021.1.14
+ Pass 4378 tests.
+ Try ImageJ series if OME series fails (#54)
+ Add option to use pages as chunks in ZarrFileStore (experimental).
+ Fix reading from file objects with no readinto function.
+2021.1.11
+ Fix test errors on PyPy.
+ Fix decoding bitorder with imagecodecs >= 2021.1.11.
+2021.1.8
+ Decode float24 using imagecodecs >= 2021.1.8.
+ Consolidate reading of segments if possible.
2020.12.8
- Pass 4376 tests.
Fix corrupted ImageDescription in multi shaped series if buffer too small.
Fix libtiff warning that ImageDescription contains null byte in value.
Fix reading invalid files using JPEG compression with palette colorspace.
@@ -417,6 +427,11 @@
... for tag in page.tags:
... tag_name, tag_value = tag.name, tag.value
+Overwrite the value of an existing tag, e.g. XResolution:
+
+>>> with TiffFile('temp.tif', mode='r+b') as tif:
+... _ = tif.pages[0].tags['XResolution'].overwrite(tif, (96000, 1000))
+
Write a floating-point ndarray and metadata using BigTIFF format, tiling,
compression, and planar storage:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/setup.py new/tifffile-2021.1.14/setup.py
--- old/tifffile-2020.12.8/setup.py 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/setup.py 2021-01-16 04:01:59.000000000 +0100
@@ -82,11 +82,11 @@
python_requires='>=3.7',
install_requires=[
'numpy>=1.15.1',
- # 'imagecodecs>=2020.5.30',
+ # 'imagecodecs>=2021.1.11',
],
extras_require={
'all': [
- 'imagecodecs>=2020.5.30',
+ 'imagecodecs>=2021.1.11',
'matplotlib>=3.2',
'lxml',
# 'zarr>=2.5.0'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/tests/conftest.py new/tifffile-2021.1.14/tests/conftest.py
--- old/tifffile-2020.12.8/tests/conftest.py 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/tests/conftest.py 2021-01-16 04:01:59.000000000 +0100
@@ -3,6 +3,11 @@
import os
import sys
+if os.environ.get('VSCODE_CWD'):
+ # work around pytest not using PYTHONPATH in VSCode
+ sys.path.insert(
+ 0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
+ )
if os.environ.get('SKIP_CODECS', None):
sys.modules['imagecodecs'] = None
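The `SKIP_CODECS` line above relies on a standard CPython import-system behavior: assigning `None` to a `sys.modules` entry makes any subsequent import of that name raise `ImportError`, so the test suite sees imagecodecs as unavailable. A standalone illustration of the mechanism:

```python
import sys

# simulate SKIP_CODECS=1: block all later imports of 'imagecodecs'
sys.modules['imagecodecs'] = None

try:
    import imagecodecs
except ImportError:
    print('imagecodecs import blocked')
```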
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/tests/test_tifffile.py new/tifffile-2021.1.14/tests/test_tifffile.py
--- old/tifffile-2020.12.8/tests/test_tifffile.py 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/tests/test_tifffile.py 2021-01-16 04:01:59.000000000 +0100
@@ -1,6 +1,6 @@
# test_tifffile.py
-# Copyright (c) 2008-2020, Christoph Gohlke
+# Copyright (c) 2008-2021, Christoph Gohlke
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -42,7 +42,7 @@
:License: BSD 3-Clause
-:Version: 2020.12.8
+:Version: 2021.1.14
"""
@@ -51,6 +51,7 @@
import glob
import json
import math
+import mmap
import os
import pathlib
import random
@@ -181,8 +182,9 @@
SKIP_VALIDATE = True # skip validate written files with jhove
SKIP_CODECS = False
SKIP_ZARR = False
+SKIP_PYPY = 'PyPy' in sys.version
SKIP_BE = sys.byteorder == 'big'
-REASON = 'just skip it'
+REASON = 'skipped'
if sys.maxsize < 2 ** 32:
SKIP_LARGE = True
@@ -329,13 +331,13 @@
assert image.reshape(page.shaped)[index] == strile[0, 0, 0, 0]
-def assert_aszarr_method(obj, image=None, **kwargs):
+def assert_aszarr_method(obj, image=None, chunkmode=None, **kwargs):
"""Assert aszarr returns same data as asarray."""
if SKIP_ZARR:
return
if image is None:
image = obj.asarray(**kwargs)
- with obj.aszarr(**kwargs) as store:
+ with obj.aszarr(chunkmode=chunkmode, **kwargs) as store:
data = zarr.open(store, mode='r')
if isinstance(data, zarr.Group):
data = data[0]
@@ -349,7 +351,9 @@
def __init__(self, name=None, ext='.tif', remove=False):
self.remove = remove or TEMP_DIR == tempfile.gettempdir()
if not name:
- self.name = tempfile.NamedTemporaryFile(prefix='test_').name
+ fh = tempfile.NamedTemporaryFile(prefix='test_')
+ self.name = fh.name
+ fh.close()
else:
self.name = os.path.join(TEMP_DIR, f'test_{name}{ext}')
@@ -396,7 +400,7 @@
def test_issue_imread_kwargs():
"""Test that is_flags are handled by imread."""
data = random_data('uint16', (5, 63, 95))
- with TempFileName(f'issue_imread_kwargs') as fname:
+ with TempFileName('issue_imread_kwargs') as fname:
with TiffWriter(fname) as tif:
for image in data:
tif.write(image) # create 5 series
@@ -1041,6 +1045,40 @@
assert_array_equal(page.asarray(), extrasample)
[email protected](SKIP_PRIVATE, reason=REASON)
+def test_issue_mmap():
+ """Test reading from mmap object with no readinto function."""
+ fname = public_file('OME/bioformats-artificial/4D-series.ome.tiff')
+ with open(fname, 'rb') as fh:
+ mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)
+ assert_array_equal(imread(mm), imread(fname))
+ mm.close()
+
+
[email protected](SKIP_PRIVATE, reason=REASON)
+def test_issue_micromanager(caplog):
+ """Test fallback to ImageJ metadata if OME series fails."""
+ # https://github.com/cgohlke/tifffile/issues/54
+ # https://forum.image.sc/t/47567/9
+ # OME-XML does not contain reference to master file
+ # file has corrupt MicroManager DisplaySettings metadata
+ fname = private_file(
+ 'OME/'
+ 'image_stack_tpzc_50tp_2p_5z_3c_512k_1_MMStack_2-Pos001_000.ome.tif'
+ )
+ with TiffFile(fname) as tif:
+ assert len(tif.pages) == 750
+ assert len(tif.series) == 1
+ assert 'OME series: not an ome-tiff master file' in caplog.text
+ assert tif.is_micromanager
+ assert tif.is_ome
+ assert tif.is_imagej
+ assert tif.micromanager_metadata['DisplaySettings'] is None
+ assert 'read_json: invalid JSON' in caplog.text
+ series = tif.series[0]
+ assert series.shape == (50, 5, 3, 256, 256)
+
+
###############################################################################
# Test specific functions and classes
@@ -1306,13 +1344,13 @@
def test_class_tifftagregistry():
"""Test TiffTagRegistry."""
tags = TIFF.TAGS
- assert len(tags) == 620
+ assert len(tags) == 624
assert tags[11] == 'ProcessingSoftware'
assert tags['ProcessingSoftware'] == 11
assert tags.getall(11) == ['ProcessingSoftware']
assert tags.getall('ProcessingSoftware') == [11]
tags.add(11, 'ProcessingSoftware')
- assert len(tags) == 620
+ assert len(tags) == 624
# one code with two names
assert 34853 in tags
@@ -1325,7 +1363,7 @@
assert tags.getall('GPSTag') == [34853]
del tags[34853]
- assert len(tags) == 618
+ assert len(tags) == 622
assert 34853 not in tags
assert 'GPSTag' not in tags
assert 'OlympusSIS2' not in tags
@@ -1351,7 +1389,7 @@
assert tags.getall(41483) == ['FlashEnergy']
del tags['FlashEnergy']
- assert len(tags) == 618
+ assert len(tags) == 622
assert 37387 not in tags
assert 41483 not in tags
assert 'FlashEnergy' not in tags
@@ -1612,13 +1650,15 @@
omexml = OmeXml(**metadata)
omexml.addimage('uint16', (3, 32, 32, 3), (3, 1, 1, 32, 32, 3), **metadata)
+ xml = omexml.tostring()
+ assert uuid in xml
+ assert 'SignificantBits="12"' in xml
+ assert 'SamplesPerPixel="3" Name="ChannelName"' in xml
+ assert 'TheC="0" TheZ="2" TheT="0" PositionZ="4.0"' in xml
+ if SKIP_PYPY:
+ pytest.xfail('lxml bug?')
+ assert_valid_omexml(xml)
assert '\n ' in str(omexml)
- omexml = omexml.tostring()
- assert_valid_omexml(omexml)
- assert uuid in omexml
- assert 'SignificantBits="12"' in omexml
- assert 'SamplesPerPixel="3" Name="ChannelName"' in omexml
- assert 'TheC="0" TheZ="2" TheT="0" PositionZ="4.0"' in omexml
def test_class_omexml_multiimage():
@@ -2581,7 +2621,7 @@
"""Test create_output function in context of asarray."""
data = random_data('uint16', (5, 219, 301))
- with TempFileName('out') as fname:
+ with TempFileName(f'out_{key}_{out}') as fname:
imwrite(fname, data)
# assert file
with TiffFile(fname) as tif:
@@ -2631,7 +2671,9 @@
del image
elif out == 'name':
# memmap in specified file
- with TempFileName('out', ext='.memmap') as fileout:
+ with TempFileName(
+ f'out_{key}_{out}', ext='.memmap'
+ ) as fileout:
image = obj.asarray(out=fileout)
assert isinstance(image, numpy.core.memmap)
assert_array_equal(dat, image)
@@ -2947,10 +2989,10 @@
assert page.photometric == MINISBLACK
# float24 not supported
- if 'float' in fname and databits == 24:
- with pytest.raises(ValueError):
- data = tif.asarray()
- return
+ # if 'float' in fname and databits == 24:
+ # with pytest.raises(ValueError):
+ # data = tif.asarray()
+ # return
# assert data shapes
data = tif.asarray()
@@ -2974,7 +3016,10 @@
# assert data types
if 'float' in fname:
- dtype = f'float{databits}'
+ if databits == 24:
+ dtype = 'float32'
+ else:
+ dtype = f'float{databits}'
# elif 'palette' in fname:
# dtype = 'uint16'
elif databits == 1:
@@ -6610,6 +6655,7 @@
assert__str__(tif)
# test aszarr
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
del data
# assert other files are still closed after ZarrFileStore.close
for page in tif.series[0].pages:
@@ -6661,6 +6707,7 @@
assert data.dtype.name == 'uint8'
assert data[1, 42, 9, 426, 272] == 123
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
del data
assert__str__(tif)
@@ -6697,6 +6744,7 @@
assert data.dtype.name == 'uint8'
assert data[1, 158, 428] == 253
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
assert__str__(tif)
@@ -6730,6 +6778,7 @@
assert data.dtype.name == 'uint8'
assert tuple(data[5, :, 191, 449]) == (253, 0, 28)
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
assert__str__(tif)
@@ -6765,6 +6814,7 @@
assert data.dtype.name == 'uint16'
assert data[1, 158, 428] == 51
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
assert__str__(tif)
@@ -6800,6 +6850,7 @@
assert data.dtype.name == 'uint16'
assert data[4, 9, 1, 175, 123] == 9605
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
del data
assert__str__(tif)
@@ -6833,6 +6884,7 @@
assert data.dtype.name == 'uint16'
assert tuple(data[:, 2684, 2684]) == (496, 657, 7106, 469)
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
del data
assert__str__(tif)
@@ -6943,6 +6995,7 @@
assert data.dtype.name == 'uint16'
assert data[0, 0] == 1904
assert_aszarr_method(page, data)
+ assert_aszarr_method(page, data, chunkmode='page')
assert__str__(tif)
@@ -6980,6 +7033,7 @@
assert data.dtype.name == 'uint16'
assert round(abs(data[50, 256, 256] - 703), 7) == 0
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
assert__str__(tif, 0)
@@ -7016,6 +7070,7 @@
assert data.dtype.name == 'uint8'
assert data[195, 144] == 41
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
assert__str__(tif)
@@ -7090,6 +7145,7 @@
assert data.dtype.name == 'uint8'
assert data[35, 35, 65] == 171
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
assert__str__(tif)
@@ -7132,6 +7188,7 @@
assert tuple(data[:, 15, 15]) == (812, 1755, 648)
assert_decode_method(page)
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif, 0)
@@ -7170,6 +7227,7 @@
assert data.dtype.name == 'uint8'
assert data[102, 216, 212] == 120
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif, 0)
@@ -7205,6 +7263,7 @@
assert data.dtype.name == 'uint16'
assert tuple(data[255, 336]) == (440, 378, 298)
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
@@ -7239,6 +7298,7 @@
assert data.dtype.name == 'uint8'
assert tuple(data[18, 108, 97]) == (165, 157, 0)
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
@@ -7286,6 +7346,7 @@
assert tif.pages.pages[2] == 8001073
assert tif.pages.pages[-1] == 8008687
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
@@ -7513,6 +7574,7 @@
assert data.dtype.name == 'uint16'
assert data[94, 34] == 1257
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
del data
@@ -7587,6 +7649,7 @@
assert data.dtype.name == 'uint16'
assert round(abs(data[1, 36, 128, 128] - 824), 7) == 0
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
@@ -7655,6 +7718,7 @@
assert data.dtype.name == 'uint16'
assert data[256, 256] == 1917
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
@@ -7695,6 +7759,7 @@
assert round(abs(data[512, 2856] - 4095), 7) == 0
if not SKIP_LARGE:
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
@@ -7748,6 +7813,7 @@
assert data.dtype.name == 'float32'
assert round(abs(data[260, 740] - 399.1728515625), 7) == 0
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
@@ -7782,6 +7848,7 @@
assert data.dtype.name == 'uint8'
assert round(abs(data[120, 34] - 4), 7) == 0
assert_aszarr_method(series, data)
+ assert_aszarr_method(series, data, chunkmode='page')
assert__str__(tif)
@@ -7807,6 +7874,7 @@
assert attr['Tau'] == 1.991e-07
assert attr['Silicon'] == 0.000320
assert_aszarr_method(page)
+ assert_aszarr_method(page, chunkmode='page')
assert__str__(tif)
@@ -7863,6 +7931,7 @@
tags['epicsTSSec'], tags['epicsTSNsec']
) == datetime.datetime(2015, 6, 2, 11, 31, 56, 103746)
assert_aszarr_method(page)
+ assert_aszarr_method(page, chunkmode='page')
assert__str__(tif)
@@ -7992,6 +8061,7 @@
assert image.dtype == 'uint8'
assert image[300, 400, 1] == 48
assert_aszarr_method(tif, image, series=1)
+ assert_aszarr_method(tif, image, series=1, chunkmode='page')
assert__str__(tif)
@@ -8025,6 +8095,7 @@
assert image.shape == (2789, 2677, 3)
assert image[300, 400, 1] == 206
assert_aszarr_method(series, image, level=5)
+ assert_aszarr_method(series, image, level=5, chunkmode='page')
assert__str__(tif)
@@ -8095,6 +8166,7 @@
assert sis['name'] == 'Hela-Zellen'
assert sis['magnification'] == 60.0
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
assert__str__(tif)
@@ -8864,6 +8936,7 @@
data = tif.asarray()
assert data.shape == shape
assert_aszarr_method(tif, data)
+ assert_aszarr_method(tif, data, chunkmode='page')
assert__str__(tif)
@@ -8888,6 +8961,7 @@
assert page.is_shaped
assert page.description == descr
assert_aszarr_method(page)
+ assert_aszarr_method(page, chunkmode='page')
assert__str__(tif)
@@ -9710,6 +9784,7 @@
image = tif.asarray()
assert_array_equal(data, image)
assert_aszarr_method(tif, image)
+ assert_aszarr_method(tif, image, chunkmode='page')
assert__str__(tif)
@@ -9754,6 +9829,7 @@
image = tif.asarray()
assert_array_equal(data, image)
assert_aszarr_method(tif, image)
+ assert_aszarr_method(tif, image, chunkmode='page')
assert__str__(tif)
@@ -10927,6 +11003,7 @@
image = tif.asarray()
assert_array_equal(data, image)
assert_aszarr_method(tif, image)
+ assert_aszarr_method(tif, image, chunkmode='page')
assert__str__(tif)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/tifffile/tiffcomment.py new/tifffile-2021.1.14/tifffile/tiffcomment.py
--- old/tifffile-2020.12.8/tifffile/tiffcomment.py 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/tifffile/tiffcomment.py 2021-01-16 04:01:59.000000000 +0100
@@ -45,7 +45,7 @@
try:
comment = comment.encode('ascii')
except UnicodeEncodeError as exc:
- print(f'{file}: {exc}')
+ print(f'{exc}')
comment = comment.encode()
for file in files:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/tifffile/tifffile.py new/tifffile-2021.1.14/tifffile/tifffile.py
--- old/tifffile-2020.12.8/tifffile/tifffile.py 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/tifffile/tifffile.py 2021-01-16 04:01:59.000000000 +0100
@@ -1,6 +1,6 @@
# tifffile.py
-# Copyright (c) 2008-2020, Christoph Gohlke
+# Copyright (c) 2008-2021, Christoph Gohlke
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -71,16 +71,16 @@
:License: BSD 3-Clause
-:Version: 2020.12.8
+:Version: 2021.1.14
Requirements
------------
This release has been tested with the following requirements and dependencies
(other versions may work):
-* `CPython 3.7.9, 3.8.6, 3.9.1 64-bit <https://www.python.org>`_
-* `Numpy 1.19.4 <https://pypi.org/project/numpy/>`_
-* `Imagecodecs 2020.5.30 <https://pypi.org/project/imagecodecs/>`_
+* `CPython 3.7.9, 3.8.7, 3.9.1 64-bit <https://www.python.org>`_
+* `Numpy 1.19.5 <https://pypi.org/project/numpy/>`_
+* `Imagecodecs 2021.1.11 <https://pypi.org/project/imagecodecs/>`_
(required only for encoding or decoding LZW, JPEG, etc.)
* `Matplotlib 3.3.3 <https://pypi.org/project/matplotlib/>`_
(required only for plotting)
@@ -91,8 +91,18 @@
Revisions
---------
+2021.1.14
+ Pass 4378 tests.
+ Try ImageJ series if OME series fails (#54)
+ Add option to use pages as chunks in ZarrFileStore (experimental).
+ Fix reading from file objects with no readinto function.
+2021.1.11
+ Fix test errors on PyPy.
+ Fix decoding bitorder with imagecodecs >= 2021.1.11.
+2021.1.8
+ Decode float24 using imagecodecs >= 2021.1.8.
+ Consolidate reading of segments if possible.
2020.12.8
- Pass 4376 tests.
Fix corrupted ImageDescription in multi shaped series if buffer too small.
Fix libtiff warning that ImageDescription contains null byte in value.
Fix reading invalid files using JPEG compression with palette colorspace.
@@ -447,6 +457,11 @@
... for tag in page.tags:
... tag_name, tag_value = tag.name, tag.value
+Overwrite the value of an existing tag, e.g. XResolution:
+
+>>> with TiffFile('temp.tif', mode='r+b') as tif:
+... _ = tif.pages[0].tags['XResolution'].overwrite(tif, (96000, 1000))
+
Write a floating-point ndarray and metadata using BigTIFF format, tiling,
compression, and planar storage:
@@ -606,7 +621,7 @@
"""
-__version__ = '2020.12.8'
+__version__ = '2021.1.14'
__all__ = (
'imwrite',
@@ -3076,12 +3091,12 @@
result.shape = (-1,) + pages[0].shape
return result
- def aszarr(self, key=None, series=None, level=None):
+ def aszarr(self, key=None, series=None, level=None, **kwargs):
"""Return image data from selected TIFF page(s) as zarr storage."""
if not self.pages:
raise NotImplementedError('empty zarr arrays not supported')
if key is None and series is None:
- return self.series[0].aszarr(level=level)
+ return self.series[0].aszarr(level=level, **kwargs)
if series is None:
pages = self.pages
else:
@@ -3090,10 +3105,10 @@
except (KeyError, TypeError):
pass
if key is None:
- return series.aszarr(level=level)
+ return series.aszarr(level=level, **kwargs)
pages = series.pages
if isinstance(key, (int, numpy.integer)):
- return pages[key].aszarr()
+ return pages[key].aszarr(**kwargs)
raise TypeError('key must be an integer index')
@lazyattr
@@ -3128,6 +3143,12 @@
):
if getattr(self, 'is_' + name, False):
series = getattr(self, '_series_' + name)()
+ if not series and name == 'ome' and self.is_imagej:
+ # try ImageJ series if OME series fails.
+ # clear pages cache since _series_ome() might leave some
+ # frames without keyframe
+ self.pages._clear()
+ continue
break
self.pages.useframes = useframes
self.pages.keyframe = keyframe
@@ -5915,6 +5936,12 @@
# return numpy array from packed integers
return unpack_rgb(data, dtype, self.bitspersample)
+ elif self.bitspersample == 24 and dtype.char == 'f':
+ # float24
+ def unpack(data, byteorder=self.parent.byteorder):
+ # return numpy.float32 array from float24
+ return float24_decode(data, byteorder)
+
else:
# bilevel and packed integers
def unpack(data):
@@ -5931,7 +5958,7 @@
data, shape = pad(data, shape)
return data, index, shape
if self.fillorder == 2:
- data = bitorder_decode(data, out=data)
+ data = bitorder_decode(data)
if decompress is not None:
# TODO: calculate correct size for packed integers
size = shape[0] * shape[1] * shape[2] * shape[3]
@@ -7858,10 +7885,14 @@
"""
- def __init__(self, fillvalue=None):
+ def __init__(self, fillvalue=None, chunkmode=None):
"""Initialize ZarrStore."""
self._store = {}
self._fillvalue = 0 if fillvalue is None else fillvalue
+ if chunkmode is None:
+ self._chunkmode = TIFF.CHUNKMODE(0)
+ else:
+ self._chunkmode = enumarg(TIFF.CHUNKMODE, chunkmode)
def __enter__(self):
return self
@@ -7983,10 +8014,19 @@
"""Zarr storage interface to image data in TiffPage or TiffPageSeries."""
def __init__(
- self, arg, level=None, fillvalue=None, lock=None, _openfiles=None
+ self,
+ arg,
+ level=None,
+ chunkmode=None,
+ fillvalue=None,
+ lock=None,
+ _openfiles=None,
):
"""Initialize Zarr storage from TiffPage or TiffPageSeries."""
- super().__init__(fillvalue=fillvalue)
+ super().__init__(fillvalue=fillvalue, chunkmode=chunkmode)
+
+ if self._chunkmode not in (0, 2):
+ raise NotImplementedError(f'{self._chunkmode!r} not implemented')
if lock is None:
lock = threading.RLock()
@@ -8019,7 +8059,10 @@
for level, series in enumerate(self._data):
shape = series.shape
dtype = series.dtype
- chunks = series.keyframe.chunks
+ if self._chunkmode:
+ chunks = series.keyframe.shape
+ else:
+ chunks = series.keyframe.chunks
self._store[f'{level}/.zarray'] = ZarrStore._json(
{
'zarr_format': 2,
@@ -8036,7 +8079,10 @@
series = self._data[0]
shape = series.shape
dtype = series.dtype
- chunks = series.keyframe.chunks
+ if self._chunkmode:
+ chunks = series.keyframe.shape
+ else:
+ chunks = series.keyframe.chunks
self._store['.zattrs'] = ZarrStore._json({})
self._store['.zarray'] = ZarrStore._json(
{
@@ -8059,10 +8105,24 @@
"""Return chunk from file."""
keyframe, page, chunkindex, offset, bytecount = self._parse_key(key)
+ if self._chunkmode:
+ chunks = keyframe.shape
+ else:
+ chunks = keyframe.chunks
+
if page is None or offset == 0 or bytecount == 0:
- return ZarrStore._empty_chunk(
- keyframe.chunks, keyframe.dtype, self._fillvalue
+ chunk = ZarrStore._empty_chunk(
+ chunks, keyframe.dtype, self._fillvalue
)
+ if self._transform is not None:
+ chunk = self._transform(chunk)
+ return chunk
+
+ if self._chunkmode and offset is None:
+ chunk = page.asarray(lock=self._filecache.lock) # maxworkers=1 ?
+ if self._transform is not None:
+ chunk = self._transform(chunk)
+ return chunk
chunk = self._filecache.read(page.parent.filehandle, offset, bytecount)
@@ -8074,7 +8134,7 @@
if self._transform is not None:
chunk = self._transform(chunk)
- if chunk.size != product(keyframe.chunks):
+ if chunk.size != product(chunks):
raise RuntimeError
return chunk # .tobytes()
@@ -8087,7 +8147,7 @@
else:
series = self._data[0]
keyframe = series.keyframe
- pageindex, chunkindex = ZarrTiffStore._indices(key, series)
+ pageindex, chunkindex = self._indices(key, series)
if pageindex > 0 and len(series) == 1:
# truncated ImageJ, STK, or shaped
if series.offset is None:
@@ -8097,6 +8157,14 @@
return keyframe, None, chunkindex, 0, 0
offset = pageindex * page.size * page.dtype.itemsize
offset += page.dataoffsets[chunkindex]
+ if self._chunkmode:
+ bytecount = page.size * page.dtype.itemsize
+ return keyframe, page, chunkindex, offset, bytecount
+ elif self._chunkmode:
+ page = series[pageindex]
+ if page is None:
+ return keyframe, None, None, 0, 0
+ return keyframe, page, None, None, None
else:
page = series[pageindex]
if page is None:
@@ -8105,48 +8173,15 @@
bytecount = page.databytecounts[chunkindex]
return keyframe, page, chunkindex, offset, bytecount
- @staticmethod
- def _chunks(chunks, shape):
- """Return chunks with same length as shape."""
- ndim = len(shape)
- if ndim == 0:
- return () # empty array
- if 0 in shape:
- return (1,) * ndim
- newchunks = []
- i = ndim - 1
- j = len(chunks) - 1
- while True:
- if j < 0:
- newchunks.append(1)
- i -= 1
- elif shape[i] > 1 and chunks[j] > 1:
- newchunks.append(chunks[j])
- i -= 1
- j -= 1
- elif shape[i] == chunks[j]: # both 1
- newchunks.append(1)
- i -= 1
- j -= 1
- elif shape[i] == 1:
- newchunks.append(1)
- i -= 1
- elif chunks[j] == 1:
- newchunks.append(1)
- j -= 1
- else:
- raise RuntimeError
- if i < 0 or ndim == len(newchunks):
- break
- # assert ndim == len(newchunks)
- return tuple(newchunks[::-1])
-
- @staticmethod
- def _indices(key, series):
+ def _indices(self, key, series):
"""Return page and strile indices from zarr chunk index."""
keyframe = series.keyframe
indices = [int(i) for i in key.split('.')]
assert len(indices) == len(series.shape)
+ if self._chunkmode:
+ chunked = (1,) * len(keyframe.shape)
+ else:
+ chunked = keyframe.chunked
p = 1
for i, s in enumerate(series.shape[::-1]):
p *= s
@@ -8160,14 +8195,14 @@
else:
raise RuntimeError
if len(strile_chunked) == len(keyframe.shape):
- strile_chunked = keyframe.chunked
+ strile_chunked = chunked
else:
# get strile_chunked including singleton dimensions
i = len(strile_indices) - 1
j = len(keyframe.shape) - 1
while True:
if strile_chunked[i] == keyframe.shape[j]:
- strile_chunked[i] = keyframe.chunked[j]
+ strile_chunked[i] = chunked[j]
i -= 1
j -= 1
elif strile_chunked[i] == 1:
@@ -8176,7 +8211,7 @@
raise RuntimeError('shape does not match page shape')
if i < 0 or j < 0:
break
- assert product(strile_chunked) == product(keyframe.chunked)
+ assert product(strile_chunked) == product(chunked)
if len(frames_indices) > 0:
frameindex = int(
numpy.ravel_multi_index(frames_indices, frames_chunked)
@@ -8191,13 +8226,52 @@
strileindex = 0
return frameindex, strileindex
+ @staticmethod
+ def _chunks(chunks, shape):
+ """Return chunks with same length as shape."""
+ ndim = len(shape)
+ if ndim == 0:
+ return () # empty array
+ if 0 in shape:
+ return (1,) * ndim
+ newchunks = []
+ i = ndim - 1
+ j = len(chunks) - 1
+ while True:
+ if j < 0:
+ newchunks.append(1)
+ i -= 1
+ elif shape[i] > 1 and chunks[j] > 1:
+ newchunks.append(chunks[j])
+ i -= 1
+ j -= 1
+ elif shape[i] == chunks[j]: # both 1
+ newchunks.append(1)
+ i -= 1
+ j -= 1
+ elif shape[i] == 1:
+ newchunks.append(1)
+ i -= 1
+ elif chunks[j] == 1:
+ newchunks.append(1)
+ j -= 1
+ else:
+ raise RuntimeError
+ if i < 0 or ndim == len(newchunks):
+ break
+ # assert ndim == len(newchunks)
+ return tuple(newchunks[::-1])
+
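The relocated `_chunks` helper above pads a chunk tuple with singleton entries so its length matches the array shape, matching axes from the right. A simplified standalone sketch of that idea (function name hypothetical, and without the full case analysis of the original):

```python
def normalize_chunks(chunks, shape):
    """Pad chunks with singleton entries so len(chunks) == len(shape).

    Axes are matched from the trailing dimension; singleton shape axes
    get chunk size 1. Simplified sketch of ZarrTiffStore._chunks.
    """
    out = []
    j = len(chunks) - 1
    for size in reversed(shape):
        if size == 1 or j < 0:
            out.append(1)          # singleton axis, or chunks exhausted
        else:
            out.append(chunks[j])  # keep the matching chunk size
            j -= 1
    return tuple(reversed(out))

print(normalize_chunks((256, 256), (5, 1, 512, 512)))  # (1, 1, 256, 256)
```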
class ZarrFileStore(ZarrStore):
"""Zarr storage interface to image data in TiffSequence."""
- def __init__(self, arg, fillvalue=None, **kwargs):
+ def __init__(self, arg, fillvalue=None, chunkmode=None, **kwargs):
"""Initialize Zarr storage from FileSequence."""
- super().__init__(fillvalue=fillvalue)
+ super().__init__(fillvalue=fillvalue, chunkmode=chunkmode)
+
+ if self._chunkmode not in (0, 3):
+ raise NotImplementedError(f'{self._chunkmode!r} not implemented')
if not isinstance(arg, FileSequence):
raise TypeError('not a FileSequence')
@@ -8718,7 +8792,14 @@
if result.nbytes != nbytes:
raise ValueError('size mismatch')
- n = self._fh.readinto(result)
+ try:
+ n = self._fh.readinto(result)
+ except AttributeError:
+ result[:] = numpy.frombuffer(self._fh.read(nbytes), dtype).reshape(
+ result.shape
+ )
+ n = nbytes
+
if n != nbytes:
raise ValueError(f'failed to read {nbytes} bytes')
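The hunk above adds a fallback for file objects that expose `read` but not `readinto`. A minimal sketch of the same pattern, with a hypothetical wrapper class standing in for such a file object:

```python
import io
import numpy

def read_array(fh, dtype, count):
    """Read `count` items into a numpy array, falling back to read()
    plus frombuffer() when the file object has no readinto()."""
    result = numpy.empty(count, dtype)
    try:
        fh.readinto(result)
    except AttributeError:
        # copy from an intermediate bytes buffer instead
        result[:] = numpy.frombuffer(fh.read(result.nbytes), dtype)
    return result

class ReadOnly:
    """Hypothetical file-like object exposing only read()."""
    def __init__(self, data):
        self._raw = io.BytesIO(data)
    def read(self, n=-1):
        return self._raw.read(n)

data = numpy.arange(4, dtype='uint16').tobytes()
print(read_array(ReadOnly(data), 'uint16', 4))  # [0 1 2 3]
```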
@@ -8813,6 +8894,7 @@
Iterator over individual or lists of (segment, index) tuples.
"""
+ # TODO: Cythonize this?
length = len(offsets)
if length < 1:
return
@@ -8832,7 +8914,7 @@
if lock is None:
lock = self._lock
if buffersize is None:
- buffersize = 2 ** 26 # 64 MB
+ buffersize = 67108864 # 2 ** 26, 64 MB
if indices is None:
segments = [(i, offsets[i], bytecounts[i]) for i in range(length)]
@@ -8843,15 +8925,64 @@
if sort:
segments = sorted(segments, key=lambda x: x[1])
+ iscontig = True
+ for i in range(length - 1):
+ _, offset, bytecount = segments[i]
+ nextoffset = segments[i + 1][1]
+ if offset == 0 or bytecount == 0 or nextoffset == 0:
+ continue
+ if offset + bytecount != nextoffset:
+ iscontig = False
+ break
+
seek = self.seek
read = self._fh.read
+
+ if iscontig:
+ # consolidate reads
+ i = 0
+ while i < length:
+ j = i
+ offset = None
+ bytecount = 0
+ while bytecount < buffersize and i < length:
+ _, o, b = segments[i]
+ if o > 0 and b > 0:
+ if offset is None:
+ offset = o
+ bytecount += b
+ i += 1
+
+ if offset is None:
+ data = None
+ else:
+ with lock:
+ seek(offset)
+ data = read(bytecount)
+ start = 0
+ stop = 0
+ result = []
+ while j < i:
+ index, offset, bytecount = segments[j]
+ if offset > 0 and bytecount > 0:
+ stop += bytecount
+ result.append((data[start:stop], index))
+ start = stop
+ else:
+ result.append((None, index))
+ j += 1
+ if flat:
+ yield from result
+ else:
+ yield result
+ return
+
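The consolidation added above issues one large read for contiguous segments and slices the buffer afterwards, instead of one seek/read per segment. A standalone sketch of the slicing step under the assumption that all segments are contiguous (function name hypothetical):

```python
import io

def read_contiguous(fh, segments):
    """Read contiguous (offset, bytecount) segments with a single
    read() and slice the buffer, as in the consolidated path above."""
    offset = segments[0][0]
    total = sum(bytecount for _, bytecount in segments)
    fh.seek(offset)
    data = fh.read(total)
    out = []
    start = 0
    for _, bytecount in segments:
        out.append(data[start:start + bytecount])
        start += bytecount
    return out

fh = io.BytesIO(b'abcdefgh')
print(read_contiguous(fh, [(2, 3), (5, 2)]))  # [b'cde', b'fg']
```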
i = 0
while i < length:
result = []
size = 0
with lock:
while size < buffersize and i < length:
- # TODO: consolidate reads?
index, offset, bytecount = segments[i]
if offset > 0 and bytecount > 0:
seek(offset)
@@ -10158,7 +10289,8 @@
(34853, 'GPSTag'), # GPSIFD also OlympusSIS2
(34853, 'OlympusSIS2'),
(34855, 'ISOSpeedRatings'),
- (34856, 'OECF'),
+ (34855, 'PhotographicSensitivity'),
+ (34856, 'OECF'), # optoelectric conversion factor
(34857, 'Interlace'),
(34858, 'TimeZoneOffset'),
(34859, 'SelfTimerMode'),
@@ -10311,6 +10443,9 @@
(42035, 'LensMake'),
(42036, 'LensModel'),
(42037, 'LensSerialNumber'),
+ (42080, 'CompositeImage'),
+ (42081, 'SourceImageNumberCompositeImage'),
+ (42082, 'SourceExposureTimesCompositeImage'),
(42112, 'GDAL_METADATA'),
(42113, 'GDAL_NODATA'),
(42240, 'Gamma'),
@@ -10420,7 +10555,7 @@
(50909, 'GEO_METADATA'), # DGIWG XML
(50931, 'CameraCalibrationSignature'),
(50932, 'ProfileCalibrationSignature'),
- (50933, 'ProfileIFD'),
+ (50933, 'ProfileIFD'), # EXTRACAMERAPROFILES
(50934, 'AsShotProfileName'),
(50935, 'NoiseReductionApplied'),
(50936, 'ProfileName'),
@@ -10953,7 +11088,7 @@
(2, 64): 'q',
# IEEEFP
(3, 16): 'e',
- # (3, 24): '', # 24 bit not supported by numpy
+ (3, 24): 'f', # 24-bit float; not native to numpy, decoded to float32
(3, 32): 'f',
(3, 64): 'd',
# COMPLEXIEEEFP
@@ -12196,6 +12331,15 @@
return max(multiprocessing.cpu_count() // 2, 1)
+ def CHUNKMODE():
+ class CHUNKMODE(enum.IntEnum):
+ NONE = 0
+ PLANE = 1
+ PAGE = 2
+ FILE = 3
+
+ return CHUNKMODE
+
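The new CHUNKMODE enum uses `enum.IntEnum`, so its members compare equal to plain integers; that is why checks like `self._chunkmode not in (0, 3)` in ZarrFileStore accept either form. A small sketch reproducing the enum values from the diff:

```python
import enum

class CHUNKMODE(enum.IntEnum):
    """Mirror of the TIFF.CHUNKMODE values added in this diff."""
    NONE = 0
    PLANE = 1
    PAGE = 2
    FILE = 3

# IntEnum members are interchangeable with ints in comparisons
print(CHUNKMODE.PAGE == 2)             # True
print(CHUNKMODE(3) is CHUNKMODE.FILE)  # True
print(CHUNKMODE.PAGE in (0, 3))        # False
```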
def read_tags(
fh, byteorder, offsetsize, tagnames, customtags=None, maxifds=None
@@ -12380,6 +12524,7 @@
return json.loads(stripnull(data).decode())
except ValueError:
log_warning('read_json: invalid JSON')
+ return None
def read_mm_header(fh, byteorder, dtype, count, offsetsize):
@@ -13790,6 +13935,11 @@
return result.reshape(-1)
+def float24_decode(data, byteorder):
+ """Return float32 array from float24."""
+ raise NotImplementedError('float24_decode')
+
+
if imagecodecs is None:
import lzma
import zlib
@@ -13971,6 +14121,10 @@
bitorder_decode = imagecodecs.bitorder_decode # noqa
packints_decode = imagecodecs.packints_decode # noqa
packints_encode = imagecodecs.packints_encode # noqa
+ try:
+ float24_decode = imagecodecs.float24_decode # noqa
+ except AttributeError:
+ pass
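The try/except above keeps the pure-Python `float24_decode` stub when the installed imagecodecs predates 2021.1.8 and lacks the attribute. A sketch of that guarded-override pattern, using a hypothetical stand-in for an older imagecodecs module:

```python
def float24_decode(data, byteorder):
    """Fallback stub, as in the pure-Python path above."""
    raise NotImplementedError('float24_decode')

class FakeCodecs:
    """Hypothetical stand-in for an imagecodecs < 2021.1.8 module."""
    pass

imagecodecs = FakeCodecs()
try:
    # override the stub only if the accelerated decoder exists
    float24_decode = imagecodecs.float24_decode  # noqa
except AttributeError:
    pass  # keep the NotImplementedError fallback

print(callable(float24_decode))  # True
```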
def apply_colormap(image, colormap, contig=True):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/tifffile-2020.12.8/tifffile/tifffile_geodb.py new/tifffile-2021.1.14/tifffile/tifffile_geodb.py
--- old/tifffile-2020.12.8/tifffile/tifffile_geodb.py 2020-12-10 05:29:03.000000000 +0100
+++ new/tifffile-2021.1.14/tifffile/tifffile_geodb.py 2021-01-16 04:01:59.000000000 +0100
@@ -2005,24 +2005,6 @@
Caspian_Sea = 5106
-GEO_CODES = {
- 'GTModelTypeGeoKey': ModelType,
- 'GTRasterTypeGeoKey': RasterPixel,
- 'GeographicTypeGeoKey': GCS,
- 'GeogEllipsoidGeoKey': Ellipse,
- 'ProjectedCSTypeGeoKey': PCS,
- 'ProjectionGeoKey': Proj,
- 'VerticalCSTypeGeoKey': VertCS,
- # 'VerticalDatumGeoKey': VertCS,
- 'GeogLinearUnitsGeoKey': Linear,
- 'ProjLinearUnitsGeoKey': Linear,
- 'VerticalUnitsGeoKey': Linear,
- 'GeogAngularUnitsGeoKey': Angular,
- 'GeogAzimuthUnitsGeoKey': Angular,
- 'ProjCoordTransGeoKey': CT,
- 'GeogPrimeMeridianGeoKey': PM,
-}
-
GEO_KEYS = {
1024: 'GTModelTypeGeoKey',
1025: 'GTRasterTypeGeoKey',
@@ -2073,3 +2055,21 @@
4098: 'VerticalDatumGeoKey',
4099: 'VerticalUnitsGeoKey',
}
+
+GEO_CODES = {
+ GEO_KEYS[1024]: ModelType, # GTModelTypeGeoKey
+ GEO_KEYS[1025]: RasterPixel, # GTRasterTypeGeoKey
+ GEO_KEYS[2048]: GCS, # GeographicTypeGeoKey
+ GEO_KEYS[2051]: PM, # GeogPrimeMeridianGeoKey
+ GEO_KEYS[2052]: Linear, # GeogLinearUnitsGeoKey
+ GEO_KEYS[2054]: Angular, # GeogAngularUnitsGeoKey
+ GEO_KEYS[2056]: Ellipse, # GeogEllipsoidGeoKey
+ GEO_KEYS[2060]: Angular, # GeogAzimuthUnitsGeoKey
+ GEO_KEYS[3072]: PCS, # ProjectedCSTypeGeoKey
+ GEO_KEYS[3074]: Proj, # ProjectionGeoKey
+ GEO_KEYS[3075]: CT, # ProjCoordTransGeoKey
+ GEO_KEYS[3076]: Linear, # ProjLinearUnitsGeoKey
+ GEO_KEYS[4096]: VertCS, # VerticalCSTypeGeoKey
+ # GEO_KEYS[4098]: VertCS, # VerticalDatumGeoKey
+ GEO_KEYS[4099]: Linear, # VerticalUnitsGeoKey
+}