Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-dask for openSUSE:Factory 
checked in at 2021-03-12 13:33:11
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-dask (Old)
 and      /work/SRC/openSUSE:Factory/.python-dask.new.2401 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-dask"

Fri Mar 12 13:33:11 2021 rev:42 rq:877824 version:2021.3.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-dask/python-dask.changes  2021-02-19 23:45:54.935401644 +0100
+++ /work/SRC/openSUSE:Factory/.python-dask.new.2401/python-dask.changes  2021-03-12 13:33:13.694318136 +0100
@@ -1,0 +2,93 @@
+Sun Mar  7 16:40:26 UTC 2021 - Ben Greiner <c...@bnavigator.de>
+
+- Update to 2021.3.0
+  * This is the first release with support for Python 3.9 and the
+    last release with support for Python 3.6
+  * Bump minimum version of distributed (GH#7328) James Bourbeau
+  * Fix percentiles_summary with dask_cudf (GH#7325) Peter Andreas
+    Entschev
+  * Temporarily revert recent Array.__setitem__ updates (GH#7326)
+    James Bourbeau
+  * Blockwise.clone (GH#7312) Guido Imperiale
+  * NEP-35 duck array update (GH#7321) James Bourbeau
+  * Don't allow setting .name for array (GH#7222) Julia Signell
+  * Use nearest interpolation for creating percentiles of integer
+    input (GH#7305) Kyle Barron
+  * Test exp with CuPy arrays (GH#7322) John A Kirkham
+  * Check that computed chunks have right size and dtype (GH#7277)
+    Bruce Merry
+  * pytest.mark.flaky (GH#7319) Guido Imperiale
+  * Contributing docs: add note to pull the latest git tags before
+    pip installing Dask (GH#7308) Genevieve Buckley
+  * Support for Python 3.9 (GH#7289) Guido Imperiale
+  * Add broadcast-based merge implementation (GH#7143) Richard
+    (Rick) Zamora
+  * Add split_every to graph_manipulation (GH#7282) Guido Imperiale
+  * Typo in optimize docs (GH#7306) Julius Busecke
+  * dask.graph_manipulation support for xarray.Dataset (GH#7276)
+    Guido Imperiale
+  * Add plot width and height support for Bokeh 2.3.0 (GH#7297)
+    James Bourbeau
+  * Add NumPy functions tri, triu_indices, triu_indices_from,
+    tril_indices, tril_indices_from (GH#6997) Illviljan
+  * Remove "cleanup" task in DataFrame on-disk shuffle (GH#7260)
+    Sinclair Target
+  * Use development version of distributed in CI (GH#7279) James
+    Bourbeau
+  * Moving high level graph pack/unpack Dask (GH#7179) Mads R. B.
+    Kristensen
+  * Improve performance of merge_percentiles (GH#7172) Ashwin
+    Srinath
+  * DOC: add dask-sql and fugue (GH#7129) Ray Bell
+  * Example for working with categoricals and parquet (GH#7085)
+    McToel
+  * Adds tree reduction to bincount (GH#7183) Thomas J. Fan
+  * Improve documentation of name in from_array (GH#7264) Bruce
+    Merry
+  * Fix cumsum for empty partitions (GH#7230) Julia Signell
+  * Add map_blocks example to dask array creation docs (GH#7221)
+    Julia Signell
+  * Fix performance issue in dask.graph_manipulation.wait_on()
+    (GH#7258) Guido Imperiale
+  * Replace coveralls with codecov.io (GH#7246) Guido Imperiale
+  * Pin to a particular black rev in pre-commit (GH#7256) Julia
+    Signell
+  * Minor typo in documentation: array-chunks.rst (GH#7254) Magnus
+    Nord
+  * Fix bugs in Blockwise and ShuffleLayer (GH#7213) Richard
+    (Rick) Zamora
+  * Fix parquet filtering bug for "pyarrow-dataset" with
+    pyarrow-3.0.0 (GH#7200) Richard (Rick) Zamora
+  * graph_manipulation without NumPy (GH#7243) Guido Imperiale
+  * Support for NEP-35 (GH#6738) Peter Andreas Entschev
+  * Avoid running unit tests during doctest CI build (GH#7240)
+    James Bourbeau
+  * Run doctests on CI (GH#7238) Julia Signell
+  * Cleanup code quality on set arithmetics (GH#7196) Guido
+    Imperiale
+  * Add dask.array.delete (GH#7125) Julia Signell
+  * Unpin graphviz now that new conda-forge recipe is built
+    (GH#7235) Julia Signell
+  * Don't use NumPy 1.20 from conda-forge on Mac (GH#7211) Guido
+    Imperiale
+  * map_overlap: Don't rechunk axes without overlap (GH#7233)
+    Deepak Cherian
+  * Pin graphviz to avoid issue with latest conda-forge build
+    (GH#7232) Julia Signell
+  * Use html_css_files in docs for custom CSS (GH#7220) James
+    Bourbeau
+  * Graph manipulation: clone, bind, checkpoint, wait_on (GH#7109)
+    Guido Imperiale
+  * Fix handling of filter expressions in parquet pyarrow-dataset
+    engine (GH#7186) Joris Van den Bossche
+  * Extend __setitem__ to more closely match numpy (GH#7033) David
+    Hassell
+  * Clean up Python 2 syntax (GH#7195) Guido Imperiale
+  * Fix regression in Delayed._length (GH#7194) Guido Imperiale
+  * __dask_layers__() tests and tweaks (GH#7177) Guido Imperiale
+  * Properly convert HighLevelGraph in multiprocessing scheduler
+    (GH#7191) Jim Crist-Harif
+  * Don't fail fast in CI (GH#7188) James Bourbeau
+- Add dask-pr7247-numpyskip.patch -- gh#dask/dask#7247
+
+-------------------------------------------------------------------
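
Among the entries above, GH#7109 adds the dask.graph_manipulation helpers (clone, bind, checkpoint, wait_on). Below is a minimal, hedged sketch of how they might be used once this update is installed; it assumes dask >= 2021.3.0 with dask.array available and is purely illustrative, not code shipped in this package:

  import dask
  import dask.array as da
  from dask.graph_manipulation import checkpoint, wait_on

  x = da.ones((4, 4), chunks=(2, 2))
  y = 2 * x

  # checkpoint() returns a Delayed that completes only after y has been computed
  done = checkpoint(y)

  # wait_on() returns an equivalent collection whose chunks are only produced
  # once every chunk of the input has finished
  y_waited = wait_on(y)

  dask.compute(done, y_waited.sum())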

Old:
----
  dask-2021.2.0.tar.gz

New:
----
  dask-2021.3.0.tar.gz
  dask-pr7247-numpyskip.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-dask.spec ++++++
--- /var/tmp/diff_new_pack.SHfKWb/_old  2021-03-12 13:33:14.274318949 +0100
+++ /var/tmp/diff_new_pack.SHfKWb/_new  2021-03-12 13:33:14.278318955 +0100
@@ -27,12 +27,14 @@
 %endif
 %define         skip_python2 1
 Name:           python-dask%{psuffix}
-Version:        2021.2.0
+Version:        2021.3.0
 Release:        0
 Summary:        Minimal task scheduling abstraction
 License:        BSD-3-Clause
 URL:            https://dask.org
 Source:         https://files.pythonhosted.org/packages/source/d/dask/dask-%{version}.tar.gz
+# PATCH-FIX-UPSTREAM dask-pr7247-numpyskip.patch -- gh#dask/dask#7247
+Patch0:         dask-pr7247-numpyskip.patch
 BuildRequires:  %{python_module PyYAML}
 BuildRequires:  %{python_module base >= 3.6}
 BuildRequires:  %{python_module setuptools}
@@ -48,7 +50,7 @@
 Recommends:     python-bokeh >= 1.0.0
 Recommends:     python-cloudpickle >= 0.2.2
 Recommends:     python-cityhash
-Recommends:     python-distributed >= 2.0
+Recommends:     python-distributed >= %{version}
 Recommends:     python-fastparquet
 Recommends:     python-fsspec >= 0.6.0
 Recommends:     python-gcsfs >= 0.4.0
@@ -64,7 +66,7 @@
 %if %{with test}
 BuildRequires:  %{python_module cachey}
 BuildRequires:  %{python_module cloudpickle >= 0.2.2}
-BuildRequires:  %{python_module distributed >= 2.0}
+BuildRequires:  %{python_module distributed >= %{version}}
 # optional zarr needs fsspec >= 0.8.4 if present
 BuildRequires:  %{python_module fsspec >= 0.8.4}
 BuildRequires:  %{python_module graphviz}
@@ -73,6 +75,7 @@
 BuildRequires:  %{python_module mimesis}
 BuildRequires:  %{python_module multipledispatch}
 BuildRequires:  %{python_module partd >= 0.3.10}
+BuildRequires:  %{python_module pytest-rerunfailures}
 BuildRequires:  %{python_module pytest}
 BuildRequires:  %{python_module toolz >= 0.8.2}
 BuildRequires:  graphviz
@@ -248,7 +251,7 @@
 This package contains the multiprocessing interface.
 
 %prep
-%setup -q -n dask-%{version}
+%autosetup -p1 -n dask-%{version}
 
 %build
 %python_build
@@ -272,9 +275,11 @@
 donttest+="or (test_distributed and test_await)"
 # NEP 29: There is no python36-dask-dataframe or -array because Tumbleweed dropped python36-numpy with 1.20
 python36_ignore="--ignore dask/dataframe --ignore dask/array"
-python36_donttest=" or (test_distributed and test_to_hdf)"
-# https://github.com/dask/dask/issues/7170 -- skip in any case
-sed -i 's/from dask.array.numpy_compat import _numpy_120/_numpy_120 = True/' dask/tests/test_distributed.py
+if [ $(getconf LONG_BIT) -eq 32 ]; then
+  # Fails to convert datatype in obs constrained memory for 32-bit platforms
+  donttest+="or (test_distributed and test_combo_of_layer_types)"
+  donttest+="or (test_distributed and test_annotation_pack_unpack)"
+fi
 %pytest -ra -m "not network" -k "not ($donttest ${$python_donttest})" -n auto ${$python_ignore}
 %endif
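
The spec change above deselects two memory-hungry distributed tests on 32-bit platforms by appending to $donttest before the %pytest call. As a hedged alternative sketch (not what the package actually does), the same effect could be reached inside a test module with a skipif marker; the test name below is a placeholder:

  import struct

  import pytest

  # 32-bit Python has 4-byte pointers, so struct.calcsize("P") * 8 == 32
  IS_32BIT = struct.calcsize("P") * 8 == 32

  @pytest.mark.skipif(IS_32BIT, reason="needs more address space than 32-bit builds provide")
  def test_heavy_layer_roundtrip_placeholder():
      # stand-in body; the real tests live in dask/tests/test_distributed.py
      assert True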
 

++++++ dask-2021.2.0.tar.gz -> dask-2021.3.0.tar.gz ++++++
++++ 7837 lines of diff (skipped)

++++++ dask-pr7247-numpyskip.patch ++++++
From 68122a9e8f7aa3f063a64b2c36e2ecb0b1249902 Mon Sep 17 00:00:00 2001
From: Julia Signell <jsign...@gmail.com>
Date: Thu, 18 Feb 2021 14:53:32 -0500
Subject: [PATCH 1/6] Move numpy skip into test

---
 dask/tests/test_distributed.py | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/dask/tests/test_distributed.py b/dask/tests/test_distributed.py
index be92e4caa3..14752cbd78 100644
--- a/dask/tests/test_distributed.py
+++ b/dask/tests/test_distributed.py
@@ -11,7 +11,6 @@
 
 import dask
 from dask import persist, delayed, compute
-from dask.array.numpy_compat import _numpy_120
 from dask.delayed import Delayed
 from dask.utils import tmpdir, get_named_args
 from distributed import futures_of
@@ -70,10 +69,15 @@ def test_persist_nested(c):
     assert res[2:] == (4, [5])
 
 
-@pytest.mark.skipif(_numpy_120, reason="https://github.com/dask/dask/issues/7170")
 def test_futures_to_delayed_dataframe(c):
     pd = pytest.importorskip("pandas")
     dd = pytest.importorskip("dask.dataframe")
+
+    from dask.array.numpy_compat import _numpy_120
+
+    if _numpy_120:
+        pytest.skip("https://github.com/dask/dask/issues/7170")
+
     df = pd.DataFrame({"x": [1, 2, 3]})
 
     futures = c.scatter([df, df])

From 6970b2c1807fb735d9889d3ed77c5eaea7f28e20 Mon Sep 17 00:00:00 2001
From: Julia Signell <jsign...@gmail.com>
Date: Thu, 18 Feb 2021 15:38:52 -0500
Subject: [PATCH 2/6] Skip tests that depend on test_hdf if pandas not
 installed

---
 dask/tests/test_distributed.py | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/dask/tests/test_distributed.py b/dask/tests/test_distributed.py
index 14752cbd78..df4460fe54 100644
--- a/dask/tests/test_distributed.py
+++ b/dask/tests/test_distributed.py
@@ -151,6 +151,9 @@ def test_local_get_with_distributed_active(c, s, a, b):
 
 
 def test_to_hdf_distributed(c):
+    pytest.importorskip("numpy")
+    pytest.importorskip("pandas")
+
     from ..dataframe.io.tests.test_hdf import test_to_hdf
 
     test_to_hdf()
@@ -171,6 +174,9 @@ def test_to_hdf_distributed(c):
     ],
 )
 def test_to_hdf_scheduler_distributed(npartitions, c):
+    pytest.importorskip("numpy")
+    pytest.importorskip("pandas")
+
     from ..dataframe.io.tests.test_hdf import test_to_hdf_schedulers
 
     test_to_hdf_schedulers(None, npartitions)
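
Both hunks of the patch follow the same idea: perform the optional-dependency and version checks at run time inside the test body, so that merely collecting dask/tests/test_distributed.py does not itself require NumPy or pandas. A hedged, self-contained sketch of that pattern (test and variable names are illustrative, not dask's):

  import pytest

  def test_dataframe_roundtrip_sketch():
      np = pytest.importorskip("numpy")    # skip, rather than error, if NumPy is absent
      pd = pytest.importorskip("pandas")   # likewise for pandas

      from numpy.lib import NumpyVersion
      if NumpyVersion(np.__version__) >= "1.20.0":
          pytest.skip("affected by https://github.com/dask/dask/issues/7170")

      df = pd.DataFrame({"x": [1, 2, 3]})
      assert df["x"].sum() == 6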
