Hello community,

here is the log from the commit of package python-dask for openSUSE:Factory checked in at 2020-07-20 21:00:27
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-dask (Old)
 and      /work/SRC/openSUSE:Factory/.python-dask.new.3592 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-dask"

Mon Jul 20 21:00:27 2020 rev:35 rq:821678 version:2.21.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-dask/python-dask.changes  2020-07-09 13:22:04.773783896 +0200
+++ /work/SRC/openSUSE:Factory/.python-dask.new.3592/python-dask.changes        2020-07-20 21:01:47.153118139 +0200
@@ -1,0 +2,32 @@
+Sat Jul 18 18:12:13 UTC 2020 - Arun Persaud <a...@gmx.de>
+
+- update to version 2.21.0:
+  * Array
+    + Correct error message in array.routines.gradient() (:pr:`6417`)
+      johnomotani
+    + Fix blockwise concatenate for array with some dimension=1
+      (:pr:`6342`) Matthias Bussonnier
+  * Bag
+    + Fix bag.take example (:pr:`6418`) Roberto Panai
+  * Core
+    + Groups values in optimization pass should only be graph and keys
+      -- not an optimization + keys (:pr:`6409`) Ben Zaitlen
+    + Call custom optimizations once, with kwargs provided
+      (:pr:`6382`) Clark Zinzow
+    + Include pickle5 for testing on Python 3.7 (:pr:`6379`) John A
+      Kirkham
+  * DataFrame
+    + Correct typo in error message (:pr:`6422`) Tom McTiernan
+    + Use pytest.warns to check for UserWarning (:pr:`6378`) Richard
+      (Rick) Zamora
+    + Parse bytes_per_chunk keyword from string (:pr:`6370`) Matthew
+      Rocklin
+  * Documentation
+    + Numpydoc formatting (:pr:`6421`) Matthias Bussonnier
+    + Unpin numpydoc following 1.1 release (:pr:`6407`) Gil Forsyth
+    + Numpydoc formatting (:pr:`6402`) Matthias Bussonnier
+    + Add instructions for using conda when installing code for
+      development (:pr:`6399`) Ray Bell
+    + Update visualize docstrings (:pr:`6383`) Zhengnan
+
+-------------------------------------------------------------------

Old:
----
  dask-2.20.0.tar.gz

New:
----
  dask-2.21.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-dask.spec ++++++
--- /var/tmp/diff_new_pack.BEySGb/_old  2020-07-20 21:01:48.605119611 +0200
+++ /var/tmp/diff_new_pack.BEySGb/_new  2020-07-20 21:01:48.609119614 +0200
@@ -27,7 +27,7 @@
 %endif
 %define         skip_python2 1
 Name:           python-dask%{psuffix}
-Version:        2.20.0
+Version:        2.21.0
 Release:        0
 Summary:        Minimal task scheduling abstraction
 License:        BSD-3-Clause

++++++ dask-2.20.0.tar.gz -> dask-2.21.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/PKG-INFO new/dask-2.21.0/PKG-INFO
--- old/dask-2.20.0/PKG-INFO    2020-07-03 06:14:33.941440000 +0200
+++ new/dask-2.21.0/PKG-INFO    2020-07-18 00:07:14.618748200 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: dask
-Version: 2.20.0
+Version: 2.21.0
 Summary: Parallel PyData with Task Scheduling
 Home-page: https://github.com/dask/dask/
 Maintainer: Matthew Rocklin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/_version.py new/dask-2.21.0/dask/_version.py
--- old/dask-2.20.0/dask/_version.py    2020-07-03 06:14:33.943292900 +0200
+++ new/dask-2.21.0/dask/_version.py    2020-07-18 00:07:14.620754500 +0200
@@ -11,8 +11,8 @@
 {
  "dirty": false,
  "error": null,
- "full-revisionid": "1878a451c347253eb9cfb852152a57e88f2ae848",
- "version": "2.20.0"
+ "full-revisionid": "a4c571475b1c643dedf99bc773c2564bf3fcf977",
+ "version": "2.21.0"
 }
 '''  # END VERSION_JSON
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/array/core.py new/dask-2.21.0/dask/array/core.py
--- old/dask-2.20.0/dask/array/core.py  2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/array/core.py  2020-07-17 06:21:03.000000000 +0200
@@ -4249,7 +4249,7 @@
     Given a sequence of dask arrays, form a new dask array by stacking them
     along a new dimension (axis=0 by default)
 
-     Parameters
+    Parameters
     ----------
     seq: list of dask.arrays
     axis: int
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/array/routines.py new/dask-2.21.0/dask/array/routines.py
--- old/dask-2.20.0/dask/array/routines.py      2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/array/routines.py      2020-07-17 23:49:45.000000000 +0200
@@ -545,7 +545,7 @@
                 raise ValueError(
                     "Chunk size must be larger than edge_order + 1. "
                     "Minimum chunk for axis {} is {}. Rechunk to "
-                    "proceed.".format(np.min(c), ax)
+                    "proceed.".format(ax, np.min(c))
                 )
 
         if np.isscalar(varargs[i]):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/array/slicing.py new/dask-2.21.0/dask/array/slicing.py
--- old/dask-2.20.0/dask/array/slicing.py       2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/array/slicing.py       2020-07-17 23:49:45.000000000 +0200
@@ -135,9 +135,9 @@
     This function works by successively unwrapping cases and passing down
     through a sequence of functions.
 
-    slice_with_newaxis - handle None/newaxis case
-    slice_wrap_lists - handle fancy indexing with lists
-    slice_slices_and_integers - handle everything else
+    slice_with_newaxis : handle None/newaxis case
+    slice_wrap_lists : handle fancy indexing with lists
+    slice_slices_and_integers : handle everything else
     """
     blockdims = tuple(map(tuple, blockdims))
 
@@ -211,8 +211,8 @@
     See Also
     --------
 
-    take - handle slicing with lists ("fancy" indexing)
-    slice_slices_and_integers - handle slicing with slices and integers
+    take : handle slicing with lists ("fancy" indexing)
+    slice_slices_and_integers : handle slicing with slices and integers
     """
     assert all(isinstance(i, (slice, list, Integral, np.ndarray)) for i in index)
     if not len(blockdims) == len(index):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/array/tests/test_array_core.py new/dask-2.21.0/dask/array/tests/test_array_core.py
--- old/dask-2.20.0/dask/array/tests/test_array_core.py 2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/array/tests/test_array_core.py 2020-07-17 06:21:03.000000000 +0200
@@ -130,6 +130,57 @@
     assert_eq(z, x)
 
 
+def test_blockwise_1_in_shape_I():
+    def test_f(a, b):
+        assert 1 in b.shape
+
+    p, k, N = 7, 2, 5
+    da.blockwise(
+        test_f,
+        "x",
+        da.zeros((2 * p, 9, k * N), chunks=(p, 3, k)),
+        "xzt",
+        da.zeros((2 * p, 9, 1), chunks=(p, 3, -1)),
+        "xzt",
+        concatenate=True,
+        dtype=float,
+    ).compute()
+
+
+def test_blockwise_1_in_shape_II():
+    def test_f(a, b):
+        assert 1 in b.shape
+
+    p, k, N = 7, 2, 5
+    da.blockwise(
+        test_f,
+        "x",
+        da.zeros((2 * p, 9, k * N, 8), chunks=(p, 9, k, 4)),
+        "xztu",
+        da.zeros((2 * p, 9, 1, 8), chunks=(p, 9, -1, 4)),
+        "xztu",
+        concatenate=True,
+        dtype=float,
+    ).compute()
+
+
+def test_blockwise_1_in_shape_III():
+    def test_f(a, b):
+        assert 1 in b.shape
+
+    k, N = 2, 5
+    da.blockwise(
+        test_f,
+        "x",
+        da.zeros((k * N, 9, 8), chunks=(k, 3, 4)),
+        "xtu",
+        da.zeros((1, 9, 8), chunks=(-1, 3, 4)),
+        "xtu",
+        concatenate=True,
+        dtype=float,
+    ).compute()
+
+
 def test_concatenate3_on_scalars():
     assert_eq(concatenate3([1, 2]), np.array([1, 2]))
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/bag/core.py new/dask-2.21.0/dask/bag/core.py
--- old/dask-2.20.0/dask/bag/core.py    2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/bag/core.py    2020-07-17 06:21:03.000000000 +0200
@@ -98,7 +98,7 @@
 
     See Also
     --------
-    ``dask.bag.core.lazify_task``
+    dask.bag.core.lazify_task
     """
     return valmap(lazify_task, dsk)
 
@@ -1354,7 +1354,7 @@
             Whether to warn if the number of elements returned is less than
             requested, default is True.
 
-        >>> b = from_sequence(range(10))
+        >>> b = from_sequence(range(1_000))
         >>> b.take(3)  # doctest: +SKIP
         (0, 1, 2)
         """
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/base.py new/dask-2.21.0/dask/base.py
--- old/dask-2.20.0/dask/base.py        2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/base.py        2020-07-17 06:21:03.000000000 +0200
@@ -54,8 +54,9 @@
         Parameters
         ----------
         filename : str or None, optional
-            The name (without an extension) of the file to write to disk.  If
-            `filename` is None, no file will be written, and we communicate
+            The name of the file to write to disk. If the provided `filename`
+            doesn't include an extension, '.png' will be used by default.
+            If `filename` is None, no file will be written, and we communicate
             with dot using only pipes.
         format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional
             Format in which to write output file.  Default is 'png'.
@@ -143,7 +144,7 @@
         """Compute this dask collection
 
         This turns a lazy Dask collection into its in-memory equivalent.
-        For example a Dask.array turns into a  :func:`numpy.array` and a Dask.dataframe
+        For example a Dask array turns into a NumPy array and a Dask dataframe
         turns into a Pandas dataframe.  The entire dataset must fit into memory
         before calling this operation.
 
@@ -212,16 +213,18 @@
 
         _opt_list = []
         for opt, val in groups.items():
-            _graph_and_keys = _extract_graph_and_keys(val)
-            groups[opt] = _graph_and_keys
-            _opt_list.append(opt(_graph_and_keys[0], _graph_and_keys[1], **kwargs))
+            dsk, keys = _extract_graph_and_keys(val)
+            groups[opt] = (dsk, keys)
+            _opt = opt(dsk, keys, **kwargs)
+            _opt_list.append(_opt)
 
         for opt in optimizations:
             _opt_list = []
             group = {}
             for k, (dsk, keys) in groups.items():
-                group[k] = (opt(dsk, keys), keys)
-                _opt_list.append(opt(dsk, keys, **kwargs))
+                _opt = opt(dsk, keys, **kwargs)
+                group[k] = (_opt, keys)
+                _opt_list.append(_opt)
             groups = group
 
         dsk = merge(*map(ensure_dict, _opt_list,))
@@ -457,8 +460,9 @@
     dsk : dict(s) or collection(s)
         The dask graph(s) to visualize.
     filename : str or None, optional
-        The name (without an extension) of the file to write to disk.  If
-        `filename` is None, no file will be written, and we communicate
+        The name of the file to write to disk. If the provided `filename`
+        doesn't include an extension, '.png' will be used by default.
+        If `filename` is None, no file will be written, and we communicate
         with dot using only pipes.
     format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional
         Format in which to write output file.  Default is 'png'.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/blockwise.py new/dask-2.21.0/dask/blockwise.py
--- old/dask-2.20.0/dask/blockwise.py   2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/blockwise.py   2020-07-17 06:21:03.000000000 +0200
@@ -378,7 +378,9 @@
     # "coordinate set" that consists of
     # - the output indices
     # - the dummy indices
-    # - the dummy indices, with indices replaced by zeros (for broadcasting)
+    # - the dummy indices, with indices replaced by zeros (for broadcasting), we
+    #   are careful to only emit a single dummy zero when concatenate=True to not
+    #   concatenate the same array with itself several times.
     # - a 0 to assist with broadcasting.
 
     index_pos, zero_pos = {}, {}
@@ -390,7 +392,8 @@
     for i, ind in enumerate(dummy_indices):
         index_pos[ind] = 2 * i + len(out_indices)
         zero_pos[ind] = 2 * i + 1 + len(out_indices)
-        _dummies_list.append([list(range(dims[ind])), [0] * dims[ind]])
+        reps = 1 if concatenate else dims[ind]
+        _dummies_list.append([list(range(dims[ind])), [0] * reps])
 
     # ([0, 1, 2], [0, 0, 0], ...)  For a dummy index of dimension 3
     dummies = tuple(itertools.chain.from_iterable(_dummies_list))
@@ -402,7 +405,6 @@
 
     # Axes along which to concatenate, for each input
     concat_axes = []
-
     for arg, ind in argpairs:
         if ind is not None:
             coord_maps.append(
@@ -415,7 +417,6 @@
         else:
             coord_maps.append(None)
             concat_axes.append(None)
-
     # Unpack delayed objects in kwargs
     dsk2 = {}
     if kwargs:
@@ -438,6 +439,7 @@
                 arg_coords = tuple(coords[c] for c in cmap)
                 if axes:
                     tups = lol_product((arg,), arg_coords)
+
                     if concatenate:
                         tups = (concatenate, tups, axes)
                 else:
@@ -467,7 +469,6 @@
     values : sequence
         Mix of singletons and lists. Each list is substituted with every
         possible value and introduces another level of list in the output.
-
     Examples
     --------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/dataframe/core.py new/dask-2.21.0/dask/dataframe/core.py
--- old/dask-2.20.0/dask/dataframe/core.py      2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/dataframe/core.py      2020-07-17 23:49:45.000000000 +0200
@@ -1166,7 +1166,7 @@
         ):
             raise ValueError(
                 "Please provide exactly one of ``npartitions=``, ``freq=``, "
-                "``divisisions=``, ``partitions_size=`` keyword arguments"
+                "``divisions=``, ``partitions_size=`` keyword arguments"
             )
 
         if partition_size is not None:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/dataframe/io/sql.py new/dask-2.21.0/dask/dataframe/io/sql.py
--- old/dask-2.20.0/dask/dataframe/io/sql.py    2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/dataframe/io/sql.py    2020-07-17 06:21:03.000000000 +0200
@@ -16,7 +16,7 @@
     npartitions=None,
     limits=None,
     columns=None,
-    bytes_per_chunk=256 * 2 ** 20,
+    bytes_per_chunk="256 MiB",
     head_rows=5,
     schema=None,
     meta=None,
@@ -73,7 +73,7 @@
         ``sql.func.abs(sql.column('value')).label('abs(value)')``.
         Labeling columns created by functions or arithmetic operations is
         recommended.
-    bytes_per_chunk : int
+    bytes_per_chunk : str, int
         If both divisions and npartitions is None, this is the target size of
         each partition, in bytes
     head_rows : int
@@ -170,7 +170,14 @@
         if npartitions is None:
             q = sql.select([sql.func.count(index)]).select_from(table)
             count = pd.read_sql(q, engine)["count_1"][0]
-            npartitions = int(round(count * bytes_per_row / bytes_per_chunk)) or 1
+            npartitions = (
+                int(
+                    round(
+                        count * bytes_per_row / dask.utils.parse_bytes(bytes_per_chunk)
+                    )
+                )
+                or 1
+            )
         if dtype.kind == "M":
             divisions = pd.date_range(
                 start=mini,
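
The practical effect of this change is that ``bytes_per_chunk`` now also accepts a human-readable string, converted with ``dask.utils.parse_bytes``; plain integers keep working. A hedged sketch of caller-side usage (the table name and connection URI are made-up placeholders):

    import dask.dataframe as dd
    from dask.utils import parse_bytes

    parse_bytes("256 MiB")               # -> 268435456, the old integer default

    df = dd.read_sql_table(
        "accounts",                      # hypothetical table name
        "sqlite:///example.db",          # hypothetical connection URI
        index_col="id",
        bytes_per_chunk="256 MiB",       # string form accepted as of 2.21.0
    )
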
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/dataframe/io/tests/test_parquet.py new/dask-2.21.0/dask/dataframe/io/tests/test_parquet.py
--- old/dask-2.20.0/dask/dataframe/io/tests/test_parquet.py     2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/dataframe/io/tests/test_parquet.py     2020-07-17 06:21:03.000000000 +0200
@@ -2575,10 +2575,9 @@
     ddf = dd.from_pandas(df, npartitions=2)
 
     # If we don't want to preserve the None index name, the
-    # write should work, but a UserWarning should be raised
-    with pytest.raises(UserWarning) as w:
+    # write should work, but the user should be warned
+    with pytest.warns(UserWarning, match=null_name):
         ddf.to_parquet(fn, engine=engine, write_index=False)
-    assert null_name in str(w.value)
 
     # If we do want to preserve the None index name, should
     # get a ValueError for having an illegal column name
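
For reference, the idiom adopted above: ``pytest.warns`` asserts that a warning is actually emitted (optionally matching its message), whereas ``pytest.raises(UserWarning)`` only passes if the warning escalates to an exception. A small self-contained sketch with an illustrative function, not dask code:

    import warnings
    import pytest

    def emit():
        # stand-in for code that warns about a lost index name
        warnings.warn("index name will be lost", UserWarning)

    def test_emits_user_warning():
        with pytest.warns(UserWarning, match="index name"):
            emit()
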
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/dataframe/io/tests/test_sql.py new/dask-2.21.0/dask/dataframe/io/tests/test_sql.py
--- old/dask-2.20.0/dask/dataframe/io/tests/test_sql.py 2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/dataframe/io/tests/test_sql.py 2020-07-17 06:21:03.000000000 +0200
@@ -176,7 +176,7 @@
         "test",
         db,
         columns=list(df.columns),
-        bytes_per_chunk=2 ** 30,
+        bytes_per_chunk="2 GiB",
         index_col="number",
     )
     assert data.npartitions == 1
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/dataframe/tests/test_dataframe.py new/dask-2.21.0/dask/dataframe/tests/test_dataframe.py
--- old/dask-2.20.0/dask/dataframe/tests/test_dataframe.py      2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/dataframe/tests/test_dataframe.py      2020-07-17 06:21:03.000000000 +0200
@@ -1612,6 +1612,7 @@
 
     cloudpickle = pytest.importorskip("cloudpickle")
     cp_dumps = cloudpickle.dumps
+    cp_loads = cloudpickle.loads
 
     d = _compat.makeTimeDataFrame()
     df = dd.from_pandas(d, npartitions=3)
@@ -1620,25 +1621,25 @@
     # dataframe
     df2 = loads(dumps(df))
     assert_eq(df, df2)
-    df2 = loads(cp_dumps(df))
+    df2 = cp_loads(cp_dumps(df))
     assert_eq(df, df2)
 
     # series
     a2 = loads(dumps(df.A))
     assert_eq(df.A, a2)
-    a2 = loads(cp_dumps(df.A))
+    a2 = cp_loads(cp_dumps(df.A))
     assert_eq(df.A, a2)
 
     # index
     i2 = loads(dumps(df.index))
     assert_eq(df.index, i2)
-    i2 = loads(cp_dumps(df.index))
+    i2 = cp_loads(cp_dumps(df.index))
     assert_eq(df.index, i2)
 
     # scalar
     # lambdas are present, so only test cloudpickle
     s = df.A.sum()
-    s2 = loads(cp_dumps(s))
+    s2 = cp_loads(cp_dumps(s))
     assert_eq(s, s2)
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/delayed.py new/dask-2.21.0/dask/delayed.py
--- old/dask-2.20.0/dask/delayed.py     2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/delayed.py     2020-07-17 06:21:03.000000000 +0200
@@ -250,7 +250,7 @@
         object. If provided, the ``Delayed`` output of the call can be iterated
         into ``nout`` objects, allowing for unpacking of results. By default
         iteration over ``Delayed`` objects will error. Note, that ``nout=1``
-        expects ``obj``, to return a tuple of length 1, and consequently for
+        expects ``obj`` to return a tuple of length 1, and consequently for
         ``nout=0``, ``obj`` should return an empty tuple.
     traverse : bool, optional
         By default dask traverses builtin python collections looking for dask
@@ -523,10 +523,10 @@
             raise AttributeError("Attribute {0} not found".format(attr))
 
         if attr == "visualise":
-            # added to warn users incase of spelling error
+            # added to warn users in case of spelling error
             # for more details: https://github.com/dask/dask/issues/5721
             warnings.warn(
-                "dask.delayed objects have no `visualise` method, perhaps you 
meant `visualize`?"
+                "dask.delayed objects have no `visualise` method. Perhaps you 
meant `visualize`?"
             )
 
         return DelayedAttr(self, attr)
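
As a reminder of the ``nout`` behaviour the reworded docstring describes, a minimal sketch (the function is illustrative only): with ``nout=2`` the resulting ``Delayed`` can be unpacked into two objects, each computed independently.

    import dask

    def min_max(seq):
        return min(seq), max(seq)

    d_min_max = dask.delayed(min_max, nout=2)
    lo, hi = d_min_max([3, 1, 4, 1, 5])   # unpacking works because nout=2
    print(lo.compute(), hi.compute())     # -> 1 5
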
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/dot.py new/dask-2.21.0/dask/dot.py
--- old/dask-2.20.0/dask/dot.py 2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/dot.py 2020-07-17 06:21:03.000000000 +0200
@@ -234,17 +234,18 @@
     """
     Render a task graph using dot.
 
-    If `filename` is not None, write a file to disk with that name in the
-    format specified by `format`.  `filename` should not include an extension.
+    If `filename` is not None, write a file to disk with the specified name and extension.
+    If no extension is specified, '.png' will be used by default.
 
     Parameters
     ----------
     dsk : dict
         The graph to display.
     filename : str or None, optional
-        The name (without an extension) of the file to write to disk.  If
-        `filename` is None, no file will be written, and we communicate with
-        dot using only pipes.  Default is 'mydask'.
+        The name of the file to write to disk. If the provided `filename`
+        doesn't include an extension, '.png' will be used by default.
+        If `filename` is None, no file will be written, and we communicate
+        with dot using only pipes.  Default is 'mydask'.
     format : {'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional
         Format in which to write output file.  Default is 'png'.
     **kwargs
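
In user-facing terms, the clarified docstring means a sketch like the following behaves as described; this assumes the optional graphviz dependency is installed and uses arbitrary example data:

    import dask.array as da

    x = da.ones((100, 100), chunks=(50, 50)).sum()
    x.visualize(filename="mydask.svg")   # extension is kept and names the output file
    x.visualize(filename="mydask")       # no extension: '.png' is used by default
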
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask/local.py new/dask-2.21.0/dask/local.py
--- old/dask-2.20.0/dask/local.py       2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/dask/local.py       2020-07-17 06:21:03.000000000 +0200
@@ -215,7 +215,7 @@
 
     See Also
     --------
-    _execute_task - actually execute task
+    _execute_task : actually execute task
     """
     try:
         task, data = loads(task_info)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/dask.egg-info/PKG-INFO new/dask-2.21.0/dask.egg-info/PKG-INFO
--- old/dask-2.20.0/dask.egg-info/PKG-INFO      2020-07-03 06:14:33.000000000 +0200
+++ new/dask-2.21.0/dask.egg-info/PKG-INFO      2020-07-18 00:07:14.000000000 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: dask
-Version: 2.20.0
+Version: 2.21.0
 Summary: Parallel PyData with Task Scheduling
 Home-page: https://github.com/dask/dask/
 Maintainer: Matthew Rocklin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/docs/source/changelog.rst new/dask-2.21.0/docs/source/changelog.rst
--- old/dask-2.20.0/docs/source/changelog.rst   2020-07-03 06:11:45.000000000 +0200
+++ new/dask-2.21.0/docs/source/changelog.rst   2020-07-18 00:05:57.000000000 +0200
@@ -1,6 +1,44 @@
 Changelog
 =========
 
+2.21.0 / 2020-07-17
+-------------------
+
+Array
++++++
+
+- Correct error message in ``array.routines.gradient()`` (:pr:`6417`) `johnomotani`_
+- Fix blockwise concatenate for array with some ``dimension=1`` (:pr:`6342`) `Matthias Bussonnier`_
+
+Bag
++++
+
+- Fix ``bag.take`` example (:pr:`6418`) `Roberto Panai`_
+
+Core
+++++
+
+- Groups values in optimization pass should only be graph and keys -- not an optimization + keys (:pr:`6409`) `Ben Zaitlen`_
+- Call custom optimizations once, with ``kwargs`` provided (:pr:`6382`) `Clark Zinzow`_
+- Include ``pickle5`` for testing on Python 3.7 (:pr:`6379`) `John A Kirkham`_
+
+DataFrame
++++++++++
+
+- Correct typo in error message (:pr:`6422`) `Tom McTiernan`_
+- Use ``pytest.warns`` to check for ``UserWarning`` (:pr:`6378`) `Richard (Rick) Zamora`_
+- Parse ``bytes_per_chunk keyword`` from string (:pr:`6370`) `Matthew Rocklin`_
+
+Documentation
++++++++++++++
+
+- Numpydoc formatting (:pr:`6421`) `Matthias Bussonnier`_
+- Unpin ``numpydoc`` following 1.1 release (:pr:`6407`) `Gil Forsyth`_
+- Numpydoc formatting (:pr:`6402`) `Matthias Bussonnier`_
+- Add instructions for using conda when installing code for development (:pr:`6399`) `Ray Bell`_
+- Update ``visualize`` docstrings (:pr:`6383`) `Zhengnan`_
+
+
 2.20.0 / 2020-07-02
 -------------------
 
@@ -3376,3 +3414,8 @@
 .. _`Abdulelah Bin Mahfoodh`: https://github.com/abduhbm
 .. _`Ben Shaver`: https://github.com/bpshaver
 .. _`Matthias Bussonnier`: https://github.com/Carreau
+.. _`johnomotani`: https://github.com/johnomotani
+.. _`Roberto Panai`: https://github.com/rpanai
+.. _`Clark Zinzow`: https://github.com/clarkzinzow
+.. _`Tom McTiernan`: https://github.com/tmct
+.. _`Zhengnan`: https://github.com/ZhengnanZhao
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dask-2.20.0/docs/source/develop.rst new/dask-2.21.0/docs/source/develop.rst
--- old/dask-2.20.0/docs/source/develop.rst     2020-07-03 05:57:43.000000000 +0200
+++ new/dask-2.21.0/docs/source/develop.rst     2020-07-17 06:21:03.000000000 +0200
@@ -86,23 +86,22 @@
 Install
 ~~~~~~~
 
-You may want to install larger dependencies like NumPy and Pandas using a
-binary package manager like conda_.  You can skip this step if you already
-have these libraries, don't care to use them, or have sufficient build
-environment on your computer to compile them when installing with ``pip``::
+To build the library you can install the necessary requirements using
+pip or conda_::
 
-   conda install -y numpy pandas scipy bokeh psutil
+  cd dask
 
 .. _conda: https://conda.io/
 
-Install Dask and dependencies::
+``pip``::
 
-   cd dask
-   python -m pip install -e ".[complete]"
+  python -m pip install -e ".[complete]"
 
-For development, Dask uses the following additional dependencies::
+``conda``::
 
-   python -m pip install pytest moto
+  conda env create -n dask-dev -f continuous_integration/environment-latest.yaml
+  conda activate dask-dev
+  python -m pip install --no-deps -e .
 
 
 Run Tests

