Hello community,

here is the log from the commit of package python-distributed for openSUSE:Factory checked in at 2018-09-11 17:17:37
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-distributed (Old)
 and      /work/SRC/openSUSE:Factory/.python-distributed.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-distributed"

Tue Sep 11 17:17:37 2018 rev:6 rq:634439 version:1.23.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-distributed/python-distributed.changes    2018-09-04 22:56:23.989047985 +0200
+++ /work/SRC/openSUSE:Factory/.python-distributed.new/python-distributed.changes    2018-09-11 17:17:51.055359160 +0200
@@ -1,0 +2,16 @@
+Sat Sep  8 04:35:20 UTC 2018 - Arun Persaud <[email protected]>
+
+- update to version 1.23.1:
+  * Raise informative error when mixing futures between clients
+    (#2227) Matthew Rocklin
+  * add byte_keys to unpack_remotedata call (#2232) Matthew Rocklin
+  * Add documentation for gist/rawgit for get_task_stream (#2236)
+    Matthew Rocklin
+  * Quiet Client.close by waiting for scheduler stop signal (#2237)
+    Matthew Rocklin
+  * Display system graphs nicely on different screen sizes (#2239)
+    Derek Ludwig
+  * Mutate passed in workers dict in TaskStreamPlugin.rectangles
+    (#2238) Matthew Rocklin
+
+-------------------------------------------------------------------

Old:
----
  distributed-1.23.0.tar.gz

New:
----
  distributed-1.23.1.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-distributed.spec ++++++
--- /var/tmp/diff_new_pack.4B2avE/_old  2018-09-11 17:17:51.939357804 +0200
+++ /var/tmp/diff_new_pack.4B2avE/_new  2018-09-11 17:17:51.939357804 +0200
@@ -20,7 +20,7 @@
 # Test requires network connection
 %bcond_with     test
 Name:           python-distributed
-Version:        1.23.0
+Version:        1.23.1
 Release:        0
 Summary:        Library for distributed computing with Python
 License:        BSD-3-Clause

++++++ distributed-1.23.0.tar.gz -> distributed-1.23.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/PKG-INFO new/distributed-1.23.1/PKG-INFO
--- old/distributed-1.23.0/PKG-INFO     2018-08-30 18:44:33.000000000 +0200
+++ new/distributed-1.23.1/PKG-INFO     2018-09-06 14:11:46.000000000 +0200
@@ -1,11 +1,12 @@
-Metadata-Version: 1.2
+Metadata-Version: 1.1
 Name: distributed
-Version: 1.23.0
+Version: 1.23.1
 Summary: Distributed scheduler for Dask
 Home-page: https://distributed.readthedocs.io/en/latest/
-Maintainer: Matthew Rocklin
-Maintainer-email: [email protected]
+Author: Matthew Rocklin
+Author-email: [email protected]
 License: BSD
+Description-Content-Type: UNKNOWN
 Description: Distributed
         ===========
         
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/_version.py new/distributed-1.23.1/distributed/_version.py
--- old/distributed-1.23.0/distributed/_version.py      2018-08-30 18:44:33.000000000 +0200
+++ new/distributed-1.23.1/distributed/_version.py      2018-09-06 14:11:46.000000000 +0200
@@ -8,11 +8,11 @@
 
 version_json = '''
 {
- "date": "2018-08-30T12:43:14-0400",
+ "date": "2018-09-06T08:10:35-0400",
  "dirty": false,
  "error": null,
- "full-revisionid": "9432f446abd5d09e96b8a799897d7917986dd9c7",
- "version": "1.23.0"
+ "full-revisionid": "8bd288a7b7e087202ed74d67bc7a438221d0b2bf",
+ "version": "1.23.1"
 }
 '''  # END VERSION_JSON
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/actor.py new/distributed-1.23.1/distributed/actor.py
--- old/distributed-1.23.0/distributed/actor.py 2018-08-20 15:54:29.000000000 +0200
+++ new/distributed-1.23.1/distributed/actor.py 2018-09-05 19:20:42.000000000 +0200
@@ -164,6 +164,10 @@
 
             return self._sync(get_actor_attribute_from_worker)
 
+    @property
+    def client(self):
+        return self._future.client
+
 
 class ProxyRPC(object):
     """
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/bokeh/scheduler.py new/distributed-1.23.1/distributed/bokeh/scheduler.py
--- old/distributed-1.23.0/distributed/bokeh/scheduler.py       2018-08-30 18:31:20.000000000 +0200
+++ new/distributed-1.23.1/distributed/bokeh/scheduler.py       2018-09-06 14:07:29.000000000 +0200
@@ -1053,12 +1053,13 @@
 
 def systemmonitor_doc(scheduler, extra, doc):
     with log_errors():
-        sysmon = SystemMonitor(scheduler, sizing_mode='scale_width')
+        sysmon = SystemMonitor(scheduler, sizing_mode='stretch_both')
         doc.title = "Dask: Scheduler System Monitor"
         doc.add_periodic_callback(sysmon.update, 500)
 
-        doc.add_root(column(sysmon.root, sizing_mode='scale_width'))
-        doc.template = env.get_template('simple.html')
+        for subdoc in sysmon.root.children:
+            doc.add_root(subdoc)
+        doc.template = env.get_template('system.html')
         doc.template_variables.update(extra)
         doc.theme = BOKEH_THEME
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/bokeh/static/css/system.css new/distributed-1.23.1/distributed/bokeh/static/css/system.css
--- old/distributed-1.23.0/distributed/bokeh/static/css/system.css      1970-01-01 01:00:00.000000000 +0100
+++ new/distributed-1.23.1/distributed/bokeh/static/css/system.css      2018-09-06 14:07:29.000000000 +0200
@@ -0,0 +1,26 @@
+#system-fluid {
+    display: flex;
+    flex-wrap: wrap;
+    height: 100%;
+}
+
+/* Small layout: stack all graphs on top of each other, space split equally */
+@media (min-width: 0px) {
+    #system-fluid {
+        flex-direction: column;
+    }
+    #system-fluid .system-item {
+        flex: 1;
+    }
+}
+
+/* Large layout: as many rows as necessary, each item consuming at least
+ * half of the width */
+@media (min-width: 992px) {
+    #system-fluid {
+        flex-direction: row;
+    }
+    #system-fluid .system-item {
+        flex: 1 50%;
+    }
+}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/bokeh/templates/system.html new/distributed-1.23.1/distributed/bokeh/templates/system.html
--- old/distributed-1.23.0/distributed/bokeh/templates/system.html      1970-01-01 01:00:00.000000000 +0100
+++ new/distributed-1.23.1/distributed/bokeh/templates/system.html      2018-09-06 14:07:29.000000000 +0200
@@ -0,0 +1,18 @@
+{% extends "base.html" %}
+
+{% block extra_resources %}
+<link rel="stylesheet" href="statics/css/system.css"/>
+{% endblock %}
+
+{% block content %}
+{% from macros import embed %}
+<div id="system-fluid">
+  {% for plot in roots %}
+    <div class="system-item">
+      {{ embed(plot) }}
+    </div>
+  {% endfor %}
+</div>
+{{ plot_script }}
+{% endblock %}
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/client.py new/distributed-1.23.1/distributed/client.py
--- old/distributed-1.23.0/distributed/client.py        2018-08-30 18:31:20.000000000 +0200
+++ new/distributed-1.23.1/distributed/client.py        2018-09-06 14:05:25.000000000 +0200
@@ -821,7 +821,8 @@
         for pc in self._periodic_callbacks.values():
             pc.start()
 
-        self.coroutines.append(self._handle_report())
+        self._handle_scheduler_coroutine = self._handle_report()
+        self.coroutines.append(self._handle_scheduler_coroutine)
 
         raise gen.Return(self)
 
@@ -1044,9 +1045,18 @@
                 del dask.config.config['get']
             if self.status == 'closed':
                 raise gen.Return()
+
             if self.scheduler_comm and self.scheduler_comm.comm and not self.scheduler_comm.comm.closed():
                 self._send_to_scheduler({'op': 'close-client'})
                 self._send_to_scheduler({'op': 'close-stream'})
+
+            # Give the scheduler 'stream-closed' message 100ms to come through
+            # This makes the shutdown slightly smoother and quieter
+            with ignoring(AttributeError, gen.TimeoutError):
+                yield gen.with_timeout(timedelta(milliseconds=100),
+                                       self._handle_scheduler_coroutine)
+
+            if self.scheduler_comm and self.scheduler_comm.comm and not self.scheduler_comm.comm.closed():
                 yield self.scheduler_comm.close()
             for key in list(self.futures):
                 self._release_key(key=key)
@@ -1391,8 +1401,8 @@
 
     @gen.coroutine
     def _gather(self, futures, errors='raise', direct=None, local_worker=None):
-        futures2, keys = unpack_remotedata(futures, byte_keys=True)
-        keys = [tokey(key) for key in keys]
+        unpacked, future_set = unpack_remotedata(futures, byte_keys=True)
+        keys = [tokey(future.key) for future in future_set]
         bad_data = dict()
         data = {}
 
@@ -1485,11 +1495,11 @@
             else:
                 break
 
-        if bad_data and errors == 'skip' and isinstance(futures2, list):
-            futures2 = [f for f in futures2 if f not in bad_data]
+        if bad_data and errors == 'skip' and isinstance(unpacked, list):
+            unpacked = [f for f in unpacked if f not in bad_data]
 
         data.update(response['data'])
-        result = pack_data(futures2, merge(data, bad_data))
+        result = pack_data(unpacked, merge(data, bad_data))
         raise gen.Return(result)
 
     def _threaded_gather(self, qin, qout, **kwargs):
@@ -2098,10 +2108,16 @@
             if values:
                 dsk = dask.optimization.inline(dsk, keys=values)
 
-            d = {k: unpack_remotedata(v) for k, v in dsk.items()}
-            extra_keys = set.union(*[v[1] for v in d.values()]) if d else set()
+            d = {k: unpack_remotedata(v, byte_keys=True) for k, v in dsk.items()}
+            extra_futures = set.union(*[v[1] for v in d.values()]) if d else set()
+            extra_keys = {tokey(future.key) for future in extra_futures}
             dsk2 = str_graph({k: v[0] for k, v in d.items()}, extra_keys)
             dsk3 = {k: v for k, v in dsk2.items() if k is not v}
+            for future in extra_futures:
+                if future.client is not self:
+                    msg = ("Inputs contain futures that were created by "
+                           "another client.")
+                    raise ValueError(msg)
 
             if restrictions:
                 restrictions = keymap(tokey, restrictions)
@@ -2110,7 +2126,7 @@
             if loose_restrictions is not None:
                 loose_restrictions = list(map(tokey, loose_restrictions))
 
-            future_dependencies = {tokey(k): set(map(tokey, v[1])) for k, v in d.items()}
+            future_dependencies = {tokey(k): {tokey(f.key) for f in v[1]} for k, v in d.items()}
 
             for s in future_dependencies.values():
                 for v in s:
@@ -3851,11 +3867,24 @@
 
     Get back a Bokeh figure and optionally save to a file
 
-    >>> with get_task_stream(plot='save', filename='myfile.html') as ts:
+    >>> with get_task_stream(plot='save', filename='task-stream.html') as ts:
     ...    x.compute()
     >>> ts.figure
     <Bokeh Figure>
 
+    To share this file with others you may wish to upload and serve it online.
+    A common way to do this is to upload the file as a gist, and then serve it
+    on https://rawgit.com ::
+
+       $ pip install gist
+       $ gist task-stream.html
+       https://gist.github.com/8a5b3c74b10b413f612bb5e250856ceb
+
+    You can then navigate to that site, click the "Raw" button to the right of
+    the ``task-stream.html`` file, and then provide that URL to
+    https://rawgit.com .  This process should provide a sharable link that
+    others can use to see your task stream plot.
+
     See Also
     --------
     Client.get_task_stream: Function version of this context manager
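The client.py changes above make `Client.close` wait up to 100ms for the scheduler's stream-closed message before tearing down the comm, so shutdown is quieter. The real code uses Tornado's `gen.with_timeout`; the sketch below illustrates the same wait-with-deadline-and-suppress pattern using asyncio (names and the 0.1s window mirror the change, but this is an illustrative analogue, not the distributed implementation):

```python
import asyncio


async def _handler():
    # Stand-in for the background report handler: in the real client
    # this coroutine runs until the scheduler signals stream-closed.
    await asyncio.sleep(0.01)


async def close():
    # Give the handler a short window to finish cleanly; swallow the
    # timeout so shutdown is quiet whether or not the signal arrived.
    task = asyncio.ensure_future(_handler())
    try:
        await asyncio.wait_for(task, timeout=0.1)
    except asyncio.TimeoutError:
        pass
    return "closed"


print(asyncio.run(close()))  # closed
```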
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/diagnostics/task_stream.py new/distributed-1.23.1/distributed/diagnostics/task_stream.py
--- old/distributed-1.23.0/distributed/diagnostics/task_stream.py       2018-08-29 18:10:06.000000000 +0200
+++ new/distributed-1.23.1/distributed/diagnostics/task_stream.py       2018-09-06 14:07:29.000000000 +0200
@@ -83,7 +83,8 @@
 
 
 def rectangles(msgs, workers=None, start_boundary=0):
-    workers = workers or {}
+    if workers is None:
+        workers = {}
 
     L_start = []
     L_duration = []
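The one-line change above matters because `workers = workers or {}` replaces a caller-supplied *empty* dict with a fresh one, so mutations made inside the function never reach the caller; testing against `None` keeps the passed-in dict. A self-contained illustration (the toy functions below are simplified sketches, not the real `TaskStreamPlugin.rectangles`):

```python
def rectangles_old(msgs, workers=None):
    workers = workers or {}          # an empty dict from the caller is discarded
    for msg in msgs:
        workers.setdefault(msg, len(workers))
    return workers


def rectangles_new(msgs, workers=None):
    if workers is None:              # only replace a truly missing argument
        workers = {}
    for msg in msgs:
        workers.setdefault(msg, len(workers))
    return workers


shared = {}
rectangles_old(["w1"], shared)
print(shared)                        # {} -- caller's dict was not mutated

shared = {}
rectangles_new(["w1"], shared)
print(shared)                        # {'w1': 0} -- mutation is visible
```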
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/diagnostics/tests/test_task_stream.py new/distributed-1.23.1/distributed/diagnostics/tests/test_task_stream.py
--- old/distributed-1.23.0/distributed/diagnostics/tests/test_task_stream.py    2018-08-29 18:10:06.000000000 +0200
+++ new/distributed-1.23.1/distributed/diagnostics/tests/test_task_stream.py    2018-09-06 14:07:29.000000000 +0200
@@ -28,6 +28,7 @@
     workers = dict()
 
     rects = es.rectangles(0, 10, workers)
+    assert workers
     assert all(n == 'div' for n in rects['name'])
     assert all(d > 0 for d in rects['duration'])
     counts = frequencies(rects['color'])
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/tests/test_client.py new/distributed-1.23.1/distributed/tests/test_client.py
--- old/distributed-1.23.0/distributed/tests/test_client.py     2018-08-29 18:10:06.000000000 +0200
+++ new/distributed-1.23.1/distributed/tests/test_client.py     2018-09-06 14:07:29.000000000 +0200
@@ -1121,8 +1121,11 @@
     [x] = yield c.get({'x': (inc, 1)}, ['x'], sync=False)
     import gc
     gc.collect()
-    yield gen.moment
-    assert c.refcount['x'] == 0
+
+    start = time()
+    while c.refcount['x']:
+        yield gen.sleep(0.01)
+        assert time() < start + 2
 
 
 def test_Current():
@@ -5476,5 +5479,25 @@
             assert c.submit(f).result()
 
 
+@gen_cluster()
+def test_mixing_clients(s, a, b):
+    c1 = yield Client(s.address, asynchronous=True)
+    c2 = yield Client(s.address, asynchronous=True)
+
+    future = c1.submit(inc, 1)
+    with pytest.raises(ValueError):
+        c2.submit(inc, future)
+    yield c1.close()
+    yield c2.close()
+
+
+@gen_cluster(client=True)
+def test_tuple_keys(c, s, a, b):
+    x = dask.delayed(inc)(1, dask_key_name=('x', 1))
+    y = dask.delayed(inc)(x, dask_key_name=('y', 1))
+    future = c.compute(y)
+    assert (yield future) == 3
+
+
 if sys.version_info >= (3, 5):
     from distributed.tests.py3_test_client import *  # noqa F401
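The refcount test in the diff above was changed from a single assertion after `gc.collect()` into a poll-with-deadline loop, because the key release can complete asynchronously. That loop shape generalizes into a small helper; the sketch below is illustrative (the helper name and the toy state are hypothetical, not from distributed):

```python
import time


def wait_until(predicate, timeout=2.0, interval=0.01):
    # Poll until predicate() becomes true, failing if it does not
    # hold within `timeout` seconds -- the same shape as the loop
    # the test above now uses instead of a one-shot assert.
    start = time.time()
    while not predicate():
        time.sleep(interval)
        assert time.time() < start + timeout


state = {"refcount": 3}


def release():
    state["refcount"] = 0


release()
wait_until(lambda: state["refcount"] == 0)
print("released")  # released
```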
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed/utils_comm.py new/distributed-1.23.1/distributed/utils_comm.py
--- old/distributed-1.23.0/distributed/utils_comm.py    2018-08-09 03:16:09.000000000 +0200
+++ new/distributed-1.23.1/distributed/utils_comm.py    2018-09-05 19:20:42.000000000 +0200
@@ -90,10 +90,12 @@
     only be accessed in a certain way.  Schedulers may have particular needs
     that can only be addressed by additional metadata.
     """
-
     def __init__(self, key):
         self.key = key
 
+    def __repr__(self):
+        return "%s('%s')" % (type(self).__name__, self.key)
+
 
 _round_robin_counter = [0]
 
@@ -144,7 +146,7 @@
 def unpack_remotedata(o, byte_keys=False, myset=None):
     """ Unpack WrappedKey objects from collection
 
-    Returns original collection and set of all found keys
+    Returns original collection and set of all found WrappedKey objects
 
     Examples
     --------
@@ -154,19 +156,19 @@
     >>> unpack_remotedata(())
     ((), set())
     >>> unpack_remotedata(rd)
-    ('mykey', {'mykey'})
+    ('mykey', {WrappedKey('mykey')})
     >>> unpack_remotedata([1, rd])
-    ([1, 'mykey'], {'mykey'})
+    ([1, 'mykey'], {WrappedKey('mykey')})
     >>> unpack_remotedata({1: rd})
-    ({1: 'mykey'}, {'mykey'})
+    ({1: 'mykey'}, {WrappedKey('mykey')})
     >>> unpack_remotedata({1: [rd]})
-    ({1: ['mykey']}, {'mykey'})
+    ({1: ['mykey']}, {WrappedKey('mykey')})
 
     Use the ``byte_keys=True`` keyword to force string keys
 
     >>> rd = WrappedKey(('x', 1))
     >>> unpack_remotedata(rd, byte_keys=True)
-    ("('x', 1)", {"('x', 1)"})
+    ("('x', 1)", {WrappedKey('('x', 1)')})
     """
     if myset is None:
         myset = set()
@@ -190,7 +192,7 @@
         k = o.key
         if byte_keys:
             k = tokey(k)
-        myset.add(k)
+        myset.add(o)
         return k
     else:
         return o
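The utils_comm.py change above makes `unpack_remotedata` collect the `WrappedKey` objects themselves (`myset.add(o)`) rather than their keys, which is also why the `__repr__` is added. A minimal sketch of the new behavior (heavily simplified: the real function also walks tasks and dicts and supports `byte_keys`):

```python
class WrappedKey(object):
    def __init__(self, key):
        self.key = key

    def __repr__(self):
        # Matches the __repr__ added in this release.
        return "%s('%s')" % (type(self).__name__, self.key)


def unpack_remotedata(o, myset=None):
    # Simplified: handle only lists and WrappedKey leaves.
    if myset is None:
        myset = set()
    if isinstance(o, list):
        return [unpack_remotedata(x, myset)[0] for x in o], myset
    if isinstance(o, WrappedKey):
        myset.add(o)                 # collect the object, not just o.key
        return o.key, myset
    return o, myset


rd = WrappedKey('mykey')
print(unpack_remotedata([1, rd]))    # ([1, 'mykey'], {WrappedKey('mykey')})
```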
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed.egg-info/PKG-INFO new/distributed-1.23.1/distributed.egg-info/PKG-INFO
--- old/distributed-1.23.0/distributed.egg-info/PKG-INFO        2018-08-30 18:44:33.000000000 +0200
+++ new/distributed-1.23.1/distributed.egg-info/PKG-INFO        2018-09-06 14:11:46.000000000 +0200
@@ -1,11 +1,12 @@
-Metadata-Version: 1.2
+Metadata-Version: 1.1
 Name: distributed
-Version: 1.23.0
+Version: 1.23.1
 Summary: Distributed scheduler for Dask
 Home-page: https://distributed.readthedocs.io/en/latest/
-Maintainer: Matthew Rocklin
-Maintainer-email: [email protected]
+Author: Matthew Rocklin
+Author-email: [email protected]
 License: BSD
+Description-Content-Type: UNKNOWN
 Description: Distributed
         ===========
         
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/distributed.egg-info/SOURCES.txt new/distributed-1.23.1/distributed.egg-info/SOURCES.txt
--- old/distributed-1.23.0/distributed.egg-info/SOURCES.txt     2018-08-30 18:44:33.000000000 +0200
+++ new/distributed-1.23.1/distributed.egg-info/SOURCES.txt     2018-09-06 14:11:46.000000000 +0200
@@ -70,6 +70,7 @@
 distributed/bokeh/worker.py
 distributed/bokeh/static/css/base.css
 distributed/bokeh/static/css/status.css
+distributed/bokeh/static/css/system.css
 distributed/bokeh/static/images/dask-logo.svg
 distributed/bokeh/static/images/fa-bars.svg
 distributed/bokeh/templates/base.html
@@ -79,6 +80,7 @@
 distributed/bokeh/templates/main.html
 distributed/bokeh/templates/simple.html
 distributed/bokeh/templates/status.html
+distributed/bokeh/templates/system.html
 distributed/bokeh/templates/task.html
 distributed/bokeh/templates/worker-table.html
 distributed/bokeh/templates/worker.html
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/docs/source/actors.rst new/distributed-1.23.1/docs/source/actors.rst
--- old/distributed-1.23.0/docs/source/actors.rst       2018-08-09 03:16:09.000000000 +0200
+++ new/distributed-1.23.1/docs/source/actors.rst       2018-09-06 14:09:27.000000000 +0200
@@ -175,7 +175,7 @@
 
 
 Calling from coroutines and async/await
---------------------------
+---------------------------------------
 
 If you use actors within a coroutine or async/await function then actor methods
 and attrbute access will return Tornado futures
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/docs/source/api.rst new/distributed-1.23.1/docs/source/api.rst
--- old/distributed-1.23.0/docs/source/api.rst  2018-08-09 03:16:09.000000000 +0200
+++ new/distributed-1.23.1/docs/source/api.rst  2018-09-06 14:05:22.000000000 +0200
@@ -3,7 +3,7 @@
 API
 ===
 
-.. currentmodule:: distributed.client
+.. currentmodule:: distributed
 
 **Client**
 
@@ -60,7 +60,7 @@
    ReplayExceptionClient.get_futures_error
    ReplayExceptionClient.recreate_error_locally
 
-.. currentmodule:: distributed.client
+.. currentmodule:: distributed
 
 
 **Future**
@@ -131,7 +131,7 @@
 Client
 ------
 
-.. currentmodule:: distributed.client
+.. currentmodule:: distributed
 
 .. autoclass:: Client
    :members:
@@ -163,6 +163,7 @@
 .. autofunction:: distributed.get_client
 .. autofunction:: distributed.secede
 .. autofunction:: distributed.rejoin
+.. autoclass:: get_task_stream
 
 .. autoclass:: Lock
    :members:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.23.0/docs/source/changelog.rst new/distributed-1.23.1/docs/source/changelog.rst
--- old/distributed-1.23.0/docs/source/changelog.rst    2018-08-30 18:43:14.000000000 +0200
+++ new/distributed-1.23.1/docs/source/changelog.rst    2018-09-06 14:10:03.000000000 +0200
@@ -1,11 +1,22 @@
 Changelog
 =========
 
-1.23.1 - YYYY-MM-DD
+1.23.2 - YYYY-MM-DD
 -------------------
 
 -
 
+1.23.1 - 2018-09-06
+-------------------
+
+-  Raise informative error when mixing futures between clients (#2227) `Matthew Rocklin`_
+-  add byte_keys to unpack_remotedata call (#2232) `Matthew Rocklin`_
+-  Add documentation for gist/rawgit for get_task_stream (#2236) `Matthew Rocklin`_
+-  Quiet Client.close by waiting for scheduler stop signal (#2237) `Matthew Rocklin`_
+-  Display system graphs nicely on different screen sizes (#2239) `Derek Ludwig`_
+-  Mutate passed in workers dict in TaskStreamPlugin.rectangles (#2238) `Matthew Rocklin`_
+
+
 1.23.0 - 2018-08-30
 -------------------
 
@@ -56,7 +67,7 @@
 -  Retire workers from scale (:pr:`2104`) `Matthew Rocklin`_
 -  Allow worker to refuse data requests with busy signal (:pr:`2092`) `Matthew Rocklin`_
 -  Don't forget released keys (:pr:`2098`) `Matthew Rocklin`_
--  Update example for stopping a worker (:pr:`2088`) `John A Kirkham`_
+-  Update example for stopping a worker (:pr:`2088`) `John Kirkham`_
 -  removed hardcoded value of memory terminate fraction from a log message (:pr:`2096`) `Bartosz Marcinkowski`_
 -  Adjust worker doc after change in config file location and treatment (:pr:`2094`) `Aurélien Ponte`_
 -  Prefer gathering data from same host (:pr:`2090`) `Matthew Rocklin`_

