Hello community,
here is the log from the commit of package python-distributed for
openSUSE:Factory checked in at 2018-05-08 13:38:50
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-distributed (Old)
and /work/SRC/openSUSE:Factory/.python-distributed.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "python-distributed"
Tue May 8 13:38:50 2018 rev:3 rq:605125 version:1.21.8
Changes:
--------
--- /work/SRC/openSUSE:Factory/python-distributed/python-distributed.changes 2018-05-04 11:29:39.463418437 +0200
+++ /work/SRC/openSUSE:Factory/.python-distributed.new/python-distributed.changes 2018-05-08 13:38:51.642259801 +0200
@@ -1,0 +2,32 @@
+Sun May 6 05:30:39 UTC 2018 - [email protected]
+
+- specfile:
+ * Replace msgpack-python by msgpack
+ * Updated required versions according to requirements.txt
+
+- update to version 1.21.8:
+ * Remove errant print statement (GH#1957) Matthew Rocklin
+ * Only add reevaluate_occupancy callback once (GH#1953) Tony Lorenzo
+
+- changes from version 1.21.7:
+ * Newline needed for doctest rendering (GH#1917) Loïc Estève
+ * Support Client._repr_html_ when in async mode (GH#1909) Matthew
+ Rocklin
+ * Add parameters to dask-ssh command (GH#1910) Irene Rodriguez
+ * Sanitize get_dataset trace (GH#1888) John Kirkham
+ * Fix bug where queues would not clean up cleanly (GH#1922) Matthew
+ Rocklin
+ * Delete cached file safely in upload file (GH#1921) Matthew Rocklin
+ * Accept KeyError when closing tornado IOLoop in tests (GH#1937)
+ Matthew Rocklin
+ * Quiet the client and scheduler when gather(..., errors='skip')
+ (GH#1936) Matthew Rocklin
+ * Clarify couldn’t gather keys warning (GH#1942) Kenneth Koski
+ * Support submit keywords in joblib (GH#1947) Matthew Rocklin
+ * Avoid use of external resources in bokeh server (GH#1934) Matthew
+ Rocklin
+ * Drop __contains__ from Datasets (GH#1889) John Kirkham
+ * Fix bug with queue timeouts (GH#1950) Matthew Rocklin
+ * Replace msgpack-python by msgpack (GH#1927) Loïc Estève
+
+-------------------------------------------------------------------
Old:
----
distributed-1.21.6.tar.gz
New:
----
distributed-1.21.8.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ python-distributed.spec ++++++
--- /var/tmp/diff_new_pack.NgbfXK/_old 2018-05-08 13:38:52.266237279 +0200
+++ /var/tmp/diff_new_pack.NgbfXK/_new 2018-05-08 13:38:52.270237134 +0200
@@ -20,12 +20,12 @@
# Test requires network connection
%bcond_with test
Name: python-distributed
-Version: 1.21.6
+Version: 1.21.8
Release: 0
Summary: Library for distributed computing with Python
License: BSD-3-Clause
Group: Development/Languages/Python
-Url: https://distributed.readthedocs.io/en/latest/
+URL: https://distributed.readthedocs.io/en/latest/
Source:         https://files.pythonhosted.org/packages/source/d/distributed/distributed-%{version}.tar.gz
Source99: python-distributed-rpmlintrc
BuildRequires: %{python_module devel}
@@ -38,31 +38,31 @@
Requires: python-certifi
Requires: python-click >= 6.6
Requires: python-cloudpickle >= 0.2.2
-Requires: python-dask >= 0.14.1
+Requires: python-dask >= 0.17.0
Requires: python-joblib >= 0.10.2
-Requires: python-msgpack-python
+Requires: python-msgpack
Requires: python-psutil
Requires: python-scikit-learn >= 0.17.1
Requires: python-six
Requires: python-sortedcontainers
Requires: python-tblib
Requires: python-toolz >= 0.7.4
-Requires: python-tornado >= 4.4
-Requires: python-zict >= 0.1.2
+Requires: python-tornado >= 4.5.1
+Requires: python-zict >= 0.1.3
BuildArch: noarch
%if %{with test}
BuildRequires: %{python_module certifi}
BuildRequires: %{python_module click >= 6.6}
BuildRequires: %{python_module cloudpickle >= 0.2.2}
-BuildRequires: %{python_module dask >= 0.14.1}
-BuildRequires: %{python_module msgpack-python}
+BuildRequires: %{python_module dask >= 0.17.0}
+BuildRequires: %{python_module msgpack}
BuildRequires: %{python_module psutil}
BuildRequires: %{python_module six}
BuildRequires: %{python_module sortedcontainers}
BuildRequires: %{python_module tblib}
BuildRequires: %{python_module toolz >= 0.7.4}
-BuildRequires: %{python_module tornado >= 4.4}
-BuildRequires: %{python_module zict >= 0.1.2}
+BuildRequires: %{python_module tornado >= 4.5.1}
+BuildRequires: %{python_module zict >= 0.1.3}
%endif
%ifpython2
Requires: python-futures
++++++ distributed-1.21.6.tar.gz -> distributed-1.21.8.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/MANIFEST.in new/distributed-1.21.8/MANIFEST.in
--- old/distributed-1.21.6/MANIFEST.in 2018-02-12 20:27:00.000000000 +0100
+++ new/distributed-1.21.8/MANIFEST.in 2018-05-03 23:45:52.000000000 +0200
@@ -2,6 +2,7 @@
recursive-include distributed *.js
recursive-include distributed *.coffee
recursive-include distributed *.html
+recursive-include distributed *.svg
recursive-include docs *.rst
include setup.py
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/PKG-INFO new/distributed-1.21.8/PKG-INFO
--- old/distributed-1.21.6/PKG-INFO 2018-04-06 18:58:48.000000000 +0200
+++ new/distributed-1.21.8/PKG-INFO 2018-05-03 23:48:35.000000000 +0200
@@ -1,6 +1,6 @@
Metadata-Version: 1.1
Name: distributed
-Version: 1.21.6
+Version: 1.21.8
Summary: Distributed computing
Home-page: https://distributed.readthedocs.io/en/latest/
Author: Matthew Rocklin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/_version.py new/distributed-1.21.8/distributed/_version.py
--- old/distributed-1.21.6/distributed/_version.py 2018-04-06 18:58:48.000000000 +0200
+++ new/distributed-1.21.8/distributed/_version.py 2018-05-03 23:48:35.000000000 +0200
@@ -8,11 +8,11 @@
version_json = '''
{
- "date": "2018-04-06T12:58:13-0400",
+ "date": "2018-05-03T17:47:50-0400",
"dirty": false,
"error": null,
- "full-revisionid": "07b3840d1065df1d2ebebf67d1f0a9a33eb9c28e",
- "version": "1.21.6"
+ "full-revisionid": "6c47b25d25534dca778cfe1eb7bb5dbe8b7832aa",
+ "version": "1.21.8"
}
''' # END VERSION_JSON
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/bokeh/core.py new/distributed-1.21.8/distributed/bokeh/core.py
--- old/distributed-1.21.6/distributed/bokeh/core.py 2018-02-12 20:27:00.000000000 +0100
+++ new/distributed-1.21.8/distributed/bokeh/core.py 2018-05-03 23:45:52.000000000 +0200
@@ -1,9 +1,12 @@
from __future__ import print_function, division, absolute_import
from distutils.version import LooseVersion
+import os
import bokeh
from bokeh.server.server import Server
+from tornado import web
+
if LooseVersion(bokeh.__version__) < LooseVersion('0.12.6'):
raise ImportError("Dask needs bokeh >= 0.12.6")
@@ -28,6 +31,13 @@
allow_websocket_origin=["*"],
**self.server_kwargs)
self.server.start()
+
+ handlers = [(self.prefix + r'/statics/(.*)',
+ web.StaticFileHandler,
+ {'path': os.path.join(os.path.dirname(__file__), 'static')})]
+
+ self.server._tornado.add_handlers(r'.*', handlers)
+
return
except (SystemExit, EnvironmentError):
port = 0
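
The hunk above registers a tornado StaticFileHandler so the dashboard can serve its own assets (such as the dask_horizontal.svg added below) instead of fetching them from the network. A minimal standalone sketch of the same pattern, with an illustrative port and directory:

    # Sketch: serve files from a local ./static directory under /statics/
    import os
    from tornado import ioloop, web

    handlers = [(r'/statics/(.*)', web.StaticFileHandler,
                 {'path': os.path.join(os.path.dirname(__file__), 'static')})]
    app = web.Application(handlers)
    app.listen(8888)  # illustrative port
    ioloop.IOLoop.current().start()
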
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/bokeh/scheduler.py new/distributed-1.21.8/distributed/bokeh/scheduler.py
--- old/distributed-1.21.6/distributed/bokeh/scheduler.py 2018-04-06 16:36:53.000000000 +0200
+++ new/distributed-1.21.8/distributed/bokeh/scheduler.py 2018-05-03 23:45:52.000000000 +0200
@@ -1215,4 +1215,5 @@
from .scheduler_html import routes
handlers = [(self.prefix + '/' + url, cls, {'server': self.my_server,
'extra': self.extra})
for url, cls in routes]
+
self.server._tornado.add_handlers(r'.*', handlers)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/bokeh/static/dask_horizontal.svg new/distributed-1.21.8/distributed/bokeh/static/dask_horizontal.svg
--- old/distributed-1.21.6/distributed/bokeh/static/dask_horizontal.svg 1970-01-01 01:00:00.000000000 +0100
+++ new/distributed-1.21.8/distributed/bokeh/static/dask_horizontal.svg 2018-05-03 23:45:52.000000000 +0200
@@ -0,0 +1,28 @@
+<svg id="Layer_1"
+ data-name="Layer 1"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ viewBox="40 30 470 190">
+ <defs>
+ <linearGradient id="linear-gradient" x1="154.55" y1="173.33" x2="242.36" y2="173.33" gradientTransform="translate(-26.62 -73.73) rotate(7.91)" gradientUnits="userSpaceOnUse">
+ <stop offset="0.01" stop-color="#c7422f" />
+ <stop offset="0.37" stop-color="#d46e43" />
+ <stop offset="1" stop-color="#eeb575" />
+ </linearGradient>
+ <linearGradient id="linear-gradient-2" x1="181.83" y1="171.07" x2="221.39" y2="171.07" gradientTransform="translate(-26.62 -73.73) rotate(7.91)" gradientUnits="userSpaceOnUse">
+ <stop offset="0.21" stop-color="#cf603b" />
+ <stop offset="1" stop-color="#eeb575" />
+ </linearGradient>
+ <linearGradient id="linear-gradient-3" x1="107.2" y1="175.53" x2="204.37" y2="175.53" xlink:href="#linear-gradient-2" />
+ </defs>
+
+ <title>Dask</title>
+
+ <path d="M214.33,85.8h36.1c24.73,0.28,30.29,21.36,30.29,40,0,39.54-20.35,39.14-30.29,39.14h-36.1V85.8Zm34.74,64.52c12.94,0,15.6-11.66,15.6-24.55,0-18.13-6.31-25.38-15.6-25.38h-18.7v49.93h18.7Z" style="fill:#101011" />
+ <path d="M311.7,86.08h18.46l27.18,78.84h-17L333.87,147H306.31l-6.92,17.88h-17Zm18.18,47.34-9.09-26.87-10,26.87h19Z" style="fill:#101011" />
+ <path d="M364.71,106.58c0-11.68,4.7-20.91,20.54-20.91,7.19,0,27.91,1.13,38.24,3v11.76s-21.49-.8-35.82-0.8c-5.45,0-6.91,3.45-6.91,7.19v5.89c0,6.07,2.67,6.19,6.91,6.19h20.61c14,0,18.95,8.92,18.95,20v7.75c0,15.95-9.55,19.64-18.95,19.64-6.58,0-35.16-.8-40.53-3.24V151.58s23.47,0.61,37.14.61c4.9,0,6.29-5.82,6.29-5.82v-7c0-4-1.56-5.91-6.29-5.91H384.76c-14.5,0-20.05-6.54-20.05-19.81v-7.1Z" style="fill:#101011" />
+ <path d="M438.85,86.08h15.56v33.19h8.64l22.75-33.19h18.67l-27.35,40.11,27.35,38.87H485.81l-23-31.67h-8.37v31.67H438.85v-79Z" style="fill:#101011" />
+ <path d="M192.41,110.26q0.17-1.83.29-3.66a119.55,119.55,0,0,0-12.24-60.92L173.64,32l-3.16,15a109,109,0,0,1-64.2,77.79l-4.69,2,1.78,4.78a107.9,107.9,0,0,1,6.31,48.3A109.44,109.44,0,0,1,104,205.75L100.36,216l10.28-3.38A119.71,119.71,0,0,0,192.41,110.26ZM122.68,196l-5.48,2.79,1.2-6a120.35,120.35,0,0,0,1.75-12,118.87,118.87,0,0,0-4.52-45.89l-0.73-2.41,2.25-1.12A120.24,120.24,0,0,0,173.4,72.22l3.51-8L179,72.7a107.63,107.63,0,0,1,2.69,36.54A108.48,108.48,0,0,1,122.68,196Z" style="fill:url(#linear-gradient)" />
+ <path d="M166.91,116.14c4.13-9.12,8.42-31.77,8.15-33.46A126,126,0,0,1,160,105.07c-0.85,2.24-1.74,4.47-2.74,6.67h0a108.87,108.87,0,0,1-31.62,40.47q0.85,6.14,1.07,12.36A119.4,119.4,0,0,0,166.91,116.14Z" style="fill:url(#linear-gradient-2)" />
+ <path d="M104.08,165.48a109,109,0,0,1-30.87,9.17l-6.08.86,3.08-5.31a120.74,120.74,0,0,0,5.54-10.74,118.79,118.79,0,0,0,10.62-44.87l0.09-2.51,2.49-.33a120.18,120.18,0,0,0,54-21.47A102.79,102.79,0,0,0,161.57,56.9a109,109,0,0,1-80.84,45l-5.08.35,0.13,5.1a107.92,107.92,0,0,1-9.72,47.73,109.43,109.43,0,0,1-13.83,22.67l-6.71,8.49,10.82,0.14a119.25,119.25,0,0,0,47.52-9.26A104.21,104.21,0,0,0,104.08,165.48Z" style="fill:url(#linear-gradient-3)" />
+</svg>
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/bokeh/template.html new/distributed-1.21.8/distributed/bokeh/template.html
--- old/distributed-1.21.6/distributed/bokeh/template.html 2018-02-12 20:27:00.000000000 +0100
+++ new/distributed-1.21.8/distributed/bokeh/template.html 2018-05-03 23:45:52.000000000 +0200
@@ -6,7 +6,6 @@
{{ bokeh_css }}
{{ bokeh_js }}
<style>
- @import url("https://fonts.googleapis.com/css?family=Source+Sans+Pro:300,400,700");
html {
width: 100%;
height: 100%;
@@ -32,6 +31,10 @@
margin: 0 20%;
}
}
+
+ body {
+ font-family: Helvetica, Arial, sans-serif;
+ }
.dashboard {
clear: both;
}
@@ -45,10 +48,10 @@
font-size: 0;
}
.navbar img {
- height: 42px;
+ height: 36px;
float: right;
- margin-bottom: 0px;
- padding: 0px 24px;
+ margin: 3px;
+ padding: 0px 25px;
}
.navbar ul {
list-style-type: none;
@@ -60,7 +63,7 @@
.navbar li {
float: left;
- font-size: 18px;
+ font-size: 17px;
transition: .3s background-color;
}
@@ -101,7 +104,11 @@
<body>
<div class="navbar">
<ul>
- <li><a href="http://dask.pydata.org/en/latest/" style="padding:0px 0px;"><img src="https://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg"></img></a></li>
+ <li>
+ <a href="http://dask.pydata.org/en/latest/" style="padding: 0px 0px">
+ <img src="statics/dask_horizontal.svg"></img>
+ </a>
+ </li>
{% for page in pages %}
<li{% if page == active_page %} class="active"{% endif %}><a href="{{ page }}">{{ page|title }}</a></li>
{% endfor %}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/bokeh/worker.py new/distributed-1.21.8/distributed/bokeh/worker.py
--- old/distributed-1.21.6/distributed/bokeh/worker.py 2018-03-06 13:11:14.000000000 +0100
+++ new/distributed-1.21.8/distributed/bokeh/worker.py 2018-05-03 23:45:52.000000000 +0200
@@ -625,6 +625,7 @@
prefix = prefix.rstrip('/')
if prefix and not prefix.startswith('/'):
prefix = '/' + prefix
+ self.prefix = prefix
extra = {'prefix': prefix}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/cli/dask_ssh.py new/distributed-1.21.8/distributed/cli/dask_ssh.py
--- old/distributed-1.21.6/distributed/cli/dask_ssh.py 2018-03-06 23:35:27.000000000 +0100
+++ new/distributed-1.21.8/distributed/cli/dask_ssh.py 2018-05-03 19:22:32.000000000 +0200
@@ -37,9 +37,20 @@
"dask-scheduler and dask-worker commands."))
@click.option('--remote-python', default=None, type=str,
help="Path to Python on remote nodes.")
+@click.option('--memory-limit', default='auto',
+ help="Bytes of memory that the worker can use. "
+ "This can be an integer (bytes), "
+ "float (fraction of total system memory), "
+ "string (like 5GB or 5000M), "
+ "'auto', or zero for no memory management")
+@click.option('--worker-port', type=int, default=0,
+ help="Serving computation port, defaults to random")
+@click.option('--nanny-port', type=int, default=0,
+ help="Serving nanny port, defaults to random")
@click.pass_context
def main(ctx, scheduler, scheduler_port, hostnames, hostfile, nthreads, nprocs,
- ssh_username, ssh_port, ssh_private_key, nohost, log_directory, remote_python):
+ ssh_username, ssh_port, ssh_private_key, nohost, log_directory, remote_python,
+ memory_limit, worker_port, nanny_port):
try:
hostnames = list(hostnames)
if hostfile:
@@ -55,7 +66,8 @@
exit(1)
c = SSHCluster(scheduler, scheduler_port, hostnames, nthreads, nprocs,
- ssh_username, ssh_port, ssh_private_key, nohost, log_directory, remote_python)
+ ssh_username, ssh_port, ssh_private_key, nohost, log_directory, remote_python,
+ memory_limit, worker_port, nanny_port)
import distributed
print('\n---------------------------------------------------------------')
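
The three new options are forwarded verbatim to each remote dask-worker process. A hypothetical invocation (the host addresses are placeholders):

    dask-ssh 192.168.0.1 192.168.0.2 \
        --memory-limit 4GB --worker-port 9000 --nanny-port 9001
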
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/client.py new/distributed-1.21.8/distributed/client.py
--- old/distributed-1.21.6/distributed/client.py 2018-04-06 18:54:37.000000000 +0200
+++ new/distributed-1.21.8/distributed/client.py 2018-05-03 23:45:52.000000000 +0200
@@ -631,7 +631,11 @@
return '<%s: not connected>' % (self.__class__.__name__,)
def _repr_html_(self):
- if self._loop_runner.is_started() and self.scheduler:
+ if self.cluster and hasattr(self.cluster, 'scheduler'):
+ info = self.cluster.scheduler.identity()
+ elif (self._loop_runner.is_started() and
+ self.scheduler and
+ not (self.asynchronous and self.loop is IOLoop.current())):
info = sync(self.loop, self.scheduler.identity)
else:
info = False
@@ -1349,7 +1353,7 @@
""" Want to stop the All(...) early if we find an error """
st = self.futures[k]
yield st.wait()
- if st.status != 'finished':
+ if st.status != 'finished' and errors == 'raise':
raise AllExit()
while True:
@@ -1410,7 +1414,8 @@
response = yield self.scheduler.gather(keys=keys)
if response['status'] == 'error':
- logger.warning("Couldn't gather keys %s", response['keys'])
+ log = logger.warning if errors == 'raise' else logger.debug
+ log("Couldn't gather %s keys, rescheduling %s", (len(response['keys']), response['keys']))
for key in response['keys']:
self._send_to_scheduler({'op': 'report-key',
'key': key})
@@ -1824,6 +1829,8 @@
@gen.coroutine
def _get_dataset(self, name):
out = yield self.scheduler.publish_get(name=name, client=self.id)
+ if out is None:
+ raise KeyError("Dataset '%s' not found" % name)
with temp_default_client(self):
data = loads(out['data'])
@@ -2476,13 +2483,13 @@
""" Upload local package to workers
This sends a local file up to all worker nodes. This file is placed
- into a temporary directory on Python's system path so any .py, .pyc, .egg
+ into a temporary directory on Python's system path so any .py, .egg
or .zip files will be importable.
Parameters
----------
filename: string
- Filename of .py, .pyc, .egg or .zip file to send to workers
+ Filename of .py, .egg or .zip file to send to workers
Examples
--------
@@ -2637,6 +2644,9 @@
def has_what(self, workers=None, **kwargs):
""" Which keys are held by which workers
+ This returns the keys of the data that are held in each worker's
+ memory.
+
Parameters
----------
workers: list (optional)
@@ -2655,6 +2665,7 @@
--------
Client.who_has
Client.ncores
+ Client.processing
"""
if (isinstance(workers, tuple)
and all(isinstance(i, (str, tuple)) for i in workers)):
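
Taken together, the client.py hunks make gather(..., errors='skip') quiet: failed keys are logged at debug rather than warning level, and the AllExit short-circuit fires only when errors == 'raise'. A small usage sketch (the scheduler address is a placeholder):

    from distributed import Client

    client = Client('tcp://127.0.0.1:8786')  # placeholder address
    futures = client.map(lambda x: 1 / x, [0, 1, 2])  # first task raises ZeroDivisionError
    results = client.gather(futures, errors='skip')   # -> [1.0, 0.5], no warning logged
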
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/deploy/cluster.py new/distributed-1.21.8/distributed/deploy/cluster.py
--- old/distributed-1.21.6/distributed/deploy/cluster.py 2018-04-06 16:36:53.000000000 +0200
+++ new/distributed-1.21.8/distributed/deploy/cluster.py 2018-05-03 23:45:55.000000000 +0200
@@ -179,7 +179,6 @@
adapt.on_click(adapt_cb)
def scale_cb(b):
- print('Hello!')
with log_errors():
n = request.value
with ignoring(AttributeError):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/deploy/local.py new/distributed-1.21.8/distributed/deploy/local.py
--- old/distributed-1.21.6/distributed/deploy/local.py 2018-04-06 16:36:53.000000000 +0200
+++ new/distributed-1.21.8/distributed/deploy/local.py 2018-05-03 19:22:32.000000000 +0200
@@ -60,12 +60,15 @@
>>> c = Client(c) # connect to local cluster # doctest: +SKIP
Add a new worker to the cluster
+
>>> w = c.start_worker(ncores=2) # doctest: +SKIP
Shut down the extra worker
+
>>> c.remove_worker(w) # doctest: +SKIP
Pass extra keyword arguments to Bokeh
+
>>> LocalCluster(service_kwargs={'bokeh': {'prefix': '/foo'}}) # doctest: +SKIP
"""
def __init__(self, n_workers=None, threads_per_worker=None, processes=True,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/deploy/ssh.py new/distributed-1.21.8/distributed/deploy/ssh.py
--- old/distributed-1.21.6/distributed/deploy/ssh.py 2018-03-06 23:35:27.000000000 +0100
+++ new/distributed-1.21.8/distributed/deploy/ssh.py 2018-05-03 19:22:32.000000000 +0200
@@ -210,20 +210,38 @@
def start_worker(logdir, scheduler_addr, scheduler_port, worker_addr,
nthreads, nprocs,
- ssh_username, ssh_port, ssh_private_key, nohost, remote_python=None):
+ ssh_username, ssh_port, ssh_private_key, nohost,
+ memory_limit,
+ worker_port,
+ nanny_port,
+ remote_python=None):
cmd = ('{python} -m distributed.cli.dask_worker '
'{scheduler_addr}:{scheduler_port} '
- '--nthreads {nthreads} --nprocs {nprocs}')
+ '--nthreads {nthreads} --nprocs {nprocs} ')
+
if not nohost:
- cmd += ' --host {worker_addr}'
+ cmd += ' --host {worker_addr} '
+
+ if memory_limit:
+ cmd += '--memory-limit {memory_limit} '
+
+ if worker_port:
+ cmd += '--worker-port {worker_port} '
+
+ if nanny_port:
+ cmd += '--nanny-port {nanny_port} '
+
cmd = cmd.format(
python=remote_python or sys.executable,
scheduler_addr=scheduler_addr,
scheduler_port=scheduler_port,
worker_addr=worker_addr,
nthreads=nthreads,
- nprocs=nprocs)
+ nprocs=nprocs,
+ memory_limit=memory_limit,
+ worker_port=worker_port,
+ nanny_port=nanny_port)
# Optionally redirect stdout and stderr to a logfile
if logdir is not None:
@@ -254,7 +272,8 @@
def __init__(self, scheduler_addr, scheduler_port, worker_addrs,
nthreads=0, nprocs=1,
ssh_username=None, ssh_port=22, ssh_private_key=None,
- nohost=False, logdir=None, remote_python=None):
+ nohost=False, logdir=None, remote_python=None,
+ memory_limit=None, worker_port=None, nanny_port=None):
self.scheduler_addr = scheduler_addr
self.scheduler_port = scheduler_port
@@ -269,6 +288,10 @@
self.remote_python = remote_python
+ self.memory_limit = memory_limit
+ self.worker_port = worker_port
+ self.nanny_port = nanny_port
+
# Generate a universal timestamp to use for log files
import datetime
if logdir is not None:
@@ -325,6 +348,9 @@
self.nthreads, self.nprocs,
self.ssh_username, self.ssh_port,
self.ssh_private_key, self.nohost,
+ self.memory_limit,
+ self.worker_port,
+ self.nanny_port,
self.remote_python))
def shutdown(self):
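
The same settings are available programmatically through SSHCluster; a sketch with placeholder addresses:

    from distributed.deploy.ssh import SSHCluster

    cluster = SSHCluster('192.168.0.1', 8786,             # scheduler host and port
                         ['192.168.0.2', '192.168.0.3'],  # worker hosts
                         memory_limit='4GB', worker_port=9000, nanny_port=9001)
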
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/joblib.py new/distributed-1.21.8/distributed/joblib.py
--- old/distributed-1.21.6/distributed/joblib.py 2018-02-22 17:59:31.000000000 +0100
+++ new/distributed-1.21.8/distributed/joblib.py 2018-05-03 19:22:32.000000000 +0200
@@ -73,7 +73,7 @@
MAX_IDEAL_BATCH_DURATION = 1.0
def __init__(self, scheduler_host=None, scatter=None,
- client=None, loop=None):
+ client=None, loop=None, **submit_kwargs):
if client is None:
if scheduler_host:
client = Client(scheduler_host, loop=loop,
set_as_default=False)
@@ -104,6 +104,7 @@
self._scatter = []
self.data_to_future = {}
self.futures = set()
+ self.submit_kwargs = submit_kwargs
def __reduce__(self):
return (DaskDistributedBackend, ())
@@ -148,7 +149,7 @@
key = '%s-batch-%s' % (joblib_funcname(func), uuid4().hex)
func, args = self._to_func_args(func)
- future = self.client.submit(func, *args, key=key)
+ future = self.client.submit(func, *args, key=key, **self.submit_kwargs)
self.futures.add(future)
@gen.coroutine
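
With this change, any extra keyword given to the dask joblib backend is passed through to Client.submit for every batch, e.g. to pin work to specific workers. A hedged sketch (the addresses are placeholders; importing distributed.joblib is what registers the backend):

    import distributed.joblib  # noqa: registers the 'dask' joblib backend
    import joblib
    from distributed import Client

    client = Client('tcp://127.0.0.1:8786')  # placeholder scheduler address
    with joblib.parallel_backend('dask', workers='tcp://127.0.0.1:45001'):
        squares = joblib.Parallel()(joblib.delayed(pow)(i, 2) for i in range(10))
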
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/publish.py new/distributed-1.21.8/distributed/publish.py
--- old/distributed-1.21.6/distributed/publish.py 2018-04-03 05:03:11.000000000 +0200
+++ new/distributed-1.21.8/distributed/publish.py 2018-05-03 23:45:52.000000000 +0200
@@ -42,10 +42,7 @@
def get(self, stream, name=None, client=None):
with log_errors():
- if name in self.datasets:
- return self.datasets[name]
- else:
- raise KeyError("Dataset '%s' not found" % name)
+ return self.datasets.get(name, None)
class Datasets(MutableMapping):
@@ -68,9 +65,6 @@
def __delitem__(self, key):
self.__client.unpublish_dataset(key)
- def __contains__(self, key):
- return key in self.__client.list_datasets()
-
def __iter__(self):
for key in self.__client.list_datasets():
yield key
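
The scheduler-side get now returns None for a missing name, and the KeyError is raised client-side in Client._get_dataset (see the client.py hunk above), so users see a clean exception instead of a remote traceback. Roughly:

    from distributed import Client

    client = Client()  # starts a local cluster; illustrative
    try:
        client.get_dataset('never-published')
    except KeyError:
        pass  # "Dataset 'never-published' not found"
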
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/queues.py new/distributed-1.21.8/distributed/queues.py
--- old/distributed-1.21.6/distributed/queues.py 2018-02-12 20:27:00.000000000 +0100
+++ new/distributed-1.21.8/distributed/queues.py 2018-05-03 23:45:52.000000000 +0200
@@ -1,6 +1,7 @@
from __future__ import print_function, division, absolute_import
from collections import defaultdict
+import datetime
import logging
import uuid
@@ -32,13 +33,17 @@
self.client_refcount = dict()
self.future_refcount = defaultdict(lambda: 0)
- self.scheduler.handlers.update({'queue_create': self.create,
- 'queue_release': self.release,
- 'queue_put': self.put,
- 'queue_get': self.get,
- 'queue_qsize': self.qsize})
-
- self.scheduler.client_handlers['queue-future-release'] = self.future_release
+ self.scheduler.handlers.update({
+ 'queue_create': self.create,
+ 'queue_put': self.put,
+ 'queue_get': self.get,
+ 'queue_qsize': self.qsize}
+ )
+
+ self.scheduler.client_handlers.update({
+ 'queue-future-release': self.future_release,
+ 'queue_release': self.release,
+ })
self.scheduler.extensions['queues'] = self
@@ -50,13 +55,18 @@
self.client_refcount[name] += 1
def release(self, stream=None, name=None, client=None):
+ if name not in self.queues:
+ return
+
self.client_refcount[name] -= 1
if self.client_refcount[name] == 0:
del self.client_refcount[name]
- futures = self.queues[name].queue
+ futures = self.queues[name]._queue
del self.queues[name]
- self.scheduler.client_releases_keys(keys=[f.key for f in futures],
- client='queue-%s' % name)
+ self.scheduler.client_releases_keys(
+ keys=[d['value'] for d in futures if d['type'] == 'Future'],
+ client='queue-%s' % name
+ )
@gen.coroutine
def put(self, stream=None, name=None, key=None, data=None, client=None,
timeout=None):
@@ -66,6 +76,8 @@
self.scheduler.client_desires_keys(keys=[key], client='queue-%s' %
name)
else:
record = {'type': 'msgpack', 'value': data}
+ if timeout is not None:
+ timeout = datetime.timedelta(seconds=(timeout))
yield self.queues[name].put(record, timeout=timeout)
def future_release(self, name=None, key=None, client=None):
@@ -111,6 +123,8 @@
out = [process(o) for o in out]
raise gen.Return(out)
else:
+ if timeout is not None:
+ timeout = datetime.timedelta(seconds=timeout)
record = yield self.queues[name].get(timeout=timeout)
record = process(record)
raise gen.Return(record)
@@ -232,14 +246,11 @@
result = yield self.client.scheduler.queue_qsize(name=self.name)
raise gen.Return(result)
- def _release(self):
+ def close(self):
if self.client.status == 'running': # TODO: can leave zombie futures
self.client._send_to_scheduler({'op': 'queue_release',
'name': self.name})
- def __del__(self):
- self._release()
-
def __getstate__(self):
return (self.name, self.client.scheduler.address)
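
Both queue fixes show up in the public API: numeric timeouts are converted to timedeltas before reaching the tornado queue, and the implicit __del__-based release is replaced by an explicit close(). A short sketch, assuming a running local cluster:

    from distributed import Client, Queue
    from tornado import gen

    client = Client()            # illustrative local cluster
    q = Queue('jobs', maxsize=1)
    q.put(1)
    try:
        q.put(2, timeout=0.1)    # queue is full; times out after ~0.1 s
    except gen.TimeoutError:
        pass
    q.close()                    # explicitly release the queue on the scheduler
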
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/scheduler.py new/distributed-1.21.8/distributed/scheduler.py
--- old/distributed-1.21.6/distributed/scheduler.py 2018-04-06 18:54:34.000000000 +0200
+++ new/distributed-1.21.8/distributed/scheduler.py 2018-05-03 23:45:55.000000000 +0200
@@ -13,6 +13,7 @@
import random
import six
+import psutil
from sortedcontainers import SortedSet, SortedDict
try:
from cytoolz import frequencies, merge, pluck, merge_sorted, first
@@ -747,6 +748,7 @@
self.allowed_failures = allowed_failures
self.validate = validate
self.status = None
+ self.proc = psutil.Process()
self.delete_interval = parse_timedelta(delete_interval, default='ms')
self.synchronize_worker_interval =
parse_timedelta(synchronize_worker_interval, default='ms')
self.digests = None
@@ -1056,6 +1058,8 @@
for k, v in self.services.items():
logger.info("%11s at: %25s", k, '%s:%d' % (listen_ip, v.port))
+ self.loop.add_callback(self.reevaluate_occupancy)
+
if self.scheduler_file:
with open(self.scheduler_file, 'w') as f:
json.dump(self.identity(), f, indent=2)
@@ -1068,7 +1072,6 @@
finalize(self, del_scheduler_file)
- self.loop.add_callback(self.reevaluate_occupancy)
self.start_periodic_callbacks()
setproctitle("dask-scheduler [%s]" % (self.address,))
@@ -2144,7 +2147,7 @@
for msg in msgs:
if msg == 'OK': # from close
break
- if 'status' in msg and 'error' in msg['status']:
+ if 'status' in msg and 'error' in msg['status'] and msg.get('op') != 'task-erred':
try:
logger.error("error from worker %s: %s",
worker, clean_exception(**msg)[1])
@@ -4135,12 +4138,10 @@
if self.status == 'closed':
return
- import psutil
- proc = psutil.Process()
last = time()
next_time = timedelta(seconds=DELAY)
- if proc.cpu_percent() < 50:
+ if self.proc.cpu_percent() < 50:
workers = list(self.workers.values())
for i in range(len(workers)):
ws = workers[worker_index % len(workers)]
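
Caching psutil.Process on the scheduler matters because cpu_percent() is a relative measurement: the first call on a fresh Process object has no baseline and returns 0.0, so constructing a new Process inside each reevaluate_occupancy tick could misreport CPU load. Roughly:

    import psutil

    proc = psutil.Process()  # create once and reuse
    proc.cpu_percent()       # first call returns 0.0 and only sets the baseline
    # subsequent calls report CPU usage since the previous call
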
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/tests/test_client.py new/distributed-1.21.8/distributed/tests/test_client.py
--- old/distributed-1.21.6/distributed/tests/test_client.py 2018-04-06 18:54:37.000000000 +0200
+++ new/distributed-1.21.8/distributed/tests/test_client.py 2018-05-03 23:45:52.000000000 +0200
@@ -28,7 +28,7 @@
import dask
from dask import delayed
from dask.context import _globals
-from distributed import (Worker, Nanny, fire_and_forget, config,
+from distributed import (Worker, Nanny, fire_and_forget, config, LocalCluster,
get_client, secede, get_worker, Executor, profile,
TimeoutError)
from distributed.config import set_config
@@ -526,6 +526,20 @@
assert xx == 2
+@gen_cluster(client=True, ncores=[('127.0.0.1', 1)])
+def test_gather_skip(c, s, a):
+ x = c.submit(div, 1, 0, priority=10)
+ y = c.submit(slowinc, 1, delay=0.5)
+
+ with captured_logger(logging.getLogger('distributed.scheduler')) as sched:
+ with captured_logger(logging.getLogger('distributed.client')) as client:
+ L = yield c.gather([x, y], errors='skip')
+ assert L == [2]
+
+ assert not client.getvalue()
+ assert not sched.getvalue()
+
+
@gen_cluster(client=True, timeout=None)
def test_get(c, s, a, b):
future = c.get({'x': (inc, 1)}, 'x', sync=False)
@@ -1882,6 +1896,24 @@
@gen_cluster(client=True)
+def test_repr_async(c, s, a, b):
+ c._repr_html_()
+
+
+@gen_test()
+def test_repr_localcluster():
+ cluster = yield LocalCluster(processes=False, diagnostics_port=None,
+ asynchronous=True)
+ client = yield Client(cluster, asynchronous=True)
+ try:
+ text = client._repr_html_()
+ assert cluster.scheduler.address in text
+ finally:
+ yield client.close()
+ yield cluster._close()
+
+
+@gen_cluster(client=True)
def test_forget_simple(c, s, a, b):
x = c.submit(inc, 1, retries=2)
y = c.submit(inc, 2)
@@ -4911,8 +4943,6 @@
def test_quiet_quit_when_cluster_leaves(loop_in_thread):
- from distributed import LocalCluster
-
loop = loop_in_thread
with LocalCluster(loop=loop, scheduler_port=0, diagnostics_port=None,
silence_logs=False) as cluster:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/tests/test_joblib.py new/distributed-1.21.8/distributed/tests/test_joblib.py
--- old/distributed-1.21.6/distributed/tests/test_joblib.py 2018-02-22 17:59:31.000000000 +0100
+++ new/distributed-1.21.8/distributed/tests/test_joblib.py 2018-05-03 23:45:52.000000000 +0200
@@ -10,6 +10,7 @@
from distributed import Client
from distributed.utils_test import cluster, inc
from distributed.utils_test import loop # noqa F401
+from toolz import identity
distributed_joblib = pytest.importorskip('distributed.joblib')
joblib_funcname = distributed_joblib.joblib_funcname
@@ -179,9 +180,23 @@
def test_secede_with_no_processes(loop, joblib):
# https://github.com/dask/distributed/issues/1775
- def f(x):
- return x
-
with Client(loop=loop, processes=False, set_as_default=True):
with joblib.parallel_backend('dask'):
- joblib.Parallel(n_jobs=4)(joblib.delayed(f)(i) for i in range(2))
+ joblib.Parallel(n_jobs=4)(joblib.delayed(identity)(i) for i in range(2))
+
+
+def _test_keywords_f(_):
+ from distributed import get_worker
+ return get_worker().address
+
+
+def test_keywords(loop, joblib):
+ with cluster() as (s, [a, b]):
+ with Client(s['address'], loop=loop) as client:
+ with joblib.parallel_backend('dask', workers=a['address']) as (ba, _):
+ seq = joblib.Parallel()(joblib.delayed(_test_keywords_f)(i) for i in range(10))
+ assert seq == [a['address']] * 10
+
+ with joblib.parallel_backend('dask', workers=b['address']) as (ba, _):
+ seq = joblib.Parallel()(joblib.delayed(_test_keywords_f)(i) for i in range(10))
+ assert seq == [b['address']] * 10
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/tests/test_queues.py new/distributed-1.21.8/distributed/tests/test_queues.py
--- old/distributed-1.21.6/distributed/tests/test_queues.py 2018-03-08 18:22:56.000000000 +0100
+++ new/distributed-1.21.8/distributed/tests/test_queues.py 2018-05-03 23:45:52.000000000 +0200
@@ -240,3 +240,36 @@
exc = yield future2.exception()
assert isinstance(exc, ZeroDivisionError)
+
+
+@gen_cluster(client=True)
+def test_close(c, s, a, b):
+ q = Queue()
+
+ while q.name not in s.extensions['queues'].queues:
+ yield gen.sleep(0.01)
+
+ q.close()
+ q.close()
+
+ while q.name in s.extensions['queues'].queues:
+ yield gen.sleep(0.01)
+
+
+@gen_cluster(client=True)
+def test_timeout(c, s, a, b):
+ q = Queue('v', maxsize=1)
+
+ start = time()
+ with pytest.raises(gen.TimeoutError):
+ yield q.get(timeout=0.1)
+ stop = time()
+ assert 0.1 < stop - start < 2.0
+
+ yield q.put(1)
+
+ start = time()
+ with pytest.raises(gen.TimeoutError):
+ yield q.put(2, timeout=0.1)
+ stop = time()
+ assert 0.1 < stop - start < 2.0
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/tests/test_variable.py new/distributed-1.21.8/distributed/tests/test_variable.py
--- old/distributed-1.21.6/distributed/tests/test_variable.py 2018-02-12 20:27:00.000000000 +0100
+++ new/distributed-1.21.8/distributed/tests/test_variable.py 2018-05-03 23:45:52.000000000 +0200
@@ -89,7 +89,19 @@
start = time()
with pytest.raises(gen.TimeoutError):
yield v.get(timeout=0.1)
- assert 0.05 < time() - start < 2.0
+ stop = time()
+ assert 0.1 < stop - start < 2.0
+
+
+def test_timeout_sync(loop):
+ with cluster() as (s, [a, b]):
+ with Client(s['address']) as c:
+ v = Variable('v')
+ start = time()
+ with pytest.raises(gen.TimeoutError):
+ v.get(timeout=0.1)
+ stop = time()
+ assert 0.1 < stop - start < 2.0
@gen_cluster(client=True)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/tests/test_worker.py new/distributed-1.21.8/distributed/tests/test_worker.py
--- old/distributed-1.21.6/distributed/tests/test_worker.py 2018-04-05 14:24:08.000000000 +0200
+++ new/distributed-1.21.8/distributed/tests/test_worker.py 2018-05-03 23:45:55.000000000 +0200
@@ -20,14 +20,14 @@
from distributed import (Nanny, Client, get_client, wait, default_client,
get_worker, Reschedule)
-from distributed.compatibility import WINDOWS
+from distributed.compatibility import WINDOWS, cache_from_source
from distributed.config import config
from distributed.core import rpc
from distributed.client import wait
from distributed.scheduler import Scheduler
from distributed.metrics import time
from distributed.worker import Worker, error_message, logger, TOTAL_MEMORY
-from distributed.utils import tmpfile
+from distributed.utils import tmpfile, format_bytes
from distributed.utils_test import (inc, mul, gen_cluster, div, dec,
slow, slowinc, gen_test, cluster,
captured_logger)
@@ -181,6 +181,33 @@
assert not os.path.exists(os.path.join(a.local_dir, 'foobar.py'))
+@pytest.mark.xfail(reason="don't yet support uploading pyc files")
+@gen_cluster(client=True, ncores=[('127.0.0.1', 1)])
+def test_upload_file_pyc(c, s, w):
+ with tmpfile() as dirname:
+ os.mkdir(dirname)
+ with open(os.path.join(dirname, 'foo.py'), mode='w') as f:
+ f.write('def f():\n return 123')
+
+ sys.path.append(dirname)
+ try:
+ import foo
+ assert foo.f() == 123
+ pyc = cache_from_source(os.path.join(dirname, 'foo.py'))
+ assert os.path.exists(pyc)
+ yield c.upload_file(pyc)
+
+ def g():
+ import foo
+ return foo.x
+
+ future = c.submit(g)
+ result = yield future
+ assert result == 123
+ finally:
+ sys.path.remove(dirname)
+
+
@gen_cluster(client=True)
def test_upload_egg(c, s, a, b):
eggname = 'mytestegg-1.0.0-py3.4.egg'
@@ -989,7 +1016,8 @@
futures = c.map(slowinc, range(10), delay=0.1)
yield gen.sleep(0.3)
- assert a.paused
+ assert a.paused, (format_bytes(psutil.Process().memory_info().rss),
+ format_bytes(a.memory_limit))
out = logger.getvalue()
assert 'memory' in out.lower()
assert 'pausing' in out.lower()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/utils.py new/distributed-1.21.8/distributed/utils.py
--- old/distributed-1.21.6/distributed/utils.py 2018-04-06 16:36:53.000000000 +0200
+++ new/distributed-1.21.8/distributed/utils.py 2018-05-03 23:45:52.000000000 +0200
@@ -987,19 +987,19 @@
def import_file(path):
- """ Loads modules for a file (.py, .pyc, .zip, .egg) """
+ """ Loads modules for a file (.py, .zip, .egg) """
directory, filename = os.path.split(path)
name, ext = os.path.splitext(filename)
names_to_import = []
tmp_python_path = None
- if ext in ('.py', '.pyc'):
+ if ext in ('.py'): # , '.pyc'):
if directory not in sys.path:
tmp_python_path = directory
names_to_import.append(name)
- # Ensures that no pyc file will be reused
+ if ext == '.py': # Ensure that no pyc file will be reused
cache_file = cache_from_source(path)
- if os.path.exists(cache_file):
+ with ignoring(OSError):
os.remove(cache_file)
if ext in ('.egg', '.zip'):
if path not in sys.path:
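
This pairs with the upload_file docstring change above: .pyc files are no longer supported as uploads (see the xfail'd test_upload_file_pyc earlier), and the stale byte-code cache for an uploaded .py file is now removed under ignoring(OSError) rather than a racy existence check. From the user's side only the accepted extensions change; a placeholder example:

    from distributed import Client

    client = Client('tcp://127.0.0.1:8786')  # placeholder address
    client.upload_file('mymodule.py')        # .py, .egg or .zip; .pyc no longer supported
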
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed/utils_test.py new/distributed-1.21.8/distributed/utils_test.py
--- old/distributed-1.21.6/distributed/utils_test.py 2018-04-06 18:54:37.000000000 +0200
+++ new/distributed-1.21.8/distributed/utils_test.py 2018-05-03 23:45:52.000000000 +0200
@@ -142,7 +142,7 @@
finally:
try:
loop.close(all_fds=True)
- except ValueError:
+ except (KeyError, ValueError):
pass
IOLoop.clear_instance()
IOLoop.clear_current()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed.egg-info/PKG-INFO new/distributed-1.21.8/distributed.egg-info/PKG-INFO
--- old/distributed-1.21.6/distributed.egg-info/PKG-INFO 2018-04-06 18:58:48.000000000 +0200
+++ new/distributed-1.21.8/distributed.egg-info/PKG-INFO 2018-05-03 23:48:35.000000000 +0200
@@ -1,6 +1,6 @@
Metadata-Version: 1.1
Name: distributed
-Version: 1.21.6
+Version: 1.21.8
Summary: Distributed computing
Home-page: https://distributed.readthedocs.io/en/latest/
Author: Matthew Rocklin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed.egg-info/SOURCES.txt new/distributed-1.21.8/distributed.egg-info/SOURCES.txt
--- old/distributed-1.21.6/distributed.egg-info/SOURCES.txt 2018-04-06 18:58:48.000000000 +0200
+++ new/distributed-1.21.8/distributed.egg-info/SOURCES.txt 2018-05-03 23:48:35.000000000 +0200
@@ -67,6 +67,7 @@
distributed/bokeh/template.html
distributed/bokeh/utils.py
distributed/bokeh/worker.py
+distributed/bokeh/static/dask_horizontal.svg
distributed/bokeh/templates/call-stack.html
distributed/bokeh/templates/json-index.html
distributed/bokeh/templates/logs.html
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/distributed.egg-info/requires.txt new/distributed-1.21.8/distributed.egg-info/requires.txt
--- old/distributed-1.21.6/distributed.egg-info/requires.txt 2018-04-06 18:58:48.000000000 +0200
+++ new/distributed-1.21.8/distributed.egg-info/requires.txt 2018-05-03 23:48:35.000000000 +0200
@@ -1,7 +1,7 @@
click>=6.6
cloudpickle>=0.2.2
dask>=0.17.0
-msgpack-python
+msgpack
psutil
six
sortedcontainers
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/docs/source/changelog.rst new/distributed-1.21.8/docs/source/changelog.rst
--- old/distributed-1.21.6/docs/source/changelog.rst 2018-04-06 18:57:51.000000000 +0200
+++ new/distributed-1.21.8/docs/source/changelog.rst 2018-05-03 23:47:23.000000000 +0200
@@ -1,20 +1,48 @@
Changelog
=========
+1.21.8 - 2018-05-03
+-------------------
+
+- Remove errant print statement (#1957) `Matthew Rocklin`_
+- Only add reevaluate_occupancy callback once (#1953) `Tony Lorenzo`_
+
+
+
+
+1.21.7 - 2018-05-02
+-------------------
+
+- Newline needed for doctest rendering (:pr:`1917`) `Loïc Estève`_
+- Support Client._repr_html_ when in async mode (:pr:`1909`) `Matthew Rocklin`_
+- Add parameters to dask-ssh command (:pr:`1910`) `Irene Rodriguez`_
+- Sanitize get_dataset trace (:pr:`1888`) `John Kirkham`_
+- Fix bug where queues would not clean up cleanly (:pr:`1922`) `Matthew Rocklin`_
+- Delete cached file safely in upload file (:pr:`1921`) `Matthew Rocklin`_
+- Accept KeyError when closing tornado IOLoop in tests (:pr:`1937`) `Matthew Rocklin`_
+- Quiet the client and scheduler when gather(..., errors='skip') (:pr:`1936`) `Matthew Rocklin`_
+- Clarify couldn't gather keys warning (:pr:`1942`) `Kenneth Koski`_
+- Support submit keywords in joblib (:pr:`1947`) `Matthew Rocklin`_
+- Avoid use of external resources in bokeh server (:pr:`1934`) `Matthew Rocklin`_
+- Drop `__contains__` from `Datasets` (:pr:`1889`) `John Kirkham`_
+- Fix bug with queue timeouts (:pr:`1950`) `Matthew Rocklin`_
+- Replace msgpack-python by msgpack (:pr:`1927`) `Loïc Estève`_
+
+
1.21.6 - 2018-04-06
-------------------
-- Fix numeric environment variable configuration (#1885) `Joseph Atkins-Kurkish`_
-- support bytearrays in older lz4 library (#1886) `Matthew Rocklin`_
-- Remove started timeout in nanny (#1852) `Matthew Rocklin`_
-- Don't log errors in sync (#1894) `Matthew Rocklin`_
-- downgrade stale lock warning to info logging level (#1890) `Matthew Rocklin`_
-- Fix ``UnboundLocalError`` for ``key`` (#1900) `John Kirkham`_
-- Resolve deployment issues in Python 2 (#1905) `Matthew Rocklin`_
-- Support retries and priority in Client.get method (#1902) `Matthew Rocklin`_
-- Add additional attributes to task page if applicable (#1901) `Matthew Rocklin`_
-- Add count method to as_completed (#1897) `Matthew Rocklin`_
-- Extend default timeout to 10s (#1904) `Matthew Rocklin`_
+- Fix numeric environment variable configuration (:pr:`1885`) `Joseph Atkins-Kurkish`_
+- support bytearrays in older lz4 library (:pr:`1886`) `Matthew Rocklin`_
+- Remove started timeout in nanny (:pr:`1852`) `Matthew Rocklin`_
+- Don't log errors in sync (:pr:`1894`) `Matthew Rocklin`_
+- downgrade stale lock warning to info logging level (:pr:`1890`) `Matthew Rocklin`_
+- Fix ``UnboundLocalError`` for ``key`` (:pr:`1900`) `John Kirkham`_
+- Resolve deployment issues in Python 2 (:pr:`1905`) `Matthew Rocklin`_
+- Support retries and priority in Client.get method (:pr:`1902`) `Matthew Rocklin`_
+- Add additional attributes to task page if applicable (:pr:`1901`) `Matthew Rocklin`_
+- Add count method to as_completed (:pr:`1897`) `Matthew Rocklin`_
+- Extend default timeout to 10s (:pr:`1904`) `Matthew Rocklin`_
@@ -47,14 +75,14 @@
1.21.3 - 2018-03-08
-------------------
-- Add cluster superclass and improve adaptivity (#1813) `Matthew Rocklin`_
-- Fixup tests and support Python 2 for Tornado 5.0 (#1818) `Matthew Rocklin`_
-- Fix bug in recreate_error when dependencies are dropped (#1815) `Matthew Rocklin`_
-- Add worker time to live in Scheduler (#1811) `Matthew Rocklin`_
-- Scale adaptive based on total_occupancy (#1807) `Matthew Rocklin`_
-- Support calling compute within worker_client (#1814) `Matthew Rocklin`_
-- Add percentage to profile plot (#1817) `Brett Naul`_
-- Overwrite option for remote python in dask-ssh (#1812) `Sven Kreiss`_
+- Add cluster superclass and improve adaptivity (:pr:`1813`) `Matthew Rocklin`_
+- Fixup tests and support Python 2 for Tornado 5.0 (:pr:`1818`) `Matthew Rocklin`_
+- Fix bug in recreate_error when dependencies are dropped (:pr:`1815`) `Matthew Rocklin`_
+- Add worker time to live in Scheduler (:pr:`1811`) `Matthew Rocklin`_
+- Scale adaptive based on total_occupancy (:pr:`1807`) `Matthew Rocklin`_
+- Support calling compute within worker_client (:pr:`1814`) `Matthew Rocklin`_
+- Add percentage to profile plot (:pr:`1817`) `Brett Naul`_
+- Overwrite option for remote python in dask-ssh (:pr:`1812`) `Sven Kreiss`_
1.21.2 - 2018-03-05
@@ -593,3 +621,7 @@
.. _`Sven Kreiss`: https://github.com/svenkreiss
.. _`Russ Bubley`: https://github.com/rbubley
.. _`Joseph Atkins-Kurkish`: https://github.com/spacerat
+.. _`Irene Rodriguez`: https://github.com/irenerodriguez
+.. _`Loïc Estève`: https://github.com/lesteve
+.. _`Kenneth Koski`: https://github.com/knkski
+.. _`Tony Lorenzo`: https://github.com/alorenzo175
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/docs/source/faq.rst new/distributed-1.21.8/docs/source/faq.rst
--- old/distributed-1.21.6/docs/source/faq.rst 2018-02-12 20:27:00.000000000 +0100
+++ new/distributed-1.21.8/docs/source/faq.rst 2018-05-03 19:22:32.000000000 +0200
@@ -66,7 +66,7 @@
For example Dask developers use this ability to build in data locality when we
communicate to data-local storage systems like the Hadoop File System. When
users use high-level functions like
-``dask.dataframe.read_csv('hdfs:///path/to/files.*.csv'`` Dask talks to the
+``dask.dataframe.read_csv('hdfs:///path/to/files.*.csv')`` Dask talks to the
HDFS name node, finds the locations of all of the blocks of data, and sends
that information to the scheduler so that it can make smarter decisions and
improve load times for users.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.21.6/requirements.txt new/distributed-1.21.8/requirements.txt
--- old/distributed-1.21.6/requirements.txt 2018-02-14 18:53:45.000000000 +0100
+++ new/distributed-1.21.8/requirements.txt 2018-05-03 23:45:52.000000000 +0200
@@ -1,7 +1,7 @@
click >= 6.6
cloudpickle >= 0.2.2
dask >= 0.17.0
-msgpack-python
+msgpack
psutil
six
sortedcontainers