Hello community,

here is the log from the commit of package python-distributed for 
openSUSE:Factory checked in at 2018-11-12 09:45:49
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-distributed (Old)
 and      /work/SRC/openSUSE:Factory/.python-distributed.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-distributed"

Mon Nov 12 09:45:49 2018 rev:10 rq:648369 version:1.24.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-distributed/python-distributed.changes    2018-10-31 13:20:23.407116384 +0100
+++ /work/SRC/openSUSE:Factory/.python-distributed.new/python-distributed.changes       2018-11-12 09:46:45.508706243 +0100
@@ -1,0 +2,15 @@
+Mon Nov 12 05:55:43 UTC 2018 - Arun Persaud <[email protected]>
+
+- update to version 1.24.1:
+  * Use tornado’s builtin AnyThreadEventLoopPolicy (GH#2326) Matthew
+    Rocklin
+  * Adjust TLS tests for openssl 1.1 (GH#2331) Marius van Niekerk
+  * Avoid setting event loop policy if within Jupyter notebook server
+    (GH#2343) Matthew Rocklin
+  * Add preload script to conf (GH#2325) Guillaume Eynard-Bontemps
+  * Add serializer for Numpy masked arrays (GH#2335) Peter Killick
+  * Use psutil.Process.oneshot (GH#2339) NotSqrt
+  * Use worker SSL context when getting client from worker. (GH#2301)
+    Anonymous
+
+-------------------------------------------------------------------

Old:
----
  distributed-1.24.0.tar.gz

New:
----
  distributed-1.24.1.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-distributed.spec ++++++
--- /var/tmp/diff_new_pack.9dlN70/_old  2018-11-12 09:46:46.028705454 +0100
+++ /var/tmp/diff_new_pack.9dlN70/_new  2018-11-12 09:46:46.036705443 +0100
@@ -20,7 +20,7 @@
 # Test requires network connection
 %bcond_with     test
 Name:           python-distributed
-Version:        1.24.0
+Version:        1.24.1
 Release:        0
 Summary:        Library for distributed computing with Python
 License:        BSD-3-Clause

++++++ distributed-1.24.0.tar.gz -> distributed-1.24.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/PKG-INFO new/distributed-1.24.1/PKG-INFO
--- old/distributed-1.24.0/PKG-INFO     2018-10-26 22:06:13.000000000 +0200
+++ new/distributed-1.24.1/PKG-INFO     2018-11-09 19:47:41.000000000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: distributed
-Version: 1.24.0
+Version: 1.24.1
 Summary: Distributed scheduler for Dask
 Home-page: https://distributed.readthedocs.io/en/latest/
 Maintainer: Matthew Rocklin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/_version.py new/distributed-1.24.1/distributed/_version.py
--- old/distributed-1.24.0/distributed/_version.py      2018-10-26 22:06:13.000000000 +0200
+++ new/distributed-1.24.1/distributed/_version.py      2018-11-09 19:47:41.000000000 +0100
@@ -8,11 +8,11 @@
 
 version_json = '''
 {
- "date": "2018-10-26T16:03:57-0400",
+ "date": "2018-11-09T13:46:26-0500",
  "dirty": false,
  "error": null,
- "full-revisionid": "db903bc333ab8c1aec147896d7c5489863026243",
- "version": "1.24.0"
+ "full-revisionid": "32341216c9f62e37c2e01c898e26b117e2b872b3",
+ "version": "1.24.1"
 }
 '''  # END VERSION_JSON
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/cli/dask_scheduler.py new/distributed-1.24.1/distributed/cli/dask_scheduler.py
--- old/distributed-1.24.0/distributed/cli/dask_scheduler.py    2018-10-12 15:35:59.000000000 +0200
+++ new/distributed-1.24.1/distributed/cli/dask_scheduler.py    2018-11-09 17:47:35.000000000 +0100
@@ -1,6 +1,7 @@
 from __future__ import print_function, division, absolute_import
 
 import atexit
+import dask
 import logging
 import os
 import shutil
@@ -58,7 +59,7 @@
               "cluster is on a shared network file system.")
 @click.option('--local-directory', default='', type=str,
               help="Directory to place scheduler files")
[email protected]('--preload', type=str, multiple=True, is_eager=True,
[email protected]('--preload', type=str, multiple=True, is_eager=True, default='',
               help='Module that should be loaded by the scheduler process  '
                    'like "foo.bar" or "/path/to/foo.py".')
 @click.argument('preload_argv', nargs=-1,
@@ -125,6 +126,10 @@
                           scheduler_file=scheduler_file,
                           security=sec)
     scheduler.start(addr)
+    if not preload:
+        preload = dask.config.get('distributed.scheduler.preload')
+    if not preload_argv:
+        preload_argv = dask.config.get('distributed.scheduler.preload-argv')
     preload_modules(preload, parameter=scheduler, file_dir=local_directory, argv=preload_argv)
 
     logger.info('Local Directory: %26s', local_directory)
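
The hunk above makes the scheduler fall back to the dask config when no
--preload option is given on the command line. A minimal sketch of that
fallback logic, assuming distributed has been imported so its config defaults
are registered (resolve_preload is a hypothetical helper name):

    import dask
    import distributed  # noqa: F401  -- registers the distributed.* defaults

    def resolve_preload(preload, preload_argv):
        # CLI values win; empty values fall back to the config keys added
        # to distributed.yaml below, which default to [].
        if not preload:
            preload = dask.config.get('distributed.scheduler.preload')
        if not preload_argv:
            preload_argv = dask.config.get('distributed.scheduler.preload-argv')
        return preload, preload_argv
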
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/client.py new/distributed-1.24.1/distributed/client.py
--- old/distributed-1.24.0/distributed/client.py        2018-10-26 21:57:01.000000000 +0200
+++ new/distributed-1.24.1/distributed/client.py        2018-11-09 17:47:35.000000000 +0100
@@ -560,7 +560,12 @@
         self.security = security or Security()
         self.scheduler_comm = None
         assert isinstance(self.security, Security)
-        self.connection_args = self.security.get_connection_args('client')
+
+        if name == 'worker':
+            self.connection_args = self.security.get_connection_args('worker')
+        else:
+            self.connection_args = self.security.get_connection_args('client')
+
         self._connecting_to_scheduler = False
         self._asynchronous = asynchronous
         self._should_close_loop = not loop
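
Security.get_connection_args returns the TLS settings for a given role, so a
Client constructed inside a worker (name == 'worker') now presents the worker
certificate instead of the client one. A sketch, with placeholder certificate
paths:

    from distributed.security import Security

    sec = Security(tls_ca_file='ca.pem',            # placeholder paths
                   tls_worker_cert='worker.pem',
                   tls_worker_key='worker.key')
    worker_args = sec.get_connection_args('worker')  # worker cert/key
    client_args = sec.get_connection_args('client')  # client cert/key (none set here)
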
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/comm/core.py new/distributed-1.24.1/distributed/comm/core.py
--- old/distributed-1.24.0/distributed/comm/core.py     2018-10-13 16:31:16.000000000 +0200
+++ new/distributed-1.24.1/distributed/comm/core.py     2018-11-09 17:47:32.000000000 +0100
@@ -21,6 +21,10 @@
     pass
 
 
+class FatalCommClosedError(CommClosedError):
+    pass
+
+
 class Comm(with_metaclass(ABCMeta)):
     """
     A message-oriented communication object, representing an established
@@ -184,6 +188,8 @@
             comm = yield gen.with_timeout(timedelta(seconds=deadline - time()),
                                           future,
                                           quiet_exceptions=EnvironmentError)
+        except FatalCommClosedError:
+            raise
         except EnvironmentError as e:
             error = str(e)
             if time() < deadline:
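
The new FatalCommClosedError lets connect() bail out early: EnvironmentError
is treated as transient and retried until the deadline, while a fatal close
(e.g. the UNKNOWN_CA case in the tcp.py hunk below) propagates immediately.
The pattern in miniature, as a sketch (connect_once is a placeholder for the
actual connection attempt):

    import time
    from distributed.comm.core import FatalCommClosedError

    def connect_with_retry(connect_once, timeout=10):
        deadline = time.time() + timeout
        while True:
            try:
                return connect_once()
            except FatalCommClosedError:
                raise                # e.g. UNKNOWN_CA: retrying cannot help
            except EnvironmentError:
                if time.time() >= deadline:
                    raise IOError("Timed out trying to connect")
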
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/comm/tcp.py new/distributed-1.24.1/distributed/comm/tcp.py
--- old/distributed-1.24.0/distributed/comm/tcp.py      2018-10-13 16:31:16.000000000 +0200
+++ new/distributed-1.24.1/distributed/comm/tcp.py      2018-11-09 17:47:32.000000000 +0100
@@ -24,7 +24,7 @@
 
 from .registry import Backend, backends
 from .addressing import parse_host_port, unparse_host_port
-from .core import Comm, Connector, Listener, CommClosedError
+from .core import Comm, Connector, Listener, CommClosedError, FatalCommClosedError
 from .utils import (to_frames, from_frames,
                     get_tcp_server_address, ensure_concrete_host)
 
@@ -121,6 +121,9 @@
     if exc.real_error is not None:
         # The stream was closed because of an underlying OS error
         exc = exc.real_error
+        if ssl and isinstance(exc, ssl.SSLError):
+            if 'UNKNOWN_CA' in exc.reason:
+                raise FatalCommClosedError("in %s: %s: %s" % (obj, exc.__class__.__name__, exc))
         raise CommClosedError("in %s: %s: %s" % (obj, exc.__class__.__name__, exc))
     else:
         raise CommClosedError("in %s: %s" % (obj, exc))
@@ -329,6 +332,13 @@
             stream = yield client.connect(ip, port,
                                           max_buffer_size=MAX_BUFFER_SIZE,
                                           **kwargs)
+            # Under certain circumstances tornado will have a closed connection with an error and not raise
+            # a StreamClosedError.
+            #
+            # This occurs with tornado 5.x and openssl 1.1+
+            if stream.closed() and stream.error:
+                raise StreamClosedError(stream.error)
+
         except StreamClosedError as e:
             # The socket connect() call failed
             convert_stream_closed_error(self, e)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/core.py new/distributed-1.24.1/distributed/core.py
--- old/distributed-1.24.0/distributed/core.py  2018-10-26 00:54:52.000000000 +0200
+++ new/distributed-1.24.1/distributed/core.py  2018-11-09 17:47:35.000000000 +0100
@@ -310,7 +310,13 @@
                     raise TypeError("Bad message type.  Expected dict, got\n  "
                                     + str(msg))
 
-                op = msg.pop('op')
+                try:
+                    op = msg.pop('op')
+                except KeyError:
+                    raise ValueError(
+                        "Received unexpected message without 'op' key: %s" %
+                        str(msg)
+                    )
                 if self.counters is not None:
                     self.counters['op'].add(op)
                 self._comms[comm] = op
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/distributed.yaml new/distributed-1.24.1/distributed/distributed.yaml
--- old/distributed-1.24.0/distributed/distributed.yaml 2018-10-12 15:35:59.000000000 +0200
+++ new/distributed-1.24.1/distributed/distributed.yaml 2018-11-09 17:47:35.000000000 +0100
@@ -15,6 +15,8 @@
     transition-log-length: 100000
     work-stealing: True     # workers should steal tasks from each other
     worker-ttl: null        # like '60s'. Time to live for workers.  They must heartbeat faster than this
+    preload: []
+    preload-argv: []
 
   worker:
     multiprocessing-method: forkserver
@@ -22,6 +24,8 @@
     connections:            # Maximum concurrent connections for data
       outgoing: 50          # This helps to control network saturation
       incoming: 10
+    preload: []
+    preload-argv: []
 
     profile:
       interval: 10ms        # Time between statistical profiling queries
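
With these keys in place, preload modules can come from the configuration
system instead of the command line. For illustration, the same values could
be set programmatically ('mymodule' is a placeholder):

    import dask

    dask.config.set({'distributed.worker.preload': ['mymodule'],
                     'distributed.worker.preload-argv': ['--foo', 'bar']})
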
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/protocol/numpy.py new/distributed-1.24.1/distributed/protocol/numpy.py
--- old/distributed-1.24.0/distributed/protocol/numpy.py        2018-10-13 16:31:16.000000000 +0200
+++ new/distributed-1.24.1/distributed/protocol/numpy.py        2018-11-09 17:47:35.000000000 +0100
@@ -102,3 +102,37 @@
                        strides=header['strides'])
 
         return x
+
+
+@dask_serialize.register(np.ma.core.MaskedArray)
+def serialize_numpy_maskedarray(x):
+    # Separate elements of the masked array that we need to deal with discretely.
+    data = x.data
+    mask = x.mask
+    fill_value = x.fill_value
+
+    # Make use of existing numpy serialization for the two ndarray elements of
+    # the masked array.
+    data_header, data_frames = serialize_numpy_ndarray(data)
+    mask_header, mask_frames = serialize_numpy_ndarray(mask)
+
+    header = {"data-header": data_header,
+              "mask-header": mask_header,
+              "fill_value": fill_value,
+              "nframes": len(data_frames)}
+    return header, data_frames + mask_frames
+
+
+@dask_deserialize.register(np.ma.core.MaskedArray)
+def deserialize_numpy_maskedarray(header, frames):
+    data_frames = frames[:header["nframes"]]
+    mask_frames = frames[header["nframes"]:]
+    data_header = header["data-header"]
+    mask_header = header["mask-header"]
+
+    # Get the individual elements of the masked array in order to reconstruct.
+    data = deserialize_numpy_ndarray(data_header, data_frames)
+    mask = deserialize_numpy_ndarray(mask_header, mask_frames)
+    fill_value = header["fill_value"]
+
+    return np.ma.masked_array(data, mask=mask, fill_value=fill_value)
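
A quick round trip through the new handlers, as a sketch (it mirrors the test
added to test_numpy.py below):

    import numpy as np
    from distributed.protocol import serialize, deserialize

    x = np.ma.masked_array([1, 2, 3], mask=[True, False, False], fill_value=999)
    header, frames = serialize(x)   # dispatches to serialize_numpy_maskedarray
    y = deserialize(header, frames)
    np.testing.assert_equal(x.data, y.data)
    np.testing.assert_equal(x.mask, y.mask)
    assert x.fill_value == y.fill_value
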
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/protocol/tests/test_numpy.py new/distributed-1.24.1/distributed/protocol/tests/test_numpy.py
--- old/distributed-1.24.0/distributed/protocol/tests/test_numpy.py     2018-10-13 16:31:16.000000000 +0200
+++ new/distributed-1.24.1/distributed/protocol/tests/test_numpy.py     2018-11-09 17:47:35.000000000 +0100
@@ -6,6 +6,7 @@
 import numpy as np
 import pytest
 
+from distributed.compatibility import PY2
 from distributed.protocol import (serialize, deserialize, decompress, dumps,
                                   loads, to_serialize, msgpack)
 from distributed.protocol.utils import BIG_BYTES_SHARD_SIZE
@@ -27,46 +28,53 @@
     assert (result == x).all()
 
 
[email protected]('x',
-                         [np.ones(5),
-                          np.array(5),
-                             np.random.random((5, 5)),
-                             np.random.random((5, 5))[::2, :],
-                             np.random.random((5, 5))[:, ::2],
-                             np.asfortranarray(np.random.random((5, 5))),
-                             np.asfortranarray(np.random.random((5, 5)))[::2, 
:],
-                             np.asfortranarray(np.random.random((5, 5)))[:, 
::2],
-                             np.random.random(5).astype('f4'),
-                             np.random.random(5).astype('>i8'),
-                             np.random.random(5).astype('<i8'),
-                             np.arange(5).astype('M8[us]'),
-                             np.arange(5).astype('M8[ms]'),
-                             np.arange(5).astype('m8'),
-                             np.arange(5).astype('m8[s]'),
-                             np.arange(5).astype('c16'),
-                             np.arange(5).astype('c8'),
-                             np.array([True, False, True]),
-                             np.ones(shape=5, dtype=[('a', 'i4'), ('b', 
'M8[us]')]),
-                             np.array(['abc'], dtype='S3'),
-                             np.array(['abc'], dtype='U3'),
-                             np.array(['abc'], dtype=object),
-                             np.ones(shape=(5,), dtype=('f8', 32)),
-                             np.ones(shape=(5,), dtype=[('x', 'f8', 32)]),
-                             np.ones(shape=(5,), dtype=np.dtype([('a', 'i1'), 
('b', 'f8')], align=False)),
-                             np.ones(shape=(5,), dtype=np.dtype([('a', 'i1'), 
('b', 'f8')], align=True)),
-                             np.ones(shape=(5,), dtype=np.dtype([('a', 
'm8[us]')], align=False)),
-                             # this dtype fails unpickling
-                             np.ones(shape=(5,), dtype=np.dtype([('a', 'm8')], 
align=False)),
-                             np.array([(1, 'abc')], dtype=[('x', 'i4'), ('s', 
object)]),
-                             np.zeros(5000, dtype=[('x%d' % i, '<f8') for i in 
range(4)]),
-                             np.zeros(5000, dtype='S32'),
-                             np.zeros((1, 1000, 1000)),
-                             np.arange(12)[::2],  # non-contiguous array
-                             np.ones(shape=(5, 6)).astype(dtype=[('total', 
'<f8'), ('n', '<f8')])])
[email protected]('x', [
+    np.ones(5),
+    np.array(5),
+    np.random.random((5, 5)),
+    np.random.random((5, 5))[::2, :],
+    np.random.random((5, 5))[:, ::2],
+    np.asfortranarray(np.random.random((5, 5))),
+    np.asfortranarray(np.random.random((5, 5)))[::2, :],
+    np.asfortranarray(np.random.random((5, 5)))[:, ::2],
+    np.random.random(5).astype('f4'),
+    np.random.random(5).astype('>i8'),
+    np.random.random(5).astype('<i8'),
+    np.arange(5).astype('M8[us]'),
+    np.arange(5).astype('M8[ms]'),
+    np.arange(5).astype('m8'),
+    np.arange(5).astype('m8[s]'),
+    np.arange(5).astype('c16'),
+    np.arange(5).astype('c8'),
+    np.array([True, False, True]),
+    np.ones(shape=5, dtype=[('a', 'i4'), ('b', 'M8[us]')]),
+    np.array(['abc'], dtype='S3'),
+    np.array(['abc'], dtype='U3'),
+    np.array(['abc'], dtype=object),
+    np.ones(shape=(5,), dtype=('f8', 32)),
+    np.ones(shape=(5,), dtype=[('x', 'f8', 32)]),
+    np.ones(shape=(5,), dtype=np.dtype([('a', 'i1'), ('b', 'f8')], align=False)),
+    np.ones(shape=(5,), dtype=np.dtype([('a', 'i1'), ('b', 'f8')], align=True)),
+    np.ones(shape=(5,), dtype=np.dtype([('a', 'm8[us]')], align=False)),
+    # this dtype fails unpickling
+    np.ones(shape=(5,), dtype=np.dtype([('a', 'm8')], align=False)),
+    np.array([(1, 'abc')], dtype=[('x', 'i4'), ('s', object)]),
+    np.zeros(5000, dtype=[('x%d' % i, '<f8') for i in range(4)]),
+    np.zeros(5000, dtype='S32'),
+    np.zeros((1, 1000, 1000)),
+    np.arange(12)[::2],  # non-contiguous array
+    np.ones(shape=(5, 6)).astype(dtype=[('total', '<f8'), ('n', '<f8')]),
+    np.ma.masked_array((5, 6), mask=[True, False]),  # int array
+    np.ma.masked_array((5., 6.), mask=[True, False]),  # float array (different default fill_value)
+    np.ma.masked_array((5., 6.), mask=[True, False], fill_value=np.nan),
+])
 def test_dumps_serialize_numpy(x):
     header, frames = serialize(x)
     if 'compression' in header:
         frames = decompress(header, frames)
+    buffer_interface = buffer if PY2 else memoryview  # noqa: F821
+    for frame in frames:
+        assert isinstance(frame, (bytes, buffer_interface))
     y = deserialize(header, frames)
 
     np.testing.assert_equal(x, y)
@@ -74,6 +82,20 @@
         assert x.strides == y.strides
 
 
+def test_masked_array_serialize():
+    data = (5, 6)
+    mask = [True, False]
+    fill_value = 999
+    x = np.ma.masked_array(data, mask=mask, fill_value=fill_value)
+    header, frames = serialize(x)
+    y = deserialize(header, frames)
+
+    # Explicitly test the particular elements of the masked array.
+    np.testing.assert_equal(data, y.data)
+    np.testing.assert_equal(mask, y.mask)
+    assert fill_value == y.fill_value
+
+
 def test_dumps_serialize_numpy_custom_dtype():
     from six.moves import builtins
     test_rational = pytest.importorskip('numpy.core.test_rational')
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/system_monitor.py new/distributed-1.24.1/distributed/system_monitor.py
--- old/distributed-1.24.0/distributed/system_monitor.py        2018-10-12 15:35:17.000000000 +0200
+++ new/distributed-1.24.1/distributed/system_monitor.py        2018-11-09 17:47:35.000000000 +0100
@@ -46,8 +46,9 @@
             return {k: None for k, v in self.quantities.items()}
 
     def update(self):
-        cpu = self.proc.cpu_percent()
-        memory = self.proc.memory_info().rss
+        with self.proc.oneshot():
+            cpu = self.proc.cpu_percent()
+            memory = self.proc.memory_info().rss
         now = time()
 
         self.cpu.append(cpu)
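
psutil.Process.oneshot() (new in psutil 5.0, hence the requirement bump
further down) caches process information for the duration of the block, so
grouped queries like the two above cost a single snapshot instead of separate
system calls. Roughly:

    import psutil

    proc = psutil.Process()    # the current process
    with proc.oneshot():       # both reads below come from one cached snapshot
        cpu = proc.cpu_percent()
        rss = proc.memory_info().rss
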
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/tests/test_security.py new/distributed-1.24.1/distributed/tests/test_security.py
--- old/distributed-1.24.0/distributed/tests/test_security.py   2018-10-12 15:35:17.000000000 +0200
+++ new/distributed-1.24.1/distributed/tests/test_security.py   2018-11-09 17:47:32.000000000 +0100
@@ -25,6 +25,14 @@
 # Note this cipher uses RSA auth as this matches our test certs
 FORCED_CIPHER = 'ECDHE-RSA-AES128-GCM-SHA256'
 
+TLS_13_CIPHERS = [
+    'TLS_AES_128_GCM_SHA256',
+    'TLS_AES_256_GCM_SHA384',
+    'TLS_CHACHA20_POLY1305_SHA256',
+    'TLS_AES_128_CCM_SHA256',
+    'TLS_AES_128_CCM_8_SHA256',
+]
+
 
 def test_defaults():
     with new_config({}):
@@ -201,7 +209,12 @@
     ctx = d['ssl_context']
     basic_checks(ctx)
     if sys.version_info >= (3, 6):
-        assert len(ctx.get_ciphers()) == 1
+        supported_ciphers = ctx.get_ciphers()
+        tls_12_ciphers = [c for c in supported_ciphers if c['protocol'] == 'TLSv1.2']
+        assert len(tls_12_ciphers) == 1
+        tls_13_ciphers = [c for c in supported_ciphers if c['protocol'] == 'TLSv1.3']
+        if len(tls_13_ciphers):
+            assert len(tls_13_ciphers) == 3
 
 
 def test_listen_args():
@@ -255,7 +268,12 @@
     ctx = d['ssl_context']
     basic_checks(ctx)
     if sys.version_info >= (3, 6):
-        assert len(ctx.get_ciphers()) == 1
+        supported_ciphers = ctx.get_ciphers()
+        tls_12_ciphers = [c for c in supported_ciphers if c['protocol'] == 'TLSv1.2']
+        assert len(tls_12_ciphers) == 1
+        tls_13_ciphers = [c for c in supported_ciphers if c['protocol'] == 'TLSv1.3']
+        if len(tls_13_ciphers):
+            assert len(tls_13_ciphers) == 3
 
 
 @gen_test()
@@ -306,7 +324,7 @@
         comm = yield connect(listener.contact_address,
                              connection_args=forced_cipher_sec.get_connection_args('worker'))
         cipher, _, _, = comm.extra_info['cipher']
-        assert cipher == FORCED_CIPHER
+        assert cipher in [FORCED_CIPHER] + TLS_13_CIPHERS
         comm.abort()
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/utils.py new/distributed-1.24.1/distributed/utils.py
--- old/distributed-1.24.0/distributed/utils.py 2018-10-13 16:31:16.000000000 +0200
+++ new/distributed-1.24.1/distributed/utils.py 2018-11-09 17:47:35.000000000 +0100
@@ -1361,28 +1361,6 @@
             inst.clear()
 
 
-def fix_asyncio_event_loop_policy(asyncio):
-    """
-    Work around https://github.com/tornadoweb/tornado/issues/2183
-    """
-    class PatchedDefaultEventLoopPolicy(asyncio.DefaultEventLoopPolicy):
-
-        def get_event_loop(self):
-            """Get the event loop.
-
-            This may be None or an instance of EventLoop.
-            """
-            try:
-                return super().get_event_loop()
-            except RuntimeError:
-                # "There is no current event loop in thread"
-                loop = self.new_event_loop()
-                self.set_event_loop(loop)
-                return loop
-
-    asyncio.set_event_loop_policy(PatchedDefaultEventLoopPolicy())
-
-
 def reset_logger_locks():
     """ Python 2's logger's locks don't survive a fork event
 
@@ -1395,7 +1373,21 @@
 
 # Only bother if asyncio has been loaded by Tornado
 if 'asyncio' in sys.modules:
-    fix_asyncio_event_loop_policy(sys.modules['asyncio'])
+
+    jupyter_event_loop_initialized = False
+
+    if 'notebook' in sys.modules:
+        import traitlets
+        from notebook.notebookapp import NotebookApp
+        jupyter_event_loop_initialized = (
+            traitlets.config.Application.initialized() and
+            isinstance(traitlets.config.Application.instance(), NotebookApp)
+        )
+
+    if not jupyter_event_loop_initialized:
+        import asyncio
+        import tornado.platform.asyncio
+        asyncio.set_event_loop_policy(tornado.platform.asyncio.AnyThreadEventLoopPolicy())
 
 
 def has_keyword(func, keyword):
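
tornado's AnyThreadEventLoopPolicy (tornado 5+) creates an asyncio event loop
on demand in threads that lack one, which is exactly what the removed local
workaround did. A minimal illustration (show_loop is a throwaway example
function):

    import asyncio
    import threading
    from tornado.platform.asyncio import AnyThreadEventLoopPolicy

    asyncio.set_event_loop_policy(AnyThreadEventLoopPolicy())

    def show_loop():
        # Without the policy this raises RuntimeError ("There is no current
        # event loop in thread ..."); with it, a loop is created on demand.
        print(asyncio.get_event_loop())

    t = threading.Thread(target=show_loop)
    t.start()
    t.join()
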
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed/worker.py new/distributed-1.24.1/distributed/worker.py
--- old/distributed-1.24.0/distributed/worker.py        2018-10-13 16:31:16.000000000 +0200
+++ new/distributed-1.24.1/distributed/worker.py        2018-11-09 17:47:35.000000000 +0100
@@ -87,7 +87,7 @@
                  services=None, service_ports=None, name=None,
                  reconnect=True, memory_limit='auto',
                  executor=None, resources=None, silence_logs=None,
-                 death_timeout=None, preload=(), preload_argv=[], security=None,
+                 death_timeout=None, preload=None, preload_argv=None, security=None,
                  contact_address=None, memory_monitor_interval='200ms',
                  extensions=None, metrics=None, **kwargs):
 
@@ -108,7 +108,11 @@
         self.available_resources = (resources or {}).copy()
         self.death_timeout = death_timeout
         self.preload = preload
-        self.preload_argv = preload_argv,
+        if self.preload is None:
+            self.preload = dask.config.get('distributed.worker.preload')
+        self.preload_argv = preload_argv
+        if self.preload_argv is None:
+            self.preload_argv = dask.config.get('distributed.worker.preload-argv')
         self.contact_address = contact_address
         self.memory_monitor_interval = parse_timedelta(memory_monitor_interval, default='ms')
         self.extensions = dict()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed.egg-info/PKG-INFO new/distributed-1.24.1/distributed.egg-info/PKG-INFO
--- old/distributed-1.24.0/distributed.egg-info/PKG-INFO        2018-10-26 22:06:13.000000000 +0200
+++ new/distributed-1.24.1/distributed.egg-info/PKG-INFO        2018-11-09 19:47:41.000000000 +0100
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: distributed
-Version: 1.24.0
+Version: 1.24.1
 Summary: Distributed scheduler for Dask
 Home-page: https://distributed.readthedocs.io/en/latest/
 Maintainer: Matthew Rocklin
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/distributed.egg-info/requires.txt new/distributed-1.24.1/distributed.egg-info/requires.txt
--- old/distributed-1.24.0/distributed.egg-info/requires.txt    2018-10-26 22:06:13.000000000 +0200
+++ new/distributed-1.24.1/distributed.egg-info/requires.txt    2018-11-09 19:47:41.000000000 +0100
@@ -2,7 +2,7 @@
 cloudpickle>=0.2.2
 dask>=0.18.0
 msgpack
-psutil
+psutil>=5.0
 six
 sortedcontainers!=2.0.0,!=2.0.1
 tblib
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/docs/source/changelog.rst new/distributed-1.24.1/docs/source/changelog.rst
--- old/distributed-1.24.0/docs/source/changelog.rst    2018-10-26 22:03:57.000000000 +0200
+++ new/distributed-1.24.1/docs/source/changelog.rst    2018-11-09 19:37:42.000000000 +0100
@@ -1,6 +1,18 @@
 Changelog
 =========
 
+1.24.1 - 2018-11-09
+-------------------
+
+-  Use tornado's builtin AnyThreadEventLoopPolicy (:pr:`2326`) `Matthew Rocklin`_
+-  Adjust TLS tests for openssl 1.1 (:pr:`2331`) `Marius van Niekerk`_
+-  Avoid setting event loop policy if within Jupyter notebook server (:pr:`2343`) `Matthew Rocklin`_
+-  Add preload script to conf (:pr:`2325`) `Guillaume Eynard-Bontemps`_
+-  Add serializer for Numpy masked arrays (:pr:`2335`) `Peter Killick`_
+-  Use psutil.Process.oneshot (:pr:`2339`) `NotSqrt`_
+-  Use worker SSL context when getting client from worker. (:pr:`2301`) Anonymous
+
+
 1.24.0 - 2018-10-26
 -------------------
 
@@ -85,7 +97,7 @@
 -  Be resilient to missing dep after busy signal (:pr:`2217`) `Matthew Rocklin`_
 -  Use CSS Grid to layout status page on the dashboard (:pr:`2213`) `Derek Ludwig`_ and `Luke Canavan`_
 -  Fix deserialization of queues on main ioloop thread (:pr:`2221`) `Matthew Rocklin`_
--  Add a worker initialization function (:pr:`2201`) `Guillaume EB`_
+-  Add a worker initialization function (:pr:`2201`) `Guillaume Eynard-Bontemps`_
 -  Collapse navbar in dashboard (:pr:`2223`) `Luke Canavan`_
 
 
@@ -815,10 +827,12 @@
 .. _`Derek Ludwig`: https://github.com/dsludwig
 .. _`Anderson Banihirwe`: https://github.com/andersy005
 .. _`Yu Feng`: https://github.com/rainwoodman
-.. _`Guillaume EB`: https://github.com/guillaumeeb
+.. _`Guillaume Eynard-Bontemps`: https://github.com/guillaumeeb
 .. _`Vladyslav Moisieienkov`: https://github.com/VMois
 .. _`Chris White`: https://github.com/cicdw
 .. _`Adam Klein`: https://github.com/adamklein
 .. _`Mike Gevaert`: https://github.com/mgeplf
 .. _`Gaurav Sheni`: https://github.com/gsheni
 .. _`Eric Ma`: https://github.com/ericmjl
+.. _`Peter Killick`: https://github.com/dkillick
+.. _`NotSqrt`: https://github.com/NotSqrt
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/distributed-1.24.0/requirements.txt new/distributed-1.24.1/requirements.txt
--- old/distributed-1.24.0/requirements.txt     2018-10-12 15:35:59.000000000 +0200
+++ new/distributed-1.24.1/requirements.txt     2018-11-09 17:47:35.000000000 +0100
@@ -2,7 +2,7 @@
 cloudpickle >= 0.2.2
 dask >= 0.18.0
 msgpack
-psutil
+psutil >= 5.0
 six
 sortedcontainers !=2.0.0, !=2.0.1
 tblib

