Package: src:dask.distributed
Version: 2022.12.1+ds.1-3
Severity: important
Tags: ftbfs
Dear maintainer:
During a rebuild of all packages in bookworm, your package failed to build:
--------------------------------------------------------------------------------
[...]
debian/rules binary
dh binary --with python3,sphinxdoc --buildsystem=pybuild
dh_update_autotools_config -O--buildsystem=pybuild
dh_autoreconf -O--buildsystem=pybuild
dh_auto_configure -O--buildsystem=pybuild
I: pybuild base:240: python3.11 setup.py config
running config
debian/rules override_dh_auto_build
make[1]: Entering directory '/<<PKGBUILDDIR>>'
rm -f distributed/comm/tests/__init__.py
set -e; \
for p in distributed/http/static/js/anime.js distributed/http/static/js/reconnecting-websocket.js; do \
uglifyjs -o $p debian/missing-sources/$p ; \
done
[... snipped ...]
../../../distributed/tests/test_worker_state_machine.py::test_throttling_incoming_transfer_on_transfer_bytes_different_workers PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_do_not_throttle_connections_while_below_threshold PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_throttle_on_transfer_bytes_regardless_of_threshold PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_worker_nbytes[executing] PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_worker_nbytes[long-running] PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_fetch_count PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_task_counts PASSED [ 99%]
../../../distributed/tests/test_worker_state_machine.py::test_task_counts_with_actors PASSED [100%]
=================================== FAILURES ===================================
_________________ test_do_not_block_event_loop_during_shutdown _________________
s = <Scheduler 'tcp://127.0.0.1:38957', workers: 0, cores: 0, tasks: 0>
    @gen_cluster(nthreads=[])
    async def test_do_not_block_event_loop_during_shutdown(s):
        loop = asyncio.get_running_loop()
        called_handler = threading.Event()
        block_handler = threading.Event()

        w = await Worker(s.address)
        executor = w.executors["default"]

        # The block wait must be smaller than the test timeout and smaller than the
        # default value for timeout in `Worker.close`
        async def block():
            def fn():
                called_handler.set()
                assert block_handler.wait(20)

            await loop.run_in_executor(executor, fn)

        async def set_future():
            while True:
                try:
                    await loop.run_in_executor(executor, sleep, 0.1)
                except RuntimeError:  # executor has started shutting down
                    block_handler.set()
                    return

        async def close():
            called_handler.wait()
            # executor_wait is True by default but we want to be explicit here
            await w.close(executor_wait=True)

>       await asyncio.gather(block(), close(), set_future())

../../../distributed/tests/test_worker.py:3672:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../distributed/tests/test_worker.py:3657: in block
    await loop.run_in_executor(executor, fn)
distributed/_concurrent_futures_thread.py:65: in run
    result = self.fn(*self.args, **self.kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def fn():
        called_handler.set()
>       assert block_handler.wait(20)
E       assert False
E        +  where False = <bound method Event.wait of <threading.Event at 0x7fdebd199810: unset>>(20)
E        +  where <bound method Event.wait of <threading.Event at 0x7fdebd199810: unset>> = <threading.Event at 0x7fdebd199810: unset>.wait

../../../distributed/tests/test_worker.py:3655: AssertionError
----------------------------- Captured stdout call -----------------------------
Dumped cluster state to test_cluster_dump/test_do_not_block_event_loop_during_shutdown.yaml
----------------------------- Captured stderr call -----------------------------
2024-11-09 21:18:38,124 - distributed.scheduler - INFO - State start
2024-11-09 21:18:38,125 - distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:38957
2024-11-09 21:18:38,125 - distributed.scheduler - INFO - dashboard at: 127.0.0.1:40905
2024-11-09 21:18:38,128 - distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36801
2024-11-09 21:18:38,128 - distributed.worker - INFO - Listening to: tcp://127.0.0.1:36801
2024-11-09 21:18:38,128 - distributed.worker - INFO - dashboard at: 127.0.0.1:36225
2024-11-09 21:18:38,128 - distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:38957
2024-11-09 21:18:38,128 - distributed.worker - INFO - -------------------------------------------------
2024-11-09 21:18:38,128 - distributed.worker - INFO - Threads: 1
2024-11-09 21:18:38,128 - distributed.worker - INFO - Memory: 3.71 GiB
2024-11-09 21:18:38,128 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-j8lu9vvc
2024-11-09 21:18:38,128 - distributed.worker - INFO - -------------------------------------------------
2024-11-09 21:18:38,146 - distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36801', status: init, memory: 0, processing: 0>
2024-11-09 21:18:38,162 - distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36801
2024-11-09 21:18:38,162 - distributed.core - INFO - Starting established connection to tcp://127.0.0.1:45266
2024-11-09 21:18:38,162 - distributed.worker - INFO - Registered to: tcp://127.0.0.1:38957
2024-11-09 21:18:38,162 - distributed.worker - INFO - -------------------------------------------------
2024-11-09 21:18:38,163 - distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36801. Reason: worker-close
2024-11-09 21:18:38,163 - distributed.core - INFO - Starting established connection to tcp://127.0.0.1:38957
2024-11-09 21:18:38,164 - distributed.core - INFO - Connection to tcp://127.0.0.1:38957 has been closed.
2024-11-09 21:18:38,164 - distributed.core - INFO - Received 'close-stream' from tcp://127.0.0.1:45266; closing.
2024-11-09 21:18:38,164 - distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36801', status: closing, memory: 0, processing: 0>
2024-11-09 21:18:38,164 - distributed.core - INFO - Removing comms to tcp://127.0.0.1:36801
2024-11-09 21:18:38,165 - distributed.scheduler - INFO - Lost all workers
2024-11-09 21:18:58,186 - distributed.scheduler - INFO - Scheduler closing...
2024-11-09 21:18:58,186 - distributed.scheduler - INFO - Scheduler closing all comms
------------------------------ Captured log call -------------------------------
INFO     distributed.scheduler:scheduler.py:1619 State start
INFO     distributed.scheduler:scheduler.py:3864 Scheduler at: tcp://127.0.0.1:38957
INFO     distributed.scheduler:scheduler.py:3866 dashboard at: 127.0.0.1:40905
INFO     distributed.worker:worker.py:1416 Start worker at: tcp://127.0.0.1:36801
INFO     distributed.worker:worker.py:1417 Listening to: tcp://127.0.0.1:36801
INFO     distributed.worker:worker.py:1422 dashboard at: 127.0.0.1:36225
INFO     distributed.worker:worker.py:1423 Waiting to connect to: tcp://127.0.0.1:38957
INFO     distributed.worker:worker.py:1424 -------------------------------------------------
INFO     distributed.worker:worker.py:1425 Threads: 1
INFO     distributed.worker:worker.py:1427 Memory: 3.71 GiB
INFO     distributed.worker:worker.py:1431 Local Directory: /tmp/dask-worker-space/worker-j8lu9vvc
INFO     distributed.worker:worker.py:1130 -------------------------------------------------
INFO     distributed.scheduler:scheduler.py:4216 Register worker <WorkerState 'tcp://127.0.0.1:36801', status: init, memory: 0, processing: 0>
INFO     distributed.scheduler:scheduler.py:5434 Starting worker compute stream, tcp://127.0.0.1:36801
INFO     distributed.core:core.py:867 Starting established connection to tcp://127.0.0.1:45266
INFO     distributed.worker:worker.py:1199 Registered to: tcp://127.0.0.1:38957
INFO     distributed.worker:worker.py:1200 -------------------------------------------------
INFO     distributed.worker:worker.py:1514 Stopping worker at tcp://127.0.0.1:36801. Reason: worker-close
INFO     distributed.core:core.py:867 Starting established connection to tcp://127.0.0.1:38957
INFO     distributed.core:core.py:877 Connection to tcp://127.0.0.1:38957 has been closed.
INFO     distributed.core:core.py:892 Received 'close-stream' from tcp://127.0.0.1:45266; closing.
INFO     distributed.scheduler:scheduler.py:4781 Remove worker <WorkerState 'tcp://127.0.0.1:36801', status: closing, memory: 0, processing: 0>
INFO     distributed.core:core.py:1480 Removing comms to tcp://127.0.0.1:36801
INFO     distributed.scheduler:scheduler.py:4861 Lost all workers
INFO     distributed.scheduler:scheduler.py:3929 Scheduler closing...
INFO     distributed.scheduler:scheduler.py:3951 Scheduler closing all comms
============================= slowest 20 durations =============================
30.27s call distributed/tests/test_scheduler.py::test_forget_tasks_while_processing
20.06s call distributed/tests/test_worker.py::test_do_not_block_event_loop_during_shutdown
19.23s call distributed/tests/test_scheduler.py::test_failing_task_increments_suspicious
10.14s call distributed/tests/test_scheduler.py::test_log_tasks_during_restart
9.84s call distributed/tests/test_utils_test.py::test_bare_cluster
9.56s call distributed/tests/test_worker.py::test_tick_interval
8.48s call distributed/tests/test_stress.py::test_cancel_stress_sync
7.52s call distributed/tests/test_scheduler.py::test_restart_nanny_timeout_exceeded
5.71s call distributed/tests/test_stress.py::test_stress_scatter_death
5.38s call distributed/tests/test_steal.py::test_allow_tasks_stolen_before_first_completes
5.26s call distributed/tests/test_steal.py::test_balance_with_longer_task
5.01s call distributed/tests/test_failed_workers.py::test_worker_doesnt_await_task_completion
4.89s call distributed/tests/test_failed_workers.py::test_restart_sync
4.72s call distributed/tests/test_scheduler.py::test_close_nanny
4.56s call distributed/tests/test_failed_workers.py::test_failing_worker_with_additional_replicas_on_cluster
4.31s call distributed/tests/test_worker.py::test_package_install_restarts_on_nanny
4.15s call distributed/tests/test_worker.py::test_heartbeat_missing_restarts
4.12s call distributed/tests/test_steal.py::test_restart
4.05s call distributed/tests/test_steal.py::test_steal_twice
4.02s call distributed/tests/test_failed_workers.py::test_multiple_clients_restart
=========================== short test summary info ============================
SKIPPED [1] ../../../distributed/tests/test_client.py:855: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:881: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:900: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:1758: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:2004: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:2598: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:2627: Use fast random selection now
SKIPPED [1] ../../../distributed/tests/test_client.py:3261: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:4549: Now prefer first-in-first-out
SKIPPED [1] ../../../distributed/tests/test_client.py:4715: could not import 'scipy': No module named 'scipy'
SKIPPED [1] ../../../distributed/tests/test_client.py:5963: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_client.py:6161: could not import 'bokeh.plotting': No module named 'bokeh'
SKIPPED [1] ../../../distributed/tests/test_client.py:6472: known intermittent failure
SKIPPED [1] ../../../distributed/tests/test_client.py:6556: could not import 'bokeh': No module named 'bokeh'
SKIPPED [1] ../../../distributed/tests/test_client.py:6607: On Py3.10+ semaphore._loop is not bound until .acquire() blocks
SKIPPED [1] ../../../distributed/tests/test_client.py:6627: On Py3.10+ semaphore._loop is not bound until .acquire() blocks
SKIPPED [1] ../../../distributed/tests/test_client.py:7020: could not import 'bokeh': No module named 'bokeh'
SKIPPED [1] ../../../distributed/tests/test_config.py:316: could not import 'uvloop': No module named 'uvloop'
SKIPPED [1] ../../../distributed/tests/test_core.py:955: could not import 'crick': No module named 'crick'
SKIPPED [1] ../../../distributed/tests/test_core.py:964: could not import 'crick': No module named 'crick'
SKIPPED [1] ../../../distributed/tests/test_counter.py:13: no crick library
SKIPPED [1] ../../../distributed/tests/test_dask_collections.py:193: could not import 'sparse': No module named 'sparse'
SKIPPED [2] ../../../distributed/tests/test_nanny.py:510: could not import 'ucp': No module named 'ucp'
SKIPPED [1] ../../../distributed/tests/test_profile.py:74: could not import 'stacktrace': No module named 'stacktrace'
SKIPPED [1] ../../../distributed/tests/test_queues.py:89: getting same client from main thread
SKIPPED [1] ../../../distributed/tests/test_resources.py:370: Skipped
SKIPPED [1] ../../../distributed/tests/test_resources.py:427: Should protect resource keys from optimization
SKIPPED [1] ../../../distributed/tests/test_resources.py:448: atop fusion seemed to break this
SKIPPED [1] ../../../distributed/tests/test_scheduler.py:262: Not relevant with queuing on; see https://github.com/dask/distributed/issues/7204
SKIPPED [1] ../../../distributed/tests/test_scheduler.py:2406: could not import 'bokeh': No module named 'bokeh'
SKIPPED [1] ../../../distributed/tests/test_steal.py:285: Skipped
SKIPPED [1] ../../../distributed/tests/test_steal.py:1284: executing heartbeats not considered yet
SKIPPED [1] ../../../distributed/tests/test_stress.py:194: unconditional skip
SKIPPED [1] ../../../distributed/tests/test_utils.py:141: could not import 'IPython': No module named 'IPython'
SKIPPED [1] ../../../distributed/tests/test_utils.py:331: could not import 'pyarrow': No module named 'pyarrow'
SKIPPED [1] ../../../distributed/tests/test_utils_test.py:145: This hangs on travis
SKIPPED [1] ../../../distributed/tests/test_worker.py:223: don't yet support uploading pyc files
SKIPPED [1] ../../../distributed/tests/test_worker.py:319: could not import 'crick': No module named 'crick'
SKIPPED [2] ../../../distributed/tests/test_worker.py:1475: could not import 'ucp': No module named 'ucp'
SKIPPED [1] ../../../distributed/tests/test_worker.py:2014: skip if we have elevated privileges
SKIPPED [1] ../../../distributed/tests/test_worker_memory.py:167: fails on 32-bit, is it asking for large memory?
XFAIL ../../../distributed/tests/test_actor.py::test_linear_access - Tornado can pass things out of order. Should rely on sending small messages rather than rpc
XFAIL ../../../distributed/tests/test_client.py::test_nested_prioritization - https://github.com/dask/dask/pull/6807
XFAIL ../../../distributed/tests/test_client.py::test_annotations_survive_optimization - https://github.com/dask/dask/issues/7036
XFAIL ../../../distributed/tests/test_nanny.py::test_no_unnecessary_imports_on_worker[pandas] - distributed#5723
XFAIL ../../../distributed/tests/test_preload.py::test_client_preload_text - The preload argument to the client isn't supported yet
XFAIL ../../../distributed/tests/test_preload.py::test_client_preload_click - The preload argument to the client isn't supported yet
XFAIL ../../../distributed/tests/test_resources.py::test_collections_get[True] - don't track resources through optimization
XFAIL ../../../distributed/tests/test_scheduler.py::test_rebalance_raises_missing_data3[True] - reason: Freeing keys and gathering data is using different channels (stream vs explicit RPC). Therefore, the partial-fail is very timing sensitive and subject to a race condition. This test assumes that the data is freed before the rebalance get_data requests come in but merely deleting the futures is not sufficient to guarantee this
XFAIL ../../../distributed/tests/test_utils_perf.py::test_gc_diagnosis_rss_win - flaky and re-fails on rerun
XFAIL ../../../distributed/tests/test_utils_test.py::test_gen_test - Test should always fail to ensure the body of the test function was run
XFAIL ../../../distributed/tests/test_utils_test.py::test_gen_test_legacy_implicit - Test should always fail to ensure the body of the test function was run
XFAIL ../../../distributed/tests/test_utils_test.py::test_gen_test_legacy_explicit - Test should always fail to ensure the body of the test function was run
XFAIL ../../../distributed/tests/test_worker.py::test_share_communication - very high flakiness
XFAIL ../../../distributed/tests/test_worker.py::test_dont_overlap_communications_to_same_worker - very high flakiness
XFAIL ../../../distributed/tests/test_worker_memory.py::test_workerstate_fail_to_pickle_flight - https://github.com/dask/distributed/issues/6705
XFAIL ../../../distributed/tests/test_worker_state_machine.py::test_gather_dep_failure - https://github.com/dask/distributed/issues/6705
FAILED ../../../distributed/tests/test_worker.py::test_do_not_block_event_loop_during_shutdown - assert False
= 1 failed, 2121 passed, 43 skipped, 127 deselected, 16 xfailed, 8 xpassed, 4 rerun in 1073.21s (0:17:53) =
E: pybuild pybuild:388: test: plugin distutils failed with: exit code=1: cd
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.11_distributed/build; python3.11 -m pytest
/<<PKGBUILDDIR>>/distributed/tests -v --ignore=distributed/deploy/utils_test.py
--ignore=distributed/utils_test.py --ignore=continuous_integration --ignore=docs --ignore=.github --timeout-method=signal
--timeout=300 -m "not (avoid_ci or isinstalled or slow)" -k "not ( test_reconnect or test_jupyter_server or
test_stack_overflow or test_pause_while_spilling or test_digests or test_dashboard_host or test_runspec_regression_sync or
test_popen_timeout or test_runspec_regression_sync or test_client_async_before_loop_starts or
test_plugin_internal_exception or test_client_async_before_loop_starts or test_web_preload or test_web_preload_worker or
test_bandwidth_clear or test_include_communication_in_occupancy or test_worker_start_exception or
test_task_state_instance_are_garbage_collected or test_spillbuffer_oserror or test_release_retry or test_timeout_zero
)"
dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.11
returned exit code 13
make[1]: *** [debian/rules:76: override_dh_auto_test] Error 25
make[1]: Leaving directory '/<<PKGBUILDDIR>>'
make: *** [debian/rules:42: binary] Error 2
dpkg-buildpackage: error: debian/rules binary subprocess returned exit status 2
--------------------------------------------------------------------------------
I've put the full build log here:
https://people.debian.org/~sanvila/build-logs/bookworm/
Note: I'm going to disable the test myself, using a very specific "skipif"
which checks the number of CPUs.
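For illustration only, the kind of marker I have in mind could look roughly like the sketch below; the os.cpu_count() check, the threshold of two CPUs, and the marker name are my assumptions, not the actual patch:

    # Hypothetical sketch, not the actual debian/patches change.
    import os

    import pytest

    # Skip on single-CPU machines, where this shutdown test appears unreliable.
    requires_multiple_cpus = pytest.mark.skipif(
        (os.cpu_count() or 1) < 2,
        reason="flaky on single-CPU build machines",
    )

    # The marker would then be applied to the failing test in
    # distributed/tests/test_worker.py, e.g.:
    #
    #     @requires_multiple_cpus
    #     @gen_cluster(nthreads=[])
    #     async def test_do_not_block_event_loop_during_shutdown(s):
    #         ...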
Ideally, this should also be forwarded upstream, but in debian/patches I see
some changes to timeout values which might affect the outcome of this test,
so we should make sure it's not our fault before forwarding the issue.
Thanks.