GitHub user rileypeterson created a discussion: Signal propagation into
DockerOperator container
I am running a docker compose file which looks like:
```
...
  airflow-worker:
    <<: *airflow-common
    command: celery worker --pid /opt/airflow/logs/airflow-worker.pid
    restart: unless-stopped
    environment:
      <<: *airflow-common-env
      # Required to handle warm shutdown of the celery workers properly
      # See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
      DUMB_INIT_SETSID: "0"
  ws:
    build:
      context: ./ws
    image: ws_image:1.0
```
Along with a DAG which runs the following:
```
ws_task = DockerOperator(
    task_id="ws",
    image="ws_image:1.0",
    api_version="auto",
    auto_remove="force",
    command=["python", "ws.py", "--runtime-seconds", "3600"],
    tty=True,
    dag=dag,
)
```
Where ws.py is:
```
import signal
import sys
import time
import asyncio
import random

file = "/tmp/file.txt"

def shutdown_task():
    with open(file, "w") as f:
        f.write("Finished")

def signal_handler1(sig, frame):
    with open(file, "w") as f:
        f.write("Got SIGTERM")
    sys.exit(0)

def signal_handler2(sig, frame):
    with open(file, "w") as f:
        f.write("Got SIGINT")
    sys.exit(0)

def signal_handler3(sig, frame):
    with open(file, "w") as f:
        f.write("Got SIGHUP")
    sys.exit(0)

signal.signal(signal.SIGTERM, signal_handler1)
signal.signal(signal.SIGINT, signal_handler2)
signal.signal(signal.SIGHUP, signal_handler3)

async def main():
    with open(file, "w") as f:
        f.write("Started")
    now = time.time()
    try:
        while time.time() < now + 300:
            print(random.random())
            await asyncio.sleep(2)
    finally:
        shutdown_task()

if __name__ == '__main__':
    asyncio.run(main())
```
I need `shutdown_task()` to always run. According to the comment in the docker compose file (which I've taken from the official one), **`DUMB_INIT_SETSID: "0"` is required to handle warm shutdown of the celery workers, but I need to propagate the signals to my docker image... is this possible?**
I tried setting `DUMB_INIT_SETSID="1"` as an environment variable within DockerOperator, which didn't work. When I change it to "1" in my docker compose file it does seem to work, but I think this is not recommended (described [here](https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation:~:text=This%20is%20useful,terminate%20all%20processes.)).
I've tried several other things to no avail. I found a similar unanswered Stack Overflow question [here](https://stackoverflow.com/questions/74810325/kill-signal-to-docker-operator-airflow#comment140706246_74810325).
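For what it's worth, the handler/`finally` mechanics in ws.py do behave as expected when a signal actually reaches the process, so the problem really seems to be signal delivery into the container. Here's the minimal self-contained sketch I used to confirm that (the `os.kill` on the process itself is just a stand-in for the SIGTERM that `docker stop` would deliver, and the `events` list replaces the file writes):

```python
import asyncio
import os
import signal
import sys

events = []  # stands in for the /tmp/file.txt writes in ws.py

def on_sigterm(sig, frame):
    events.append("Got SIGTERM")
    sys.exit(0)  # raises SystemExit, which unwinds through the finally below

signal.signal(signal.SIGTERM, on_sigterm)

async def main():
    events.append("Started")
    try:
        # Deliver SIGTERM to ourselves, standing in for the signal
        # `docker stop` would send to PID 1 of the container.
        os.kill(os.getpid(), signal.SIGTERM)
        await asyncio.sleep(60)  # never reached; the handler fires first
    finally:
        events.append("Finished")  # the shutdown_task() equivalent

try:
    asyncio.run(main())
except SystemExit:
    pass

print(events)  # ['Started', 'Got SIGTERM', 'Finished']
```

So as long as SIGTERM reaches the Python process inside the task container, the `finally` block runs; it's getting the signal propagated there that I can't figure out.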
GitHub link: https://github.com/apache/airflow/discussions/55129