nathadfield opened a new issue #19346:
URL: https://github.com/apache/airflow/issues/19346
### Apache Airflow version
2.2.0
### Operating System
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
Existing DAGs that use sensors with both a timeout and retries are no longer
retrying: when the timeout is reached, the task goes straight to the `failed`
state instead of `up_for_retry`.
### What you expected to happen
Prior to v2.2, if you defined a sensor with, for example, a timeout of 60
seconds and 2 retries, the task would run a maximum of 3 attempts, for a total
sensing duration of up to three minutes (180 seconds).
In earlier versions, once the timeout was reached the task would enter the
`up_for_retry` state.
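For illustration, a minimal sketch of that arithmetic (the variable names are
mine, not Airflow's; the values match the example above):

```python
# Pre-2.2 expectation, as described above (illustrative sketch only).
timeout_seconds = 60  # per-attempt sensor timeout
retries = 2           # configured via default_args

# Each timeout sends the task to up_for_retry until retries are exhausted,
# so the sensor gets (retries + 1) attempts in total.
max_attempts = retries + 1                            # 3 attempts
max_sensing_seconds = max_attempts * timeout_seconds  # 180 seconds
```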
### How to reproduce
Using this simple DAG, you can replicate the issue:
```python
from datetime import datetime, timedelta

from airflow import models
from airflow.sensors.external_task import ExternalTaskSensor

default_args = {
    'owner': 'airflow',
    'retries': 2,
    'retry_delay': timedelta(minutes=5),
}

with models.DAG(
    dag_id='child_dag',
    default_args=default_args,
    start_date=datetime(2021, 10, 1),
    schedule_interval='0 0 * * *',
    catchup=False,
) as dag:
    # Should time out after 60s, then retry twice (3 attempts in total).
    sensor = ExternalTaskSensor(
        task_id='sensor',
        external_dag_id='parent_dag',
        external_task_id='dummy',
        mode='reschedule',
        poke_interval=10,
        timeout=60,
    )
```
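The repro references a `parent_dag` containing a task called `dummy`, which
isn't shown above. A minimal counterpart could look like this (a hypothetical
sketch, not part of the original report; it assumes `DummyOperator` and a
schedule that lines up with `child_dag` so the sensor's execution-date
matching succeeds):

```python
from datetime import datetime

from airflow import models
from airflow.operators.dummy import DummyOperator

# Hypothetical parent DAG for the sensor above to wait on. Its schedule must
# match child_dag's so ExternalTaskSensor finds the corresponding run.
with models.DAG(
    dag_id='parent_dag',
    start_date=datetime(2021, 10, 1),
    schedule_interval='0 0 * * *',
    catchup=False,
) as dag:
    dummy = DummyOperator(task_id='dummy')
```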
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)