tim-habitat opened a new issue, #32580:
URL: https://github.com/apache/airflow/issues/32580

   ### Apache Airflow version
   
   2.6.3
   
   ### What happened
   
   It seems that with the new `aws` provider package, when using the `deferrable` keyword in the `EcsRunTaskOperator`, the `execution_timeout` is ignored and the task is killed by a different timeout: the trigger timeout, which appears to be `timeout=timedelta(seconds=self.waiter_max_attempts * self.waiter_delay + 60)`.
   
   Also, when the trigger hits that timeout, the task appears to return "success" even though it hasn't finished.
   
   ### What you think should happen instead
   
   The `execution_timeout` should be used as the trigger timeout, or at the very least a warning should be emitted when the trigger timeout overrides it or is smaller.
   
   ### How to reproduce
   
   Run an `EcsRunTaskOperator` task in deferrable mode with a large `execution_timeout` and a small `waiter_max_attempts`. The task terminates when the trigger times out, before the `execution_timeout` is reached.
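
   A rough reproduction sketch (cluster and task-definition names are placeholders, not tested against a real ECS cluster):

   ```python
   from datetime import datetime, timedelta

   from airflow import DAG
   from airflow.providers.amazon.aws.operators.ecs import EcsRunTaskOperator

   with DAG("ecs_timeout_repro", start_date=datetime(2023, 7, 1), schedule=None) as dag:
       run_task = EcsRunTaskOperator(
           task_id="run_task",
           cluster="my-cluster",                  # placeholder
           task_definition="my-task-definition",  # placeholder
           launch_type="FARGATE",
           overrides={},
           deferrable=True,
           waiter_delay=6,
           waiter_max_attempts=10,                # small: trigger times out after ~2 minutes
           execution_timeout=timedelta(hours=2),  # large, but never reached
       )
   ```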
   
   ### Operating System
   
   linux ubuntu
   
   ### Versions of Apache Airflow Providers
   
   _No response_
   
   ### Deployment
   
   Other Docker-based deployment
   
   ### Deployment details
   
   _No response_
   
   ### Anything else
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   

