buu-nguyen opened a new issue, #36136: URL: https://github.com/apache/airflow/issues/36136
### Apache Airflow version

2.7.3

### What happened

I have encountered a bug where the `executor_config` for a task is lost when the task is retried. This occurs when using the Kubernetes Executor with Airflow 2.7.3 deployed via the official Airflow Helm chart, version 8.8.0. The `executor_config` is lost during task retries and appears as an empty dictionary in the Airflow REST API. The config only reappears, and is applied correctly, if the task instance details are opened in the UI.

### What you think should happen instead

The `executor_config` should persist and be applied consistently across all retries of a task, without requiring intervention via the UI.

### How to reproduce

1. Configure a DAG with a task that includes specific settings in `executor_config`.
2. Trigger the DAG and let the task fail to initiate a retry.
3. Observe that on retry, the `executor_config` is an empty dictionary when checked via the Airflow REST API.
4. However, if I navigate to the task instance details in the Airflow UI and click on the task, the `executor_config` reappears and is correctly applied to the retried task.

### Operating System

apache/airflow:2.7.3-python3.10 Docker image

### Versions of Apache Airflow Providers

_No response_

### Deployment

Official Apache Airflow Helm Chart

### Deployment details

- Apache Airflow version: 2.7.3
- Helm chart version: 8.8.0 (official Airflow Helm chart)
- Executor: Kubernetes Executor
- Platform: GKE

### Anything else

It appears that the `executor_config` is being lost from the metadata database after the Kubernetes Executor terminates the pod. This results in the absence of the `executor_config` for tasks that are retried. Interestingly, I discovered a workaround to retrieve the `executor_config`: accessing the task instance details in the Airflow UI. This behavior, however, is inconsistent and somewhat perplexing.
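For step 3, the empty config can be spotted with a small check on the task instance payload returned by the stable REST API (`GET /api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}`). A minimal sketch; the helper name and the sample payloads below are illustrative, not real server responses:

```python
import json

def executor_config_is_empty(task_instance_json: str) -> bool:
    """Return True if the task instance's executor_config is missing or empty.

    The stable REST API serializes executor_config to a string, so an empty
    config typically shows up as "{}" rather than an empty JSON object.
    """
    ti = json.loads(task_instance_json)
    config = ti.get("executor_config")
    return config in (None, "", "{}", {})

# First try: the config is present (value is illustrative).
first_try = json.dumps({"try_number": 1, "executor_config": "{'pod_override': ...}"})
# Retry: the config shows up empty, as described in this report.
retry = json.dumps({"try_number": 2, "executor_config": "{}"})

print(executor_config_is_empty(first_try))  # False
print(executor_config_is_empty(retry))      # True
```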
This suggests that while the `executor_config` is initially stored correctly, it somehow gets dissociated or removed from the task's metadata upon pod termination by the Kubernetes Executor. The ability to recover the configuration via the UI indicates that the data may still exist but is not being correctly relayed or preserved for retries in the automated workflow.

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
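To check the "lost from the metadata database" hypothesis directly, one can inspect the `task_instance` table, where Airflow stores `executor_config` as a pickled value in a single row per task instance. Below is a minimal stand-in using an in-memory sqlite database; a real check would run a similar `SELECT` against the Postgres/MySQL metadata DB, and the schema here is a simplified sketch of Airflow's actual table:

```python
import pickle
import sqlite3

# Simplified stand-in for Airflow's task_instance table (one row per TI,
# executor_config stored as a pickled blob).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE task_instance (
        task_id TEXT, dag_id TEXT, run_id TEXT, try_number INTEGER,
        executor_config BLOB
    )
""")

# First try: the config is present (value is illustrative).
config = {"pod_override": {"spec": {"containers": []}}}
conn.execute(
    "INSERT INTO task_instance VALUES (?, ?, ?, ?, ?)",
    ("my_task", "my_dag", "run_1", 1, pickle.dumps(config)),
)

# What the reported bug would look like at the DB level: after the pod is
# terminated and the retry is scheduled, the stored config is an empty dict.
conn.execute(
    "UPDATE task_instance SET try_number = 2, executor_config = ? "
    "WHERE task_id = 'my_task'",
    (pickle.dumps({}),),
)

row = conn.execute(
    "SELECT try_number, executor_config FROM task_instance"
).fetchone()
stored_config = pickle.loads(row[1])
print(row[0], stored_config)  # 2 {}
```

If the real table still holds the pickled config after pod termination, the bug is in how retries read it back (consistent with the UI being able to recover it); if the column is genuinely emptied, the bug is on the write path.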
