jigs031 opened a new issue, #32485: URL: https://github.com/apache/airflow/issues/32485
### Apache Airflow version

Other Airflow 2 version (please specify below)

### What happened

We spun up Airflow from Helm chart 1.6.0, with the executor set to "KubernetesExecutor" in our ConfigMap. The system works fine when the number of parallel runs is low, but when parallel runs increase, a few of the pods randomly run with the "LocalExecutor", and those tasks are not visible on the Airflow DAG Run UI screen. After long debugging, we identified that this was due to the environment variable `AIRFLOW__CORE__EXECUTOR` being hard-coded to "LocalExecutor" in `chart/files/pod-template-file.kubernetes-helm-yaml`. When we hard-coded it to "KubernetesExecutor" instead, all jobs worked as expected.

### What you think should happen instead

Like other configuration, the variable `AIRFLOW__CORE__EXECUTOR` in the pod template file should not be hard-coded (there is the possibility to set it via `.Values.<variable_name>`).

### How to reproduce

Install the Helm chart and set `AIRFLOW__CORE__EXECUTOR=KubernetesExecutor` in the ConfigMap. Run a single job multiple times (at least 20+ runs) within 10 seconds. You may observe that not all job instances are visible on the screen. If you go to the task-level details, you may find the tasks exist but the jobs are not registered in the Airflow UI/database.

### Operating System

Linux

### Versions of Apache Airflow Providers

2.3.1

### Deployment

Other 3rd-party Helm chart

### Deployment details

Official Helm Chart version: 1.6.0

### Anything else

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
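
The proposed change could be sketched as a templated env entry in the pod template file. This is an illustrative Helm-template fragment only, not the chart's actual code; the `.Values.executor` key and the default value shown here are assumptions:

```yaml
# Sketch (assumed structure) for chart/files/pod-template-file.kubernetes-helm-yaml:
# instead of hard-coding the executor, read it from the chart's values so the
# worker pods agree with whatever executor the rest of the deployment uses.
env:
  - name: AIRFLOW__CORE__EXECUTOR
    # Hypothetical: .Values.executor is assumed to exist; falls back to
    # "LocalExecutor" to preserve the current behavior when unset.
    value: {{ .Values.executor | default "LocalExecutor" | quote }}
```

With something like this in place, installing the chart with `--set executor=KubernetesExecutor` (or the equivalent values-file entry) would propagate the intended executor to the templated worker pods instead of silently falling back to "LocalExecutor".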
