devgonvarun opened a new issue, #61074: URL: https://github.com/apache/airflow/issues/61074
### Description

Currently the `reconcile_specs` method in the CNCF Kubernetes provider's [pod_generator.py](https://github.com/apache/airflow/blob/4e8274c3616fde8b59f633b543af3509148f2bc1/providers/cncf/kubernetes/src/airflow/providers/cncf/kubernetes/pod_generator.py) is responsible for merging the `pod_override` pod with the base pod. It correctly overrides the base container that runs the Kubernetes executor worker task. However, when you try to override an existing init container, the generator does not merge the override init container with the existing init container of the same name; instead it produces a duplicate init container with that name. This happens because the pod generator has no merging logic for init containers. A rough sketch of name-based merging is included at the end of this issue.

### Use case/motivation

`executor_config` should allow overriding existing init containers.

Use case: Kubernetes executor, multi-namespace mode, Airflow using git-sync with submodules. The Helm chart configures the same git-sync settings for every Airflow component (DAG processor, worker, etc.), but that is not necessary: the DAG processor needs to see the DAGs from all git submodules in order to import every DAG, while a Kubernetes worker pod only needs one specific submodule. If I can override the worker's init container through `executor_config`, set via an Airflow cluster policy, then I can dynamically choose which repository the worker's git-sync checks out, and checking out a single submodule is much faster than cloning the full repository with all submodules. An example `pod_override` is sketched at the end of this issue.

### Related issues

_No response_

### Are you willing to submit a PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
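For illustration only, here is a minimal sketch of what name-based merging of init containers could look like, written in the spirit of the existing `PodGenerator` helpers. The function name `reconcile_init_containers`, its signature, and the field-by-field merge strategy are assumptions for this sketch, not the current Airflow API; the real fix would need to fit into how `reconcile_specs` already merges the rest of the spec.

```python
# Hypothetical sketch: merge init containers by name instead of duplicating them.
# The helper name and merge strategy are assumptions, not existing Airflow code.
from __future__ import annotations

from copy import deepcopy

from kubernetes.client import models as k8s


def reconcile_init_containers(
    base_init_containers: list[k8s.V1Container] | None,
    client_init_containers: list[k8s.V1Container] | None,
) -> list[k8s.V1Container] | None:
    """Merge pod_override init containers into the base init containers by name.

    An override init container whose name matches a base init container updates
    that entry in place; unmatched override init containers are appended.
    """
    base_init_containers = base_init_containers or []
    client_init_containers = client_init_containers or []

    overrides = {c.name: c for c in client_init_containers}
    merged: list[k8s.V1Container] = []

    for base_container in base_init_containers:
        override = overrides.pop(base_container.name, None)
        if override is None:
            merged.append(base_container)
            continue
        # Start from the base init container and copy over every field the
        # override explicitly sets (kubernetes client models expose their
        # fields via attribute_map).
        result = deepcopy(base_container)
        for attr in base_container.attribute_map:
            value = getattr(override, attr)
            if value is not None:
                setattr(result, attr, value)
        merged.append(result)

    # Keep any override-only init containers that did not match an existing name.
    merged.extend(overrides.values())
    return merged or None
```

A real implementation might instead reuse the existing `merge_objects`/`extend_object_field` style helpers in `pod_generator.py`; the point of the sketch is only that matching by container name avoids the duplicate-init-container failure described above.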

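For the motivating use case, a `pod_override` in `executor_config` targeting the git-sync init container might look roughly like the following. The init container name `git-sync`, the `GIT_SYNC_REPO` environment variable, and the repository URL are assumptions based on a typical Helm-chart git-sync setup and may differ in a given deployment.

```python
# Hypothetical example: override only the git-sync init container of a worker pod.
# Container name and env var names are assumptions about the deployment's git-sync setup.
from kubernetes.client import models as k8s

executor_config = {
    "pod_override": k8s.V1Pod(
        spec=k8s.V1PodSpec(
            containers=[],  # leave the base worker container untouched
            init_containers=[
                k8s.V1Container(
                    name="git-sync",
                    env=[
                        # Point this worker at the one submodule it needs instead
                        # of the full repository with all submodules.
                        k8s.V1EnvVar(
                            name="GIT_SYNC_REPO",
                            value="https://example.com/org/specific-submodule.git",
                        ),
                    ],
                )
            ],
        )
    )
}
```

In the cluster-policy scenario described above, a `task_policy` could assemble this dictionary dynamically and assign it to `task.executor_config`, which only becomes useful once `reconcile_specs` merges the override with the chart-provided init container rather than duplicating it.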