potiuk opened a new issue, #26709: URL: https://github.com/apache/airflow/issues/26709
### Discussed in https://github.com/apache/airflow/discussions/26547

<div type='discussions-op-text'>
<sup>Originally posted by **alionar** September 21, 2022</sup>

<img width="919" alt="Screen Shot 2022-09-21 at 10 36 29" src="https://user-images.githubusercontent.com/18596757/191408780-fa3e0aba-2faf-45d2-b0ca-6c8c8db458d2.png">

Airflow Version: 2.2.2
Kubernetes Version: 1.22.12-gke.500
Helm Chart version: 1.6.0

Hi, I found that completed cleanup job pods stay on their nodes after finishing, which triggers the GKE autoscaler to add new nodes every time the cleanup job runs. This forces us to manually delete all completed job pods and drain the unused nodes every day.

The Kubernetes docs say:

```
The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively.
```

But when I tried to set this history limit to 0 in the cleanup section of the Helm chart values, the chart didn't recognize it.

Would it be possible to add a job history limit option to the cleanup values of the chart? The current values.yaml for cleanup is:

```
cleanup:
  enabled: true
  schedule: "*/15 * * * *"
```
</div>

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
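To illustrate the request, here is a minimal sketch of what the proposed values might look like. The key names `successfulJobsHistoryLimit` and `failedJobsHistoryLimit` are assumptions borrowed from the Kubernetes CronJob spec fields quoted above; they are not confirmed options of the Airflow chart at the version mentioned in this issue.

```yaml
# Hypothetical values.yaml sketch -- the two history-limit keys mirror
# the CronJob spec fields and are assumptions, not confirmed chart options.
cleanup:
  enabled: true
  schedule: "*/15 * * * *"
  # Keep no completed cleanup pods, so the GKE autoscaler can
  # scale the now-empty nodes back down.
  successfulJobsHistoryLimit: 0
  # Keep one failed pod around for debugging.
  failedJobsHistoryLimit: 1
```

Until such values exist in the chart, one possible manual workaround is patching the rendered CronJob directly (e.g. with `kubectl patch`) to set `.spec.successfulJobsHistoryLimit`, though a Helm upgrade may revert that change.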
