codeprasan opened a new issue, #36932:
URL: https://github.com/apache/airflow/issues/36932

   ### Description
   
   In the current Airflow version, the KubernetesPodOperator allows users to 
specify compute resources for a pod, but each task runs as a single pod 
scheduled onto whichever node has capacity. When available nodes are scarce, 
this leads to resource constraints. It would be advantageous to introduce a 
Horizontal Pod Autoscaling (HPA) option for this operator. Currently, the 
KubernetesPodOperator dynamically creates exactly one new pod per task 
execution, and because HPA can only target scalable workloads (such as 
Deployments or ReplicaSets) rather than bare pods, there is no way to use 
HPA or replicas with the existing operator. Adding an HPA option would 
address this limitation and enhance the scalability of the operator.
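   For context, an HPA is defined against a scalable workload via its 
`scaleTargetRef`, which is why the single pods created by the 
KubernetesPodOperator cannot be targeted today. A minimal sketch of such a 
manifest is below; the `spark-worker` Deployment name is hypothetical and 
used only for illustration:

   ```yaml
   apiVersion: autoscaling/v2
   kind: HorizontalPodAutoscaler
   metadata:
     name: spark-worker-hpa   # hypothetical name, for illustration
   spec:
     scaleTargetRef:
       apiVersion: apps/v1
       kind: Deployment       # HPA must target a scalable resource, not a bare Pod
       name: spark-worker     # hypothetical Deployment name
     minReplicas: 1
     maxReplicas: 5
     metrics:
       - type: Resource
         resource:
           name: cpu
           target:
             type: Utilization
             averageUtilization: 80
   ```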
   
   ### Use case/motivation
   
   In our Airflow setup, we aim to trigger Spark jobs, but the absence of 
Horizontal Pod Autoscaling (HPA) support in the KubernetesPodOperator poses 
a challenge. Because the operator cannot leverage HPA, we are unable to 
scale resources dynamically based on workload demands, which limits the 
scalability and performance of Spark jobs in our Airflow environment. 
Introducing HPA support in the KubernetesPodOperator would address this 
issue and make our Spark job execution more scalable and flexible.
   
   ### Related issues
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
