kfaraz commented on code in PR #19091:
URL: https://github.com/apache/druid/pull/19091#discussion_r2907090423


##########
indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/autoscaler/CostBasedAutoScaler.java:
##########
@@ -62,10 +63,17 @@ public class CostBasedAutoScaler implements SupervisorTaskAutoScaler
   public static final String LAG_COST_METRIC = "task/autoScaler/costBased/lagCost";
   public static final String IDLE_COST_METRIC = "task/autoScaler/costBased/idleCost";
   public static final String OPTIMAL_TASK_COUNT_METRIC = "task/autoScaler/costBased/optimalTaskCount";
+  public static final String INVALID_METRICS_COUNT = "task/autoScaler/costBased/invalidMetrics";
 
   static final int MAX_INCREASE_IN_PARTITIONS_PER_TASK = 2;
   static final int MAX_DECREASE_IN_PARTITIONS_PER_TASK = MAX_INCREASE_IN_PARTITIONS_PER_TASK * 2;
 
+  /**
+   * If average partition lag crosses this value and the processing rate is
+   * still zero, scaling actions are skipped and an alert is raised.
+   */
+  static final int MAX_IDLENESS_PARTITION_LAG = 10_000;

Review Comment:
   > The reason I ask it's totally feasible for a topic to never reach above 10k event lag
   
   Yeah, that was pretty much the intention here.
   If the lag remains below this value, autoscaling works as usual.
   But if the lag exceeds this value AND the processing rate is zero, that indicates something is wrong with the tasks. In that case, scaling is skipped and an alert is raised.
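   To illustrate, a minimal sketch of that guard condition (hypothetical class and method names, not the actual `CostBasedAutoScaler` code):
   
   ```java
   // Hypothetical sketch of the idleness guard described above; the real
   // CostBasedAutoScaler structures this differently.
   public class IdlenessGuard
   {
     static final int MAX_IDLENESS_PARTITION_LAG = 10_000;
   
     /**
      * Returns true when scaling should be skipped (and an alert raised):
      * the average partition lag has crossed the threshold while the
      * processing rate is still zero, suggesting the tasks are stuck
      * rather than merely under-provisioned.
      */
     static boolean shouldSkipScaling(double avgPartitionLag, double processingRate)
     {
       return avgPartitionLag > MAX_IDLENESS_PARTITION_LAG && processingRate == 0.0;
     }
   
     public static void main(String[] args)
     {
       System.out.println(shouldSkipScaling(15_000, 0.0));   // stuck: skip and alert -> true
       System.out.println(shouldSkipScaling(15_000, 250.0)); // still processing: scale as usual -> false
       System.out.println(shouldSkipScaling(5_000, 0.0));    // lag below threshold: scale as usual -> false
     }
   }
   ```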
   
   Let me know if you still feel that we need this as a config in this PR.
   
   (I was initially using the config value `CostBasedAutoScalerConfig.highLagThreshold` for this, but it is already used for a completely different purpose, so I decided not to overload the meaning of that config.)
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

