1996fanrui commented on code in PR #726:
URL: https://github.com/apache/flink-kubernetes-operator/pull/726#discussion_r1422456112


##########
flink-autoscaler/src/main/java/org/apache/flink/autoscaler/config/AutoScalerOptions.java:
##########
@@ -222,6 +222,18 @@ private static ConfigOptions.OptionBuilder autoScalerConfig(String key) {
                     .withDescription(
                             "Processing rate increase threshold for detecting ineffective scaling threshold. 0.1 means if we do not accomplish at least 10% of the desired capacity increase with scaling, the action is marked ineffective.");
 
+    public static final ConfigOption<Double> GC_PRESSURE_THRESHOLD =
+            autoScalerConfig("memory.gc-pressure.threshold")
+                    .doubleType()
+                    .defaultValue(0.3)
+                    .withDescription("Max allowed GC pressure during scaling operations");
+
+    public static final ConfigOption<Double> HEAP_USAGE_THRESHOLD =
+            autoScalerConfig("memory.heap-usage.threshold")
+                    .doubleType()
+                    .defaultValue(0.9)

Review Comment:
   I have 2 questions about this autoscaler rule:
   
   1. Does high heap usage indicate insufficient memory?
       - When GC is severe, memory is indeed insufficient.
       - But when GC time is very low and heap usage is high, the TaskManagers might still work well, right? (The memory may be just enough.)
   2. Is the insufficient memory caused by a scale-down?
       - GC is fine before rescaling, but the busy ratio is very low, so the autoscaler scales the job down.
       - After the scale-down, however, GC becomes severe.
       - Could we revert this rescaling? Or do we consider the scale-down expected, meaning users should increase the task memory after scaling down?
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
