afedulov commented on PR #711:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/711#issuecomment-1810584902

   @gyfora @mxm thanks for the feedback.
   >We already have the start time of the last scaling in memory via the 
scaling history. We can then keep note of the end time once we detect the 
scaling is over. That leaves a little bit of error in case of downtime of the 
operator which will produce a long rescaling time. I think that should be fine 
though, since we cap at the max configured rescale time.
   
   Are we talking about having a field in the `ScalingExecutor`? Because 
fetching `scalingHistory` won't be sufficient - we need some indication that a 
**new** rescaling was applied by the time we see the transition into RUNNING 
with the expected parallelism. This "flag" then needs to be cleared afterwards.
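   To make the idea concrete, here is a minimal sketch of such a flag. All names (`RescaleTracker`, the method names) are illustrative and not from the operator codebase; the point is only that the start time and expected parallelism are kept in memory and cleared once the transition into RUNNING at the expected parallelism is observed:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Optional;

// Hypothetical sketch of the in-memory "flag" discussed above.
final class RescaleTracker {
    private Instant rescaleStart;     // null when no rescale is in flight
    private int expectedParallelism;

    /** Called when a new scaling decision is applied. */
    void onRescaleApplied(int newParallelism, Instant now) {
        this.rescaleStart = now;
        this.expectedParallelism = newParallelism;
    }

    /**
     * Called on each status observation. Returns the observed rescale
     * duration, and clears the flag, once the job is RUNNING at the
     * expected parallelism; empty otherwise.
     */
    Optional<Duration> onJobObserved(String jobState, int parallelism, Instant now) {
        if (rescaleStart != null
                && "RUNNING".equals(jobState)
                && parallelism == expectedParallelism) {
            Duration elapsed = Duration.between(rescaleStart, now);
            rescaleStart = null; // clear the flag so stale state is not reused
            return Optional.of(elapsed);
        }
        return Optional.empty();
    }
}
```

   Note that because this lives only in memory, an operator restart mid-rescale loses the flag - which is exactly the downtime error case mentioned above.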
   General question: it feels like we are very focused on optimizing the size 
of this particular configmap. Can't we create a separate configmap if this is 
a concern? We already store so much in the `flink-config-autoscaling-job` 
configmap (see the amount of logging configuration alone) that agonizing over 
whether we store one timestamp or two for the 5-10 values we keep in state 
does not seem worth the significantly increased code complexity and the risk 
of losing restart data.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
