mxm opened a new pull request, #762: URL: https://github.com/apache/flink-kubernetes-operator/pull/762
The current autoscaling algorithm adjusts the parallelism of the job task vertices according to the processing needs. By adjusting the parallelism, we systematically scale the amount of CPU for a task. At the same time, we also indirectly change the amount of memory tasks have at their disposal. However, there are some problems with this:

1. Memory is overprovisioned: On scale-up, we may add more memory than we actually need. On scale-down, the memory/CPU ratio can still be off and too much memory is used.
2. Memory is underprovisioned: For stateful jobs, we risk running into OutOfMemoryErrors on scale-down. Even before running out of memory, too little memory can have a negative impact on the effectiveness of the scaling.

We lack the capability to tune memory proportionally to the processing needs. In the same way that we measure CPU usage and size the tasks accordingly, we need to evaluate memory usage and adjust the heap memory size. The heap memory tuning implemented here works as follows:

### 1. Establish a heap memory baseline

We observe the average heap memory usage (`heap_usage`) at the task managers.

### 2. Calculate memory usage per record

The memory requirement per record can be estimated by calculating this ratio:

```
heap_memory_per_rec = sum(heap_usage) / sum(processing_rate)
```

Empirical data shows this ratio to be surprisingly constant.

### 3. Scale memory proportionally to the per-record memory needs

```
heap_memory_per_tm = max_expected_records_per_sec * heap_memory_per_rec / num_task_managers
```

A minimum heap memory limit prevents scaling memory down too far. The maximum memory per TM is the user-specified limit initially defined in the ResourceSpec.
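
To make the arithmetic in steps 2 and 3 concrete, here is a minimal Java sketch under the assumptions stated in the comments. The class and method names, the aggregation over per-TM observations, and the example numbers are all illustrative; they are not the PR's actual implementation.

```java
import java.util.List;

/**
 * Illustrative sketch of the heap tuning math described above.
 * Names and numbers are hypothetical, not the PR's actual code.
 */
public final class HeapTuningSketch {

    /**
     * Step 2: estimate the heap requirement per record as the ratio of the
     * summed average heap usage to the summed processing rate across TMs.
     */
    static double heapBytesPerRecord(
            List<Double> avgHeapUsageBytes, List<Double> processingRatePerSec) {
        double totalHeap = avgHeapUsageBytes.stream().mapToDouble(Double::doubleValue).sum();
        double totalRate = processingRatePerSec.stream().mapToDouble(Double::doubleValue).sum();
        return totalRate > 0 ? totalHeap / totalRate : 0;
    }

    /**
     * Step 3: scale the per-TM heap proportionally to the expected maximum
     * processing rate, clamped between a minimum heap size and the
     * user-specified limit from the ResourceSpec.
     */
    static long targetHeapPerTaskManagerBytes(
            double maxExpectedRecordsPerSec,
            double heapBytesPerRecord,
            int numTaskManagers,
            long minHeapBytes,
            long specHeapLimitBytes) {
        double raw = maxExpectedRecordsPerSec * heapBytesPerRecord / numTaskManagers;
        return (long) Math.min(specHeapLimitBytes, Math.max(minHeapBytes, raw));
    }

    public static void main(String[] args) {
        // Hypothetical observations from two task managers.
        double perRec = heapBytesPerRecord(
                List.of(1.2e9, 1.1e9),       // average heap usage in bytes
                List.of(50_000d, 45_000d));  // processed records per second

        long target = targetHeapPerTaskManagerBytes(
                200_000,     // max expected records/sec for the whole job
                perRec,
                4,           // task managers after rescaling
                512L << 20,  // minimum heap: 512 MiB
                4L << 30);   // ResourceSpec heap limit: 4 GiB
        System.out.printf(
                "heap per record: %.0f bytes, target heap per TM: %d bytes%n", perRec, target);
    }
}
```

The clamp at the end mirrors the description: the lower bound avoids shrinking the heap too aggressively, and the upper bound never exceeds what the user originally provisioned.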
