morhidi opened a new pull request, #617:
URL: https://github.com/apache/flink-kubernetes-operator/pull/617
## What is the purpose of the change
When the autoscaler runs in advisor mode, it is useful to report the recommended parallelism alongside the current parallelism so both can be overlaid on the same chart.
## Brief change log
- Added a new `RECOMMENDED_PARALLELISM` entry to `ScalingMetric`
- The latest recommended parallelisms are now reported as evaluated scaling metrics
- Recommended parallelisms follow this lifecycle:
  - `RECOMMENDED_PARALLELISM` is set to `PARALLELISM` while the metric window is filling up
  - `RECOMMENDED_PARALLELISM` may change according to the evaluated scaling metrics
  - `PARALLELISM` is set to `RECOMMENDED_PARALLELISM` after scaling, if scaling is enabled
  - `RECOMMENDED_PARALLELISM` is set to `null` during a scaling operation, while metric collection is not possible
  - `RECOMMENDED_PARALLELISM` is then set to the changed `PARALLELISM` again while the metric window is filling up
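The lifecycle above can be sketched as a small state function. This is a simplified illustration, not the operator's actual implementation; the `Phase` enum and method names below are hypothetical:

```java
import java.util.Optional;

public class RecommendedParallelismSketch {

    // Hypothetical phases of the autoscaler's metric lifecycle (illustration only).
    enum Phase { METRIC_WINDOW_FILLING, EVALUATED, SCALING_IN_PROGRESS }

    /**
     * Returns the RECOMMENDED_PARALLELISM value to report, or empty when a
     * scaling operation is in flight and metric collection is not possible.
     */
    static Optional<Integer> recommendedParallelism(
            Phase phase, int currentParallelism, int evaluatedParallelism) {
        switch (phase) {
            case METRIC_WINDOW_FILLING:
                // While the metric window fills up, mirror the current parallelism.
                return Optional.of(currentParallelism);
            case EVALUATED:
                // A full window was evaluated; report the computed recommendation.
                return Optional.of(evaluatedParallelism);
            case SCALING_IN_PROGRESS:
                // No reliable metrics during rescaling; report null.
                return Optional.empty();
            default:
                throw new IllegalStateException("Unknown phase: " + phase);
        }
    }

    public static void main(String[] args) {
        // While the window fills, the recommendation equals the current parallelism.
        System.out.println(recommendedParallelism(Phase.METRIC_WINDOW_FILLING, 4, 8)); // Optional[4]
        // After evaluation, the recommendation may diverge from the current value.
        System.out.println(recommendedParallelism(Phase.EVALUATED, 4, 8));             // Optional[8]
        // During scaling, nothing is reported.
        System.out.println(recommendedParallelism(Phase.SCALING_IN_PROGRESS, 4, 8));   // Optional.empty
    }
}
```

After a scaling action completes, the cycle restarts in `METRIC_WINDOW_FILLING` with the new parallelism, which matches the last bullet above.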
## Verifying this change
- Added unit tests for recommended parallelisms
- Verified manually by checking the actual metrics being reported:
```
TODO
```
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., are there any changes to the `CustomResourceDescriptors`: (no)
- Core observer or reconciler logic that is regularly executed: (no)
## Documentation
- Does this pull request introduce a new feature? (no)
- If yes, how is the feature documented? (docs)