rmetzger commented on a change in pull request #15355:
URL: https://github.com/apache/flink/pull/15355#discussion_r600752855
##########
File path: docs/content.zh/docs/deployment/elastic_scaling.md
##########
@@ -88,10 +88,20 @@ If you manually set a parallelism in your job for individual operators or the en
 Note that such a high maxParallelism might affect performance of the job, since more internal structures are needed to maintain [some internal structures](https://flink.apache.org/features/2017/07/04/flink-rescalable-state.html) of Flink.

+When enabling Reactive Mode, the `jobmanager.adaptive-scheduler.resource-wait-timeout` configuration key will default to `-1`. This means that the JobManager will wait indefinitely for sufficient resources.
+If you want the JobManager to stop after a certain time without enough TaskManagers to run the job, configure `jobmanager.adaptive-scheduler.resource-wait-timeout`.
+
+With Reactive Mode enabled, the `jobmanager.adaptive-scheduler.resource-stabilization-timeout` configuration key will default to `0`: Flink will start running the job as soon as sufficient resources are available.
+In scenarios where TaskManagers do not all connect at the same time, but slowly, one after another, this behavior leads to a job restart whenever a TaskManager connects. Increase this configuration value if you want to wait for the resources to stabilize before scheduling the job.

Review comment:
       As you've said, these changes are unrelated and have been introduced in a separate PR: https://github.com/apache/flink/pull/15159.
       I wouldn't say we are optimizing for demos, but rather for a good out-of-the-box experience for people trying out Reactive Mode. The settings are well documented, and people will/should explore Reactive Mode before using it in prod. I hope that by then they've adjusted the settings.
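       For reference, a minimal `flink-conf.yaml` sketch of the options discussed above. The values are illustrative placeholders, not the defaults under discussion, and enabling Reactive Mode via `scheduler-mode: reactive` is assumed here rather than stated in the diff:

```yaml
# Illustrative flink-conf.yaml snippet; values are placeholders, not recommendations.
# Assumption: Reactive Mode is enabled via scheduler-mode.
scheduler-mode: reactive

# Stop the JobManager if sufficient TaskManager resources do not appear within 10 minutes
# (with Reactive Mode the default is -1, i.e. wait indefinitely).
jobmanager.adaptive-scheduler.resource-wait-timeout: 10 min

# Wait 30 seconds for the available resources to stabilize before (re)scheduling, instead of
# restarting the job every time another TaskManager connects (default with Reactive Mode: 0).
jobmanager.adaptive-scheduler.resource-stabilization-timeout: 30 s
```

       The trade-off sketched here: a longer stabilization timeout means fewer restarts while TaskManagers trickle in, at the cost of a slower initial start.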
