austince opened a new pull request #15497:
URL: https://github.com/apache/flink/pull/15497


   # What is the purpose of the change
   
   Ensure that the Adaptive Scheduler uses the same default max parallelism value 
between submissions, even as the job scales in and out. Previously, the default 
was treated as if the user had specified it in the JobGraph, making it impossible 
to rescale from a user-configured max parallelism back to the default.
   
   Passing a defaulting function is not the prettiest solution, but it is
   easy to follow and can be removed once the overall flow of deriving
   parallelism from state is improved.
   
   Signed-off-by: austin ce <[email protected]>
   
   ## Brief change log
   
   - Allow passing a function to the SchedulerBase#computeVertexParallelismStore 
method that computes the default max parallelism value for a vertex
   
   ## Verifying this change
   
   This change adds tests and can be verified as follows:
   
   - Ran the RescalingITCase with adaptive scheduling enabled locally
   - Added AdaptiveSchedulerTest#testConsistentMaxParallelism to ensure that 
rescaled jobs use the same default max parallelism
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): no
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
     - The serializers: no
     - The runtime per-record code paths (performance sensitive): no
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: yes
     - The S3 file system connector: no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? no
     - If yes, how is the feature documented? JavaDocs
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

