Thesharing opened a new pull request #16436:
URL: https://github.com/apache/flink/pull/16436
## What is the purpose of the change
*In FLINK-22017, we construct a scenario in which regions may never be scheduled
when there are cross-region blocking edges in the graph. To solve this issue, we
should allow BLOCKING result partitions to become consumable individually. Note
that this makes the scheduling execution-vertex-wise instead of stage-wise, with
a nice side effect of better resource utilization. The
PipelinedRegionSchedulingStrategy can be simplified along with this change to
get rid of the correlatedResultPartitions.*
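To illustrate the core idea (a toy sketch only, not Flink's actual classes: `ConsumabilityDemo` and both method names are hypothetical), the rule for when a consumer may read a blocking partition changes from "all producers of the result have finished" to "this partition's own producer has finished":

```java
// Toy model contrasting stage-wise consumability with the per-partition rule
// described above. Names and signatures are illustrative, not Flink APIs.
class ConsumabilityDemo {

    // Stage-wise: a consumer may start only when every producer subtask of the
    // IntermediateResult has finished; one straggler blocks all consumers.
    static boolean consumableStageWise(boolean[] producerFinished, int partition) {
        for (boolean finished : producerFinished) {
            if (!finished) {
                return false;
            }
        }
        return true;
    }

    // Per-partition: only this partition's own producer matters, so finished
    // partitions can be consumed while siblings are still running.
    static boolean consumablePerPartition(boolean[] producerFinished, int partition) {
        return producerFinished[partition];
    }

    public static void main(String[] args) {
        boolean[] finished = {true, false, true}; // producer 1 still running
        System.out.println(consumableStageWise(finished, 0));   // false
        System.out.println(consumablePerPartition(finished, 0)); // true
    }
}
```

The per-partition rule is what allows regions behind a cross-region blocking edge to be scheduled as soon as their inputs are ready.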
*There are three main concerns that need to be considered:*
*1. Since scheduling becomes execution-vertex-wise instead of stage-wise, we
need to make sure the computational complexity of
PipelinedRegionSchedulingStrategy doesn't fall back to O(n^2). We verified this
with benchmarks and end-to-end tests; our pull request doesn't introduce a
significant performance regression.*
*2. Before this pull request, `finishPartitionsAndUpdateConsumers` already
had a complexity of O(n^2). We intended to optimize it in FLINK-21915, but since
each partition now finishes individually, that optimization is no longer valid.
As we tested on a job with two vertices (parallelism 8k, all-to-all, batch
mode), it takes less than five seconds.*
*3. `SchedulingDownstreamTasksInBatchJobBenchmark` is modified in accordance
with this change. We need to monitor the results of this benchmark.*
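One common way to keep per-event scheduling work constant (a hedged sketch of the complexity concern in point 1, not Flink's implementation; `GroupCounterDemo` and its methods are hypothetical) is to keep a counter of unfinished partitions per consumed group, so each "partition finished" event is O(1) instead of an O(n) rescan of the group:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a per-group counter of unfinished partitions. Each
// finish event decrements one counter, so n finish events cost O(n) total
// rather than the O(n^2) of rescanning the whole group on every event.
class GroupCounterDemo {
    private final Map<String, Integer> unfinished = new HashMap<>();

    void addGroup(String groupId, int numPartitions) {
        unfinished.put(groupId, numPartitions);
    }

    // Returns true exactly when the whole group has just become finished.
    boolean partitionFinished(String groupId) {
        int remaining = unfinished.merge(groupId, -1, Integer::sum);
        return remaining == 0;
    }

    public static void main(String[] args) {
        GroupCounterDemo demo = new GroupCounterDemo();
        demo.addGroup("group", 3);
        System.out.println(demo.partitionFinished("group")); // false
        System.out.println(demo.partitionFinished("group")); // false
        System.out.println(demo.partitionFinished("group")); // true
    }
}
```
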
## Brief change log
- *Make it possible to get the ConsumedPartitionGroup that an
IntermediateResultPartition or a DefaultResultPartition belongs to*
- *A blocking result partition becomes consumable individually once its
producer finishes; it doesn't need to wait until all other
IntermediateResultPartitions belonging to the same IntermediateResult have
finished*
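The first change-log item amounts to a reverse lookup from a partition to its consumer groups. A minimal sketch of that shape (hypothetical names throughout; Flink's actual ConsumedPartitionGroup accessors may differ):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: each result partition keeps the consumer groups it
// belongs to, so the scheduler can look them up directly rather than walking
// the whole IntermediateResult.
class PartitionGroupsDemo {
    private final Map<String, List<String>> groupsByPartition = new HashMap<>();

    void registerConsumedGroup(String partitionId, String groupId) {
        groupsByPartition
                .computeIfAbsent(partitionId, k -> new ArrayList<>())
                .add(groupId);
    }

    List<String> getConsumedPartitionGroups(String partitionId) {
        return groupsByPartition.getOrDefault(partitionId, Collections.emptyList());
    }

    public static void main(String[] args) {
        PartitionGroupsDemo demo = new PartitionGroupsDemo();
        demo.registerConsumedGroup("p1", "groupA");
        System.out.println(demo.getConsumedPartitionGroups("p1")); // [groupA]
    }
}
```
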
## Verifying this change
This change added tests and can be verified as follows:
- *Added unit tests for getting ConsumedPartitionGroup from
IntermediateResultPartition and DefaultResultPartition*
- *Added unit tests for scheduling pointwise vertices in the batch job*
- *Extended the unit tests that schedule vertices in the graph illustrated
in FLINK-22017*
- *Manually verified the change by running a job with two job vertices, both
with parallelism 8k. Two distribution patterns (pointwise and all-to-all) and
two job types (batch and streaming) are covered. All jobs finish correctly.*
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (yes / **no**)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (yes / **no**)
- The serializers: (yes / **no** / don't know)
- The runtime per-record code paths (performance sensitive): (yes / **no**
/ don't know)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (**yes** / no / don't
know)
- The S3 file system connector: (yes / **no** / don't know)
## Documentation
- Does this pull request introduce a new feature? (yes / **no**)
- If yes, how is the feature documented? (**not applicable** / docs /
JavaDocs / not documented)
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]