kennknowles opened a new issue, #19044: URL: https://github.com/apache/beam/issues/19044
On some occasions the input dataset may already be correctly shuffled (e.g. as a result of previous operations), which means that a subsequent grouping operation could leverage this and drop the unneeded shuffle. Example (pseudocode):

```
Dataset<Integer> input = ...

Dataset<Pair<Integer, Long>> counts1 = CountByKey.of(input)
    .keyBy(e -> e)
    .windowBy( /* some small window */ )
    .output();

Dataset<Pair<Integer, Long>> counts2 = SumByKey.of(counts1)
    .keyBy(Pair::getFirst)
    .windowBy( /* larger window */ )
    .output();
```

Now, the second `ReduceByKey` might already have the correct shuffle (depending on the runner), but it cannot leverage this, because it is not aware that the grouping key has not changed since the previous operation. Proposed change:

```
Dataset<Integer> input = ...

Dataset<Pair<Integer, Long>> counts1 = CountByKey.of(input)
    .keyBy(e -> e)
    .windowBy( /* some small window */ )
    .output();

Dataset<Pair<Integer, Long>> counts2 = SumByKey.of(counts1)
    .keyByLocally(Pair::getFirst)
    .windowBy( /* larger window */ )
    .output();
```

Introduce `keyByLocally` on keyed operations, which tells the runner that the grouping is preserved from one keyed operator to the next. This will probably require some support on the Beam SDK side, because this information has to be passed to the runner (so that e.g. the FlinkRunner can make use of something like `DataStreamUtils#reinterpretAsKeyedStream`).

Imported from Jira [BEAM-5330](https://issues.apache.org/jira/browse/BEAM-5330). Original Jira may contain additional context.

Reported by: janl.
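To illustrate why the second shuffle is redundant, here is a minimal plain-Java simulation (not Beam or Euphoria API; class and method names are hypothetical). It hash-partitions the input once, runs a per-partition CountByKey, and then runs a SumByKey on the same key without repartitioning — which is exactly the guarantee `keyByLocally` would let a runner exploit:

```java
import java.util.*;
import java.util.stream.*;

// Illustrative simulation, NOT Beam/Euphoria API: shows why a second
// aggregation keyed by the same key needs no re-shuffle.
public class ShuffleReuse {
    static final int PARTITIONS = 4;

    // Hash-partition elements by key, as the first (and only) shuffle would.
    static List<List<Integer>> partitionByKey(List<Integer> input) {
        List<List<Integer>> parts = new ArrayList<>();
        for (int i = 0; i < PARTITIONS; i++) parts.add(new ArrayList<>());
        for (int e : input) {
            parts.get(Math.floorMod(Integer.hashCode(e), PARTITIONS)).add(e);
        }
        return parts;
    }

    // CountByKey per partition, then SumByKey on the same key. Because the
    // key never changed, every key still lives in exactly one partition, so
    // the second stage is purely local.
    static Map<Integer, Long> countThenSum(List<Integer> input) {
        List<List<Integer>> shuffled = partitionByKey(input);

        // Stage 1: per-partition counts; these are globally correct because
        // each key's records are co-located after the single shuffle.
        List<Map<Integer, Long>> counts = shuffled.stream()
                .map(p -> p.stream()
                        .collect(Collectors.groupingBy(e -> e, Collectors.counting())))
                .collect(Collectors.toList());

        // Stage 2: merge per-partition results without repartitioning. Key
        // sets are disjoint across partitions, so no value is merged twice.
        Map<Integer, Long> result = new HashMap<>();
        for (Map<Integer, Long> part : counts) {
            part.forEach((k, v) -> result.merge(k, v, Long::sum));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(countThenSum(List.of(1, 2, 1, 3, 2, 1)));
    }
}
```

A runner that knows the key is unchanged can skip the second shuffle entirely; a runner that does not must conservatively re-shuffle before the second aggregation, paying serialization and network cost for no benefit.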
