[
https://issues.apache.org/jira/browse/BEAM-5392?focusedWorklogId=196901&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-196901
]
ASF GitHub Bot logged work on BEAM-5392:
----------------------------------------
Author: ASF GitHub Bot
Created on: 11/Feb/19 11:58
Start Date: 11/Feb/19 11:58
Worklog Time Spent: 10m
Work Description: mareksimunek commented on issue #7601: [BEAM-5392]
GroupByKey optimized for non-merging windows
URL: https://github.com/apache/beam/pull/7601#issuecomment-462303073
I rebased on the current `master` and got errors unrelated to this PR:
`Execution failed for task ':beam-runners-flink_2.11:test'.`
`Execution failed for task ':beam-runners-direct-java:needsRunnerTests'.`
`Execution failed for task ':beam-examples-java:flinkRunnerPreCommit'.`
Any ideas?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 196901)
Time Spent: 6h 50m (was: 6h 40m)
> GroupByKey on Spark: All values for a single key need to fit in-memory at once
> ------------------------------------------------------------------------------
>
> Key: BEAM-5392
> URL: https://issues.apache.org/jira/browse/BEAM-5392
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Affects Versions: 2.6.0
> Reporter: David Moravek
> Assignee: David Moravek
> Priority: Major
> Labels: performance, triaged
> Time Spent: 6h 50m
> Remaining Estimate: 0h
>
> Currently, when using GroupByKey, all values for a single key need to fit
> in memory at once.
>
> The following issues need to be addressed:
> a) We cannot use Spark's _groupByKey_, because it requires all values for a
> single key to fit in memory (it is implemented as a "list combiner"); see the
> sketch below.
> b) _ReduceFnRunner_ iterates over the values multiple times in order to also
> group by window.
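>
> For illustration, a minimal sketch of issue a) (hypothetical class and
> variable names, not runner code): Spark's {{groupByKey}} materializes every
> value for a key as a single in-memory {{Iterable}}.
>
> {code:java}
> import org.apache.spark.api.java.JavaPairRDD;
>
> class GroupByKeySketch {
>   // groupByKey collects every value of a key into one in-memory buffer
>   // (effectively a list combiner) before handing it downstream, so a
>   // single hot key can exhaust an executor's heap.
>   static JavaPairRDD<String, Iterable<Integer>> group(
>       JavaPairRDD<String, Integer> pairs) {
>     return pairs.groupByKey();
>   }
> }
> {code}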
>
> Solution:
>
> In the Dataflow Worker code, there are optimized versions of ReduceFnRunner
> that can take advantage of the elements for a single key being sorted by
> timestamp.
>
> We can use Spark's {{repartitionAndSortWithinPartitions}} in order to meet
> this constraint.
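>
> A minimal sketch of the secondary-sort pattern this relies on (the class
> names {{KeyAndTimestamp}}, {{KeyOnlyPartitioner}} and {{KeyThenTimestamp}}
> are assumptions for illustration, not the actual patch):
>
> {code:java}
> import java.io.Serializable;
> import java.util.Comparator;
> import org.apache.spark.Partitioner;
> import org.apache.spark.api.java.JavaPairRDD;
>
> // Composite key: partition on the user key only, but sort on the whole
> // composite so each key's values arrive contiguously, in timestamp order.
> class KeyAndTimestamp implements Serializable {
>   final String key;
>   final long timestampMillis;
>   KeyAndTimestamp(String key, long timestampMillis) {
>     this.key = key;
>     this.timestampMillis = timestampMillis;
>   }
> }
>
> class KeyOnlyPartitioner extends Partitioner {
>   private final int numPartitions;
>   KeyOnlyPartitioner(int numPartitions) { this.numPartitions = numPartitions; }
>   @Override public int numPartitions() { return numPartitions; }
>   @Override public int getPartition(Object compositeKey) {
>     // Ignore the timestamp, so all records of one key share a partition.
>     String key = ((KeyAndTimestamp) compositeKey).key;
>     return Math.floorMod(key.hashCode(), numPartitions);
>   }
> }
>
> class KeyThenTimestamp implements Comparator<KeyAndTimestamp>, Serializable {
>   @Override public int compare(KeyAndTimestamp a, KeyAndTimestamp b) {
>     int byKey = a.key.compareTo(b.key);
>     return byKey != 0 ? byKey : Long.compare(a.timestampMillis, b.timestampMillis);
>   }
> }
>
> class SortedGroupingSketch {
>   // Downstream consumers can now stream each key's values in timestamp
>   // order without buffering the whole key in memory.
>   static <V> JavaPairRDD<KeyAndTimestamp, V> sortWithinKeys(
>       JavaPairRDD<KeyAndTimestamp, V> input, int numPartitions) {
>     return input.repartitionAndSortWithinPartitions(
>         new KeyOnlyPartitioner(numPartitions), new KeyThenTimestamp());
>   }
> }
> {code}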
>
> For non-merging windows, we can put the window itself into the key,
> resulting in smaller groupings.
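>
> A sketch of the window-in-key idea, assuming Beam's {{WindowedValue}} and
> {{KV}} types and a hypothetical helper name:
>
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
> import org.apache.beam.sdk.util.WindowedValue;
> import org.apache.beam.sdk.values.KV;
> import org.apache.spark.api.java.JavaPairRDD;
> import org.apache.spark.api.java.JavaRDD;
> import scala.Tuple2;
>
> class WindowInKeySketch {
>   // Lift the window into the shuffle key; this is valid because with
>   // non-merging windowing an element's windows are final once assigned.
>   static JavaPairRDD<KV<String, BoundedWindow>, WindowedValue<KV<String, Integer>>>
>       keyByKeyAndWindow(JavaRDD<WindowedValue<KV<String, Integer>>> input) {
>     return input.flatMapToPair(wv -> {
>       List<Tuple2<KV<String, BoundedWindow>, WindowedValue<KV<String, Integer>>>> out =
>           new ArrayList<>();
>       for (BoundedWindow w : wv.getWindows()) {
>         // Window types such as IntervalWindow define equals/hashCode,
>         // so (key, window) works as a shuffle key.
>         out.add(new Tuple2<>(KV.of(wv.getValue().getKey(), w), wv));
>       }
>       return out.iterator();
>     });
>   }
> }
> {code}
>
> Groups are then per (key, window), so each grouping is smaller and
> ReduceFnRunner no longer has to re-split a key's values by window.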
>
> This approach has already been tested at ~100TB input scale on Spark 2.3.x
> (custom Spark runner).
>
> I'll submit a patch once the Dataflow Worker code donation is complete.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)