[ https://issues.apache.org/jira/browse/BEAM-8191?focusedWorklogId=317647&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-317647 ]
ASF GitHub Bot logged work on BEAM-8191:
----------------------------------------
Author: ASF GitHub Bot
Created on: 24/Sep/19 16:45
Start Date: 24/Sep/19 16:45
Worklog Time Spent: 10m
Work Description: RyanSkraba commented on pull request #9544: [BEAM-8191]
Fixes potentially large number of tasks on Spark after Flatten.pCollections()
URL: https://github.com/apache/beam/pull/9544#discussion_r327721690
##########
File path: runners/spark/src/main/java/org/apache/beam/runners/spark/translation/SparkBatchPortablePipelineTranslator.java
##########
@@ -353,6 +353,11 @@ public void setName(String name) {
       index++;
     }
     unionRDD = context.getSparkContext().union(rdds);
+
+    Partitioner partitioner = getPartitioner(context);
Review comment:
In principle, I don't see any problem with coalescing partitions to avoid the
overhead of too many partitions -- it's certainly something a developer would
do when hand-coding Spark jobs.
In this case, though, I'm not sure about the logic -- as far as I can tell,
this will *always* coalesce to `sc.defaultParallelism()` (or not at all, if
bundleSize is set in the pipeline options).
As I understand it, bundleSize is only relevant for the sourceRDD, but the
union causing the partition explosion can be anywhere in the DAG.
What do you think about checking the number of partitions in the unionRDD
against a threshold, and doing the coalesce only if there are "too many"?
What do you think about something like:
```
if (unionRDD.getNumPartitions() > sc.defaultParallelism() * THRESHOLD) {
  // * 2, or * 3 according to the Spark tuning recommendations
  unionRDD = unionRDD.coalesce(sc.defaultParallelism() * 2);
}
```
Even setting the threshold to a large but non-negligible fixed value like
5-10K might be enough.
What do you think?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 317647)
Time Spent: 1h (was: 50m)
> Multiple Flatten.pCollections() transforms generate an overwhelming number of
> tasks
> -----------------------------------------------------------------------------------
>
> Key: BEAM-8191
> URL: https://issues.apache.org/jira/browse/BEAM-8191
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Affects Versions: 2.12.0, 2.14.0, 2.15.0
> Reporter: Peter Backx
> Assignee: Peter Backx
> Priority: Major
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Flatten.pCollections() is translated into a Spark union operation. The
> resulting RDD has the sum of the partitions of the originating RDDs: if you
> flatten 2 PCollections with 10 partitions each, the result will have 20
> partitions.
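>
> A minimal sketch in plain Spark (outside Beam) illustrating this; `sc` is
> assumed to be an existing JavaSparkContext and the data is purely
> illustrative:
> ```
> import java.util.Arrays;
> import org.apache.spark.api.java.JavaRDD;
>
> JavaRDD<Integer> a = sc.parallelize(Arrays.asList(1, 2, 3), 10); // 10 partitions
> JavaRDD<Integer> b = sc.parallelize(Arrays.asList(4, 5, 6), 10); // 10 partitions
> JavaRDD<Integer> union = a.union(b);
> // union concatenates the partition lists instead of merging them:
> System.out.println(union.getNumPartitions()); // prints 20
> ```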
> This is fine in small pipelines, but in our main pipeline the number of tasks
> grows out of hand quickly (over 500k tasks in one stage). This overloads the
> driver and crashes the process.
> I have created a small repro case:
> [https://github.com/pbackx/beam-flatmap-test]
>
> A possible solution is to add a coalesce call after the union. We have been
> testing this and it seems to do exactly what we want, but I'm not sure whether
> this fix is applicable in all cases.
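>
> A rough sketch of that idea against the translator code quoted above
> (placement and naming are hypothetical; the calls are the plain Spark RDD
> API):
> ```
> unionRDD = context.getSparkContext().union(rdds);
> // Cap the partition count back down to the default parallelism.
> unionRDD = unionRDD.coalesce(context.getSparkContext().defaultParallelism());
> ```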
> I will open a PR for this so that you can review my proposed change and
> discuss whether or not it's a good idea.
--
This message was sent by Atlassian Jira (v8.3.4#803005)