[ 
https://issues.apache.org/jira/browse/BEAM-8191?focusedWorklogId=318574&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-318574
 ]

ASF GitHub Bot logged work on BEAM-8191:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 25/Sep/19 20:23
            Start Date: 25/Sep/19 20:23
    Worklog Time Spent: 10m 
      Work Description: pbackx commented on pull request #9544: [BEAM-8191] 
Fixes potentially large number of tasks on Spark after Flatten.pCollections()
URL: https://github.com/apache/beam/pull/9544#discussion_r328322683
 
 

 ##########
 File path: 
runners/spark/src/main/java/org/apache/beam/runners/spark/translation/SparkBatchPortablePipelineTranslator.java
 ##########
 @@ -353,6 +353,11 @@ public void setName(String name) {
         index++;
       }
       unionRDD = context.getSparkContext().union(rdds);
+
+      Partitioner partitioner = getPartitioner(context);
 
 Review comment:
   Hi @RyanSkraba, thanks for reviewing.
   The default parallelism is kind of arbitrary, I agree, and I also wanted a 
better option.
   
   I'm not sure the threshold should be that large. On our 150-machine 
cluster with 13 usable cores per machine and 2 tasks per core, we have a 
parallelism of 3900. Multiplying that by 5000 is exactly what is causing us 
trouble at the moment.
   
   Maybe this threshold could be a Spark pipeline option? Something like 
"spark.max.parallelism"?
   
   Regarding the note about multiplying by 2 or 3: I already take that into 
account when setting the default parallelism, so I don't think it is needed 
here. Maybe that value could be replaced by the max-parallelism config option?
   
   Are you open to adding an extra pipeline option? That could solve this 
problem:
   * If I want to keep the maximum parallelism at the default parallelism, I 
can set both values through the pipeline options.
   * The default value could be positive infinity, which would preserve the 
exact behavior we have today.
   
   Does that sound like a good idea?
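   
   For concreteness, here is a rough sketch of how the translator could apply 
such an option right after the union (the options accessor shown is 
illustrative, not the translator's actual API):

```java
// Sketch only: cap the partition count after the union. getMaxParallelism()
// is the hypothetical option sketched above; the accessor is illustrative.
JavaRDD<WindowedValue<byte[]>> unionRDD = context.getSparkContext().union(rdds);

long maxParallelism =
    context.getPipelineOptions().as(MaxParallelismOptions.class).getMaxParallelism();
if (unionRDD.getNumPartitions() > maxParallelism) {
  // coalesce() only merges existing partitions; it does not shuffle.
  unionRDD = unionRDD.coalesce((int) maxParallelism);
}
```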
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 318574)
    Time Spent: 1h 20m  (was: 1h 10m)

> Multiple Flatten.pCollections() transforms generate an overwhelming number of 
> tasks
> -----------------------------------------------------------------------------------
>
>                 Key: BEAM-8191
>                 URL: https://issues.apache.org/jira/browse/BEAM-8191
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>    Affects Versions: 2.12.0, 2.14.0, 2.15.0
>            Reporter: Peter Backx
>            Assignee: Peter Backx
>            Priority: Major
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Flatten.pCollections() is translated into a Spark union operation. The 
> resulting RDD has the sum of the partitions of the originating RDDs: if you 
> flatten 2 PCollections with 10 partitions each, the result has 20 partitions 
> (a minimal sketch illustrating this follows after this message).
> This is fine in small pipelines, but in our main pipeline the number of 
> tasks grows out of hand quite easily (over 500k tasks in one stage). This 
> overloads the driver and crashes the process.
> I have created a small repro case:
> [https://github.com/pbackx/beam-flatmap-test]
>  
> A possible solution is to add a coalesce call after the union. We have been 
> testing this and it seems to do exactly what we want, but I'm not sure 
> whether this fix is applicable in all cases. 
> I will open a PR for this so that you can review my proposed change and 
> discuss whether or not it's a good idea.
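
For illustration, a minimal standalone Spark snippet (independent of the 
linked repro, which is the authoritative reproduction) showing the 
partition-summing behavior described above:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class UnionPartitionsDemo {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("union-demo");
    try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
      JavaRDD<Integer> a = jsc.parallelize(Arrays.asList(1, 2, 3), 10);
      JavaRDD<Integer> b = jsc.parallelize(Arrays.asList(4, 5, 6), 10);

      // union() concatenates partitions: 10 + 10 = 20. Chain several unions
      // and the count keeps growing, which is how a stage reaches 500k tasks.
      JavaRDD<Integer> u = a.union(b);
      System.out.println(u.getNumPartitions()); // 20

      // coalesce() merges partitions back down without a shuffle.
      System.out.println(u.coalesce(10).getNumPartitions()); // 10
    }
  }
}
```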



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
