[ https://issues.apache.org/jira/browse/BEAM-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854487#comment-16854487 ]

Peter Backx commented on BEAM-7413:
-----------------------------------

After more debugging I have been able to track down the source of the issue:
 * The original groupByKey implementation uses this partitioner:
{code:java}
new HashPartitioner(context.getSparkContext().defaultParallelism()){code}

 * The new GroupNonMergingWindowsFunctions groupByKey implementation 
(BEAM-5392) creates the following partitioner:
{code:java}
new HashPartitioner(rdd.getNumPartitions()){code}

The problem with this second partitioner is that the number of tasks will never 
decrease, but will keep increasing with every group operation that is performed.
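
To make the compounding concrete, here is a minimal, self-contained sketch in 
plain Spark (not Beam internals; the class name, local master, and loop count 
are arbitrary). Each union adds the partition counts of its inputs, and a 
groupByKey keyed to rdd.getNumPartitions() locks in the inflated count, whereas 
HashPartitioner(defaultParallelism()) would reset it:
{code:java}
import java.util.Arrays;

import org.apache.spark.HashPartitioner;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class PartitionGrowthDemo {
  public static void main(String[] args) {
    JavaSparkContext sc = new JavaSparkContext("local[4]", "partition-growth");

    JavaPairRDD<String, Integer> rdd = sc.parallelizePairs(
        Arrays.asList(new Tuple2<>("a", 1), new Tuple2<>("b", 2)));

    for (int i = 0; i < 5; i++) {
      // Flatten: union adds the partition counts of its inputs.
      rdd = rdd.union(rdd);
      // Grouping with HashPartitioner(rdd.getNumPartitions()) preserves the
      // inflated count; HashPartitioner(sc.defaultParallelism()) would reset it.
      rdd = rdd.groupByKey(new HashPartitioner(rdd.getNumPartitions()))
          // mapToPair stands in for a downstream ParDo; it drops the
          // partitioner, so the next union again adds partition counts.
          .mapToPair(kv -> new Tuple2<>(kv._1(), 1));
      System.out.println("round " + i + ": " + rdd.getNumPartitions() + " partitions");
    }
    sc.stop();
  }
}{code}

Running this prints a partition count that doubles every round (8, 16, 32, ...), 
which matches the per-stage task explosion described below.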

In fact, it looks like this is a side-effect of the refactoring that happened 
in BEAM-4783, and I think it can be debated whether the 
TransformTranslator#flattenPColl method should also perform a "coalesce" on the 
RDD to reduce the number of partitions back to the cluster's default 
parallelism.
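
For reference, such a coalesce could look roughly like the following (a sketch 
only; flattenCoalesced is a hypothetical helper, not the actual 
TransformTranslator code):
{code:java}
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class FlattenCoalesce {
  /**
   * Hypothetical flatten helper: union the inputs (which adds their partition
   * counts) and, if the result exceeds the default parallelism, coalesce back
   * down. coalesce() merges partitions without a full shuffle, unlike
   * repartition().
   */
  static <T> JavaRDD<T> flattenCoalesced(JavaSparkContext sc, JavaRDD<T> a, JavaRDD<T> b) {
    JavaRDD<T> flattened = a.union(b);
    int target = sc.defaultParallelism();
    return flattened.getNumPartitions() > target ? flattened.coalesce(target) : flattened;
  }
}{code}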

I will submit a PR with a quick (but correct) fix for this issue while we 
continue experimenting with whether a "coalesce" in flatten is a good idea 
(all ideas welcome!).

> Huge amount of tasks per stage in Spark runner after upgrade to Beam 2.12.0
> ---------------------------------------------------------------------------
>
>                 Key: BEAM-7413
>                 URL: https://issues.apache.org/jira/browse/BEAM-7413
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>    Affects Versions: 2.12.0
>            Reporter: Peter Backx
>            Priority: Major
>
> After upgrading from Beam 2.8.0 to 2.12.0 we see a huge number of tasks per 
> stage in our pipelines. Where we used to see a few thousand tasks per stage 
> at most, it's now in the millions. This makes the pipeline unable to complete 
> successfully (the driver and network are overloaded).
> It looks like after each (Co)GroupByKey operation, the number of tasks (per 
> stage) at least doubles, sometimes more.
> I did notice a fix to GroupByKey (BEAM-5392) that may or may not be related.
> I cannot post the full pipeline, but we have created a small test to showcase 
> the effect:
> [https://github.com/pbackx/beam-groupbykey-test]
>  
> [https://github.com/pbackx/beam-groupbykey-test/blob/master/src/test/java/NumTaskTest.java]
>  contains two tests:
>  * One shows how we would usually join PCollections together; if you run it, 
> you'll see the number of tasks gradually increase.
>  * The other uses a GroupIntoBatches operation after each join, and with it 
> there is no longer an increase in tasks (the Reshuffle operation has a 
> similar effect, but it's deprecated...). A sketch of this workaround follows 
> below.
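> A minimal sketch of the GroupIntoBatches workaround (illustrative only; this 
> is not the code from the linked repo, and the batch size of 100 is arbitrary):
> {code:java}
> import org.apache.beam.sdk.Pipeline;
> import org.apache.beam.sdk.options.PipelineOptionsFactory;
> import org.apache.beam.sdk.transforms.Create;
> import org.apache.beam.sdk.transforms.GroupIntoBatches;
> import org.apache.beam.sdk.values.KV;
> import org.apache.beam.sdk.values.PCollection;
>
> public class BatchAfterJoin {
>   public static void main(String[] args) {
>     Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
>
>     // Stand-in for the output of a (Co)GroupByKey-based join.
>     PCollection<KV<String, Integer>> joined =
>         p.apply(Create.of(KV.of("a", 1), KV.of("b", 2)));
>
>     // Grouping into batches after each join keeps the task count flat;
>     // the batch size (100) is arbitrary for this sketch.
>     PCollection<KV<String, Iterable<Integer>>> batched =
>         joined.apply(GroupIntoBatches.ofSize(100));
>
>     p.run().waitUntilFinish();
>   }
> }{code}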
> We've now sprinkled GroupIntoBatches throughout our pipeline and this seems 
> to avoid the issue, but at the cost of performance (to be honest, this effect 
> is much worse in the toy example than in our "real" pipeline).
> My questions:
>  * Is this a bug or is this expected behavior?
>  * Is the GroupIntoBatches the best workaround or are there better options?
>  


