Peter Backx created BEAM-7413:
---------------------------------
Summary: Huge amount of tasks per stage in Spark runner after
upgrade to Beam 2.12.0
Key: BEAM-7413
URL: https://issues.apache.org/jira/browse/BEAM-7413
Project: Beam
Issue Type: Bug
Components: runner-spark
Affects Versions: 2.12.0
Reporter: Peter Backx
After upgrading from Beam 2.8.0 to 2.12.0, we see a huge number of tasks per
stage in our pipelines. Where we used to see a few thousand tasks per stage at
most, the count is now in the millions. This makes the pipeline unable to
complete successfully (the driver and network are overloaded).
It looks like after each (Co)GroupByKey operation, the number of tasks per
stage at least doubles, and sometimes grows even more.
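To make the shape of the problem concrete, here is a stripped-down sketch of
the kind of join we chain (the names and inputs are made up for illustration,
not taken from our real pipeline):

{code:java}
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.join.CoGbkResult;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;

public class ChainedJoinSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();

    // Two small inputs standing in for our real PCollections.
    PCollection<KV<String, Integer>> left =
        p.apply("Left", Create.of(KV.of("a", 1), KV.of("b", 2)));
    PCollection<KV<String, Integer>> right =
        p.apply("Right", Create.of(KV.of("a", 3), KV.of("b", 4)));

    TupleTag<Integer> leftTag = new TupleTag<Integer>() {};
    TupleTag<Integer> rightTag = new TupleTag<Integer>() {};

    // On the Spark runner, the tasks per stage appear to at least
    // double after each join like this one; chaining several joins
    // compounds the growth.
    PCollection<KV<String, CoGbkResult>> joined =
        KeyedPCollectionTuple.of(leftTag, left)
            .and(rightTag, right)
            .apply("Join", CoGroupByKey.create());

    p.run().waitUntilFinish();
  }
}
{code}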
I did notice a fix to GroupByKey (BEAM-5392) that may or may not be related.
I cannot post the full pipeline, but we have created a small test to showcase
the effect:
[https://github.com/pbackx/beam-groupbykey-test]
[https://github.com/pbackx/beam-groupbykey-test/blob/master/src/test/java/NumTaskTest.java]
contains two tests:
* One shows how we usually join PCollections together; if you run it, you'll
see the number of tasks gradually increase.
* The other applies a GroupIntoBatches operation after each join, and the
task count no longer grows. (Reshuffle has a similar effect, but it is
deprecated.) A stripped-down sketch of this workaround follows below.
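For completeness, a minimal, self-contained sketch of the workaround (the
transform names and the batch size are placeholders of our own, not taken
from the real pipeline or the linked test):

{code:java}
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

public class BatchWorkaroundSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create();

    // Stand-in for the keyed output of one of our joins.
    PCollection<KV<String, Integer>> joined =
        p.apply("JoinedOutput", Create.of(KV.of("a", 1), KV.of("a", 2)));

    // Applying GroupIntoBatches right after the join forces an extra
    // shuffle; with this in place, the task count stops growing.
    // The batch size of 1000 is arbitrary.
    PCollection<KV<String, Iterable<Integer>>> batched =
        joined.apply("BreakTaskGrowth",
            GroupIntoBatches.<String, Integer>ofSize(1000));

    p.run().waitUntilFinish();
  }
}
{code}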
We've now sprinkled GroupIntoBatches throughout our pipeline, and this seems
to avoid the issue, but at a cost in performance (to be fair, the cost is much
worse in the toy example than in our "real" pipeline).
My questions:
* Is this a bug, or is this expected behavior?
* Is GroupIntoBatches the best workaround, or are there better options?