[ 
https://issues.apache.org/jira/browse/BEAM-9434?focusedWorklogId=417100&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-417100
 ]

ASF GitHub Bot logged work on BEAM-9434:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Apr/20 20:36
            Start Date: 06/Apr/20 20:36
    Worklog Time Spent: 10m 
      Work Description: ecapoccia commented on pull request #11037: [BEAM-9434] 
performance improvements reading many Avro files in S3
URL: https://github.com/apache/beam/pull/11037#discussion_r404372090
 
 

 ##########
 File path: 
runners/spark/src/main/java/org/apache/beam/runners/spark/translation/GroupCombineFunctions.java
 ##########
 @@ -184,11 +184,11 @@
 
   /** An implementation of {@link Reshuffle} for the Spark runner. */
   public static <T> JavaRDD<WindowedValue<T>> reshuffle(
-      JavaRDD<WindowedValue<T>> rdd, WindowedValueCoder<T> wvCoder) {
 +      JavaRDD<WindowedValue<T>> rdd, WindowedValueCoder<T> wvCoder, int numPartitions) {
 
 Review comment:
   @iemejia @lukecwik ok, I will do the changes. The only reason I opted to 
change only the batch pipeline for Spark is that it is the only one I am in a 
position to test thoroughly in a real environment. However, analysing the code 
suggests that changing all pipelines does no harm, so I'll go for the suggested changes.
   Will submit the squashed PR soon. In the meantime, thanks for the review.
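 
   For reference, a minimal self-contained sketch (illustrative only, not the 
actual GroupCombineFunctions code, which also round-trips elements through the 
WindowedValueCoder) of how an explicit numPartitions argument can drive a Spark 
shuffle:
{code:java}
import java.util.concurrent.ThreadLocalRandom;

import org.apache.spark.HashPartitioner;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;

import scala.Tuple2;

final class ReshuffleSketch {

  /** Redistributes {@code rdd} evenly over exactly {@code numPartitions} partitions. */
  static <T> JavaRDD<T> reshuffle(JavaRDD<T> rdd, int numPartitions) {
    // Key each element with a pseudo-random bucket so data spreads evenly.
    JavaPairRDD<Integer, T> keyed =
        rdd.mapToPair(
            t -> new Tuple2<>(ThreadLocalRandom.current().nextInt(numPartitions), t));
    // Force a shuffle sized by the caller instead of by rdd.getNumPartitions().
    return keyed.partitionBy(new HashPartitioner(numPartitions)).values();
  }

  private ReshuffleSketch() {}
}
{code}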
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 417100)
    Time Spent: 3h 40m  (was: 3.5h)

> Improve Spark runner reshuffle translation to maximize parallelism
> ------------------------------------------------------------------
>
>                 Key: BEAM-9434
>                 URL: https://issues.apache.org/jira/browse/BEAM-9434
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-spark
>    Affects Versions: 2.19.0
>            Reporter: Emiliano Capoccia
>            Assignee: Emiliano Capoccia
>            Priority: Minor
>             Fix For: 2.21.0
>
>          Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> There is a performance issue when processing a large number of small Avro 
> files (tens of thousands or more) in Spark on k8s.
> The recommended way of reading a pattern of Avro files in Beam is by means of:
>  
> {code:java}
> PCollection<AvroGenClass> records =
>     p.apply(
>         AvroIO.read(AvroGenClass.class)
>             .from("s3://my-bucket/path-to/*.avro")
>             .withHintMatchesManyFiles());
> {code}
> However, in the case of many small files, the above results in all of the 
> reading taking place in a single task/node, which is considerably slow and 
> does not scale.
> The option of omitting the hint is not viable, as it results in too many 
> tasks being spawned, with the cluster spending its time coordinating tiny 
> tasks at high overhead.
> There are a few workarounds on the internet, which mainly revolve around 
> compacting the input files before processing, so that a smaller number of 
> larger files is processed in parallel.
> It seems the Spark runner uses the parallelism of the input distributed 
> collection (RDD) to calculate the number of partitions in Reshuffle. In the 
> case of FileIO/AvroIO, if the input is specified as a pattern, the size of 
> the input is 1, which is far from an optimal parallelism value. We may fix 
> this by improving the translation of Reshuffle to maximize parallelism.
>  
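> A hedged sketch of the idea above (names are illustrative, not the actual 
> runner code): when the upstream RDD has collapsed to a single partition, 
> size the shuffle by the cluster's default parallelism rather than by the 
> input.
> {code:java}
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.JavaSparkContext;
>
> final class PartitionSizing {
>
>   /** Picks a partition count for reshuffling {@code upstream}. */
>   static <T> int targetPartitions(JavaRDD<T> upstream, JavaSparkContext jsc) {
>     // A single-partition input (e.g. a file pattern expanded by one task)
>     // gives no useful hint, so fall back to the cluster's default parallelism.
>     return Math.max(upstream.getNumPartitions(), jsc.defaultParallelism());
>   }
>
>   private PartitionSizing() {}
> }
> {code}
>  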



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
