[ 
https://issues.apache.org/jira/browse/BEAM-9434?focusedWorklogId=400436&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-400436
 ]

ASF GitHub Bot logged work on BEAM-9434:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Mar/20 23:10
            Start Date: 09/Mar/20 23:10
    Worklog Time Spent: 10m 
      Work Description: ecapoccia commented on issue #11037: [BEAM-9434] 
performance improvements reading many Avro files in S3
URL: https://github.com/apache/beam/pull/11037#issuecomment-596822153
 
 
   Unfortunately, the problem does happen for me; that is why this work 
started. Let's see if we can understand its root cause.
   
   > Reshuffle should ensure that there is a repartition between MatchAll and 
ReadMatches, is it missing (it is difficult to tell from your screenshots)? If 
it isn't missing, then why is the following stage only executing on a single 
machine (since repartition shouldn't be restricting output to only a single 
machine)?
   
   It's clearly not missing, since in the base case I'm using 
withHintMatchesManyFiles().
   Still, the entire read happens on one machine (see the second-to-last 
screenshot, "summary metrics for 2 completed tasks"). My impression is that 
when the physical plan is created, only one task is generated, and that task 
is bound to do the entire read on a single executor. Note that I am doing 
something really plain: just reading from two buckets, joining the records, 
and writing them back to S3. Did you try this yourself to see if you can 
reproduce the issue?
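
   For context, this is how I picture the hinted read once it is written out 
explicitly with FileIO (a sketch of my understanding, not the authoritative 
expansion; AvroGenClass and the bucket path are just the placeholders from 
the issue):

{code:java}
import org.apache.beam.sdk.io.AvroIO;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.values.PCollection;

// Match the pattern, repartition the matched file metadata, then read each
// file; the explicit Reshuffle is what should spread the reads across workers.
PCollection<AvroGenClass> records = p
    .apply(FileIO.match().filepattern("s3://my-bucket/path-to/*.avro"))
    .apply(FileIO.readMatches())
    .apply(Reshuffle.viaRandomKey()) // explicit repartition before the reads
    .apply(AvroIO.readFiles(AvroGenClass.class));
{code}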
   
   I had a look at the code of Reshuffle.expand() and Reshuffle.ViaRandomKey, 
but I still have some doubts about the expected behaviour in terms of 
machines and partitions.
   
   How many partitions should Reshuffle create? Will there be one task per 
partition? And how are the tasks ultimately assigned to the executors?
   Maybe you can help me understand the above, or point me to the relevant 
documentation; that should help me troubleshoot this.
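
   To make the question concrete, my mental model of Reshuffle.ViaRandomKey 
is roughly the following (a simplified sketch based on my reading of the 
code, shown here for a PCollection of Strings; the real transform also 
preserves windows and timestamps):

{code:java}
import java.util.concurrent.ThreadLocalRandom;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.Values;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

// Tag each element with a random key, force a shuffle with GroupByKey, then
// drop the keys; how many partitions the shuffle produces is up to the runner.
PCollection<String> reshuffled = input
    .apply(WithKeys.of((String e) -> ThreadLocalRandom.current().nextInt())
        .withKeyType(TypeDescriptors.integers()))
    .apply(GroupByKey.create())
    .apply(Values.create())       // discard the random keys
    .apply(Flatten.iterables());  // un-group back to individual elements
{code}

   If that model is right, then my question above amounts to asking who picks 
the number of partitions for that GroupByKey on the Spark runner.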
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 400436)
    Time Spent: 2.5h  (was: 2h 20m)

> Performance improvements processing a large number of Avro files in S3+Spark
> ----------------------------------------------------------------------------
>
>                 Key: BEAM-9434
>                 URL: https://issues.apache.org/jira/browse/BEAM-9434
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-aws, sdk-java-core
>    Affects Versions: 2.19.0
>            Reporter: Emiliano Capoccia
>            Assignee: Emiliano Capoccia
>            Priority: Minor
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> There is a performance issue when processing a large number of small Avro 
> files (tens of thousands or more) in Spark on K8S.
> The recommended way of reading a pattern of Avro files in Beam is by means of:
>  
> {code:java}
> PCollection<AvroGenClass> records = p.apply(
>     AvroIO.read(AvroGenClass.class)
>         .from("s3://my-bucket/path-to/*.avro")
>         .withHintMatchesManyFiles());
> {code}
> However, in the case of many small files, the above results in the entire 
> read taking place in a single task/node, which is considerably slow and 
> does not scale.
> Omitting the hint is not a viable option either, as it results in too many 
> tasks being spawned and the cluster spending most of its time coordinating 
> tiny tasks with high overhead.
> There are a few workarounds on the internet, which mainly revolve around 
> compacting the input files before processing, so that a smaller number of 
> bulky files is read in parallel (see the sketch below).
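> Such a compaction pass might look like this (a one-off job; the output 
> path and shard count are illustrative, not prescriptive):
> {code:java}
> // Rewrite the many small input files into a fixed, small number of larger
> // Avro files, which the main pipeline can then read in parallel.
> p.apply(AvroIO.read(AvroGenClass.class)
>         .from("s3://my-bucket/path-to/*.avro"))
>  .apply(AvroIO.write(AvroGenClass.class)
>         .to("s3://my-bucket/compacted/part")
>         .withNumShards(64)
>         .withSuffix(".avro"));
> {code}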
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
