[ https://issues.apache.org/jira/browse/BEAM-9434?focusedWorklogId=399535&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-399535 ]

ASF GitHub Bot logged work on BEAM-9434:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/Mar/20 04:24
            Start Date: 07/Mar/20 04:24
    Worklog Time Spent: 10m 
      Work Description: ecapoccia commented on issue #11037: [BEAM-9434] 
performance improvements reading many Avro files in S3
URL: https://github.com/apache/beam/pull/11037#issuecomment-596045046
 
 
   > The expansion for withHintMatchesManyFiles uses a reshuffle between the 
match and the actual reading of the file. The reshuffle allows the runner to 
balance the amount of work across as many nodes as it wants. The only thing 
being reshuffled is file metadata, so after that reshuffle the file reading 
should be distributed across several nodes.
   > 
   > In your reference run, when you say that "the entire reading taking place 
in a single task/node", was it that the match all happened on a single node or 
was it that the "read" happened all on a single node?
   
   Both.
   Reading the metadata wouldn't be a problem; it also happens on every node in 
the proposed PR.
   But the actual reading happens on a single node as well, with unacceptably 
high reading times.
   What you say applies to the case of "bulky" files: for those, the shuffling 
stage chunks the files and shuffles the reading of each chunk.
   However, my solution specifically targets the case where there is a high 
number of tiny files (I explained this in more detail in the Jira ticket).
   In this latter case, the latency of reading each file from S3 dominates, but 
no chunking or shuffling happens with standard Beam (see the sketch below).
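 
   For illustration, a minimal sketch of the expansion being discussed, written 
against Beam's Java FileIO/AvroIO with the reshuffle made explicit. AvroGenClass, 
the pipeline p, and the bucket path are placeholders taken from the ticket, not 
code from this PR:
 
{code:java}
import org.apache.beam.sdk.io.AvroIO;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.fs.MatchResult;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.values.PCollection;

// Matching produces only file metadata; no file contents are read yet.
PCollection<MatchResult.Metadata> matches =
    p.apply(FileIO.match().filepattern("s3://my-bucket/path-to/*.avro"));

PCollection<AvroGenClass> records =
    matches
        // The reshuffle redistributes the matched metadata across workers...
        .apply(Reshuffle.viaRandomKey())
        // ...so the per-file reads below can run on many nodes in parallel.
        .apply(FileIO.readMatches())
        .apply(AvroIO.readFiles(AvroGenClass.class));
{code}
 
   Note that even with well-distributed metadata, each tiny file is still opened 
and read whole by one worker, so the per-file S3 latency described above is not 
amortized.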
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 399535)
    Time Spent: 1h 20m  (was: 1h 10m)

> Performance improvements processing a large number of Avro files in S3+Spark
> ----------------------------------------------------------------------------
>
>                 Key: BEAM-9434
>                 URL: https://issues.apache.org/jira/browse/BEAM-9434
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-aws, sdk-java-core
>    Affects Versions: 2.19.0
>            Reporter: Emiliano Capoccia
>            Assignee: Emiliano Capoccia
>            Priority: Minor
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> There is a performance issue when processing a large number of small Avro 
> files (tens of thousands or more) in Spark on K8S.
> The recommended way of reading a pattern of Avro files in Beam is by means of:
>  
> {code:java}
> PCollection<AvroGenClass> records = p.apply(AvroIO.read(AvroGenClass.class)
>     .from("s3://my-bucket/path-to/*.avro")
>     .withHintMatchesManyFiles());
> {code}
> However, in the case of many small files, the above results in the entire 
> reading taking place in a single task/node, which is considerably slow and 
> has scalability issues.
> The option of omitting the hint is not viable either, as it results in too 
> many tasks being spawned, and the cluster stays busy coordinating tiny tasks 
> with high overhead (sketched below).
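> For reference, the no-hint variant looks as follows (a sketch only, same 
> placeholders as above):
> {code:java}
> // Without the hint the expansion has no reshuffle of matched metadata; as
> // noted above, on this workload it spawns a tiny task per input file.
> PCollection<AvroGenClass> records = p.apply(AvroIO.read(AvroGenClass.class)
>     .from("s3://my-bucket/path-to/*.avro"));
> {code}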
> There are a few workarounds circulating on the internet, which mainly revolve 
> around compacting the input files before processing, so that a reduced number 
> of bulky files is processed in parallel (one possible shape of such a 
> compaction step is sketched below).
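> A hedged sketch of such a compaction step, assuming AvroIO.write and a 
> hypothetical shard count; an external copy tool would do equally well:
> {code:java}
> // Rewrite the many tiny files as a fixed number of larger Avro files in a
> // preparatory pipeline; the real job then reads the compacted output.
> p.apply(AvroIO.read(AvroGenClass.class)
>         .from("s3://my-bucket/path-to/*.avro")
>         .withHintMatchesManyFiles())
>  .apply(AvroIO.write(AvroGenClass.class)
>         .to("s3://my-bucket/compacted/part")   // hypothetical output prefix
>         .withSuffix(".avro")
>         .withNumShards(64));                   // hypothetical shard count
> {code}
> The compaction run still pays the single-reader cost once, but subsequent runs 
> see a handful of bulky files, which the existing chunking handles well.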
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
