[ 
https://issues.apache.org/jira/browse/BEAM-9434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17053885#comment-17053885
 ] 

Emiliano Capoccia edited comment on BEAM-9434 at 3/7/20, 9:30 AM:
------------------------------------------------------------------

Both.

Reading the metadata wouldn't be a problem; it also happens on every node in
the proposed PR.
But the actual reading also happens on one node, with unacceptably high reading
times.
What you say possibly applies to the case of "bulky" files.
However, my solution specifically targets the case where there is a high
number of tiny files (I think I explained this better in the Jira ticket).
In this latter case, the latency of reading each file from S3 dominates, but no
chunking/shuffling happens with standard Beam.

When I look at the DAG in Spark, I can see only one task there, and if I look
at the executors they are all idle except the one where all the reading happens.
This is true both for the stage where the metadata is read and for the stage
where the data is read.

With the proposed PR, instead, the number of tasks and parallel executors in the
DAG matches the value you pass in the hint.
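
To make the shape of the fix concrete, here is a minimal sketch of the kind of
pipeline I am describing. This is not the code of the PR: it assumes the
standard FileIO/AvroIO transforms and uses Reshuffle.viaRandomKey() in place of
the PR's explicit parallelism hint.
{code:java}
import org.apache.beam.sdk.io.AvroIO;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.transforms.Reshuffle;
import org.apache.beam.sdk.values.PCollection;

// Sketch only, not the PR: match the files first, shuffle the resulting
// file metadata so it is spread across workers, then read each file where
// it lands instead of reading everything in a single task.
PCollection<AvroGenClass> records = p
    .apply(FileIO.match().filepattern("s3://my-bucket/path-to/*.avro"))
    .apply(Reshuffle.viaRandomKey()) // redistribute matched files across tasks
    .apply(FileIO.readMatches())
    .apply(AvroIO.readFiles(AvroGenClass.class));
{code}
With this shape the read parallelism after the shuffle is decided by the
runner, whereas the PR lets you pass the desired number of tasks explicitly as
a hint.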


was (Author: ecapoccia):
Both.
Reading the metadata wouldn't be a problem; it also happens on every node in
the proposed PR.
But the actual reading also happens on one node, with unacceptably high reading
times.
What you say applies to the case of "bulky" files. For those, the shuffling
stage chunks the files and shuffles the reading of each chunk.
However, my solution specifically targets the case where there is a high
number of tiny files (I think I explained this better in the Jira ticket).
In this latter case, the latency of reading each file from S3 dominates, but no
chunking/shuffling happens with standard Beam.

> Performance improvements processing a large number of Avro files in S3+Spark
> ----------------------------------------------------------------------------
>
>                 Key: BEAM-9434
>                 URL: https://issues.apache.org/jira/browse/BEAM-9434
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-aws, sdk-java-core
>    Affects Versions: 2.19.0
>            Reporter: Emiliano Capoccia
>            Assignee: Emiliano Capoccia
>            Priority: Minor
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> There is a performance issue when processing a large number of small Avro 
> files in Spark on K8S (tens of thousands or more).
> The recommended way of reading a pattern of Avro files in Beam is by means of:
>  
> {code:java}
> PCollection<AvroGenClass> records = p.apply(AvroIO.read(AvroGenClass.class)
>     .from("s3://my-bucket/path-to/*.avro").withHintMatchesManyFiles());
> {code}
> However, in the case of many small files, the above results in the entire 
> reading taking place in a single task/node, which is considerably slow and 
> has scalability issues.
> The option of omitting the hint is not viable, as it results in too many 
> tasks being spawned, and the cluster being busy coordinating tiny 
> tasks with high overhead.
> There are a few workarounds on the internet which mainly revolve around 
> compacting the input files before processing, so that a reduced number of 
> bulky files is processed in parallel.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
