[
https://issues.apache.org/jira/browse/BEAM-9434?focusedWorklogId=410790&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-410790
]
ASF GitHub Bot logged work on BEAM-9434:
----------------------------------------
Author: ASF GitHub Bot
Created on: 27/Mar/20 02:40
Start Date: 27/Mar/20 02:40
Worklog Time Spent: 10m
Work Description: lukecwik commented on issue #11037: [BEAM-9434]
performance improvements reading many Avro files in S3
URL: https://github.com/apache/beam/pull/11037#issuecomment-604785112
Sorry about the long delay, but **Reshuffle** should produce as many
partitions as the runner thinks is optimal. It is effectively a
**redistribute** operation.
It looks like the Spark runner's reshuffle translation copies the number of
partitions from the upstream transform, and in your case that is likely 1.
Translation:
https://github.com/apache/beam/blob/f5a4a5afcd9425c0ddb9ec9c70067a5d5c0bc769/runners/spark/src/main/java/org/apache/beam/runners/spark/translation/TransformTranslator.java#L681
Copying partitions:
https://github.com/apache/beam/blob/f5a4a5afcd9425c0ddb9ec9c70067a5d5c0bc769/runners/spark/src/main/java/org/apache/beam/runners/spark/translation/GroupCombineFunctions.java#L191
@iemejia Shouldn't we be using a much larger value for partitions, e.g. the
number of nodes?
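Until the translation picks a better partition count, a manual redistribute
should sidestep the copied value. A minimal sketch (the shard-count parameter
and the helper name are illustrative, not from the PR):
{code:java}
import java.util.concurrent.ThreadLocalRandom;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.Values;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

// Spread elements over numShards random keys, force a shuffle with
// GroupByKey, then drop the keys again. The resulting partitioning is
// driven by the key space instead of the upstream partition count.
static <T> PCollection<T> redistribute(PCollection<T> input, int numShards) {
  return input
      .apply("AssignRandomShard",
          WithKeys.<Integer, T>of(x -> ThreadLocalRandom.current().nextInt(numShards))
              .withKeyType(TypeDescriptors.integers()))
      .apply("GroupIntoShards", GroupByKey.create())
      .apply("DropKeys", Values.create())
      .apply("FlattenShards", Flatten.iterables());
}
{code}
A shard count around the number of nodes (or a small multiple of it) would be
a reasonable starting point.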
Issue Time Tracking
-------------------
Worklog Id: (was: 410790)
Time Spent: 2h 50m (was: 2h 40m)
> Performance improvements processing a large number of Avro files in S3+Spark
> ----------------------------------------------------------------------------
>
> Key: BEAM-9434
> URL: https://issues.apache.org/jira/browse/BEAM-9434
> Project: Beam
> Issue Type: Improvement
> Components: io-java-aws, sdk-java-core
> Affects Versions: 2.19.0
> Reporter: Emiliano Capoccia
> Assignee: Emiliano Capoccia
> Priority: Minor
> Time Spent: 2h 50m
> Remaining Estimate: 0h
>
> There is a performance issue when processing a large number of small Avro
> files (tens of thousands or more) in Spark on K8S.
> The recommended way of reading a pattern of Avro files in Beam is by means of:
>
> {code:java}
> PCollection<AvroGenClass> records = p.apply(AvroIO.read(AvroGenClass.class)
>     .from("s3://my-bucket/path-to/*.avro")
>     .withHintMatchesManyFiles());
> {code}
> However, in the case of many small files, the above results in the entire
> read taking place in a single task/node, which is considerably slow and
> scales poorly.
> The option of omitting the hint is not viable either, as it results in too
> many tasks being spawned and the cluster spending its time coordinating
> tiny tasks with high overhead.
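> For comparison, the hint-free variant (illustrative, mirroring the snippet
> above) is simply the same read without the hint:
> {code:java}
> // Without the hint, every matched file becomes its own split, so tens of
> // thousands of small files turn into tens of thousands of tiny tasks.
> PCollection<AvroGenClass> records = p.apply(AvroIO.read(AvroGenClass.class)
>     .from("s3://my-bucket/path-to/*.avro"));
> {code}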
> There are a few workarounds on the internet, which mainly revolve around
> compacting the input files before processing so that a smaller number of
> bulky files is read in parallel.
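> An alternative worth noting is the expanded FileIO form, which separates
> matching the pattern from reading the files and leaves room to redistribute
> the matched files in between; a sketch (not necessarily the approach taken
> in the attached PR):
> {code:java}
> // Match the pattern once, then read the matched files. A redistribute of
> // the matches can be inserted between the two steps to spread the actual
> // reads across workers.
> PCollection<AvroGenClass> records = p
>     .apply(FileIO.match().filepattern("s3://my-bucket/path-to/*.avro"))
>     .apply(FileIO.readMatches())
>     .apply(AvroIO.readFiles(AvroGenClass.class));
> {code}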
>