ecapoccia edited a comment on issue #11037: [BEAM-9434] performance improvements reading many Avro files in S3
URL: https://github.com/apache/beam/pull/11037#issuecomment-596045046

> The expansion for withHintManyFiles uses a reshuffle between the match and the actual reading of the file. The reshuffle allows for the runner to balance the amount of work across as many nodes as it wants. The only thing being reshuffled is file metadata so after that reshuffle the file reading should be distributed to several nodes.
>
> In your reference run, when you say that "the entire reading taking place in a single task/node", was it that the match all happened on a single node or was it that the "read" happened all on a single node?

Both. Reading the metadata wouldn't be a problem; it also happens on every node in the proposed PR. But the actual reading also happens on one node, with unacceptably high reading times.

What you say may apply to the case of "bulky" files. However, my solution specifically targets the case of a large number of tiny files (I explained this in more detail in the Jira ticket). In that case, the latency of reading each file from S3 dominates, and no chunking or shuffling happens with standard Beam. When I look at the DAG in Spark, I can see only one task there, and if I look at the executors they are all idle, except for the one where all the reading happens. This is true both for the stage that reads the metadata and for the stage that reads the data. With the proposed PR, instead, the number of tasks and parallel executors in the DAG matches the value passed in the hint.
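For reference, here is a minimal sketch (Beam Java SDK, roughly the 2.x API of this PR's era) of the two read styles under discussion: the hinted read, whose expansion is described in the quote above, and an explicit match-then-read pipeline that is only roughly equivalent to that expansion. The schema, bucket path, and class name are placeholders, not anything from the PR:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.AvroIO;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

public class ManySmallAvroFilesSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Placeholder schema; substitute the real schema of your files.
    Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"R\","
            + "\"fields\":[{\"name\":\"f\",\"type\":\"string\"}]}");

    // Hinted read: withHintMatchesManyFiles() expands to a match, a
    // reshuffle of the file metadata, and then the per-file reads, so a
    // runner may in principle rebalance the reads across workers.
    PCollection<GenericRecord> hinted =
        p.apply("HintedRead",
            AvroIO.readGenericRecords(schema)
                .from("s3://my-bucket/tiny-files/*.avro") // placeholder path
                .withHintMatchesManyFiles());

    // Roughly equivalent explicit form, making the match step visible.
    PCollection<GenericRecord> explicit =
        p.apply("Pattern", Create.of("s3://my-bucket/tiny-files/*.avro"))
            .apply(FileIO.matchAll())
            .apply(FileIO.readMatches())
            .apply(AvroIO.readFilesGenericRecords(schema));

    p.run().waitUntilFinish();
  }
}
```

The point of contention is that, in the reporter's Spark runs, both the match and the read stages of the hinted form collapse onto a single task despite the internal reshuffle, which is the behavior the PR aims to fix for many tiny files.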
