[ 
https://issues.apache.org/jira/browse/BEAM-9434?focusedWorklogId=399383&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-399383
 ]

ASF GitHub Bot logged work on BEAM-9434:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Mar/20 21:59
            Start Date: 06/Mar/20 21:59
    Worklog Time Spent: 10m 
      Work Description: ecapoccia commented on issue #11037: [BEAM-9434] 
performance improvements reading many Avro files in S3
URL: https://github.com/apache/beam/pull/11037#issuecomment-595982928
 
 
   > Adding a filesystem API that all filesystems need to implement is going to 
raise some questions in the Apache Beam community. Asked some follow-ups on 
BEAM-9434 to see if the issue is localized to Spark.
   
   @lukecwik OK, I see.
   
   Technically speaking, it is not *mandatory* to implement it in every 
filesystem, but in practice, to properly support the hint everywhere, it 
effectively is.
   
   I considered a few alternatives:
   - the current one, throwing an UnsupportedOperationException if a filesystem 
does not support it (sketched below)
   - a default implementation that does wasteful filtering before returning 
the results (not scalable)
   - implementing it for all filesystems
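
   To make the first alternative concrete, here is a minimal, purely 
illustrative sketch (the interface and method names are hypothetical, not the 
actual API in this PR): a hinted match variant whose default implementation 
throws, so only filesystems that can honour the hint need to override it.

   {code:java}
   import java.io.IOException;
   import java.util.List;

   // Illustrative only: the shape of the first alternative above, not Beam's real FileSystem API.
   interface HintedMatching {
     // Plain matching, assumed to exist on every filesystem.
     List<String> match(List<String> specs) throws IOException;

     // Hinted variant: the default refuses, so only filesystems that can
     // honour the hint (e.g. S3) need to override it.
     default List<String> match(List<String> specs, int partitionHint) throws IOException {
       throw new UnsupportedOperationException("partition-hinted match is not supported");
     }
   }
   {code}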
   
   The reality is, I don't have a means of testing the last option on anything 
other than S3; otherwise the last option would be the best approach, IMHO.
   
   Let me know what the opinions are. 
   
   Also, it looks to me like the filesystem classes are internal to the 
framework and not supposed to be used directly by end users. In that case 
*maybe* another option is viable: rename the new hint appropriately and do not 
make it mandatory for the framework to honour the hint.
   
   In other words, I'm saying that we can hint to use N partitions, but the 
runtime can just ignore the hint if that's not supported by the underlying 
filesystem.
   I can modify the code in this way if that's viable. 
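
   A rough sketch of what I mean by letting the runtime ignore the hint (again 
purely illustrative, reusing the hypothetical HintedMatching interface from the 
sketch above):

   {code:java}
   // Illustrative only: best-effort use of the hint. If the filesystem cannot
   // honour it, fall back to the plain match instead of failing the pipeline.
   static List<String> matchBestEffort(HintedMatching fs, List<String> specs, int partitionHint)
       throws IOException {
     try {
       return fs.match(specs, partitionHint);
     } catch (UnsupportedOperationException ignored) {
       return fs.match(specs); // the hint is silently dropped
     }
   }
   {code}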
   
   Happy to hear back from you guys, and thanks for the feedback.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 399383)
    Time Spent: 1h  (was: 50m)

> Performance improvements processing a large number of Avro files in S3+Spark
> ----------------------------------------------------------------------------
>
>                 Key: BEAM-9434
>                 URL: https://issues.apache.org/jira/browse/BEAM-9434
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-aws, sdk-java-core
>    Affects Versions: 2.19.0
>            Reporter: Emiliano Capoccia
>            Assignee: Emiliano Capoccia
>            Priority: Minor
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> There is a performance issue when processing a large number of small Avro 
> files in Spark on K8S (tens of thousands or more).
> The recommended way of reading a pattern of Avro files in Beam is by means of:
>  
> {code:java}
> PCollection<AvroGenClass> records = p.apply(AvroIO.read(AvroGenClass.class)
>     .from("s3://my-bucket/path-to/*.avro")
>     .withHintMatchesManyFiles());
> {code}
> However, in the case of many small files, the above results in the entire 
> reading taking place in a single task/node, which is considerably slow and 
> has scalability issues.
> The option of omitting the hint is not viable, as it results in too many 
> tasks being spawned and the cluster being kept busy coordinating tiny tasks 
> with high overhead.
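> For contrast, omitting the hint just means dropping withHintMatchesManyFiles() 
> from the snippet above (class and bucket names as in that snippet):
> {code:java}
> // Same read as above but without the hint: every matched file can become its
> // own task, which is the "too many tasks" behaviour described here.
> PCollection<AvroGenClass> records = p.apply(AvroIO.read(AvroGenClass.class)
>     .from("s3://my-bucket/path-to/*.avro"));
> {code}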
> There are a few workarounds on the internet, mainly revolving around 
> compacting the input files before processing, so that a smaller number of 
> larger files is processed in parallel.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
