[ 
https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=486338&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-486338
 ]

ASF GitHub Bot logged work on HADOOP-17250:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Sep/20 18:55
            Start Date: 18/Sep/20 18:55
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#issuecomment-695032321


   BTW, #2168 is calling out for reviewers. It defines a standard option for 
setting the seek policy, and another for declaring the file length (so you can 
skip the HEAD check). It also sets distcp and other download operations 
(including YARN) to always use sequential reads.
   
   For ABFS, that tells the stream that one big GET with as much prefetch as 
you can manage is going to be best.
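
   (A minimal Java sketch of how such per-open hints are typically passed 
through the FileSystem.openFile() builder; the option key names, the path and 
the length value below are assumptions for illustration only, the PR defines 
the final names.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenFileHint {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical ABFS path used only for illustration.
        Path path = new Path(
            "abfs://container@account.dfs.core.windows.net/data/part-0000");
        FileSystem fs = path.getFileSystem(conf);

        // Ask for a sequential read policy and pass a known file length so the
        // connector can skip the HEAD probe. Key names are assumptions here.
        try (FSDataInputStream in = fs.openFile(path)
            .opt("fs.option.openfile.read.policy", "sequential")
            .opt("fs.option.openfile.length", Long.toString(1_048_576L))
            .build()
            .get()) {
          byte[] buffer = new byte[65536];
          int read = in.read(buffer);
          System.out.println("read " + read + " bytes");
        }
      }
    }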


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 486338)
    Time Spent: 2h  (was: 1h 50m)

> ABFS: Allow random read sizes to be of buffer size
> --------------------------------------------------
>
>                 Key: HADOOP-17250
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17250
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Sneha Vijayarajan
>            Assignee: Sneha Vijayarajan
>            Priority: Major
>              Labels: abfsactive, pull-request-available
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> The ADLS Gen2/ABFS driver is optimized to read only the bytes that are 
> requested when the read pattern is random.
> It was observed in some Spark jobs that, although the reads are random, the 
> next read does not skip far ahead and could have been served by the earlier 
> read if that read had been done at buffer size. As a result the job triggered 
> a higher count of read calls and had a longer runtime.
> When the same jobs were run against Gen1, which always reads at buffer size, 
> they fared well.
> This Jira adds a config to control whether a random read fetches only the 
> requested size or the full buffer size.
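
A minimal sketch of how a client might enable the proposed switch, assuming it 
is exposed as a boolean config key named fs.azure.read.alwaysReadBufferSize 
(the pull request defines the actual key name and default):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Assumed key name; treat it as illustrative, not authoritative.
    Configuration conf = new Configuration();
    conf.setBoolean("fs.azure.read.alwaysReadBufferSize", true);

    // With the switch on, a random-pattern read would fetch a full read buffer
    // (sized by fs.azure.read.request.size) rather than only the requested range.
    FileSystem fs = FileSystem.get(
        new Path("abfs://container@account.dfs.core.windows.net/").toUri(), conf);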



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
