[
https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=485651&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485651
]
ASF GitHub Bot logged work on HADOOP-17250:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 17/Sep/20 10:45
Start Date: 17/Sep/20 10:45
Worklog Time Spent: 10m
Work Description: steveloughran commented on a change in pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#discussion_r490146585
##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##########
@@ -180,9 +205,13 @@ private int readOneBlock(final byte[] b, final int off, final int len) throws IO
     // Enable readAhead when reading sequentially
     if (-1 == fCursorAfterLastRead || fCursorAfterLastRead == fCursor || b.length >= bufferSize) {
+      LOG.debug("Sequential read with read ahead size of {}", bufferSize);
       bytesRead = readInternal(fCursor, buffer, 0, bufferSize, false);
     } else {
-      bytesRead = readInternal(fCursor, buffer, 0, b.length, true);
+      // Enabling read ahead for random reads as well to reduce number of remote calls.
+      int lengthWithReadAhead = Math.min(b.length + readAheadRange, bufferSize);
+      LOG.debug("Random read with read ahead size of {}", lengthWithReadAhead);
+      bytesRead = readInternal(fCursor, buffer, 0, lengthWithReadAhead, true);
Review comment:
Based on the S3A experience (which didn't always read into a buffer,
BTW), the "penalty" of having a large readahead range is that there is more
data to drain when you want to cancel the read (i.e. a seek out of range).
That code does the draining in the active thread. If it were done in a
background thread instead, the penalty of a larger readahead would be smaller:
you would only see a delay from the draining if there were no free HTTPS
connections left in the pool. Setting up a new HTTPS connection is expensive,
though, so if the pool were exhausted you would be better off draining the
stream in the active thread. Maybe.
(Disclaimer: all my claims about the cost of HTTPS are based on S3 + Java 7/8,
and S3 is very slow to set up a connection. If the ADLS Gen2 store is faster to
negotiate, draining in a separate thread becomes a lot more justifiable.)
Issue Time Tracking
-------------------
Worklog Id: (was: 485651)
Time Spent: 1h (was: 50m)
> ABFS: Allow random read sizes to be of buffer size
> --------------------------------------------------
>
> Key: HADOOP-17250
> URL: https://issues.apache.org/jira/browse/HADOOP-17250
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: abfsactive, pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> The ADLS Gen2/ABFS driver is optimized to read only the bytes that are
> requested when the read pattern is random.
> In some Spark jobs it was observed that, although the reads are random, the
> next read does not skip far ahead and could have been served by the earlier
> read, had that read been of buffer size. As a result, such jobs trigger a
> higher number of read calls, which in turn increases job runtime.
> When the same jobs were run against Gen1, which always reads in buffer size,
> they fared well.
> This Jira provides a config control over whether a random read fetches the
> requested size or the full buffer size.
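> A minimal usage sketch of the proposed control (the key
> "fs.azure.read.alwaysReadBufferSize" is my assumption of what this patch
> introduces; "fs.azure.read.request.size" is the existing ABFS read buffer
> size setting):
>
>     import org.apache.hadoop.conf.Configuration;
>
>     public class AbfsRandomReadConfig {
>       public static Configuration create() {
>         Configuration conf = new Configuration();
>         // Assumed new switch: always fetch a full buffer, even for random reads.
>         conf.setBoolean("fs.azure.read.alwaysReadBufferSize", true);
>         // Existing ABFS read buffer size (4 MB here).
>         conf.setInt("fs.azure.read.request.size", 4 * 1024 * 1024);
>         return conf;
>       }
>     }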