[
https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=485168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-485168
]
ASF GitHub Bot logged work on HADOOP-17250:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 16/Sep/20 14:10
Start Date: 16/Sep/20 14:10
Worklog Time Spent: 10m
Work Description: snvijaya commented on a change in pull request #2307:
URL: https://github.com/apache/hadoop/pull/2307#discussion_r489461371
##########
File path:
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##########
@@ -180,9 +205,13 @@ private int readOneBlock(final byte[] b, final int off, final int len) throws IO
     // Enable readAhead when reading sequentially
     if (-1 == fCursorAfterLastRead || fCursorAfterLastRead == fCursor || b.length >= bufferSize) {
+      LOG.debug("Sequential read with read ahead size of {}", bufferSize);
       bytesRead = readInternal(fCursor, buffer, 0, bufferSize, false);
     } else {
-      bytesRead = readInternal(fCursor, buffer, 0, b.length, true);
+      // Enabling read ahead for random reads as well to reduce number of remote calls.
+      int lengthWithReadAhead = Math.min(b.length + readAheadRange, bufferSize);
+      LOG.debug("Random read with read ahead size of {}", lengthWithReadAhead);
+      bytesRead = readInternal(fCursor, buffer, 0, lengthWithReadAhead, true);
Review comment:
As with Parquet and ORC, we have seen read patterns move from sequential
to random and vice versa. That being the case, would it not be better to
always read ahead to bufferSize? Providing options to read ahead fewer bytes,
such as 64 KB, can actually lead to more IOPs. From our meeting yesterday too,
one thing we all agreed on was that fewer IOPs are better, and that it is
better to read more per call than less.
So let's remove the readAheadRange config and instead always read ahead to
whatever is configured for bufferSize.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 485168)
Time Spent: 40m (was: 0.5h)
> ABFS: Allow random read sizes to be of buffer size
> --------------------------------------------------
>
> Key: HADOOP-17250
> URL: https://issues.apache.org/jira/browse/HADOOP-17250
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.3.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: abfsactive, pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> The ADLS Gen2/ABFS driver is optimized to read only the requested bytes
> when the read pattern is random.
> In some Spark jobs it was observed that, although the reads are random, the
> next read does not skip far ahead and could have been served by the earlier
> read had that read fetched a full buffer. As a result, the job issued a
> higher number of read calls and had a longer runtime.
> When the same jobs were run against Gen1, which always reads a full buffer,
> they fared well.
> In this Jira we provide a config control over whether a random read fetches
> only the requested size or the full buffer size.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]