snvijaya commented on a change in pull request #1898: HADOOP-16852: Report read-ahead error back
URL: https://github.com/apache/hadoop/pull/1898#discussion_r400112122
##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ReadBufferManager.java
##########
@@ -299,11 +327,32 @@ private void clearFromReadAheadQueue(final AbfsInputStream stream, final long re
   }

   private int getBlockFromCompletedQueue(final AbfsInputStream stream, final long position, final int length,
-      final byte[] buffer) {
-    ReadBuffer buf = getFromList(completedReadList, stream, position);
-    if (buf == null || position >= buf.getOffset() + buf.getLength()) {
+      final byte[] buffer) throws IOException {
+    ReadBuffer buf = getBufferFromCompletedQueue(stream, position);
+
+    if (buf == null) {
       return 0;
     }
+
+    if (buf.getStatus() == ReadBufferStatus.READ_FAILED) {
+      // Eviction of a read buffer is triggered only when a queue request comes in,
+      // and each eviction attempt tries to find one eligible buffer.
+      // Hence there is a chance that an old read-ahead buffer with an exception is still
+      // available. To prevent new read requests from failing due to such old buffers,
+      // return the exception only from buffers that failed within the last THRESHOLD_AGE_MILLISECONDS.
+      if ((currentTimeMillis() - (buf.getTimeStamp()) < THRESHOLD_AGE_MILLISECONDS)) {
Review comment:
The aim here is to enforce the read-ahead failure for the threshold duration (currently 30 seconds), i.e. any read request for that offset that could be served from this ReadBuffer needs to fail.
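
To make the intended behavior concrete, here is a minimal standalone sketch of the age-gated failure reporting. The class, method, and parameter names below are illustrative stand-ins for this comment only, not the actual ABFS/ReadBufferManager code.

import java.io.IOException;

// Minimal sketch: a failed read-ahead surfaces its stored exception only while
// the failure is younger than the threshold; older failures are ignored so the
// caller can fall back to a direct read from the store.
public class FailedBufferCheck {
  private static final long THRESHOLD_AGE_MILLISECONDS = 30_000L; // 30 sec, per the comment above

  // Re-throw the stored error only if the read-ahead failed within the threshold window;
  // otherwise return silently so the caller treats it as a cache miss.
  static void enforceRecentFailure(long failureTimestampMillis, IOException storedError)
      throws IOException {
    if (System.currentTimeMillis() - failureTimestampMillis < THRESHOLD_AGE_MILLISECONDS) {
      throw storedError;
    }
  }

  public static void main(String[] args) {
    try {
      // A failure recorded 5 seconds ago is still inside the 30-second window, so it is enforced.
      enforceRecentFailure(System.currentTimeMillis() - 5_000L,
          new IOException("read-ahead failed"));
    } catch (IOException e) {
      System.out.println("Enforced recent failure: " + e.getMessage());
    }
  }
}

In other words, every read that would be satisfied by the failed buffer fails for the full threshold window; once the recorded failure is older than the threshold, the cached failure is disregarded and the read is retried against the store.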