saxenapranav commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1569984613


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java:
##########
@@ -192,7 +192,11 @@ public void testSkipBounds() throws Exception {
     Path testPath = path(TEST_FILE_PREFIX + "_testSkipBounds");
     long testFileLength = assumeHugeFileExists(testPath);
 
-    try (FSDataInputStream inputStream = this.getFileSystem().open(testPath)) {
+    try (FSDataInputStream inputStream = this.getFileSystem()

Review Comment:
   There is an assertion that we cannot seek past contentLength. With the 
lazy optimization, if we open the inputStream with `fs.open()`, the inputStream 
does not know the contentLength until the first read is done. In this test 
there is no read, only seek and skip. Hence, using 
`fs.openFile().withFileStatus().build().get()` so that the opened 
inputStream is aware of the contentLength and can raise the proper exception if 
the skip goes past contentLength.
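The distinction above can be illustrated with a minimal self-contained sketch. These are hypothetical class and method names, not the real ABFS stream classes: the point is only that a stream opened without a known length has nothing to validate `skip()` against, whereas a stream handed the file length at open time (as `withFileStatus()` does) can reject an out-of-bounds skip immediately.

```java
// Hypothetical sketch, not the actual AbfsInputStream: models the
// difference between a lazily-opened stream (length unknown until the
// first read) and one given the file length up front at open time.
public class SkipBoundsDemo {
    static class SketchStream {
        private long knownLength; // -1 means "unknown until first read"
        private long pos = 0;

        SketchStream(long actualLength, boolean lengthKnownAtOpen) {
            // With lazy open, contentLength stays unknown; with a
            // FileStatus supplied at open, it is known immediately.
            this.knownLength = lengthKnownAtOpen ? actualLength : -1;
        }

        // Returns true if an out-of-bounds skip was detected
        // (a real stream would throw an exception here).
        boolean skipPastEndDetected(long n) {
            if (knownLength >= 0 && pos + n > knownLength) {
                return true;
            }
            pos += n; // silently moves past EOF when length is unknown
            return false;
        }
    }

    public static void main(String[] args) {
        long fileLen = 100;
        // Lazy open: contentLength unknown, the bad skip goes undetected.
        SketchStream lazy = new SketchStream(fileLen, false);
        System.out.println("lazy detects bad skip: "
            + lazy.skipPastEndDetected(fileLen + 1));
        // Length known at open: the bad skip is caught right away.
        SketchStream eager = new SketchStream(fileLen, true);
        System.out.println("eager detects bad skip: "
            + eager.skipPastEndDetected(fileLen + 1));
    }
}
```

This mirrors why the test must use the `openFile()` builder: with only seek/skip and no read, the lazily-opened stream never learns the contentLength, so the assertion on out-of-bounds skips could not fire.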



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

