anmolanmol1234 commented on code in PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#discussion_r1566892915


##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsInputStreamStatistics.java:
##########
@@ -227,15 +227,23 @@ public void testReadStatistics() throws IOException {
        * readOps - Since each time read operation is performed OPERATIONS
        * times, total number of read operations would be equal to OPERATIONS.
        *
-       * remoteReadOps - Only a single remote read operation is done. Hence,
+       * remoteReadOps -
+       * In case of Head Optimization for InputStream, the first read operation
+       * would read only the asked range and would not be able to read the entire file
+       * ras it has no information on the contentLength of the file. The second

Review Comment:
   typo: as



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsNetworkStatistics.java:
##########
@@ -231,7 +237,17 @@ public void testAbfsHttpResponseStatistics() throws IOException {
       // 1 read request = 1 connection and 1 get response
       expectedConnectionsMade++;
       expectedGetResponses++;
-      expectedBytesReceived += bytesWrittenToFile;
+      if (!getConfiguration().getHeadOptimizationForInputStream()) {
+        expectedBytesReceived += bytesWrittenToFile;
+      } else {
+        /*
+         * With head optimization enabled, the abfsInputStream is not aware
+         * of the contentLength and hence, it would only read data for which the range
+         * is provided. With the first remote call done, the inputStream will get
+         * aware of the contentLength and would be able to use it for further reads.
+         */
+        expectedBytesReceived += 1;

Review Comment:
   Why +1?
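
The accounting in the hunk above can be read as: with head optimization off, the single read is expected to pull back the whole file; with it on, only the requested range is fetched on the first call (the hunk uses 1 byte for that range, which is the value the reviewer questions). A minimal sketch of that expectation logic, with hypothetical names not taken from the PR:

```java
public class ExpectedBytesSketch {

  // Hypothetical mirror of the test's byte accounting: without head
  // optimization the first remote read returns the whole file; with it,
  // only the requested range comes back because the stream does not yet
  // know the contentLength.
  static long expectedBytesReceived(boolean headOptimization,
                                    long bytesWrittenToFile,
                                    long requestedRange) {
    return headOptimization ? requestedRange : bytesWrittenToFile;
  }

  public static void main(String[] args) {
    System.out.println(expectedBytesReceived(false, 1024, 1)); // whole file expected
    System.out.println(expectedBytesReceived(true, 1024, 1));  // only the asked range
  }
}
```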



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/contract/ITestAbfsFileSystemContractOpen.java:
##########
@@ -49,4 +61,55 @@ protected Configuration createConfiguration() {
   protected AbstractFSContract createContract(final Configuration conf) {
     return new AbfsFileSystemContract(conf, isSecure);
   }
+
+  @Override
+  public FileSystem getFileSystem() {

Review Comment:
   This code is repeated in multiple places; can it be centralized?
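
The body of the override is elided in the hunk, but the centralization being asked for could look like this sketch: hoist the duplicated `getFileSystem()` override into one shared base class so every contract test inherits a single copy. All class names and the return value below are illustrative, not the actual Hadoop test classes:

```java
public class CentralizedOverrideSketch {

  static abstract class BaseAbfsContractTest {
    // Single, centralized copy of the previously repeated override.
    String getFileSystem() {
      return "abfs-with-head-optimization";
    }
  }

  // Subclasses no longer repeat the override; they inherit it.
  static class OpenContractTest extends BaseAbfsContractTest { }
  static class SeekContractTest extends BaseAbfsContractTest { }

  public static void main(String[] args) {
    System.out.println(new OpenContractTest().getFileSystem());
    System.out.println(new SeekContractTest().getFileSystem());
  }
}
```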



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemAuthorization.java:
##########
@@ -327,7 +328,15 @@ private void executeOp(Path reqPath, AzureBlobFileSystem fs,
       fs.open(reqPath);
       break;
     case Open:
-      fs.open(reqPath);
+      InputStream is = fs.open(reqPath);
+      if (getConfiguration().getHeadOptimizationForInputStream()) {
+        try {
+          is.read();
+        } catch (IOException ex) {
+          is.close();

Review Comment:
   close should be in a finally block
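
For context, the pattern being suggested can be sketched as follows; the helper name is hypothetical (the PR's actual code operates on the stream returned by `fs.open(reqPath)`):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CloseInFinallySketch {

  // Hypothetical helper: close() in a finally block runs on both the normal
  // and the exception path, unlike the catch-only close() in the hunk above,
  // which leaks the stream when read() succeeds.
  static int readFirstByteAndClose(InputStream is) throws IOException {
    try {
      return is.read();
    } finally {
      is.close();
    }
  }

  public static void main(String[] args) throws IOException {
    InputStream is = new ByteArrayInputStream(new byte[] {7});
    System.out.println(readFirstByteAndClose(is)); // prints 7
  }
}
```

On Java 7+, try-with-resources (`try (InputStream is = fs.open(reqPath)) { ... }`) gives the same guarantee more concisely.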



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java:
##########
@@ -192,7 +192,11 @@ public void testSkipBounds() throws Exception {
     Path testPath = path(TEST_FILE_PREFIX + "_testSkipBounds");
     long testFileLength = assumeHugeFileExists(testPath);
 
-    try (FSDataInputStream inputStream = this.getFileSystem().open(testPath)) {
+    try (FSDataInputStream inputStream = this.getFileSystem()

Review Comment:
   Is there a need for this change?



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##########
@@ -234,15 +244,41 @@ private void seekReadAndTest(final AzureBlobFileSystem fs,
       long expectedBCursor;
       long expectedFCursor;
       if (optimizationOn) {
-        if (actualContentLength <= footerReadBufferSize) {
-          expectedLimit = actualContentLength;
-          expectedBCursor = seekPos + actualLength;
+        if (getConfiguration().getHeadOptimizationForInputStream()) {

Review Comment:
   Too many variable changes; can we add comments, please?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
