amahussein commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r694125842



##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
##########
@@ -1857,18 +1859,57 @@ public synchronized void cancelDelegationToken(final Token<?> token
   }
 
   @Override
-  public BlockLocation[] getFileBlockLocations(final Path p,
-      final long offset, final long length) throws IOException {
+  public BlockLocation[] getFileBlockLocations(final Path p, final long offset,
+      final long length) throws IOException {
     statistics.incrementReadOps(1);
     storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
+    BlockLocation[] locations = null;
+    try {
+      if (isServerHCFSCompatible) {
+        locations = getFileBlockLocations(GetOpParam.Op.GETFILEBLOCKLOCATIONS,
+            p, offset, length);
+      } else {
+        locations = getFileBlockLocations(GetOpParam.Op.GET_BLOCK_LOCATIONS, p,
+            offset, length);
+      }
+    } catch (RemoteException e) {
+      if (isGetFileBlockLocationsException(e)) {

Review comment:
       ```suggestion
        // parsing the exception is needed only if the client thinks the service is compatible
         if (isServerHCFSCompatible && isGetFileBlockLocationsException(e)) {
   ```

##########
File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
##########
@@ -2002,4 +2003,38 @@ public void testContentType() throws Exception {
         () -> HttpFSUtils.jsonParse(conn));
     conn.disconnect();
   }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testGetFileBlockLocations() throws Exception {
+    createHttpFSServer(false, false);
+    // Create a test directory
+    String pathStr = "/tmp/tmp-snap-diff-test";
+    createDirWithHttp(pathStr, "700", null);
+
+    Path path = new Path(pathStr);
+    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem
+        .get(path.toUri(), TestHdfsHelper.getHdfsConf());
+    // Enable snapshot
+    dfs.allowSnapshot(path);
+    Assert.assertTrue(dfs.getFileStatus(path).isSnapshotEnabled());
+    // Create a file and take a snapshot
+    String file1 = pathStr + "/file1";
+    createWithHttp(file1, null);
+    HttpURLConnection conn = sendRequestToHttpFSServer(file1,
+        "GETFILEBLOCKLOCATIONS", "length=10&offset=10");
+    Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+    BlockLocation[] locations1 =
+        dfs.getFileBlockLocations(new Path(file1), 0, 1);
+    Assert.assertNotNull(locations1);
+
+    HttpURLConnection conn1 = sendRequestToHttpFSServer(file1,
+        "GET_BLOCK_LOCATIONS", "length=10&offset=10");
+    Assert.assertEquals(HttpURLConnection.HTTP_OK, conn1.getResponseCode());
+    BlockLocation[] locations2 =
+        dfs.getFileBlockLocations(new Path(file1), 0, 1);
+    Assert.assertNotNull(locations2);
+  }

Review comment:
       Falling back from `GETFILEBLOCKLOCATIONS` to `GET_BLOCK_LOCATIONS` and caching the boolean flag are not tested. Maybe we need another unit test that assumes the operation is not supported and verifies the fallback to the old one.
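       The fallback-and-cache behavior the comment asks to cover can be sketched with a stub. The class below is a hypothetical stand-in (none of these names are Hadoop's actual API): `newOp` simulates a server rejecting `GETFILEBLOCKLOCATIONS`, and the flag models `isServerHCFSCompatible` being cached after the first failure:

```java
// Hedged sketch with hypothetical stand-ins, not Hadoop's real WebHdfsFileSystem.
public class FallbackSketch {
  // Models the cached compatibility flag; starts optimistic like isServerHCFSCompatible.
  private boolean serverCompatible = true;

  // Stand-in for the HCFS-compatible operation; this stub always fails,
  // simulating a server that does not support GETFILEBLOCKLOCATIONS.
  private String newOp(String path) {
    throw new UnsupportedOperationException("GETFILEBLOCKLOCATIONS not supported");
  }

  // Stand-in for the legacy GET_BLOCK_LOCATIONS operation.
  private String oldOp(String path) {
    return "locations-via-GET_BLOCK_LOCATIONS:" + path;
  }

  public String getFileBlockLocations(String path) {
    if (serverCompatible) {
      try {
        return newOp(path);
      } catch (UnsupportedOperationException e) {
        // Cache the failure so subsequent calls skip the unsupported operation.
        serverCompatible = false;
      }
    }
    return oldOp(path);
  }

  public boolean isServerCompatible() {
    return serverCompatible;
  }

  public static void main(String[] args) {
    FallbackSketch fs = new FallbackSketch();
    // First call fails over to the old op and flips the cached flag.
    System.out.println(fs.getFileBlockLocations("/tmp/file1"));
    System.out.println(fs.isServerCompatible());
  }
}
```

       A unit test along these lines would assert both that the result comes from the old operation and that the flag stays false on subsequent calls, so the unsupported op is not retried.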




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


