[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-09-01 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=645653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-645653 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 02/Sep/21 01:17
Start Date: 02/Sep/21 01:17
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#issuecomment-910991818


   Hey Ahmed. Thanks a lot but I think this is going to require another few 
iterations.
   Will ping you when I feel like it gets the quality I like.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 645653)
Time Spent: 2h 10m  (was: 2h)

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> ---------------------------------------------
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: BB2015-05-TBR, pull-request-available
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, 
> HDFS-6874.10.patch, HDFS-6874.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The GETFILEBLOCKLOCATIONS operation, already supported in WebHDFS, is missing 
> from HttpFS. For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> ...
>   case GETFILEBLOCKLOCATIONS: {
>     response = Response.status(Response.Status.BAD_REQUEST).build();
>     break;
>   }
> ...
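Once HttpFS supports the operation, a client request would follow the WebHDFS URL convention that HttpFS mirrors. A minimal sketch of building such a request URL; the host name and file path are hypothetical, and 14000 is HttpFS's default port:

```java
// Hedged sketch: constructs the HttpFS REST URL for the operation this issue
// adds. The op/offset/length query parameters follow the WebHDFS convention;
// host, port, and path below are illustrative assumptions only.
public class HttpFsUrlSketch {
  static String blockLocationsUrl(String host, int port, String path,
                                  long offset, long length) {
    return "http://" + host + ":" + port + "/webhdfs/v1" + path
        + "?op=GETFILEBLOCKLOCATIONS&offset=" + offset + "&length=" + length;
  }

  public static void main(String[] args) {
    System.out.println(blockLocationsUrl(
        "httpfs.example.com", 14000, "/tmp/file1", 0, 10));
  }
}
```

The sketch only assembles the URL; an actual call would also need authentication parameters (e.g. a delegation token), which are omitted here.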



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-09-01 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=645327&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-645327 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 01/Sep/21 15:01
Start Date: 01/Sep/21 15:01
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#issuecomment-910371397


   Hey @jojochuang 
   Thanks for the recent fixes.
   Are you still working on making changes to the PR or should I start doing a 
quick review?
   
   Have you checked whether the failed unit tests are related to the changes?
   ```bash
   [ERROR] Errors: 
   [ERROR] 
org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem.testOperationDoAs[43](org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem)
   [ERROR]   Run 1: 
TestHttpFSFWithSWebhdfsFileSystem>BaseTestHttpFSWith.testOperationDoAs:1391->BaseTestHttpFSWith.access$100:115->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2043
 » SSL
   [ERROR]   Run 2: 
TestHttpFSFWithSWebhdfsFileSystem>BaseTestHttpFSWith.testOperationDoAs:1391->BaseTestHttpFSWith.access$100:115->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2043
 » SSL
   [ERROR]   Run 3: 
TestHttpFSFWithSWebhdfsFileSystem>BaseTestHttpFSWith.testOperationDoAs:1391->BaseTestHttpFSWith.access$100:115->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2043
 » SSL
   [INFO] 
   [ERROR] 
org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem.testOperation[43](org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem)
   [ERROR]   Run 1: 
TestHttpFSFWithSWebhdfsFileSystem>BaseTestHttpFSWith.testOperation:1380->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2043
 » SSL
   [ERROR]   Run 2: 
TestHttpFSFWithSWebhdfsFileSystem>BaseTestHttpFSWith.testOperation:1380->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2043
 » SSL
   [ERROR]   Run 3: 
TestHttpFSFWithSWebhdfsFileSystem>BaseTestHttpFSWith.testOperation:1380->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2043
 » SSL
   [INFO] 
   [ERROR] 
org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem.testOperationDoAs[43](org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem)
   [ERROR]   Run 1: 
TestHttpFSFileSystemLocalFileSystem>BaseTestHttpFSWith.testOperationDoAs:1391->BaseTestHttpFSWith.access$100:115->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2014
 » ClassCast
   [ERROR]   Run 2: 
TestHttpFSFileSystemLocalFileSystem>BaseTestHttpFSWith.testOperationDoAs:1391->BaseTestHttpFSWith.access$100:115->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2014
 » ClassCast
   [ERROR]   Run 3: 
TestHttpFSFileSystemLocalFileSystem>BaseTestHttpFSWith.testOperationDoAs:1391->BaseTestHttpFSWith.access$100:115->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2014
 » ClassCast
   [INFO] 
   [ERROR] 
org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem.testOperation[43](org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem)
   [ERROR]   Run 1: 
TestHttpFSFileSystemLocalFileSystem>BaseTestHttpFSWith.testOperation:1380->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2014
 » ClassCast
   [ERROR]   Run 2: 
TestHttpFSFileSystemLocalFileSystem>BaseTestHttpFSWith.testOperation:1380->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2014
 » ClassCast
   [ERROR]   Run 3: 
TestHttpFSFileSystemLocalFileSystem>BaseTestHttpFSWith.testOperation:1380->BaseTestHttpFSWith.operation:1351->BaseTestHttpFSWith.testGetFileBlockLocationsFallback:2014
 » ClassCast
   
   ```




Issue Time Tracking
---

Worklog Id: (was: 645327)
Time Spent: 2h  (was: 1h 50m)


[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-26 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=642313&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-642313 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 26/Aug/21 12:59
Start Date: 26/Aug/21 12:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#issuecomment-906385099


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:---:|--:|:---|:---:|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  16m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 53s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 9 new + 477 unchanged - 1 fixed = 486 total (was 
478)  |
   | +1 :green_heart: |  mvnsite  |   3m 45s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 30s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  17m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 27s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 326m 15s |  |  hadoop-hdfs in the patch 
passed.  |
   | -1 :x: |  unit  |  14m 54s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt)
 |  hadoop-hdfs-httpfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 550m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3322 |
   | 

[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-25 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=642123&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-642123 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 26/Aug/21 03:25
Start Date: 26/Aug/21 03:25
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r696260085



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
##
@@ -908,4 +911,51 @@ private static SnapshotStatus toSnapshotStatus(
 SnapshotStatus.getParentPath(fullPath)));
 return snapshotStatus;
   }
+
+  static BlockLocation[] toBlockLocationArray(Map json)
+  throws IOException {
+final Map rootmap =
+(Map) json.get(BlockLocation.class.getSimpleName() + "s");
+final List array =
+JsonUtilClient.getList(rootmap, BlockLocation.class.getSimpleName());
+Preconditions.checkNotNull(array);
+final BlockLocation[] locations = new BlockLocation[array.size()];
+int i = 0;
+for (Object object : array) {
+  final Map m = (Map) object;
+  locations[i++] = JsonUtilClient.toBlockLocation(m);
+}
+return locations;
+  }
+
+  /** Convert a Json map to BlockLocation. **/
+  static BlockLocation toBlockLocation(Map m) throws IOException {

Review comment:
   this is only used by toBlockLocationArray() and the test of that covers 
it too.
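For reference, the quoted toBlockLocationArray() expects a root "BlockLocations" object wrapping a "BlockLocation" array, matching how it derives both keys from `BlockLocation.class.getSimpleName()`. A minimal sketch of that nesting using plain maps; the field extraction is simplified to the offset only, whereas the real converter also reads length, hosts, names, topologyPaths, storageIds, storageTypes, and corrupt:

```java
import java.util.*;

// Hedged sketch of the JSON shape consumed by the quoted converter:
// { "BlockLocations": { "BlockLocation": [ {...}, ... ] } }
public class BlockLocationJsonSketch {
  // Extract the offset of each block entry (simplified field set).
  static long[] offsets(Map<String, Object> json) {
    Map<?, ?> root = (Map<?, ?>) json.get("BlockLocations");
    List<?> array = (List<?>) root.get("BlockLocation");
    long[] out = new long[array.size()];
    for (int i = 0; i < array.size(); i++) {
      Map<?, ?> m = (Map<?, ?>) array.get(i);
      out[i] = ((Number) m.get("offset")).longValue();
    }
    return out;
  }

  // Build a one-block sample document with the expected nesting.
  static Map<String, Object> sample() {
    Map<String, Object> block = new HashMap<>();
    block.put("offset", 0L);
    block.put("length", 1024L);
    Map<String, Object> root = new HashMap<>();
    root.put("BlockLocation", Arrays.asList(block));
    Map<String, Object> json = new HashMap<>();
    json.put("BlockLocations", root);
    return json;
  }

  public static void main(String[] args) {
    System.out.println(offsets(sample()).length);  // 1
  }
}
```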






Issue Time Tracking
---

Worklog Id: (was: 642123)
Time Spent: 1h 40m  (was: 1.5h)



[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-25 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=642108&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-642108 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 26/Aug/21 02:28
Start Date: 26/Aug/21 02:28
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r696240622



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
##
@@ -2002,4 +2003,38 @@ public void testContentType() throws Exception {
 () -> HttpFSUtils.jsonParse(conn));
 conn.disconnect();
   }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testGetFileBlockLocations() throws Exception {
+createHttpFSServer(false, false);
+// Create a test directory
+String pathStr = "/tmp/tmp-snap-diff-test";
+createDirWithHttp(pathStr, "700", null);
+
+Path path = new Path(pathStr);
+DistributedFileSystem dfs = (DistributedFileSystem) FileSystem
+.get(path.toUri(), TestHdfsHelper.getHdfsConf());
+// Enable snapshot
+dfs.allowSnapshot(path);
+Assert.assertTrue(dfs.getFileStatus(path).isSnapshotEnabled());
+// Create a file and take a snapshot
+String file1 = pathStr + "/file1";
+createWithHttp(file1, null);
+HttpURLConnection conn = sendRequestToHttpFSServer(file1,
+"GETFILEBLOCKLOCATIONS", "length=10");
+Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+BlockLocation[] locations1 =
+dfs.getFileBlockLocations(new Path(file1), 0, 1);
+Assert.assertNotNull(locations1);

Review comment:
   it makes no sense to check nullity of locations1 using hdfs. The code 
doesn't change the file, so why check it?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
##
@@ -2002,4 +2003,38 @@ public void testContentType() throws Exception {
 () -> HttpFSUtils.jsonParse(conn));
 conn.disconnect();
   }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testGetFileBlockLocations() throws Exception {
+createHttpFSServer(false, false);
+// Create a test directory
+String pathStr = "/tmp/tmp-snap-diff-test";
+createDirWithHttp(pathStr, "700", null);
+
+Path path = new Path(pathStr);
+DistributedFileSystem dfs = (DistributedFileSystem) FileSystem
+.get(path.toUri(), TestHdfsHelper.getHdfsConf());
+// Enable snapshot
+dfs.allowSnapshot(path);
+Assert.assertTrue(dfs.getFileStatus(path).isSnapshotEnabled());
+// Create a file and take a snapshot
+String file1 = pathStr + "/file1";
+createWithHttp(file1, null);
+HttpURLConnection conn = sendRequestToHttpFSServer(file1,
+"GETFILEBLOCKLOCATIONS", "length=10");
+Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+BlockLocation[] locations1 =
+dfs.getFileBlockLocations(new Path(file1), 0, 1);
+Assert.assertNotNull(locations1);
+
+HttpURLConnection conn1 = sendRequestToHttpFSServer(file1,
+"GET_BLOCK_LOCATIONS", "length=10");
+Assert.assertEquals(HttpURLConnection.HTTP_OK, conn1.getResponseCode());
+BlockLocation[] locations2 =
+dfs.getFileBlockLocations(new Path(file1), 0, 1);
+Assert.assertNotNull(locations2);

Review comment:
   here, too.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
##
@@ -2002,4 +2003,38 @@ public void testContentType() throws Exception {
 () -> HttpFSUtils.jsonParse(conn));
 conn.disconnect();
   }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testGetFileBlockLocations() throws Exception {
+createHttpFSServer(false, false);
+// Create a test directory
+String pathStr = "/tmp/tmp-snap-diff-test";
+createDirWithHttp(pathStr, "700", null);
+
+Path path = new Path(pathStr);
+DistributedFileSystem dfs = (DistributedFileSystem) FileSystem
+.get(path.toUri(), TestHdfsHelper.getHdfsConf());
+// Enable snapshot

Review comment:
   no i don't understand this. it doesn't look like snapshot is used at all.






Issue Time Tracking
---

Worklog Id: (was: 642108)
Time Spent: 1.5h  (was: 1h 20m)


[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-25 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641607&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641607 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 25/Aug/21 10:40
Start Date: 25/Aug/21 10:40
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r695375296



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
##
@@ -908,4 +911,51 @@ private static SnapshotStatus toSnapshotStatus(
 SnapshotStatus.getParentPath(fullPath)));
 return snapshotStatus;
   }
+
+  static BlockLocation[] toBlockLocationArray(Map json)

Review comment:
   TODO: need test

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
##
@@ -908,4 +911,51 @@ private static SnapshotStatus toSnapshotStatus(
 SnapshotStatus.getParentPath(fullPath)));
 return snapshotStatus;
   }
+
+  static BlockLocation[] toBlockLocationArray(Map json)
+  throws IOException {
+final Map rootmap =
+(Map) json.get(BlockLocation.class.getSimpleName() + "s");
+final List array =
+JsonUtilClient.getList(rootmap, BlockLocation.class.getSimpleName());
+Preconditions.checkNotNull(array);
+final BlockLocation[] locations = new BlockLocation[array.size()];
+int i = 0;
+for (Object object : array) {
+  final Map m = (Map) object;
+  locations[i++] = JsonUtilClient.toBlockLocation(m);
+}
+return locations;
+  }
+
+  /** Convert a Json map to BlockLocation. **/
+  static BlockLocation toBlockLocation(Map m) throws IOException {
+if (m == null) {
+  return null;
+}
+long length = ((Number) m.get("length")).longValue();
+long offset = ((Number) m.get("offset")).longValue();
+boolean corrupt = Boolean.getBoolean(m.get("corrupt").toString());
+String[] storageIds = toStringArray(getList(m, "storageIds"));
+String[] cachedHosts = toStringArray(getList(m, "cachedHosts"));
+String[] hosts = toStringArray(getList(m, "hosts"));
+String[] names = toStringArray(getList(m, "names"));
+String[] topologyPaths = toStringArray(getList(m, "topologyPaths"));
+StorageType[] storageTypes = toStorageTypeArray(getList(m, 
"storageTypes"));
+return new BlockLocation(names, hosts, cachedHosts, topologyPaths,
+storageIds, storageTypes, offset, length, corrupt);
+  }
+
+  static String[] toStringArray(List list) {

Review comment:
   VisibleForTesting
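As a side note on the quoted conversion of the "corrupt" field: in standard Java, Boolean.getBoolean(String) reads a JVM *system property* named by its argument, while Boolean.parseBoolean(String) parses the string itself. A minimal illustration of the difference (this is a general java.lang fact, not a claim about what the patch should do):

```java
// Hedged sketch: Boolean.getBoolean looks up a system property by name;
// Boolean.parseBoolean interprets the string's own content.
public class BooleanParseSketch {
  public static void main(String[] args) {
    // Assuming no system property literally named "true" is set,
    // getBoolean("true") is false, while parseBoolean("true") is true.
    System.out.println(Boolean.getBoolean("true"));
    System.out.println(Boolean.parseBoolean("true"));
  }
}
```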

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
##
@@ -908,4 +911,51 @@ private static SnapshotStatus toSnapshotStatus(
 SnapshotStatus.getParentPath(fullPath)));
 return snapshotStatus;
   }
+
+  static BlockLocation[] toBlockLocationArray(Map json)
+  throws IOException {
+final Map rootmap =
+(Map) json.get(BlockLocation.class.getSimpleName() + "s");
+final List array =
+JsonUtilClient.getList(rootmap, BlockLocation.class.getSimpleName());
+Preconditions.checkNotNull(array);
+final BlockLocation[] locations = new BlockLocation[array.size()];
+int i = 0;
+for (Object object : array) {
+  final Map m = (Map) object;
+  locations[i++] = JsonUtilClient.toBlockLocation(m);
+}
+return locations;
+  }
+
+  /** Convert a Json map to BlockLocation. **/
+  static BlockLocation toBlockLocation(Map m) throws IOException {

Review comment:
   need test.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
##
@@ -2002,4 +2003,38 @@ public void testContentType() throws Exception {
 () -> HttpFSUtils.jsonParse(conn));
 conn.disconnect();
   }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testGetFileBlockLocations() throws Exception {
+createHttpFSServer(false, false);
+// Create a test directory
+String pathStr = "/tmp/tmp-snap-diff-test";
+createDirWithHttp(pathStr, "700", null);
+
+Path path = new Path(pathStr);
+DistributedFileSystem dfs = (DistributedFileSystem) FileSystem
+.get(path.toUri(), TestHdfsHelper.getHdfsConf());
+// Enable snapshot
+dfs.allowSnapshot(path);
+Assert.assertTrue(dfs.getFileStatus(path).isSnapshotEnabled());
+// Create a file and take a snapshot
+String file1 = pathStr + "/file1";
+createWithHttp(file1, null);
+HttpURLConnection conn = sendRequestToHttpFSServer(file1,
+"GETFILEBLOCKLOCATIONS", "length=10");
+Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+

[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-24 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641496&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641496 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 25/Aug/21 03:59
Start Date: 25/Aug/21 03:59
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r695373362



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
##
@@ -3889,4 +3891,27 @@ public MultipartUploaderBuilder 
createMultipartUploader(final Path basePath)
   throws IOException {
 return new FileSystemMultipartUploaderBuilder(this, basePath);
   }
+
+  public LocatedBlocks getLocatedBlocks(Path p, long start, long len)
+  throws IOException {
+statistics.incrementReadOps(1);
+storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
+final Path absF = fixRelativePart(p);
+return new FileSystemLinkResolver() {
+  @Override
+  public LocatedBlocks doCall(final Path p) throws IOException {
+return dfs.getLocatedBlocks(getPathName(p), start, len);
+  }
+  @Override
+  public LocatedBlocks next(final FileSystem fs, final Path p)
+  throws IOException {
+if (fs instanceof DistributedFileSystem) {
+  DistributedFileSystem myDfs = (DistributedFileSystem)fs;
+  return myDfs.getLocatedBlocks(p, start, len);
+}
+throw new UnsupportedOperationException("Cannot recoverLease through" +

Review comment:
   TODO: update the exception message. It was modified based on 
recoverLease()
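The pattern being reviewed retries the operation on the resolved FileSystem only when it is also a DistributedFileSystem, and otherwise throws UnsupportedOperationException. A minimal, self-contained sketch of that fallback shape with hypothetical class names, using a message that names the actual operation (the reviewer's point being that the quoted message was copied from recoverLease()):

```java
// Hedged sketch of the instanceof-check fallback used in the reviewed
// next() override. Fs/DistFs/OtherFs are illustrative stand-ins, not
// Hadoop classes.
interface Fs {}

class DistFs implements Fs {
  String getLocatedBlocks(String path) {
    return "blocks:" + path;  // stand-in for the real RPC
  }
}

class OtherFs implements Fs {}

public class ResolverSketch {
  static String next(Fs fs, String path) {
    if (fs instanceof DistFs) {
      return ((DistFs) fs).getLocatedBlocks(path);
    }
    // Message names the operation actually being resolved.
    throw new UnsupportedOperationException(
        "Cannot getLocatedBlocks through a symlink to a"
            + " non-DistributedFileSystem: " + path);
  }

  public static void main(String[] args) {
    System.out.println(next(new DistFs(), "/a"));  // blocks:/a
    try {
      next(new OtherFs(), "/b");
    } catch (UnsupportedOperationException e) {
      System.out.println("unsupported");
    }
  }
}
```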






Issue Time Tracking
---

Worklog Id: (was: 641496)
Time Spent: 1h 10m  (was: 1h)




[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-24 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641480&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641480 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 25/Aug/21 03:19
Start Date: 25/Aug/21 03:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#issuecomment-905150078


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:---:|--:|:---|:---:|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  28m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  23m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  branch/hadoop-project no spotbugs output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  18m 38s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m 49s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  27m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 31s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  24m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   4m 41s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/2/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 8 new + 477 unchanged - 1 fixed = 485 total (was 478)  |
   | +1 :green_heart: |  mvnsite  |   4m 19s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   3m 16s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 32s |  |  the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 30s |  |  hadoop-project has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  19m 41s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 27s |  |  hadoop-project in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 50s |  |  hadoop-hdfs-client in the patch passed.  |
   | +1 :green_heart: |  unit  | 321m  3s |  |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  13m 36s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt) |  hadoop-hdfs-httpfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 579m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3322 |
   | 

[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641148
 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 24/Aug/21 15:27
Start Date: 24/Aug/21 15:27
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r694962055



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
##
@@ -492,6 +509,27 @@ public InputStream run() throws Exception {
   response = Response.ok(js).type(MediaType.APPLICATION_JSON).build();
   break;
 }
+case GET_BLOCK_LOCATIONS: {
+  long offset = 0;
+  long len = Long.MAX_VALUE;
+  Long offsetParam = params.get(OffsetParam.NAME, OffsetParam.class);
+  Long lenParam = params.get(LenParam.NAME, LenParam.class);
+  AUDIT_LOG.info("[{}] offset [{}] len [{}]",
+  new Object[] { path, offsetParam, lenParam });
+  if (offsetParam != null && offsetParam.longValue() > 0) {
+offset = offsetParam.longValue();
+  }
+  if (lenParam != null && lenParam.longValue() > 0) {
+len = lenParam.longValue();
+  }
+  FSOperations.FSFileBlockLocations command =

Review comment:
   Actually this looks wrong. HttpFS's GET_BLOCK_LOCATIONS should behave just like webhdfs's GET_BLOCK_LOCATIONS, which returns serialized LocatedBlocks rather than BlockLocation[].
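As a hedged illustration of the distinction the reviewer draws: WebHDFS-style JSON responses are keyed by the serialized type, so a client can tell the two shapes apart by the top-level key. The field names here (`LocatedBlocks`, `BlockLocations`) follow the WebHDFS convention but are assumptions for this sketch, not verified against the HttpFS server.

```java
// Hedged sketch: distinguishing the two block-location response shapes by
// their top-level JSON key. Key names are assumed from the WebHDFS
// convention, not taken from the actual HttpFS implementation.
class BlockLocationResponses {
    static String classify(String jsonBody) {
        if (jsonBody.contains("\"LocatedBlocks\"")) {
            // private webhdfs-style op: serialized LocatedBlocks
            return "GET_BLOCK_LOCATIONS";
        }
        if (jsonBody.contains("\"BlockLocations\"")) {
            // HCFS-compatible op: BlockLocation[] array
            return "GETFILEBLOCKLOCATIONS";
        }
        return "unknown";
    }
}
```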






Issue Time Tracking
---

Worklog Id: (was: 641148)
Time Spent: 50m  (was: 40m)

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: BB2015-05-TBR, pull-request-available
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, 
> HDFS-6874.10.patch, HDFS-6874.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already 
> supported in WebHDFS.  For the request of GETFILEBLOCKLOCATIONS in 
> org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641017&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641017
 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 24/Aug/21 09:35
Start Date: 24/Aug/21 09:35
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r694685317



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
##
@@ -1948,4 +1952,30 @@ public void testStoragePolicySatisfier() throws Exception {
   dfs.delete(path1, true);
 }
   }
+
+  private void testGetFileBlockLocations() throws Exception {
+BlockLocation[] locations1, locations2 = null;
+Path testFile = null;
+if (!this.isLocalFS()) {
+  FileSystem fs = this.getHttpFSFileSystem();
+  testFile = new Path(getProxiedFSTestDir(), "singleBlock.txt");
+  DFSTestUtil.createFile(fs, testFile, (long) 1, (short) 1, 0L);
+  if (fs instanceof HttpFSFileSystem) {
+HttpFSFileSystem httpFS = (HttpFSFileSystem) fs;
+locations1 = httpFS.getFileBlockLocations(testFile, 0, 1);
+Assert.assertNotNull(locations1);
+
+// TODO: add test for HttpFSFileSystem.toBlockLocations()

Review comment:
   This is my bad. I thought I added the test. Will update in the next revision.






Issue Time Tracking
---

Worklog Id: (was: 641017)
Time Spent: 40m  (was: 0.5h)

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: BB2015-05-TBR, pull-request-available
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, 
> HDFS-6874.10.patch, HDFS-6874.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already 
> supported in WebHDFS.  For the request of GETFILEBLOCKLOCATIONS in 
> org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
>  






[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=640761&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-640761
 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 23/Aug/21 16:31
Start Date: 23/Aug/21 16:31
Worklog Time Spent: 10m 
  Work Description: amahussein commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r694125842



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
##
@@ -1857,18 +1859,57 @@ public synchronized void cancelDelegationToken(final Token token
   }
 
   @Override
-  public BlockLocation[] getFileBlockLocations(final Path p,
-  final long offset, final long length) throws IOException {
+  public BlockLocation[] getFileBlockLocations(final Path p, final long offset,
+  final long length) throws IOException {
 statistics.incrementReadOps(1);
 storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
+BlockLocation[] locations = null;
+try {
+  if (isServerHCFSCompatible) {
+locations =
+getFileBlockLocations(GetOpParam.Op.GETFILEBLOCKLOCATIONS, p, 
offset, length);
+  } else {
+locations = getFileBlockLocations(GetOpParam.Op.GET_BLOCK_LOCATIONS, p,
+offset, length);
+  }
+} catch (RemoteException e) {
+  if (isGetFileBlockLocationsException(e)) {

Review comment:
   ```suggestion
  // parsing the exception is needed only if the client thinks the service is compatible
  if (isServerHCFSCompatible && isGetFileBlockLocationsException(e)) {
   ```

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
##
@@ -2002,4 +2003,38 @@ public void testContentType() throws Exception {
 () -> HttpFSUtils.jsonParse(conn));
 conn.disconnect();
   }
+
+  @Test
+  @TestDir
+  @TestJetty
+  @TestHdfs
+  public void testGetFileBlockLocations() throws Exception {
+createHttpFSServer(false, false);
+// Create a test directory
+String pathStr = "/tmp/tmp-snap-diff-test";
+createDirWithHttp(pathStr, "700", null);
+
+Path path = new Path(pathStr);
+DistributedFileSystem dfs = (DistributedFileSystem) FileSystem
+.get(path.toUri(), TestHdfsHelper.getHdfsConf());
+// Enable snapshot
+dfs.allowSnapshot(path);
+Assert.assertTrue(dfs.getFileStatus(path).isSnapshotEnabled());
+// Create a file and take a snapshot
+String file1 = pathStr + "/file1";
+createWithHttp(file1, null);
+HttpURLConnection conn = sendRequestToHttpFSServer(file1,
+"GETFILEBLOCKLOCATIONS", "length=10");
+Assert.assertEquals(HttpURLConnection.HTTP_OK, conn.getResponseCode());
+BlockLocation[] locations1 =
+dfs.getFileBlockLocations(new Path(file1), 0, 1);
+Assert.assertNotNull(locations1);
+
+HttpURLConnection conn1 = sendRequestToHttpFSServer(file1,
+"GET_BLOCK_LOCATIONS", "length=10");
+Assert.assertEquals(HttpURLConnection.HTTP_OK, conn1.getResponseCode());
+BlockLocation[] locations2 =
+dfs.getFileBlockLocations(new Path(file1), 0, 1);
+Assert.assertNotNull(locations2);
+  }

Review comment:
   Falling back from `GETFILEBLOCKLOCATIONS` to `GET_BLOCK_LOCATIONS` and caching the boolean flag is not tested. Maybe we need another unit test that assumes the new operation is not supported and verifies the fallback to the old one.






Issue Time Tracking
---

Worklog Id: (was: 640761)
Time Spent: 0.5h  (was: 20m)

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: BB2015-05-TBR, pull-request-available
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, 
> HDFS-6874.10.patch, HDFS-6874.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> 

[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=640739&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-640739
 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 23/Aug/21 15:24
Start Date: 23/Aug/21 15:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#issuecomment-903872705


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 52s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 38s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  3s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 20s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt) |  hadoop-hdfs-httpfs in the patch failed.  |
   | -1 :x: |  compile  |   4m 17s | [/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  hadoop-hdfs-project in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   4m 17s | [/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) |  hadoop-hdfs-project in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   4m  4s | [/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) |  hadoop-hdfs-project in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   4m  4s | [/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) |  hadoop-hdfs-project in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/1/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) |  hadoop-hdfs-project: The patch generated 4 new + 462 unchanged - 1 fixed = 466 total (was 463)  |
   | -1 :x: |  mvnsite  |   0m 22s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt) |  hadoop-hdfs-httpfs in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 

[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2021-08-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=640645&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-640645
 ]

ASF GitHub Bot logged work on HDFS-6874:


Author: ASF GitHub Bot
Created on: 23/Aug/21 09:33
Start Date: 23/Aug/21 09:33
Worklog Time Spent: 10m 
  Work Description: jojochuang opened a new pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322


   ### Description of PR
   This is a rebase of the patch file 11 attached to HDFS-6874.
   
   The GETFILEBLOCKLOCATIONS op is HCFS compatible. Adding support for it to HttpFS makes it possible for more applications to run directly against the HttpFS server.
   
   Add GETFILEBLOCKLOCATIONS op support for the httpfs server (HttpFSServer), and the same for the httpfs client (HttpFSFileSystem).
   Let the webhdfs client (WebHdfsFileSystem) try the new GETFILEBLOCKLOCATIONS op if the server supports it; otherwise, fall back to the old GET_BLOCK_LOCATIONS op. The selection is cached, so subsequent invocations don't need to trial and error again.
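   The try-the-new-op, fall-back, and cache-the-outcome behavior described above can be sketched as follows. This is a minimal illustration with made-up names (`FallbackClient`, `request`, and the boolean server stand-in are not Hadoop APIs); the real WebHdfsFileSystem logic keys off a RemoteException instead.

```java
// Minimal sketch of the "try new op, fall back to old op, cache the result"
// pattern. All names here are illustrative, not actual Hadoop APIs.
class FallbackClient {
    // Cached after the first failed trial so later calls go straight to the
    // old op, mirroring how the client remembers HCFS compatibility.
    private boolean serverSupportsNewOp = true;
    private final boolean serverReallySupportsNewOp; // stand-in for the remote server

    FallbackClient(boolean serverReallySupportsNewOp) {
        this.serverReallySupportsNewOp = serverReallySupportsNewOp;
    }

    String getFileBlockLocations(String path) {
        if (serverSupportsNewOp) {
            try {
                return request("GETFILEBLOCKLOCATIONS", path);
            } catch (UnsupportedOperationException e) {
                serverSupportsNewOp = false; // cache the outcome of the trial
            }
        }
        return request("GET_BLOCK_LOCATIONS", path); // old op is always available
    }

    // Stand-in for the HTTP round trip; an old server rejects the new op.
    private String request(String op, String path) {
        if (op.equals("GETFILEBLOCKLOCATIONS") && !serverReallySupportsNewOp) {
            throw new UnsupportedOperationException(op);
        }
        return op + ":" + path;
    }
}
```

   With this shape, only the first call against an old server pays the extra round trip; every later call uses the cached flag.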
   
   ### How was this patch tested?
   Unit tests.
   
   
   




Issue Time Tracking
---

Worklog Id: (was: 640645)
Remaining Estimate: 0h
Time Spent: 10m

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, 
> HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, 
> HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, 
> HDFS-6874.10.patch, HDFS-6874.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already 
> supported in WebHDFS.  For the request of GETFILEBLOCKLOCATIONS in 
> org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:
> ...
>  case GETFILEBLOCKLOCATIONS: {
> response = Response.status(Response.Status.BAD_REQUEST).build();
> break;
>   }
>  


