[jira] [Updated] (HDFS-16185) Fix comment in LowRedundancyBlocks.java
[ https://issues.apache.org/jira/browse/HDFS-16185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-16185:
---------------------------------
    Labels: newbie  (was: )

> Fix comment in LowRedundancyBlocks.java
> ---------------------------------------
>
>                 Key: HDFS-16185
>                 URL: https://issues.apache.org/jira/browse/HDFS-16185
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: documentation
>            Reporter: Akira Ajisaka
>            Priority: Minor
>              Labels: newbie
>
> [https://github.com/apache/hadoop/blob/c8e58648389c7b0b476c3d0d47be86af2966842f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LowRedundancyBlocks.java#L249]
> "can only afford one replica loss" is not correct there. Before HDFS-9857,
> the comment was "there is less than a third as many blocks as requested; this
> is considered very under-replicated", which seems correct.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
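The condition the restored comment describes can be sketched as follows. This is a minimal illustrative snippet, not the Hadoop source; the class and method names here are invented for the example:

```java
// Sketch of the check behind "less than a third as many blocks as requested":
// a block is "very under-replicated" when fewer than a third of the expected
// replicas exist. Illustrative only; the real logic lives in
// LowRedundancyBlocks#getPriority.
public class ReplicationPrioritySketch {

    /** True when curReplicas is less than a third of expectedReplicas. */
    static boolean isVeryUnderReplicated(int curReplicas, int expectedReplicas) {
        // Integer arithmetic avoids floating point: cur < expected / 3
        return curReplicas * 3 < expectedReplicas;
    }

    public static void main(String[] args) {
        // Replication factor 10: 3 live replicas (3 * 3 = 9 < 10) qualifies,
        // 4 live replicas (4 * 3 = 12 >= 10) does not.
        System.out.println(isVeryUnderReplicated(3, 10)); // true
        System.out.println(isVeryUnderReplicated(4, 10)); // false
    }
}
```

With replication factor 3, this means a block with a single live replica qualifies, which is why "can only afford one replica loss" mischaracterizes the condition.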
[jira] [Created] (HDFS-16185) Fix comment in LowRedundancyBlocks.java
Akira Ajisaka created HDFS-16185:
------------------------------------

             Summary: Fix comment in LowRedundancyBlocks.java
                 Key: HDFS-16185
                 URL: https://issues.apache.org/jira/browse/HDFS-16185
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: documentation
            Reporter: Akira Ajisaka

[https://github.com/apache/hadoop/blob/c8e58648389c7b0b476c3d0d47be86af2966842f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LowRedundancyBlocks.java#L249]

"can only afford one replica loss" is not correct there. Before HDFS-9857, the comment was "there is less than a third as many blocks as requested; this is considered very under-replicated", which seems correct.
[jira] [Work logged] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?focusedWorklogId=641505&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641505 ]

ASF GitHub Bot logged work on HDFS-16184:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 25/Aug/21 04:39
            Start Date: 25/Aug/21 04:39
    Worklog Time Spent: 10m
      Work Description: virajjasani commented on pull request #3329:
URL: https://github.com/apache/hadoop/pull/3329#issuecomment-905177085

   Done @ayushtkn. Thanks

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
    Worklog Id: (was: 641505)
    Time Spent: 1h 10m  (was: 1h)

> De-flake TestBlockScanner#testSkipRecentAccessFile
> --------------------------------------------------
>
>                 Key: HDFS-16184
>                 URL: https://issues.apache.org/jira/browse/HDFS-16184
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Test TestBlockScanner#testSkipRecentAccessFile is flaky:
>
> {code:java}
> [ERROR] testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)  Time elapsed: 3.936 s  <<< FAILURE!
> java.lang.AssertionError: Scan nothing for all files are accessed in last period.
> 	at org.junit.Assert.fail(Assert.java:89)
> 	at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> {code}
> e.g. https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS
[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641496&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641496 ]

ASF GitHub Bot logged work on HDFS-6874:
----------------------------------------
                Author: ASF GitHub Bot
            Created on: 25/Aug/21 03:59
            Start Date: 25/Aug/21 03:59
    Worklog Time Spent: 10m
      Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r695373362

##
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
##
@@ -3889,4 +3891,27 @@ public MultipartUploaderBuilder createMultipartUploader(final Path basePath)
       throws IOException {
     return new FileSystemMultipartUploaderBuilder(this, basePath);
   }
+
+  public LocatedBlocks getLocatedBlocks(Path p, long start, long len)
+      throws IOException {
+    statistics.incrementReadOps(1);
+    storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
+    final Path absF = fixRelativePart(p);
+    return new FileSystemLinkResolver<LocatedBlocks>() {
+      @Override
+      public LocatedBlocks doCall(final Path p) throws IOException {
+        return dfs.getLocatedBlocks(getPathName(p), start, len);
+      }
+      @Override
+      public LocatedBlocks next(final FileSystem fs, final Path p)
+          throws IOException {
+        if (fs instanceof DistributedFileSystem) {
+          DistributedFileSystem myDfs = (DistributedFileSystem) fs;
+          return myDfs.getLocatedBlocks(p, start, len);
+        }
+        throw new UnsupportedOperationException("Cannot recoverLease through" +

Review comment:
   TODO: update the exception message. It was modified based on recoverLease().

Issue Time Tracking
-------------------
    Worklog Id: (was: 641496)
    Time Spent: 1h 10m  (was: 1h)

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> ---------------------------------------------
>
>                 Key: HDFS-6874
>                 URL: https://issues.apache.org/jira/browse/HDFS-6874
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: httpfs
>    Affects Versions: 2.4.1, 2.7.3
>            Reporter: Gao Zhong Liang
>            Assignee: Weiwei Yang
>            Priority: Major
>              Labels: BB2015-05-TBR, pull-request-available
>         Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, HDFS-6874.10.patch, HDFS-6874.patch
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The GETFILEBLOCKLOCATIONS operation is missing in HttpFS, although it is already supported in WebHDFS. For a GETFILEBLOCKLOCATIONS request, org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> ...
> case GETFILEBLOCKLOCATIONS: {
>   response = Response.status(Response.Status.BAD_REQUEST).build();
>   break;
> }
[jira] [Work logged] (HDFS-16182) numOfReplicas is given the wrong value in BlockPlacementPolicyDefault$chooseTarget can cause DataStreamer to fail with Heterogeneous Storage
[ https://issues.apache.org/jira/browse/HDFS-16182?focusedWorklogId=641488&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641488 ]

ASF GitHub Bot logged work on HDFS-16182:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 25/Aug/21 03:39
            Start Date: 25/Aug/21 03:39
    Worklog Time Spent: 10m
      Work Description: jojochuang commented on a change in pull request #3320:
URL: https://github.com/apache/hadoop/pull/3320#discussion_r695335223

##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
##
@@ -469,7 +469,7 @@ private Node chooseTarget(int numOfReplicas,
     LOG.trace("storageTypes={}", storageTypes);

     try {
-      if ((numOfReplicas = requiredStorageTypes.size()) == 0) {

Review comment:
   Better to declare numOfReplicas a final variable at line 438.

##
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
##
@@ -1337,6 +1337,51 @@ public void testChooseSsdOverDisk() throws Exception {
     Assert.assertEquals(StorageType.DISK, targets[1].getStorageType());
   }

+  @Test
+  public void testAddDatanode2ExistingPipelineInSsd() throws Exception {
+    BlockStoragePolicy policy = POLICY_SUITE.getPolicy(ALLSSD);
+
+    final String[] racks = {"/d1/r1", "/d2/r2", "/d3/r3", "/d4/r4", "/d5/r5",
+        "/d6/r6", "/d7/r7"};
+    final String[] hosts = {"host1", "host2", "host3", "host4", "host5",
+        "host6", "host7"};
+    final StorageType[] disks = {StorageType.DISK, StorageType.DISK, StorageType.DISK};
+
+    final DatanodeStorageInfo[] diskStorages
+        = DFSTestUtil.createDatanodeStorageInfos(7, racks, hosts, disks);
+    final DatanodeDescriptor[] dataNodes
+        = DFSTestUtil.toDatanodeDescriptor(diskStorages);
+    for (int i = 0; i < dataNodes.length; i++) {
+      BlockManagerTestUtil.updateStorage(dataNodes[i],
+          new DatanodeStorage("ssd" + i + 1, DatanodeStorage.State.NORMAL,
+              StorageType.SSD));
+    }
+
+    FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
+    conf.set(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, "0.0.0.0:0");
+    File baseDir = PathUtils.getTestDir(TestReplicationPolicy.class);
+    conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
+        new File(baseDir, "name").getPath());
+    DFSTestUtil.formatNameNode(conf);
+    NameNode namenode = new NameNode(conf);
+
+    final BlockManager bm = namenode.getNamesystem().getBlockManager();
+    BlockPlacementPolicy replicator = bm.getBlockPlacementPolicy();
+    NetworkTopology cluster = bm.getDatanodeManager().getNetworkTopology();
+    for (DatanodeDescriptor datanode : dataNodes) {
+      cluster.add(datanode);
+    }
+    // chsenDs are DISK StorageType to simulate not enough SSD storage
+    List<DatanodeStorageInfo> chsenDs = new ArrayList<>();
+    chsenDs.add(diskStorages[0]);
+    chsenDs.add(diskStorages[1]);
+    DatanodeStorageInfo[] targets = replicator.chooseTarget("/foo", 1,
+        null, chsenDs, true,
+        new HashSet<Node>(), 0, policy, null);
+    System.out.println(policy.getName() + ": " + Arrays.asList(targets));

Review comment:
   Please use log4j to log the message.

Issue Time Tracking
-------------------
    Worklog Id: (was: 641488)
    Time Spent: 1h  (was: 50m)

> numOfReplicas is given the wrong value in BlockPlacementPolicyDefault$chooseTarget can cause DataStreamer to fail with Heterogeneous Storage
> --------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16182
>                 URL: https://issues.apache.org/jira/browse/HDFS-16182
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.4.0
>            Reporter: Max Xie
>            Assignee: Max Xie
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDFS-16182.patch
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> In our HDFS cluster, we use heterogeneous storage to store data in SSD for better performance. Sometimes when the HDFS client transfers data in a pipeline, it throws an IOException and exits. Example logs:
> ```
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes:
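The bug pattern flagged in the review above can be sketched in isolation. This is an illustrative model with simplified names, not the actual BlockPlacementPolicyDefault code: assigning to the parameter inside the condition silently overwrites the caller-supplied replica count.

```java
// Sketch of the flagged pattern: the embedded assignment clobbers the
// caller's numOfReplicas; keeping the parameter final avoids it.
public class NumOfReplicasSketch {

    // Buggy shape: the parameter is replaced by the size value.
    static int chooseBuggy(int numOfReplicas, int requiredStorageTypesSize) {
        if ((numOfReplicas = requiredStorageTypesSize) == 0) {
            return 0; // nothing left to place
        }
        return numOfReplicas; // caller's original value is lost here
    }

    // Shape suggested by the review: keep the parameter final and test the
    // size directly, so the original replica count survives.
    static int chooseFixed(final int numOfReplicas, int requiredStorageTypesSize) {
        if (requiredStorageTypesSize == 0) {
            return 0;
        }
        return numOfReplicas;
    }

    public static void main(String[] args) {
        System.out.println(chooseBuggy(3, 1)); // 1, not the requested 3
        System.out.println(chooseFixed(3, 1)); // 3
    }
}
```

With the buggy shape, a request for 3 replicas where only 1 storage type is still required proceeds as if only 1 replica were wanted, which matches the pipeline-recovery failure described in the issue.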
[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS
[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641480&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641480 ]

ASF GitHub Bot logged work on HDFS-6874:
----------------------------------------
                Author: ASF GitHub Bot
            Created on: 25/Aug/21 03:19
            Start Date: 25/Aug/21 03:19
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#issuecomment-905150078

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 6s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 16s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 34s | | trunk passed |
| +1 :green_heart: | compile | 28m 33s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 23m 23s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 4m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 4m 37s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 4m 12s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 34s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 18m 38s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 11s | | the patch passed |
| +1 :green_heart: | compile | 27m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 27m 49s | | the patch passed |
| +1 :green_heart: | compile | 24m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 24m 31s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 41s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 8 new + 477 unchanged - 1 fixed = 485 total (was 478) |
| +1 :green_heart: | mvnsite | 4m 19s | | the patch passed |
| +1 :green_heart: | xml | 0m 3s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 3m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 30s | | hadoop-project has no data from spotbugs |
| +1 :green_heart: | shadedclient | 19m 41s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 27s | | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 2m 50s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 321m 3s | | hadoop-hdfs in the patch passed. |
| -1 :x: | unit | 13m 36s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt) | hadoop-hdfs-httpfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. |
| | | 579m 13s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
| | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3322/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3322 |
[jira] [Commented] (HDFS-9256) Erasure Coding: Improve failure handling of ECWorker striped block reconstruction
[ https://issues.apache.org/jira/browse/HDFS-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17404129#comment-17404129 ]

ayu wulandari commented on HDFS-9256:
-------------------------------------

thank you very much, the [information|http://namaanakbayi.com] is very useful

> Erasure Coding: Improve failure handling of ECWorker striped block reconstruction
> ---------------------------------------------------------------------------------
>
>                 Key: HDFS-9256
>                 URL: https://issues.apache.org/jira/browse/HDFS-9256
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: erasure-coding
>            Reporter: Rakesh Radhakrishnan
>            Assignee: Rakesh Radhakrishnan
>            Priority: Major
>              Labels: hdfs-ec-3.0-nice-to-have
>
> As we know, reconstruction of a missed striped block is a costly operation. It involves the following steps:
> step-1) read the data from a minimum number of sources (remote reads)
> step-2) decode data for the targets (CPU cycles)
> step-3) transfer the data to the targets (remote writes)
> Assume there is a failure in step-3 because a target DN is disconnected or dead. Presently {{ECWorker}} skips the failed DN and continues transferring data to the other targets. In the next round, it starts the reconstruction operation again from the first step. Considering the cost of reconstruction, it would be good to give another chance to retry the failed operation. The idea of this jira is to discuss the possible approaches and implement one.
[jira] [Work logged] (HDFS-16183) RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
[ https://issues.apache.org/jira/browse/HDFS-16183?focusedWorklogId=641459&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641459 ]

ASF GitHub Bot logged work on HDFS-16183:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 25/Aug/21 02:24
            Start Date: 25/Aug/21 02:24
    Worklog Time Spent: 10m
      Work Description: wzhallright commented on a change in pull request #3328:
URL: https://github.com/apache/hadoop/pull/3328#discussion_r695342310

##
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
##
@@ -517,7 +517,6 @@ private Object invokeMethod(
       // Communication retries are handled by the retry policy
       if (this.rpcMonitor != null) {
         this.rpcMonitor.proxyOpFailureCommunicate();
-        this.rpcMonitor.proxyOpComplete(false);

Review comment:
   If the boolean is false, proxyOpComplete in FederationRPCPerformanceMonitor does nothing, so maybe it can be deleted?

Issue Time Tracking
-------------------
    Worklog Id: (was: 641459)
    Time Spent: 40m  (was: 0.5h)

> RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
> --------------------------------------------------------------------
>
>                 Key: HDFS-16183
>                 URL: https://issues.apache.org/jira/browse/HDFS-16183
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: wangzhaohui
>            Assignee: wangzhaohui
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> In RouterRpcClient, this.rpcMonitor.proxyOpComplete(false) does nothing, so maybe it doesn't need to exist.
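The rationale in the review can be sketched with a toy monitor. This is an illustrative class under the assumed behavior described in the discussion, not the real FederationRPCPerformanceMonitor:

```java
// Minimal sketch of why proxyOpComplete(false) is removable: if the monitor
// only records a completion on success, the false-argument call is a no-op.
public class ProxyMonitorSketch {
    private long completedOps = 0;

    void proxyOpComplete(boolean success) {
        if (success) {
            completedOps++; // only the success path updates any state
        }
        // success == false falls through without side effects
    }

    long getCompletedOps() {
        return completedOps;
    }

    public static void main(String[] args) {
        ProxyMonitorSketch monitor = new ProxyMonitorSketch();
        monitor.proxyOpComplete(false); // no-op, matching the deleted call
        monitor.proxyOpComplete(true);
        System.out.println(monitor.getCompletedOps()); // 1
    }
}
```

Under this assumption, deleting the `proxyOpComplete(false)` call on the failure path changes no observable metric, which is the PR's argument.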
[jira] [Work logged] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?focusedWorklogId=641458&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641458 ]

ASF GitHub Bot logged work on HDFS-16184:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 25/Aug/21 02:24
            Start Date: 25/Aug/21 02:24
    Worklog Time Spent: 10m
      Work Description: ayushtkn commented on pull request #3329:
URL: https://github.com/apache/hadoop/pull/3329#issuecomment-905121674

   Can you update the description with the reason for the failure and details about the fix?

Issue Time Tracking
-------------------
    Worklog Id: (was: 641458)
    Time Spent: 1h  (was: 50m)

> De-flake TestBlockScanner#testSkipRecentAccessFile
> --------------------------------------------------
>
>                 Key: HDFS-16184
>                 URL: https://issues.apache.org/jira/browse/HDFS-16184
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Test TestBlockScanner#testSkipRecentAccessFile is flaky:
>
> {code:java}
> [ERROR] testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)  Time elapsed: 3.936 s  <<< FAILURE!
> java.lang.AssertionError: Scan nothing for all files are accessed in last period.
> 	at org.junit.Assert.fail(Assert.java:89)
> 	at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> {code}
> e.g. https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[jira] [Commented] (HDFS-16128) [FGL] Add support for saving/loading an FS Image for PartitionedGSet
[ https://issues.apache.org/jira/browse/HDFS-16128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17404107#comment-17404107 ]

Renukaprasad C commented on HDFS-16128:
---------------------------------------

In org.apache.hadoop.hdfs.server.namenode.INodeMap#get(long), the call
{code:java}
pgs.get(inode);
{code}
should be able to get the inode from the partitions. But we changed this code to:
{code:java}
for (int p = 0; p < NUM_RANGES_STATIC; p++) {
  INodeDirectory key = new INodeDirectory(INodeId.ROOT_INODE_ID,
      "range key".getBytes(StandardCharsets.UTF_8), perm, 0);
  key.setParent(new INodeDirectory((long) p, null, perm, 0));
  PartitionedGSet.PartitionEntry e = pgs.getPartition(key);
  if (e.contains(inode)) {
    return (INode) e.get(inode);
  }
}
{code}
The new code fails to get the INode when new partitions are added dynamically. Can this part of the code be changed back to pgs.get(inode)? Was any issue found with that code?

> [FGL] Add support for saving/loading an FS Image for PartitionedGSet
> --------------------------------------------------------------------
>
>                 Key: HDFS-16128
>                 URL: https://issues.apache.org/jira/browse/HDFS-16128
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs, namenode
>            Reporter: Xing Lin
>            Assignee: Xing Lin
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: Fine-Grained Locking
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> Add support to save inodes stored in PartitionedGSet when saving an FS image and to load inodes into PartitionedGSet from a saved FS image.
> h1. Saving an FSImage
> *Original HDFS design*: iterate over every inode in inodeMap and save them into the FSImage file.
> *FGL*: no change is needed here, since PartitionedGSet also provides an iterator interface to iterate over the inodes stored in partitions.
> h1. Loading an HDFS namespace
> *Original HDFS design*: it first loads the FSImage files and then loads edit logs for recent changes. FSImage files contain different sections, including INodeSections and INodeDirectorySections. An INodeSection contains serialized inode objects and the INodeDirectorySection contains the parent inode for each inode. When loading an FSImage, the system first loads the INodeSections and then loads the INodeDirectorySections to set the parent inode for each inode. After the FSImage files are loaded, edit logs are then loaded. The edit log contains recent changes to the filesystem, including inode creation/deletion. For a newly created inode, the parent inode is set before it is added to the inodeMap.
> *FGL*: when adding an inode into the PartitionedGSet, we need the parent inode of an inode in order to determine which partition stores that inode when NAMESPACE_KEY_DEPTH = 2. Thus, in FGL, when loading FSImage files, we use a temporary LightweightGSet (inodeMapTemp) to store inodes. When loadFSImage is done, the parent inode for every existing inode in the FSImage files is set, and we can move the inodes into a PartitionedGSet. Loading edit logs can work as usual, since the parent inode for an inode is set before it is added to the inodeMap.
> In theory, PartitionedGSet can store inodes without their parent inodes set; all such inodes would be stored in the 0th partition. However, we decided to use a temporary LightweightGSet (inodeMapTemp) to store these inodes, to make this case more transparent.
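The failure mode raised in the comment above can be modeled with a toy partition store. All names here are hypothetical; this is not PartitionedGSet itself, just an illustration of why a lookup limited to a fixed number of static partitions misses entries placed in partitions added later, while a lookup over all partitions does not:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model: partitions is a growable list of maps; entries may land in
// partitions created after startup ("dynamic" partitions).
public class PartitionLookupSketch {
    static final int NUM_RANGES_STATIC = 2;
    final List<Map<Long, String>> partitions = new ArrayList<>();

    // Scans every partition, including dynamically added ones.
    String getAll(long id) {
        for (Map<Long, String> p : partitions) {
            String v = p.get(id);
            if (v != null) return v;
        }
        return null;
    }

    // Scans only the first NUM_RANGES_STATIC partitions, mirroring the
    // fixed-range loop that fails once new partitions appear.
    String getStaticOnly(long id) {
        int limit = Math.min(NUM_RANGES_STATIC, partitions.size());
        for (int i = 0; i < limit; i++) {
            String v = partitions.get(i).get(id);
            if (v != null) return v;
        }
        return null;
    }

    public static void main(String[] args) {
        PartitionLookupSketch s = new PartitionLookupSketch();
        for (int i = 0; i < 3; i++) s.partitions.add(new HashMap<>());
        s.partitions.get(2).put(42L, "inode-42"); // beyond the static range
        System.out.println(s.getAll(42L));        // found
        System.out.println(s.getStaticOnly(42L)); // null: partition not scanned
    }
}
```

This mirrors the commenter's point: a direct `pgs.get(inode)` that consults all partitions stays correct as partitions are added, whereas the `NUM_RANGES_STATIC` loop does not.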
[jira] [Work logged] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?focusedWorklogId=641404&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641404 ]

ASF GitHub Bot logged work on HDFS-16184:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 24/Aug/21 23:57
            Start Date: 24/Aug/21 23:57
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #3329:
URL: https://github.com/apache/hadoop/pull/3329#issuecomment-905051804

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 37m 36s | | trunk passed |
| +1 :green_heart: | compile | 1m 43s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 27s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 11s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 34s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 44s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 1s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 24s | | the patch passed |
| +1 :green_heart: | compile | 1m 29s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 29s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 59s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 18s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 15s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 29s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 233m 55s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 329m 44s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3329 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux a643b7af123d 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 32976fb61adbad825f421f06655fe16f9abe |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/3/testReport/ |
| Max. process+thread count | 3340 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Work logged] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?focusedWorklogId=641391&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641391 ]

ASF GitHub Bot logged work on HDFS-16184:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 24/Aug/21 23:45
Start Date: 24/Aug/21 23:45
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3329:
URL: https://github.com/apache/hadoop/pull/3329#issuecomment-905047783

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 47s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 30m 50s | | trunk passed |
| +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 27s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 9s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 12s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 12s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 7s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 51s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 14s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 4s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 57s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 233m 25s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | | 317m 34s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3329 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux c4ba9578d2a6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 32976fb61adbad825f421f06655fe16f9abe |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/2/testReport/ |
| Max. process+thread count | 3433 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Work logged] (HDFS-16183) RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
[ https://issues.apache.org/jira/browse/HDFS-16183?focusedWorklogId=641341&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641341 ]

ASF GitHub Bot logged work on HDFS-16183:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 24/Aug/21 21:20
Start Date: 24/Aug/21 21:20
Worklog Time Spent: 10m

Work Description: goiri commented on a change in pull request #3328:
URL: https://github.com/apache/hadoop/pull/3328#discussion_r695225668

## File path:
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java

##
@@ -517,7 +517,6 @@ private Object invokeMethod(
       // Communication retries are handled by the retry policy
       if (this.rpcMonitor != null) {
         this.rpcMonitor.proxyOpFailureCommunicate();
-        this.rpcMonitor.proxyOpComplete(false);

Review comment:
Why don't we need the metric increase anymore?

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------

Worklog Id: (was: 641341)
Time Spent: 0.5h (was: 20m)

> RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
> --------------------------------------------------------------------
>
> Key: HDFS-16183
> URL: https://issues.apache.org/jira/browse/HDFS-16183
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: wangzhaohui
> Assignee: wangzhaohui
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> In routerRpcClient, this.rpcMonitor.proxyOpComplete(false) does nothing, so
> maybe it doesn't need to exist.

--
This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
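The review question above can be sketched in isolation. The following is a minimal, hypothetical stand-in for the RouterRpcClient/RouterRpcMonitor pattern under discussion (it is not the Hadoop implementation; class and field names are invented): if proxyOpComplete(false) is a no-op on the failure path, deleting the call site leaves every counter unchanged.

```java
// Hypothetical sketch of the metric pattern discussed in HDFS-16183.
// Names mirror RouterRpcClient/RouterRpcMonitor but this is NOT Hadoop code.
public class ProxyOpMetricsSketch {

    // Minimal stand-in for the RPC monitor's counters.
    static class Monitor {
        long failures = 0;
        long completions = 0;

        void proxyOpFailureCommunicate() {
            failures++;
        }

        // Per the issue description, the failed==true path does nothing,
        // which is the rationale for deleting the call site.
        void proxyOpComplete(boolean success) {
            if (success) {
                completions++;
            }
            // failure case: intentionally a no-op
        }
    }

    // Failure path as in the quoted invokeMethod() snippet: only the
    // failure counter moves; proxyOpComplete(false) changes no state.
    static Monitor simulateFailure() {
        Monitor m = new Monitor();
        m.proxyOpFailureCommunicate();
        m.proxyOpComplete(false);
        return m;
    }

    public static void main(String[] args) {
        Monitor m = simulateFailure();
        if (m.failures != 1) throw new AssertionError("failures=" + m.failures);
        if (m.completions != 0) throw new AssertionError("completions=" + m.completions);
    }
}
```

Under this reading, removing the `proxyOpComplete(false)` line is behavior-preserving; the reviewer's question is whether the failure case was ever meant to feed a completion metric.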
[jira] [Work logged] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?focusedWorklogId=641263&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641263 ]

ASF GitHub Bot logged work on HDFS-16184:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 24/Aug/21 18:52
Start Date: 24/Aug/21 18:52
Worklog Time Spent: 10m

Work Description: virajjasani commented on pull request #3329:
URL: https://github.com/apache/hadoop/pull/3329#issuecomment-904890937

@ayushtkn @tasanuma could you please review this PR? Thanks

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------

Worklog Id: (was: 641263)
Time Spent: 0.5h (was: 20m)

> De-flake TestBlockScanner#testSkipRecentAccessFile
> --------------------------------------------------
>
> Key: HDFS-16184
> URL: https://issues.apache.org/jira/browse/HDFS-16184
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> Test TestBlockScanner#testSkipRecentAccessFile is flaky:
>
> {code:java}
> [ERROR] testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)  Time elapsed: 3.936 s  <<< FAILURE!
> [ERROR] testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)  Time elapsed: 3.936 s  <<< FAILURE!
> java.lang.AssertionError: Scan nothing for all files are accessed in last period.
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> {code}
> e.g
> [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt]
>

--
This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?focusedWorklogId=641260=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641260 ] ASF GitHub Bot logged work on HDFS-16184: - Author: ASF GitHub Bot Created on: 24/Aug/21 18:48 Start Date: 24/Aug/21 18:48 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3329: URL: https://github.com/apache/hadoop/pull/3329#issuecomment-904888487 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 55s | | trunk passed | | +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 9s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 14s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 11s | | the patch passed | | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 51s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 59s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 235m 35s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 319m 46s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3329 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 780bcd5f80f1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 32976fb61adbad825f421f06655fe16f9abe | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3329/1/testReport/ | | Max. process+thread count | 3286 (vs. ulimit of 5500) | | modules | C:
[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS
[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641148 ]

ASF GitHub Bot logged work on HDFS-6874:
----------------------------------------

Author: ASF GitHub Bot
Created on: 24/Aug/21 15:27
Start Date: 24/Aug/21 15:27
Worklog Time Spent: 10m

Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r694962055

## File path:
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java

##
@@ -492,6 +509,27 @@ public InputStream run() throws Exception {
       response = Response.ok(js).type(MediaType.APPLICATION_JSON).build();
       break;
     }
+    case GET_BLOCK_LOCATIONS: {
+      long offset = 0;
+      long len = Long.MAX_VALUE;
+      Long offsetParam = params.get(OffsetParam.NAME, OffsetParam.class);
+      Long lenParam = params.get(LenParam.NAME, LenParam.class);
+      AUDIT_LOG.info("[{}] offset [{}] len [{}]",
+          new Object[] { path, offsetParam, lenParam });
+      if (offsetParam != null && offsetParam.longValue() > 0) {
+        offset = offsetParam.longValue();
+      }
+      if (lenParam != null && lenParam.longValue() > 0) {
+        len = lenParam.longValue();
+      }
+      FSOperations.FSFileBlockLocations command =

Review comment:
Actually this looks wrong. HttpFS's GET_BLOCK_LOCATIONS should behave just like WebHDFS's GET_BLOCK_LOCATIONS, which returns serialized LocatedBlocks rather than BlockLocations[].

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 641148) Time Spent: 50m (was: 40m) > Add GETFILEBLOCKLOCATIONS operation to HttpFS > - > > Key: HDFS-6874 > URL: https://issues.apache.org/jira/browse/HDFS-6874 > Project: Hadoop HDFS > Issue Type: Improvement > Components: httpfs >Affects Versions: 2.4.1, 2.7.3 >Reporter: Gao Zhong Liang >Assignee: Weiwei Yang >Priority: Major > Labels: BB2015-05-TBR, pull-request-available > Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, > HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, > HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, > HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, > HDFS-6874.10.patch, HDFS-6874.patch > > Time Spent: 50m > Remaining Estimate: 0h > > GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already > supported in WebHDFS. For the request of GETFILEBLOCKLOCATIONS in > org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far: > ... > case GETFILEBLOCKLOCATIONS: { > response = Response.status(Response.Status.BAD_REQUEST).build(); > break; > } > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
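The offset/len defaulting logic quoted in the review above can be isolated into a small, self-contained sketch. This reduces the `OffsetParam`/`LenParam` handling to plain `Long` values; it illustrates only the parameter logic from the quoted diff, not the HttpFS API itself.

```java
// Sketch of the offset/len defaulting shown in the quoted HttpFSServer diff.
// The real code reads OffsetParam/LenParam from a Parameters object; here
// they are reduced to nullable Longs for illustration.
public class BlockLocationParams {

    // Returns {offset, len}: offset defaults to 0 (start of file) and len to
    // Long.MAX_VALUE (to end of file); a parameter overrides the default only
    // when it is present and positive.
    static long[] resolve(Long offsetParam, Long lenParam) {
        long offset = 0;
        long len = Long.MAX_VALUE;
        if (offsetParam != null && offsetParam.longValue() > 0) {
            offset = offsetParam.longValue();
        }
        if (lenParam != null && lenParam.longValue() > 0) {
            len = lenParam.longValue();
        }
        return new long[] { offset, len };
    }

    public static void main(String[] args) {
        long[] defaults = resolve(null, null);
        if (defaults[0] != 0 || defaults[1] != Long.MAX_VALUE)
            throw new AssertionError("defaults not applied");
        long[] explicit = resolve(128L, 4096L);
        if (explicit[0] != 128L || explicit[1] != 4096L)
            throw new AssertionError("explicit values not applied");
    }
}
```

Note the review comment's objection is not about this defaulting but about the response type: the operation should serialize LocatedBlocks, as WebHDFS does, rather than a BlockLocation[] array.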
[jira] [Work logged] (HDFS-16129) HttpFS signature secret file misusage
[ https://issues.apache.org/jira/browse/HDFS-16129?focusedWorklogId=641146=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641146 ] ASF GitHub Bot logged work on HDFS-16129: - Author: ASF GitHub Bot Created on: 24/Aug/21 15:16 Start Date: 24/Aug/21 15:16 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3209: URL: https://github.com/apache/hadoop/pull/3209#issuecomment-904733944 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 6s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 55s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 29s | | trunk passed | | +1 :green_heart: | compile | 30m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 42s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 17s | | trunk passed | | +1 :green_heart: | javadoc | 2m 35s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 4s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 23s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 57s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 42s | | the patch passed | | +1 :green_heart: | compile | 21m 29s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 21m 29s | | the patch passed | | +1 :green_heart: | compile | 18m 51s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 18m 51s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 33s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/13/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 95 unchanged - 0 fixed = 96 total (was 95) | | +1 :green_heart: | mvnsite | 3m 13s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 29s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 2s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 1s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 43s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 5s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 3m 43s | | hadoop-kms in the patch passed. | | +1 :green_heart: | unit | 6m 17s | | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. 
| | | | 222m 51s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/13/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3209 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux ffbe3d1b7034 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f4cffb51d01bd310ceee065c90b7fb7e2589edaf | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions |
[jira] [Work logged] (HDFS-16183) RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
[ https://issues.apache.org/jira/browse/HDFS-16183?focusedWorklogId=641112=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641112 ] ASF GitHub Bot logged work on HDFS-16183: - Author: ASF GitHub Bot Created on: 24/Aug/21 14:16 Start Date: 24/Aug/21 14:16 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3328: URL: https://github.com/apache/hadoop/pull/3328#issuecomment-904682583 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 46s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 3s | | trunk passed | | +1 :green_heart: | compile | 0m 48s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 47s | | trunk passed | | +1 :green_heart: | javadoc | 0m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 32s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 32s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 42s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 42s | | the patch passed | | +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 36s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 19s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 38s | | the patch passed | | +1 :green_heart: | javadoc | 0m 37s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 32s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 22m 45s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3328/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 111m 49s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3328/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3328 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 2504eb7137a3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ea540a2ef18be83c25e182a1eeb48369ee4d7e72 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results |
[jira] [Work started] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-16184 started by Viraj Jasani. --- > De-flake TestBlockScanner#testSkipRecentAccessFile > -- > > Key: HDFS-16184 > URL: https://issues.apache.org/jira/browse/HDFS-16184 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Test TestBlockScanner#testSkipRecentAccessFile is flaky: > > {code:java} > [ERROR] > testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner) > Time elapsed: 3.936 s <<< FAILURE![ERROR] > testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner) > Time elapsed: 3.936 s <<< FAILURE!java.lang.AssertionError: Scan nothing > for all files are accessed in last period. at > org.junit.Assert.fail(Assert.java:89) at > org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > {code} > e.g > [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated HDFS-16184: Status: Patch Available (was: In Progress) > De-flake TestBlockScanner#testSkipRecentAccessFile > -- > > Key: HDFS-16184 > URL: https://issues.apache.org/jira/browse/HDFS-16184 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Test TestBlockScanner#testSkipRecentAccessFile is flaky: > > {code:java} > [ERROR] > testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner) > Time elapsed: 3.936 s <<< FAILURE![ERROR] > testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner) > Time elapsed: 3.936 s <<< FAILURE!java.lang.AssertionError: Scan nothing > for all files are accessed in last period. at > org.junit.Assert.fail(Assert.java:89) at > org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > {code} > e.g > [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-16184: -- Labels: pull-request-available (was: ) > De-flake TestBlockScanner#testSkipRecentAccessFile > -- > > Key: HDFS-16184 > URL: https://issues.apache.org/jira/browse/HDFS-16184 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Test TestBlockScanner#testSkipRecentAccessFile is flaky: > > {code:java} > [ERROR] > testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner) > Time elapsed: 3.936 s <<< FAILURE![ERROR] > testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner) > Time elapsed: 3.936 s <<< FAILURE!java.lang.AssertionError: Scan nothing > for all files are accessed in last period. at > org.junit.Assert.fail(Assert.java:89) at > org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > {code} > e.g > [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
[ https://issues.apache.org/jira/browse/HDFS-16184?focusedWorklogId=641089&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641089 ]

ASF GitHub Bot logged work on HDFS-16184:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 24/Aug/21 13:27
Start Date: 24/Aug/21 13:27
Worklog Time Spent: 10m

Work Description: virajjasani opened a new pull request #3329:
URL: https://github.com/apache/hadoop/pull/3329

### Description of PR
Test TestBlockScanner#testSkipRecentAccessFile is flaky:
```
[ERROR] testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)  Time elapsed: 3.936 s  <<< FAILURE!
java.lang.AssertionError: Scan nothing for all files are accessed in last period.
	at org.junit.Assert.fail(Assert.java:89)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
```

### How was this patch tested?
Unit tests

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 641089) Remaining Estimate: 0h Time Spent: 10m > De-flake TestBlockScanner#testSkipRecentAccessFile > -- > > Key: HDFS-16184 > URL: https://issues.apache.org/jira/browse/HDFS-16184 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Test TestBlockScanner#testSkipRecentAccessFile is flaky: > > {code:java} > [ERROR] > testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner) > Time elapsed: 3.936 s <<< FAILURE![ERROR] > testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner) > Time elapsed: 3.936 s <<< FAILURE!java.lang.AssertionError: Scan nothing > for all files are accessed in last period. at > org.junit.Assert.fail(Assert.java:89) at > org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > {code} > e.g > [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt] > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16184) De-flake TestBlockScanner#testSkipRecentAccessFile
Viraj Jasani created HDFS-16184:
-----------------------------------

             Summary: De-flake TestBlockScanner#testSkipRecentAccessFile
                 Key: HDFS-16184
                 URL: https://issues.apache.org/jira/browse/HDFS-16184
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Viraj Jasani
            Assignee: Viraj Jasani

Test TestBlockScanner#testSkipRecentAccessFile is flaky:

{code:java}
[ERROR] testSkipRecentAccessFile(org.apache.hadoop.hdfs.server.datanode.TestBlockScanner)  Time elapsed: 3.936 s  <<< FAILURE!
java.lang.AssertionError: Scan nothing for all files are accessed in last period.
	at org.junit.Assert.fail(Assert.java:89)
	at org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testSkipRecentAccessFile(TestBlockScanner.java:1015)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
{code}

e.g.
[https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/37/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt]
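The failure above is the classic timing-flake shape: the test asserts a state that the block scanner may not have reached yet. The digest does not show the actual fix in PR #3329, but the usual de-flaking pattern (what Hadoop's own GenericTestUtils.waitFor does) is to poll the condition with a timeout instead of asserting once after a fixed sleep. A minimal, self-contained sketch of that pattern, with hypothetical names:

```java
import java.util.function.BooleanSupplier;

public class WaitForDemo {
    // Poll-until-true helper, similar in spirit to Hadoop's
    // GenericTestUtils.waitFor. This is an illustrative stand-in,
    // not the code from PR #3329.
    static boolean waitFor(BooleanSupplier check, long intervalMs, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;  // condition never became true within the timeout
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms, standing in for
        // "the scanner has finished its pass" in the real test.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start >= 200, 50, 2000);
        System.out.println(ok);
    }
}
```

A test rewritten this way fails only when the condition genuinely never holds, rather than when the CI machine is briefly slow.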
[jira] [Updated] (HDFS-16183) RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
[ https://issues.apache.org/jira/browse/HDFS-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-16183:
----------------------------------
    Labels: pull-request-available  (was: )

> RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
> --------------------------------------------------------------------
>
>                 Key: HDFS-16183
>                 URL: https://issues.apache.org/jira/browse/HDFS-16183
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: wangzhaohui
>            Assignee: wangzhaohui
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In routerRpcClient, this.rpcMonitor.proxyOpComplete(false) does nothing, so it may not need to exist.
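The claim in this issue is that the failure-path call `proxyOpComplete(false)` has no observable effect. A minimal sketch (hypothetical class and field names, not the actual RouterRpcMonitor code) of how a boolean-parameterized completion hook ends up as dead code on the `false` branch:

```java
// Sketch of a monitor whose completion hook only acts on success.
// If the real implementation looks like this, every call site that
// passes false is a no-op and can be deleted, as HDFS-16183 proposes.
public class RpcMonitorSketch {
    private long successOps = 0;

    // Mirrors the shape of proxyOpComplete(boolean success).
    public void proxyOpComplete(boolean success) {
        if (success) {
            successOps++;  // only the success path updates any metric
        }
        // proxyOpComplete(false) falls through here: nothing happens
    }

    public long getSuccessOps() {
        return successOps;
    }
}
```

Removing such calls changes no metric values; it only removes a misleading suggestion that failures are being counted.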
[jira] [Work logged] (HDFS-16183) RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
[ https://issues.apache.org/jira/browse/HDFS-16183?focusedWorklogId=641056&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641056 ]

ASF GitHub Bot logged work on HDFS-16183:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 24/Aug/21 12:22
Start Date: 24/Aug/21 12:22
Worklog Time Spent: 10m

Work Description: wzhallright opened a new pull request #3328:
URL: https://github.com/apache/hadoop/pull/3328

JIRA: https://issues.apache.org/jira/browse/HDFS-16183

Issue Time Tracking
-------------------
Worklog Id: (was: 641056)
Remaining Estimate: 0h
Time Spent: 10m

> RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
> --------------------------------------------------------------------
>
>                 Key: HDFS-16183
>                 URL: https://issues.apache.org/jira/browse/HDFS-16183
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: wangzhaohui
>            Assignee: wangzhaohui
>            Priority: Minor
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In routerRpcClient, this.rpcMonitor.proxyOpComplete(false) does nothing, so it may not need to exist.
[jira] [Updated] (HDFS-16183) RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
[ https://issues.apache.org/jira/browse/HDFS-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wangzhaohui updated HDFS-16183:
-------------------------------
    Description: In routerRpcClient, this.rpcMonitor.proxyOpComplete(false) does nothing, so it may not need to exist.

> RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
> --------------------------------------------------------------------
>
>                 Key: HDFS-16183
>                 URL: https://issues.apache.org/jira/browse/HDFS-16183
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: wangzhaohui
>            Assignee: wangzhaohui
>            Priority: Minor
>
> In routerRpcClient, this.rpcMonitor.proxyOpComplete(false) does nothing, so it may not need to exist.
[jira] [Created] (HDFS-16183) RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
wangzhaohui created HDFS-16183:
----------------------------------

             Summary: RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
                 Key: HDFS-16183
                 URL: https://issues.apache.org/jira/browse/HDFS-16183
             Project: Hadoop HDFS
          Issue Type: Improvement
            Reporter: wangzhaohui
[jira] [Assigned] (HDFS-16183) RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
[ https://issues.apache.org/jira/browse/HDFS-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wangzhaohui reassigned HDFS-16183:
----------------------------------
    Assignee: wangzhaohui

> RBF: Delete unnecessary metric of proxyOpComplete in routerRpcClient
> --------------------------------------------------------------------
>
>                 Key: HDFS-16183
>                 URL: https://issues.apache.org/jira/browse/HDFS-16183
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: wangzhaohui
>            Assignee: wangzhaohui
>            Priority: Minor
[jira] [Work logged] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS
[ https://issues.apache.org/jira/browse/HDFS-6874?focusedWorklogId=641017&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-641017 ]

ASF GitHub Bot logged work on HDFS-6874:
----------------------------------------
Author: ASF GitHub Bot
Created on: 24/Aug/21 09:35
Start Date: 24/Aug/21 09:35
Worklog Time Spent: 10m

Work Description: jojochuang commented on a change in pull request #3322:
URL: https://github.com/apache/hadoop/pull/3322#discussion_r694685317

## File path: hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java

@@ -1948,4 +1952,30 @@ public void testStoragePolicySatisfier() throws Exception {
      dfs.delete(path1, true);
    }
  }

  private void testGetFileBlockLocations() throws Exception {
    BlockLocation[] locations1, locations2 = null;
    Path testFile = null;
    if (!this.isLocalFS()) {
      FileSystem fs = this.getHttpFSFileSystem();
      testFile = new Path(getProxiedFSTestDir(), "singleBlock.txt");
      DFSTestUtil.createFile(fs, testFile, (long) 1, (short) 1, 0L);
      if (fs instanceof HttpFSFileSystem) {
        HttpFSFileSystem httpFS = (HttpFSFileSystem) fs;
        locations1 = httpFS.getFileBlockLocations(testFile, 0, 1);
        Assert.assertNotNull(locations1);

        // TODO: add test for HttpFSFileSystem.toBlockLocations()

Review comment:
This is my bad. I thought I added the test. Will update in the next revision.
Issue Time Tracking
-------------------
Worklog Id: (was: 641017)
Time Spent: 40m  (was: 0.5h)

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> ---------------------------------------------
>
>                 Key: HDFS-6874
>                 URL: https://issues.apache.org/jira/browse/HDFS-6874
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: httpfs
>    Affects Versions: 2.4.1, 2.7.3
>            Reporter: Gao Zhong Liang
>            Assignee: Weiwei Yang
>            Priority: Major
>              Labels: BB2015-05-TBR, pull-request-available
>         Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, HDFS-6874.011.patch, HDFS-6874.02.patch, HDFS-6874.03.patch, HDFS-6874.04.patch, HDFS-6874.05.patch, HDFS-6874.06.patch, HDFS-6874.07.patch, HDFS-6874.08.patch, HDFS-6874.09.patch, HDFS-6874.10.patch, HDFS-6874.patch
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> GETFILEBLOCKLOCATIONS operation is missing in HttpFS, which is already supported in WebHDFS. For the request of GETFILEBLOCKLOCATIONS in org.apache.hadoop.fs.http.server.HttpFSServer, BAD_REQUEST is returned so far:
> ...
> case GETFILEBLOCKLOCATIONS: {
>   response = Response.status(Response.Status.BAD_REQUEST).build();
>   break;
> }
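For context on what a GETFILEBLOCKLOCATIONS response carries once implemented: WebHDFS returns a BlockLocations JSON object whose entries include, among other fields, the hosts holding a replica plus the block's offset and length within the file. A hedged, self-contained sketch of building one such entry (field names follow the WebHDFS JSON shape; this is an illustration, not the HttpFS code from the patch under review):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BlockLocationSketch {
    // Build one entry of a WebHDFS-style BlockLocations response.
    // Only a subset of the real fields is shown here.
    static Map<String, Object> toJsonMap(String[] hosts, long offset, long length) {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("hosts", hosts);    // datanodes holding a replica of this block
        m.put("offset", offset);  // byte offset of the block within the file
        m.put("length", length);  // length of the block in bytes
        return m;
    }
}
```

The quoted test above checks exactly this round trip: that `httpFS.getFileBlockLocations(testFile, 0, 1)` returns non-null locations for a one-byte, single-replica file instead of the BAD_REQUEST the server used to send.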