[jira] [Commented] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
[ https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352282#comment-17352282 ] Hui Fei commented on HDFS-13671: [~sodonnell] Thanks for your comments. This issue has persisted for several years and many companies running Hadoop 3.x hit it; FoldedTreeSet already showed performance problems in the HDFS-9260 tests, so reverting FoldedTreeSet may be better than optimizing it now. There are 4 failed UTs on the latest trunk: # TestHdfsConfigFields#testCompareXmlAgainstConfigurationClass # TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedSBSmallerThanFullBlocks # TestDecommission#testDecommissionWithOpenfileReporting # TestFsDatasetImpl#testSortedFinalizedBlocksAreSorted TestHdfsConfigFields can be fixed by removing the related configs from hdfs-default.xml. TestFsDatasetImpl is related to the bugs [~sodonnell] mentioned; it can be fixed by reverting HDFS-15574. I don't know yet why the other 2 UTs fail. > Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet > -- > > Key: HDFS-13671 > URL: https://issues.apache.org/jira/browse/HDFS-13671 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.0.3 >Reporter: Yiqun Lin >Assignee: Haibin Huang >Priority: Major > Attachments: HDFS-13671-001.patch > > > NameNode hung when deleting large files/blocks. 
The stack info: > {code} > "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 > tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474) > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849) > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > {code} > In the current 
deletion logic in the NameNode, there are mainly two steps: > * Collect the INodes and all blocks to be deleted, then delete the INodes. > * Remove the blocks chunk by chunk in a loop. > The first step should actually be the more expensive operation and take more > time. However, we always see the NN hang during the remove-block operation. > Looking into this: we introduced the new structure {{FoldedTreeSet}} to get > better performance when handling FBRs/IBRs. But compared with the earlier > implementation of the remove-block logic, {{FoldedTreeSet}} seems slower, > since it takes additional time to rebalance tree nodes. When there are many > blocks to be removed/deleted, this looks bad. > For the get-type operations in {{DatanodeStorageInfo}}, we only provide > {{getBlockIterator}} to return a block iterator; there is no other get > operation for a specified block. Do we still need {{FoldedTreeSet}} in > {{DatanodeStorageInfo}}? As we know, {{FoldedTreeSet}} benefits gets, not > updates. Maybe we can revert this to the earlier implementation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional
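The chunked remove-block loop discussed above can be sketched as follows. This is a minimal sketch with hypothetical names (ChunkedBlockRemoval, removeBlocks, the chunk size), not the actual FSNamesystem code: blocks are removed in bounded chunks, reacquiring the write lock per chunk. Note that if each per-block removal is slow (for example a rebalancing removal like FoldedTreeSet#removeAndGet), every chunk still holds the lock for roughly chunkSize times the per-removal cost, which is where the reported hangs come from.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.ReentrantLock;

public class ChunkedBlockRemoval {
  /**
   * Removes queued block ids in chunks, taking the write lock once per
   * chunk; returns the number of lock acquisitions. Stand-in for the
   * remove-blocks loop in the NameNode delete path.
   */
  public static int removeBlocks(Queue<Long> toRemove, int chunkSize,
                                 ReentrantLock writeLock) {
    int lockAcquisitions = 0;
    while (!toRemove.isEmpty()) {
      writeLock.lock();
      lockAcquisitions++;
      try {
        // Remove at most one chunk per lock hold; each poll() stands in
        // for blocksMap.removeBlock(block), the expensive tree removal.
        for (int i = 0; i < chunkSize && !toRemove.isEmpty(); i++) {
          toRemove.poll();
        }
      } finally {
        // Other RPC handlers can acquire the lock between chunks.
        writeLock.unlock();
      }
    }
    return lockAcquisitions;
  }

  public static void main(String[] args) {
    Queue<Long> blocks = new ArrayDeque<>();
    for (long i = 0; i < 2500; i++) {
      blocks.add(i);
    }
    int acquisitions = removeBlocks(blocks, 1000, new ReentrantLock());
    System.out.println("lock acquisitions: " + acquisitions); // prints 3
  }
}
```

The chunking bounds how long any single lock hold lasts only if per-element removal is cheap; with an expensive removeAndGet, the hold time per chunk grows with the per-removal cost, which is the argument above for reverting to the earlier structure.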
[jira] [Updated] (HDFS-16043) HDFS : Delete performance optimization
[ https://issues.apache.org/jira/browse/HDFS-16043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiangyi Zhu updated HDFS-16043: --- Attachment: 20210527-before.svg 20210527-after.svg > HDFS : Delete performance optimization > -- > > Key: HDFS-16043 > URL: https://issues.apache.org/jira/browse/HDFS-16043 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, namenode >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Priority: Major > Attachments: 20210527-after.svg, 20210527-before.svg > > > Deleting a large directory caused the NN to hold the lock for too long, > which got our NameNode killed by ZKFC. > From the flame graph, the main time-consuming work is the QuotaCount > calculation during removeBlocks(toRemovedBlocks) and inode deletion, with > removeBlocks(toRemovedBlocks) taking the larger share of the time. > h3. Solution: > 1. Process removeBlocks asynchronously: start a thread in the BlockManager > to process the deleted blocks and control the lock-hold time. > 2. Optimize the QuotaCount calculation, similar to the optimization in > HDFS-16000. > h3. Comparison before and after optimization: > Test deleting 10 million inodes and 10 million blocks. > *before:* > remove inode elapsed time: 7691 ms > remove block elapsed time: 11107 ms > *after:* > remove inode elapsed time: 4149 ms > remove block elapsed time: 0 ms -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-16043) HDFS : Delete performance optimization
[ https://issues.apache.org/jira/browse/HDFS-16043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiangyi Zhu reassigned HDFS-16043: -- Assignee: Xiangyi Zhu > HDFS : Delete performance optimization > -- > > Key: HDFS-16043 > URL: https://issues.apache.org/jira/browse/HDFS-16043 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, namenode >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Attachments: 20210527-after.svg, 20210527-before.svg > > > Deleting a large directory caused the NN to hold the lock for too long, > which got our NameNode killed by ZKFC. > From the flame graph, the main time-consuming work is the QuotaCount > calculation during removeBlocks(toRemovedBlocks) and inode deletion, with > removeBlocks(toRemovedBlocks) taking the larger share of the time. > h3. Solution: > 1. Process removeBlocks asynchronously: start a thread in the BlockManager > to process the deleted blocks and control the lock-hold time. > 2. Optimize the QuotaCount calculation, similar to the optimization in > HDFS-16000. > h3. Comparison before and after optimization: > Test deleting 10 million inodes and 10 million blocks. > *before:* > remove inode elapsed time: 7691 ms > remove block elapsed time: 11107 ms > *after:* > remove inode elapsed time: 4149 ms > remove block elapsed time: 0 ms -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16043) HDFS : Delete performance optimization
Xiangyi Zhu created HDFS-16043: -- Summary: HDFS : Delete performance optimization Key: HDFS-16043 URL: https://issues.apache.org/jira/browse/HDFS-16043 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs, namenode Affects Versions: 3.4.0 Reporter: Xiangyi Zhu Deleting a large directory caused the NN to hold the lock for too long, which got our NameNode killed by ZKFC. From the flame graph, the main time-consuming work is the QuotaCount calculation during removeBlocks(toRemovedBlocks) and inode deletion, with removeBlocks(toRemovedBlocks) taking the larger share of the time. h3. Solution: 1. Process removeBlocks asynchronously: start a thread in the BlockManager to process the deleted blocks and control the lock-hold time. 2. Optimize the QuotaCount calculation, similar to the optimization in [HDFS-16000|https://issues.apache.org/jira/browse/HDFS-16000]. h3. Comparison before and after optimization: Test deleting 10 million inodes and 10 million blocks. *before:* remove inode elapsed time: 7691 ms remove block elapsed time: 11107 ms *after:* remove inode elapsed time: 4149 ms remove block elapsed time: 0 ms -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16043) HDFS : Delete performance optimization
[ https://issues.apache.org/jira/browse/HDFS-16043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiangyi Zhu updated HDFS-16043: --- Description: Deleting a large directory caused the NN to hold the lock for too long, which got our NameNode killed by ZKFC. From the flame graph, the main time-consuming work is the QuotaCount calculation during removeBlocks(toRemovedBlocks) and inode deletion, with removeBlocks(toRemovedBlocks) taking the larger share of the time. h3. Solution: 1. Process removeBlocks asynchronously: start a thread in the BlockManager to process the deleted blocks and control the lock-hold time. 2. Optimize the QuotaCount calculation, similar to the optimization in HDFS-16000. h3. Comparison before and after optimization: Test deleting 10 million inodes and 10 million blocks. *before:* remove inode elapsed time: 7691 ms remove block elapsed time: 11107 ms *after:* remove inode elapsed time: 4149 ms remove block elapsed time: 0 ms was: The deletion of the large directory caused NN to hold the lock for too long, which caused our NameNode to be killed by ZKFC. Through the flame graph, it is found that its main time-consuming calculation is QuotaCount when removingBlocks(toRemovedBlocks) and deleting inodes, and removeBlocks(toRemovedBlocks) takes a higher proportion of time. h3. solution: 1. RemoveBlocks is processed asynchronously. A thread is started in the BlockManager to process the deleted blocks and control the lock time. 2. QuotaCount calculation optimization, this is similar to the optimization of this Issue [HDFS-16000|https://issues.apache.org/jira/browse/HDFS-16000]. h3. Comparison before and after optimization: Delete 1000w Inode and 1000w block test. *before:* Before optimization: remove inode elapsed time: 7691 ms remove block elapsed time :11107 ms *after:* remove inode elapsed time: 4149 ms remove block elapsed time :0 ms > HDFS : Delete performance optimization > -- > > Key: HDFS-16043 > URL: https://issues.apache.org/jira/browse/HDFS-16043 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, namenode >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Priority: Major > > > Deleting a large directory caused the NN to hold the lock for too long, > which got our NameNode killed by ZKFC. > From the flame graph, the main time-consuming work is the QuotaCount > calculation during removeBlocks(toRemovedBlocks) and inode deletion, with > removeBlocks(toRemovedBlocks) taking the larger share of the time. > h3. Solution: > 1. Process removeBlocks asynchronously: start a thread in the BlockManager > to process the deleted blocks and control the lock-hold time. > 2. Optimize the QuotaCount calculation, similar to the optimization in > HDFS-16000. > h3. Comparison before and after optimization: > Test deleting 10 million inodes and 10 million blocks. > *before:* > remove inode elapsed time: 7691 ms > remove block elapsed time: 11107 ms > *after:* > remove inode elapsed time: 4149 ms > remove block elapsed time: 0 ms -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
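The asynchronous removeBlocks idea in HDFS-16043 can be sketched as follows. This is a hedged sketch with hypothetical names (AsyncBlockRemover, enqueue), not the actual patch: the delete RPC only enqueues the collected blocks and returns, which is why the measured "remove block elapsed time" drops to about 0 ms from the caller's point of view, while a background thread in the BlockManager drains the queue and does the real removal batch by batch to bound lock-hold time.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncBlockRemover {
  private final BlockingQueue<List<Long>> pendingQueue =
      new LinkedBlockingQueue<>();
  final AtomicInteger removedCount = new AtomicInteger();

  public AsyncBlockRemover() {
    Thread worker = new Thread(() -> {
      try {
        while (true) {
          // Blocks until a batch of deleted blocks is available.
          List<Long> batch = pendingQueue.take();
          // A real implementation would take the namesystem write lock
          // here and release it between batches to bound lock-hold time.
          removedCount.addAndGet(batch.size());
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, "asyncBlockRemover");
    worker.setDaemon(true);
    worker.start();
  }

  /** Called from the delete path: O(1), the caller does not wait. */
  public void enqueue(List<Long> blocks) {
    pendingQueue.add(blocks);
  }

  public static void main(String[] args) throws InterruptedException {
    AsyncBlockRemover remover = new AsyncBlockRemover();
    remover.enqueue(Arrays.asList(1L, 2L, 3L)); // returns immediately
    while (remover.removedCount.get() < 3) {
      Thread.sleep(1); // wait for the background thread to drain the queue
    }
    System.out.println("removed: " + remover.removedCount.get());
  }
}
```

The trade-off of this design is that blocks linger briefly after the delete RPC returns, so block-count metrics and replication decisions see them until the worker catches up; the benchmark numbers above reflect only the caller-visible latency.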
[jira] [Work logged] (HDFS-16042) DatanodeAdminMonitor scan should be delay based
[ https://issues.apache.org/jira/browse/HDFS-16042?focusedWorklogId=602713=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602713 ] ASF GitHub Bot logged work on HDFS-16042: - Author: ASF GitHub Bot Created on: 27/May/21 03:23 Start Date: 27/May/21 03:23 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3058: URL: https://github.com/apache/hadoop/pull/3058#issuecomment-849288721 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 43s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 23s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 4s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 56s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 15s | | the patch passed | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 43s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 321m 10s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3058/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. 
| | | | 414m 32s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3058/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3058 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 37e01b3f8927 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bd86c2fd17f08428453bf32aa73d5465947a88f4 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK
[jira] [Work logged] (HDFS-13729) Fix broken links to RBF documentation
[ https://issues.apache.org/jira/browse/HDFS-13729?focusedWorklogId=602711=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602711 ] ASF GitHub Bot logged work on HDFS-13729: - Author: ASF GitHub Bot Created on: 27/May/21 02:53 Start Date: 27/May/21 02:53 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3059: URL: https://github.com/apache/hadoop/pull/3059#issuecomment-849278228 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 5s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 47m 57s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 15s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 6s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 64m 6s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3059/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3059 | | Optional Tests | dupname asflicense mvnsite codespell | | uname | Linux 81757006eeb6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ad228352d58f6c3fba401f3cbbf0442bdec1d8fa | | Max. process+thread count | 542 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3059/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602711) Time Spent: 20m (was: 10m) > Fix broken links to RBF documentation > - > > Key: HDFS-13729 > URL: https://issues.apache.org/jira/browse/HDFS-13729 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: jwhitter >Assignee: Gabor Bota >Priority: Minor > Labels: pull-request-available > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, > hadoop_broken_link.png > > Time Spent: 20m > Remaining Estimate: 0h > > A broken link on the page [http://hadoop.apache.org/docs/current/] > * HDFS > ** HDFS Router based federation. See the [user > documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html] > for more details. 
> The link for user documentation > [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html] > is not found. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16040) RpcQueueTime metric counts requeued calls as unique events.
[ https://issues.apache.org/jira/browse/HDFS-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352237#comment-17352237 ] Hadoop QA commented on HDFS-16040: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 50s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 36s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 46s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 2s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 41s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 14s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 13s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 32m 28s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 5m 52s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 15s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 22s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 22s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 22s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 22s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 42s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 6s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 13s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | |
[jira] [Commented] (HDFS-15973) RBF: Add permission check before doing router federation rename.
[ https://issues.apache.org/jira/browse/HDFS-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352232#comment-17352232 ] Jinglun commented on HDFS-15973: I'll wait one day for further comments. After that I'll commit this. > RBF: Add permission check before doing router federation rename. > - > > Key: HDFS-15973 > URL: https://issues.apache.org/jira/browse/HDFS-15973 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Attachments: HDFS-15973.001.patch, HDFS-15973.002.patch, > HDFS-15973.003.patch, HDFS-15973.004.patch, HDFS-15973.005.patch, > HDFS-15973.006.patch, HDFS-15973.007.patch, HDFS-15973.008.patch, > HDFS-15973.009.patch, HDFS-15973.010.patch > > > The router federation rename lacks a permission check. It is a security > issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13729) Fix broken links to RBF documentation
[ https://issues.apache.org/jira/browse/HDFS-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-13729: -- Labels: pull-request-available (was: ) > Fix broken links to RBF documentation > - > > Key: HDFS-13729 > URL: https://issues.apache.org/jira/browse/HDFS-13729 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: jwhitter >Assignee: Gabor Bota >Priority: Minor > Labels: pull-request-available > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, > hadoop_broken_link.png > > Time Spent: 10m > Remaining Estimate: 0h > > A broken link on the page [http://hadoop.apache.org/docs/current/] > * HDFS > ** HDFS Router based federation. See the [user > documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html] > for more details. > The link for user documentation > [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html] > is not found. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-13729) Fix broken links to RBF documentation
[ https://issues.apache.org/jira/browse/HDFS-13729?focusedWorklogId=602700=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602700 ] ASF GitHub Bot logged work on HDFS-13729: - Author: ASF GitHub Bot Created on: 27/May/21 01:48 Start Date: 27/May/21 01:48 Worklog Time Spent: 10m Work Description: oojas opened a new pull request #3059: URL: https://github.com/apache/hadoop/pull/3059 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602700) Remaining Estimate: 0h Time Spent: 10m > Fix broken links to RBF documentation > - > > Key: HDFS-13729 > URL: https://issues.apache.org/jira/browse/HDFS-13729 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: jwhitter >Assignee: Gabor Bota >Priority: Minor > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HADOOP-15589.001.patch, HDFS-13729-branch-2.001.patch, > hadoop_broken_link.png > > Time Spent: 10m > Remaining Estimate: 0h > > A broken link on the page [http://hadoop.apache.org/docs/current/] > * HDFS > ** HDFS Router based federation. See the [user > documentation|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html] > for more details. > The link for user documentation > [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html] > is not found. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15973) RBF: Add permission check before doing router federation rename.
[ https://issues.apache.org/jira/browse/HDFS-15973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352216#comment-17352216 ] Íñigo Goiri commented on HDFS-15973: +1 on [^HDFS-15973.010.patch]. > RBF: Add permission check before doing router federation rename. > - > > Key: HDFS-15973 > URL: https://issues.apache.org/jira/browse/HDFS-15973 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Attachments: HDFS-15973.001.patch, HDFS-15973.002.patch, > HDFS-15973.003.patch, HDFS-15973.004.patch, HDFS-15973.005.patch, > HDFS-15973.006.patch, HDFS-15973.007.patch, HDFS-15973.008.patch, > HDFS-15973.009.patch, HDFS-15973.010.patch > > > The router federation rename lacks a permission check. It is a security > issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15915) Race condition with async edits logging due to updating txId outside of the namesystem log
[ https://issues.apache.org/jira/browse/HDFS-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-15915: --- Fix Version/s: 3.3.2 3.2.3 2.10.2 3.1.5 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) I just committed this to trunk and all branches down to branch-2.10. > Race condition with async edits logging due to updating txId outside of the > namesystem log > -- > > Key: HDFS-15915 > URL: https://issues.apache.org/jira/browse/HDFS-15915 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Fix For: 3.4.0, 3.1.5, 2.10.2, 3.2.3, 3.3.2 > > Attachments: HDFS-15915-01.patch, HDFS-15915-02.patch, > HDFS-15915-03.patch, HDFS-15915-04.patch, HDFS-15915-05.patch, > testMkdirsRace.patch > > > {{FSEditLogAsync}} creates an {{FSEditLogOp}} and populates its fields inside > {{FSNamesystem.writeLock}}. But one essential field, the transaction id of the > edits op, remains unset until the operation is scheduled for syncing. At that > time {{beginTransaction()}} will set the {{FSEditLogOp.txid}} and increment > the global transaction count. On a busy NameNode this event can fall outside > the write lock. > This causes problems for Observer reads. It can also reshuffle transactions > so that the Standby applies them in the wrong order. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
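The race described in HDFS-15915 can be illustrated with a deterministic, simplified sketch (hypothetical names, not the actual FSEditLogAsync code): two ops are created under the write lock in order A then B, but because the txid is only stamped later when each op is scheduled for syncing, the sync side can assign txids in the opposite order.

```java
import java.util.ArrayList;
import java.util.List;

public class TxIdRace {
  static class EditOp {
    final String name;
    long txid = -1; // unset at creation time, like FSEditLogOp.txid
    EditOp(String name) { this.name = name; }
  }

  private static long nextTxId = 1;

  /** Stands in for beginTransaction(): stamps the next global txid. */
  static void beginTransaction(EditOp op) {
    op.txid = nextTxId++;
  }

  /** Returns the ops in creation order; their txids end up reversed. */
  public static List<EditOp> demo() {
    // Both ops are created (fields populated) under the namesystem
    // write lock, in order A then B, but with no txid yet.
    EditOp a = new EditOp("A");
    EditOp b = new EditOp("B");
    // The sync side stamps txids later, outside the lock, and here it
    // happens to process B first: the edit-log order now disagrees
    // with the order in which the namesystem applied the operations.
    beginTransaction(b);
    beginTransaction(a);
    List<EditOp> creationOrder = new ArrayList<>();
    creationOrder.add(a);
    creationOrder.add(b);
    return creationOrder;
  }

  public static void main(String[] args) {
    List<EditOp> ops = demo();
    // A was created first but stamped second.
    System.out.println("A.txid=" + ops.get(0).txid
        + " B.txid=" + ops.get(1).txid);
  }
}
```

Moving the txid assignment inside the write lock (as the committed fix does for the edits path) makes the stamping order match the order the namesystem applied the operations, which is what Observer reads and Standby tailing rely on.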
[jira] [Work logged] (HDFS-15971) Make mkstemp cross platform
[ https://issues.apache.org/jira/browse/HDFS-15971?focusedWorklogId=602576=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602576 ] ASF GitHub Bot logged work on HDFS-15971: - Author: ASF GitHub Bot Created on: 26/May/21 21:13 Start Date: 26/May/21 21:13 Worklog Time Spent: 10m Work Description: goiri merged pull request #3044: URL: https://github.com/apache/hadoop/pull/3044 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602576) Time Spent: 1h 40m (was: 1.5h) > Make mkstemp cross platform > --- > > Key: HDFS-15971 > URL: https://issues.apache.org/jira/browse/HDFS-15971 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++ >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: Dockerfile_centos_7, build-log.zip, commit-details.txt > > Time Spent: 1h 40m > Remaining Estimate: 0h > > mkstemp isn't available in Visual C++. Need to make it cross platform. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-15971) Make mkstemp cross platform
[ https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri resolved HDFS-15971. Fix Version/s: 3.4.0 Resolution: Fixed > Make mkstemp cross platform > --- > > Key: HDFS-15971 > URL: https://issues.apache.org/jira/browse/HDFS-15971 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++ >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: Dockerfile_centos_7, build-log.zip, commit-details.txt > > Time Spent: 1.5h > Remaining Estimate: 0h > > mkstemp isn't available in Visual C++. Need to make it cross platform. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15971) Make mkstemp cross platform
[ https://issues.apache.org/jira/browse/HDFS-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352081#comment-17352081 ] Íñigo Goiri commented on HDFS-15971: I'm merging this again as the Docker image seems to work fine. It looks like the fuse dependencies were missing but the Docker image has them. > Make mkstemp cross platform > --- > > Key: HDFS-15971 > URL: https://issues.apache.org/jira/browse/HDFS-15971 > Project: Hadoop HDFS > Issue Type: Improvement > Components: libhdfs++ >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Attachments: Dockerfile_centos_7, build-log.zip, commit-details.txt > > Time Spent: 1.5h > Remaining Estimate: 0h > > mkstemp isn't available in Visual C++. Need to make it cross platform. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16042) DatanodeAdminMonitor scan should be delay based
[ https://issues.apache.org/jira/browse/HDFS-16042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-16042: -- Labels: pull-request-available (was: ) > DatanodeAdminMonitor scan should be delay based > --- > > Key: HDFS-16042 > URL: https://issues.apache.org/jira/browse/HDFS-16042 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > In {{DatanodeAdminManager.activate()}}, the Monitor task is scheduled with a > fixed rate, ie. the period is from start1 -> start2. > {code:java} > executor.scheduleAtFixedRate(monitor, intervalSecs, intervalSecs, >TimeUnit.SECONDS); > {code} > According to Java API docs for {{scheduleAtFixedRate}}, > {quote}If any execution of this task takes longer than its period, then > subsequent executions may start late, but will not concurrently > execute.{quote} > It should be a fixed delay so it's end1 -> start1. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16042) DatanodeAdminMonitor scan should be delay based
[ https://issues.apache.org/jira/browse/HDFS-16042?focusedWorklogId=602549=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602549 ] ASF GitHub Bot logged work on HDFS-16042: - Author: ASF GitHub Bot Created on: 26/May/21 20:27 Start Date: 26/May/21 20:27 Worklog Time Spent: 10m Work Description: amahussein opened a new pull request #3058: URL: https://github.com/apache/hadoop/pull/3058 [HDFS-16042](https://issues.apache.org/jira/browse/HDFS-16042) DatanodeAdminMonitor scan should be delay based. In DatanodeAdminManager.activate(), the Monitor task is scheduled with a fixed rate, ie. the period is from start1 -> start2. It should be a fixed delay so it's end1 -> start1. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602549) Remaining Estimate: 0h Time Spent: 10m > DatanodeAdminMonitor scan should be delay based > --- > > Key: HDFS-16042 > URL: https://issues.apache.org/jira/browse/HDFS-16042 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > In {{DatanodeAdminManager.activate()}}, the Monitor task is scheduled with a > fixed rate, ie. the period is from start1 -> start2. > {code:java} > executor.scheduleAtFixedRate(monitor, intervalSecs, intervalSecs, >TimeUnit.SECONDS); > {code} > According to Java API docs for {{scheduleAtFixedRate}}, > {quote}If any execution of this task takes longer than its period, then > subsequent executions may start late, but will not concurrently > execute.{quote} > It should be a fixed delay so it's end1 -> start1. 
> -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-16042) DatanodeAdminMonitor scan should be delay based
[ https://issues.apache.org/jira/browse/HDFS-16042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-16042 started by Ahmed Hussein. > DatanodeAdminMonitor scan should be delay based > --- > > Key: HDFS-16042 > URL: https://issues.apache.org/jira/browse/HDFS-16042 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > > In {{DatanodeAdminManager.activate()}}, the Monitor task is scheduled with a > fixed rate, ie. the period is from start1 -> start2. > {code:java} > executor.scheduleAtFixedRate(monitor, intervalSecs, intervalSecs, >TimeUnit.SECONDS); > {code} > According to Java API docs for {{scheduleAtFixedRate}}, > {quote}If any execution of this task takes longer than its period, then > subsequent executions may start late, but will not concurrently > execute.{quote} > It should be a fixed delay so it's end1 -> start1. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16042) DatanodeAdminMonitor scan should be delay based
Ahmed Hussein created HDFS-16042: Summary: DatanodeAdminMonitor scan should be delay based Key: HDFS-16042 URL: https://issues.apache.org/jira/browse/HDFS-16042 Project: Hadoop HDFS Issue Type: Bug Components: datanode Reporter: Ahmed Hussein Assignee: Ahmed Hussein In {{DatanodeAdminManager.activate()}}, the Monitor task is scheduled with a fixed rate, ie. the period is from start1 -> start2. {code:java} executor.scheduleAtFixedRate(monitor, intervalSecs, intervalSecs, TimeUnit.SECONDS); {code} According to Java API docs for {{scheduleAtFixedRate}}, {quote}If any execution of this task takes longer than its period, then subsequent executions may start late, but will not concurrently execute.{quote} It should be a fixed delay so it's end1 -> start1. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
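The proposed fix is essentially a one-method swap: {{scheduleWithFixedDelay}} measures the interval from the end of one run to the start of the next (end1 -> start2), instead of start1 -> start2. A minimal, self-contained sketch of that behavior (the class name and the 200ms/100ms timings below are illustrative, not taken from the HDFS patch):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedDelayDemo {
    /** Runs a 200ms "scan" on a 100ms fixed delay and returns the gaps (ms) between scan starts. */
    public static List<Long> measureGaps() throws InterruptedException {
        List<Long> starts = Collections.synchronizedList(new ArrayList<>());
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        Runnable monitor = () -> {
            starts.add(System.nanoTime());
            try {
                Thread.sleep(200); // the scan takes longer than the 100ms period
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        // Fixed delay: the next scan starts 100ms after the previous one *ends*,
        // unlike scheduleAtFixedRate, which measures the period from start to start.
        executor.scheduleWithFixedDelay(monitor, 0, 100, TimeUnit.MILLISECONDS);
        Thread.sleep(1000);
        executor.shutdownNow();
        executor.awaitTermination(1, TimeUnit.SECONDS);
        List<Long> gaps = new ArrayList<>();
        for (int i = 1; i < starts.size(); i++) {
            gaps.add((starts.get(i) - starts.get(i - 1)) / 1_000_000);
        }
        return gaps;
    }

    public static void main(String[] args) throws InterruptedException {
        // each gap should be roughly 300ms (200ms scan + 100ms delay)
        for (long gap : measureGaps()) {
            System.out.println("gap between scan starts: " + gap + "ms");
        }
    }
}
```

With {{scheduleAtFixedRate}} and the same 200ms scan, each next run would instead begin immediately after the previous one finishes, leaving the monitor with no idle time between scans.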
[jira] [Comment Edited] (HDFS-16040) RpcQueueTime metric counts requeued calls as unique events.
[ https://issues.apache.org/jira/browse/HDFS-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352032#comment-17352032 ] Simbarashe Dzinamarira edited comment on HDFS-16040 at 5/26/21, 7:39 PM: - Took a second look to check if the rpcQueueTime is being calculated correctly. I believe it is. {noformat} details.set(Timing.QUEUE, startTime - timestampNanos - details.get(Timing.ENQUEUE));{noformat} startTime is when the call is removed from the call queue for processing. (The final time the call is actually removed, not intermediate pops with re-queuing) timestampNanos is when the call was received. details.get(Timing.ENQUEUE) is the time it took to initially place the call on the call queue. (Doesn't count the re-queues in the rpc server). The logic above will correctly count the times between pops and re-queues as part of the rpcQueueTime. was (Author: simbadzina): Took a second look to check is the rpcQueueTime is being calculated correctly. I believe it is. {noformat} details.set(Timing.QUEUE, startTime - timestampNanos - details.get(Timing.ENQUEUE));{noformat} startTime is when the call is removed from the call queue for processing. (The final time it is actually removed, not intermediate pops with re-queuing) timestampNanos is when the call was received. details.get(Timing.ENQUEUE) is the time it took to initially place the call on the call queue. (Doesn't count the re-queues in the rpc server). The logic above will correctly count the times between pops and re-queues as part of the rpcQueueTime. > RpcQueueTime metric counts requeued calls as unique events. 
> --- > > Key: HDFS-16040 > URL: https://issues.apache.org/jira/browse/HDFS-16040 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.10.0, 3.3.0 >Reporter: Simbarashe Dzinamarira >Assignee: Simbarashe Dzinamarira >Priority: Major > Attachments: HDFS-16040.001.patch, HDFS-16040.002.patch, > HDFS-16040.003.patch > > > The RpcQueueTime metric is updated every time a call is re-queued while > waiting for the server state to reach the call's client's state ID. This is > in contrast to RpcProcessingTime, which is only updated when the call is > finally processed. > On the Observer NameNode this can result in RpcQueueTimeNumOps being much > larger than RpcProcessingTimeNumOps. The re-queueing is an internal > optimization to avoid blocking and shouldn't result in an inflated metric. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16040) RpcQueueTime metric counts requeued calls as unique events.
[ https://issues.apache.org/jira/browse/HDFS-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352032#comment-17352032 ] Simbarashe Dzinamarira commented on HDFS-16040: --- Took a second look to check if the rpcQueueTime is being calculated correctly. I believe it is. {noformat} details.set(Timing.QUEUE, startTime - timestampNanos - details.get(Timing.ENQUEUE));{noformat} startTime is when the call is removed from the call queue for processing. (The final time it is actually removed, not intermediate pops with re-queuing) timestampNanos is when the call was received. details.get(Timing.ENQUEUE) is the time it took to initially place the call on the call queue. (Doesn't count the re-queues in the rpc server). The logic above will correctly count the times between pops and re-queues as part of the rpcQueueTime. > RpcQueueTime metric counts requeued calls as unique events. > --- > > Key: HDFS-16040 > URL: https://issues.apache.org/jira/browse/HDFS-16040 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.10.0, 3.3.0 >Reporter: Simbarashe Dzinamarira >Assignee: Simbarashe Dzinamarira >Priority: Major > Attachments: HDFS-16040.001.patch, HDFS-16040.002.patch, > HDFS-16040.003.patch > > > The RpcQueueTime metric is updated every time a call is re-queued while > waiting for the server state to reach the call's client's state ID. This is > in contrast to RpcProcessingTime, which is only updated when the call is > finally processed. > On the Observer NameNode this can result in RpcQueueTimeNumOps being much > larger than RpcProcessingTimeNumOps. The re-queueing is an internal > optimization to avoid blocking and shouldn't result in an inflated metric. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
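The formula quoted in the comment above can be checked with a toy calculation. The nanosecond values below are made up for illustration, not taken from a real server; the point is that everything between the call's arrival and its final pop, except the initial enqueue, lands in the queue-time bucket:

```java
public class QueueTimeDemo {
    // Mirrors: details.set(Timing.QUEUE, startTime - timestampNanos - details.get(Timing.ENQUEUE))
    static long queueNanos(long startTime, long timestampNanos, long enqueueNanos) {
        return startTime - timestampNanos - enqueueNanos;
    }

    public static void main(String[] args) {
        long timestampNanos = 0; // call received by the server
        long enqueueNanos = 5;   // Timing.ENQUEUE: time to place the call on the queue the first time
        long startTime = 100;    // final removal from the queue, after any intermediate pops/re-queues
        // Prints 95: the whole span between arrival and the final pop, minus the initial enqueue,
        // so time spent between intermediate pops and re-queues is counted as queue time.
        System.out.println(queueNanos(startTime, timestampNanos, enqueueNanos));
    }
}
```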
[jira] [Work logged] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?focusedWorklogId=602511=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602511 ] ASF GitHub Bot logged work on HDFS-15916: - Author: ASF GitHub Bot Created on: 26/May/21 19:18 Start Date: 26/May/21 19:18 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3056: URL: https://github.com/apache/hadoop/pull/3056#issuecomment-849052992 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 54s | | trunk passed | | +1 :green_heart: | compile | 1m 1s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 50s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 57s | | trunk passed | | +1 :green_heart: | javadoc | 0m 52s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 3s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 7s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 59s | | the patch passed | | +1 :green_heart: | compile | 1m 4s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 4s | | the patch passed | | +1 :green_heart: | compile | 0m 53s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 53s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 22s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 56s | | the patch passed | | +1 :green_heart: | javadoc | 0m 37s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 34s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 3s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 30s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 93m 32s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3056/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3056 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux b718320df92d 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f94b53d69a75ff734747bde16360467ccd1da0b3 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3056/1/testReport/ | | Max. process+thread count | 551 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3056/1/console | |
[jira] [Updated] (HDFS-16040) RpcQueueTime metric counts requeued calls as unique events.
[ https://issues.apache.org/jira/browse/HDFS-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simbarashe Dzinamarira updated HDFS-16040: -- Attachment: HDFS-16040.003.patch Status: Patch Available (was: Open) Fixed checkstyle issues * Switched to java.util.function.Supplier instead of com.google.common.base.Supplier * Line length > RpcQueueTime metric counts requeued calls as unique events. > --- > > Key: HDFS-16040 > URL: https://issues.apache.org/jira/browse/HDFS-16040 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.3.0, 2.10.0 >Reporter: Simbarashe Dzinamarira >Assignee: Simbarashe Dzinamarira >Priority: Major > Attachments: HDFS-16040.001.patch, HDFS-16040.002.patch, > HDFS-16040.003.patch > > > The RpcQueueTime metric is updated every time a call is re-queued while > waiting for the server state to reach the call's client's state ID. This is > in contrast to RpcProcessingTime, which is only updated when the call is > finally processed. > On the Observer NameNode this can result in RpcQueueTimeNumOps being much > larger than RpcProcessingTimeNumOps. The re-queueing is an internal > optimization to avoid blocking and shouldn't result in an inflated metric. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16040) RpcQueueTime metric counts requeued calls as unique events.
[ https://issues.apache.org/jira/browse/HDFS-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simbarashe Dzinamarira updated HDFS-16040: -- Status: Open (was: Patch Available) > RpcQueueTime metric counts requeued calls as unique events. > --- > > Key: HDFS-16040 > URL: https://issues.apache.org/jira/browse/HDFS-16040 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.3.0, 2.10.0 >Reporter: Simbarashe Dzinamarira >Assignee: Simbarashe Dzinamarira >Priority: Major > Attachments: HDFS-16040.001.patch, HDFS-16040.002.patch > > > The RpcQueueTime metric is updated every time a call is re-queued while > waiting for the server state to reach the call's client's state ID. This is > in contrast to RpcProcessingTime, which is only updated when the call is > finally processed. > On the Observer NameNode this can result in RpcQueueTimeNumOps being much > larger than RpcProcessingTimeNumOps. The re-queueing is an internal > optimization to avoid blocking and shouldn't result in an inflated metric. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15915) Race condition with async edits logging due to updating txId outside of the namesystem log
[ https://issues.apache.org/jira/browse/HDFS-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17352023#comment-17352023 ] Konstantin Shvachko commented on HDFS-15915: Ran unit tests that failed on Jenkins. All passing locally. Will be committing this shortly. > Race condition with async edits logging due to updating txId outside of the > namesystem log > -- > > Key: HDFS-15915 > URL: https://issues.apache.org/jira/browse/HDFS-15915 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-15915-01.patch, HDFS-15915-02.patch, > HDFS-15915-03.patch, HDFS-15915-04.patch, HDFS-15915-05.patch, > testMkdirsRace.patch > > > {{FSEditLogAsync}} creates an {{FSEditLogOp}} and populates its fields inside > {{FSNamesystem.writeLock}}. But one essential field, the transaction id of the > edits op, remains unset until the time when the operation is scheduled for > syncing. At that time {{beginTransaction()}} will set the > {{FSEditLogOp.txid}} and increment the global transaction count. On a busy > NameNode this event can fall outside the write lock. > This causes problems for Observer reads. It can also potentially reshuffle > transactions, and the Standby will apply them in the wrong order. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
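The reordering described in the issue can be forced deterministically in a toy model. The class and field names below are illustrative stand-ins, not the real HDFS code: {{writeLock}} plays the role of {{FSNamesystem.writeLock}} and {{globalTxId}} the edit log's global transaction counter, with latches taking the place of the "busy NameNode" timing window:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

public class TxIdRaceDemo {
    static final Object writeLock = new Object();          // stands in for FSNamesystem.writeLock
    static final AtomicLong globalTxId = new AtomicLong(); // stands in for the global transaction count

    static class Op {
        long txid;
    }

    /** Forces the interleaving: op1 is created first but receives its txid last. */
    public static Op[] run() throws InterruptedException {
        Op op1 = new Op();
        Op op2 = new Op();
        CountDownLatch op1Created = new CountDownLatch(1);
        CountDownLatch op2Assigned = new CountDownLatch(1);

        Thread t1 = new Thread(() -> {
            synchronized (writeLock) {
                // op1's fields are populated here, but its txid is still unset
            }
            op1Created.countDown();
            try {
                op2Assigned.await(); // a busy NameNode delays the sync step past the lock release
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            op1.txid = globalTxId.incrementAndGet(); // "beginTransaction()" runs outside the lock
        });

        Thread t2 = new Thread(() -> {
            try {
                op1Created.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            synchronized (writeLock) {
                // op2 is created strictly after op1...
            }
            op2.txid = globalTxId.incrementAndGet(); // ...yet it gets the smaller txid
            op2Assigned.countDown();
        });

        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return new Op[] {op1, op2};
    }

    public static void main(String[] args) throws InterruptedException {
        Op[] ops = run();
        System.out.println("op1 txid=" + ops[0].txid + ", op2 txid=" + ops[1].txid);
    }
}
```

Here the op logged first under the write lock ends up with the larger txid, which is exactly the out-of-order sequence the Observer and Standby cannot tolerate.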
[jira] [Work logged] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?focusedWorklogId=602459=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602459 ] ASF GitHub Bot logged work on HDFS-15916: - Author: ASF GitHub Bot Created on: 26/May/21 17:44 Start Date: 26/May/21 17:44 Worklog Time Spent: 10m Work Description: ayushtkn commented on a change in pull request #2863: URL: https://github.com/apache/hadoop/pull/2863#discussion_r639987959 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java ## @@ -2388,8 +2389,15 @@ private SnapshotDiffReport getSnapshotDiffReportInternal( List deletedList = new ChunkedArrayList<>(); SnapshotDiffReportListing report; do { - report = dfs.getSnapshotDiffReportListing(snapshotDir, fromSnapshot, - toSnapshot, startPath, index); + try { +report = dfs.getSnapshotDiffReportListing(snapshotDir, fromSnapshot, +toSnapshot, startPath, index); + } catch (RpcNoSuchMethodException e) { +// In case the server doesn't support getSnapshotDiffReportListing, +// fallback to getSnapshotDiffReport. +LOG.warn("Falling back to getSnapshotDiffReport {}", e.getMessage()); Review comment: yeps, Raised Addendum PR: https://github.com/apache/hadoop/pull/3056 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602459) Time Spent: 1.5h (was: 1h 20m) > DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for > snapshotdiff > --- > > Key: HDFS-15916 > URL: https://issues.apache.org/jira/browse/HDFS-15916 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 3.2.2 >Reporter: Srinivasu Majeti >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > Looks like when using distcp diff options between two snapshots from a Hadoop > 3 cluster to a Hadoop 2 cluster, we get the below exception, which seems to break > backward compatibility due to the new API introduction, > getSnapshotDiffReportListing. > > {code:java} > hadoop distcp -diff s1 s2 -update src_cluster_path dst_cluster_path > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException): > Unknown method getSnapshotDiffReportListing called on > org.apache.hadoop.hdfs.protocol.ClientProtocol protocol > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?focusedWorklogId=602457=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602457 ] ASF GitHub Bot logged work on HDFS-15916: - Author: ASF GitHub Bot Created on: 26/May/21 17:43 Start Date: 26/May/21 17:43 Worklog Time Spent: 10m Work Description: ayushtkn opened a new pull request #3056: URL: https://github.com/apache/hadoop/pull/3056 Addendum, fixing log to DFSClient.LOG -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602457) Time Spent: 1h 20m (was: 1h 10m) > DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for > snapshotdiff > --- > > Key: HDFS-15916 > URL: https://issues.apache.org/jira/browse/HDFS-15916 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 3.2.2 >Reporter: Srinivasu Majeti >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Looks like when using distcp diff options between two snapshots from a Hadoop > 3 cluster to a Hadoop 2 cluster, we get the below exception, which seems to break > backward compatibility due to the new API introduction, > getSnapshotDiffReportListing. > > {code:java} > hadoop distcp -diff s1 s2 -update src_cluster_path dst_cluster_path > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException): > Unknown method getSnapshotDiffReportListing called on > org.apache.hadoop.hdfs.protocol.ClientProtocol protocol > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations
[ https://issues.apache.org/jira/browse/HDFS-16024?focusedWorklogId=602304=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602304 ] ASF GitHub Bot logged work on HDFS-16024: - Author: ASF GitHub Bot Created on: 26/May/21 12:51 Start Date: 26/May/21 12:51 Worklog Time Spent: 10m Work Description: zhuxiangyi commented on pull request #3009: URL: https://github.com/apache/hadoop/pull/3009#issuecomment-848742552 @goiri @ferhui Thank you for your review, which made my code clearer and helped me a lot. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602304) Time Spent: 7h 20m (was: 7h 10m) > RBF: Rename data to the Trash should be based on src locations > -- > > Key: HDFS-16024 > URL: https://issues.apache.org/jira/browse/HDFS-16024 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 7h 20m > Remaining Estimate: 0h > > 1. When deleting data to the Trash without a mount point configured for the > Trash, the Router should recognize this and move the data to the Trash. > 2. When the user’s trash is configured with a mount point whose NS is different > from the NS of the deleted directory, the Router should identify this and move the > data to the trash of the current user of src. > The same is true when using ViewFs mount points; I think we should be > consistent with them -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15979) Move within EZ fails and cannot remove nested EZs
[ https://issues.apache.org/jira/browse/HDFS-15979?focusedWorklogId=602277=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602277 ] ASF GitHub Bot logged work on HDFS-15979: - Author: ASF GitHub Bot Created on: 26/May/21 11:57 Start Date: 26/May/21 11:57 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2919: URL: https://github.com/apache/hadoop/pull/2919#issuecomment-848707942 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 16s | | trunk passed | | +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 11s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 10s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 53s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 8s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 2s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 231m 32s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. 
| | | | 316m 38s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.cli.TestErasureCodingCLI | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2919 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux 9363b2996041 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 687a324ecbe2a53006aeff1185ec106311555b8e | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2919/1/testReport/ | | Max. process+thread count
[jira] [Work logged] (HDFS-16041) TestErasureCodingCLI fails
[ https://issues.apache.org/jira/browse/HDFS-16041?focusedWorklogId=602264=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602264 ] ASF GitHub Bot logged work on HDFS-16041: - Author: ASF GitHub Bot Created on: 26/May/21 11:35 Start Date: 26/May/21 11:35 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3052: URL: https://github.com/apache/hadoop/pull/3052#issuecomment-848695408 @tasanuma Thanks for review and merge -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602264) Time Spent: 1h 50m (was: 1h 40m) > TestErasureCodingCLI fails > -- > > Key: HDFS-16041 > URL: https://issues.apache.org/jira/browse/HDFS-16041 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > Because of HDFS-16018, TestErasureCodingCLI fails, reported by HDFS-13671 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-16041) TestErasureCodingCLI fails
[ https://issues.apache.org/jira/browse/HDFS-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma resolved HDFS-16041. - Fix Version/s: 3.4.0 Resolution: Fixed > TestErasureCodingCLI fails > -- > > Key: HDFS-16041 > URL: https://issues.apache.org/jira/browse/HDFS-16041 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > Because of HDFS-16018, TestErasureCodingCLI fails, reported by HDFS-13671 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16041) TestErasureCodingCLI fails
[ https://issues.apache.org/jira/browse/HDFS-16041?focusedWorklogId=602253=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602253 ] ASF GitHub Bot logged work on HDFS-16041: - Author: ASF GitHub Bot Created on: 26/May/21 11:12 Start Date: 26/May/21 11:12 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #3052: URL: https://github.com/apache/hadoop/pull/3052#issuecomment-848681903 Thanks for fixing it, @ferhui. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602253) Time Spent: 1h 40m (was: 1.5h) > TestErasureCodingCLI fails > -- > > Key: HDFS-16041 > URL: https://issues.apache.org/jira/browse/HDFS-16041 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > Because of HDFS-16018, TestErasureCodingCLI fails, reported by HDFS-13671 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16041) TestErasureCodingCLI fails
[ https://issues.apache.org/jira/browse/HDFS-16041?focusedWorklogId=602252=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602252 ] ASF GitHub Bot logged work on HDFS-16041: - Author: ASF GitHub Bot Created on: 26/May/21 11:12 Start Date: 26/May/21 11:12 Worklog Time Spent: 10m Work Description: tasanuma merged pull request #3052: URL: https://github.com/apache/hadoop/pull/3052 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602252) Time Spent: 1.5h (was: 1h 20m) > TestErasureCodingCLI fails > -- > > Key: HDFS-16041 > URL: https://issues.apache.org/jira/browse/HDFS-16041 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.4.0 >Reporter: Hui Fei >Assignee: Hui Fei >Priority: Minor > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > Because of HDFS-16018, TestErasureCodingCLI fails, reported by HDFS-13671 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning
[ https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhengchenyu updated HDFS-14849: --- Description: When the datanode stays in DECOMMISSION_INPROGRESS status, the EC internal blocks on that datanode will be replicated many times. // added 2019/09/19 I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes simultaneously. !scheduleReconstruction.png! !fsck-file.png! was: When the datanode keeping in DECOMMISSION_INPROGRESS status, the EC internal block in that datanode will be replicated many times. // added 2019/09/19 I reproduced this scenario in a 163 nodes cluster with decommission 100 nodes simultaneously. !scheduleReconstruction.png! !fsck-file.png! > Erasure Coding: the internal block is replicated many times when datanode is > decommissioning > > > Key: HDFS-14849 > URL: https://issues.apache.org/jira/browse/HDFS-14849 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec, erasure-coding >Affects Versions: 3.3.0 >Reporter: HuangTao >Assignee: HuangTao >Priority: Major > Labels: EC, HDFS, NameNode > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, > HDFS-14849.branch-3.1.patch, fsck-file.png, liveBlockIndices.png, > scheduleReconstruction.png > > > When the datanode stays in DECOMMISSION_INPROGRESS status, the > EC internal blocks on that datanode will be replicated many times. > // added 2019/09/19 > I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes > simultaneously. > !scheduleReconstruction.png! > !fsck-file.png!
[jira] [Work logged] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?focusedWorklogId=602225=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602225 ] ASF GitHub Bot logged work on HDFS-15916: - Author: ASF GitHub Bot Created on: 26/May/21 10:07 Start Date: 26/May/21 10:07 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3054: URL: https://github.com/apache/hadoop/pull/3054#issuecomment-848643832 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 11m 3s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ branch-3.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 32s | | branch-3.3 passed | | +1 :green_heart: | compile | 0m 50s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 0m 23s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 0m 56s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 0m 36s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 2m 31s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 19m 36s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 48s | | the patch passed | | +1 :green_heart: | compile | 0m 45s | | the patch passed | | +1 :green_heart: | javac | 0m 45s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 16s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3054/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt) | hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 1 new + 18 unchanged - 0 fixed = 19 total (was 18) | | +1 :green_heart: | mvnsite | 0m 48s | | the patch passed | | +1 :green_heart: | javadoc | 0m 28s | | the patch passed | | +1 :green_heart: | spotbugs | 2m 38s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 8s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. | | | | 94m 9s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3054/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3054 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 4c32034089a3 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / 705efa226ae82b6764fab14911a527136847459d | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3054/1/testReport/ | | Max. process+thread count | 513 (vs. 
ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3054/1/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602225) Time Spent: 1h 10m (was: 1h) > DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for >
[jira] [Work logged] (HDFS-16041) TestErasureCodingCLI fails
[ https://issues.apache.org/jira/browse/HDFS-16041?focusedWorklogId=60=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-60 ] ASF GitHub Bot logged work on HDFS-16041: - Author: ASF GitHub Bot Created on: 26/May/21 09:58 Start Date: 26/May/21 09:58 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3052: URL: https://github.com/apache/hadoop/pull/3052#issuecomment-848637665 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 54s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 31s | | trunk passed | | +1 :green_heart: | compile | 21m 28s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 9s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 43s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 15s | | trunk passed | | +1 :green_heart: | javadoc | 2m 16s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 25s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 45s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 6s | | the patch passed | | +1 :green_heart: | compile | 20m 11s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 11s | | the patch passed | | +1 :green_heart: | compile | 18m 27s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 27s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 3m 54s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 14s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 13s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 23s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 6m 4s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 9s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 24s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 238m 29s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 12s | | The patch does not generate ASF License warnings. 
| | | | 445m 19s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3052/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3052 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux a13e909c742b 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / cdb947be77cda6d568d311722280e5d9865691ab | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3052/2/testReport/ | | Max. process+thread count | 3826 (vs. ulimit of
[jira] [Updated] (HDFS-15850) Superuser actions should be reported to external enforcers
[ https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-15850: --- Target Version/s: 3.3.2 (was: 3.3.1) > Superuser actions should be reported to external enforcers > -- > > Key: HDFS-15850 > URL: https://issues.apache.org/jira/browse/HDFS-15850 > Project: Hadoop HDFS > Issue Type: Task > Components: security >Affects Versions: 3.3.0 >Reporter: Vivek Ratnavel Subramanian >Assignee: Vivek Ratnavel Subramanian >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: HDFS-15850.branch-3.3.001.patch, HDFS-15850.v1.patch, > HDFS-15850.v2.patch > > Time Spent: 5h 10m > Remaining Estimate: 0h > > Currently, HDFS superuser checks or actions are not reported to external > enforcers like Ranger and the audit report provided by such external enforces > are not complete and are missing the superuser actions. To fix this, add a > new method to "AccessControlEnforcer" for all superuser checks. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16032) DFSClient#delete supports Trash
[ https://issues.apache.org/jira/browse/HDFS-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351669#comment-17351669 ] Xiangyi Zhu commented on HDFS-16032: [~ayushtkn],[~sodonnell] Thanks a lot for your comments. I will use your suggestions to improve it. > DFSClient#delete supports Trash > > > Key: HDFS-16032 > URL: https://issues.apache.org/jira/browse/HDFS-16032 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hadoop-client, hdfs >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Currently, HDFS can only move deleted data to Trash through Shell commands. > In actual scenarios, most of the data is deleted through the DFSClient API. I > think it should support Trash.
[jira] [Commented] (HDFS-16032) DFSClient#delete supports Trash
[ https://issues.apache.org/jira/browse/HDFS-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351659#comment-17351659 ] Stephen O'Donnell commented on HDFS-16032: -- I agree with [~ayushtkn]. It does not feel correct to enforce trash at the existing delete API call. Another reason is that some callers of delete may not want to use trash, so you would need a "skipTrash" option, which would break compatibility. Exposing a more public "deleteWithTrash" is probably better; then a user of the API can decide which to use. > DFSClient#delete supports Trash > > > Key: HDFS-16032 > URL: https://issues.apache.org/jira/browse/HDFS-16032 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hadoop-client, hdfs >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Currently, HDFS can only move deleted data to Trash through Shell commands. > In actual scenarios, most of the data is deleted through the DFSClient API. I > think it should support Trash.
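The API shape O'Donnell suggests — keep delete() untouched and add an explicit, opt-in deleteWithTrash() — can be sketched as a toy model. All names here are illustrative stand-ins, not the real DFSClient API; the real shell behaviour moves paths under /user/<name>/.Trash/Current.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy namespace illustrating the proposed API split: delete() keeps its
// existing destructive semantics, while deleteWithTrash() moves the path
// into a trash area, mirroring what the shell's -rm does by default.
public class TrashApiSketch {
  private final Set<String> namespace = new HashSet<>();
  // trash entry -> original path
  private final Map<String, String> trash = new HashMap<>();

  public void create(String path) {
    namespace.add(path);
  }

  // Existing semantics: the path is gone for good, no trash involved.
  public boolean delete(String path) {
    return namespace.remove(path);
  }

  // New, explicit semantics: remove from the namespace but record the
  // path in a per-user trash area so it can be restored later.
  public boolean deleteWithTrash(String path) {
    if (!namespace.remove(path)) {
      return false;
    }
    trash.put("/user/.Trash/Current" + path, path);
    return true;
  }

  public boolean inTrash(String path) {
    return trash.containsValue(path);
  }
}
```

Because the trash behaviour lives in a separate method, no "skipTrash" flag is needed on the existing call and compatibility is preserved.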
[jira] [Commented] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
[ https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351652#comment-17351652 ] Stephen O'Donnell commented on HDFS-13671: -- Did anyone spend any time trying to understand what is going wrong with the FoldedTreeSet structure to cause it to become so slow? In my experience, it works well for some time, and then it suddenly starts performing poorly. The only way to recover from that is to restart the service. I have seen it happen in the namenodes often, but I have also seen the same behaviour on datanodes which have been up for a long time. I have not checked the patch in detail, but I suspect there is a bug in FsDatasetImpl.getSortedFinalizedBlocks
{code}
public List<ReplicaInfo> getSortedFinalizedBlocks(String bpid) {
  try (AutoCloseableLock lock = datasetReadLock.acquire()) {
    final List<ReplicaInfo> finalized = new ArrayList<>(
        volumeMap.size(bpid));
    for (ReplicaInfo b : volumeMap.replicas(bpid)) {
      if (b.getState() == ReplicaState.FINALIZED) {
        finalized.add(b);
      }
    }
    return finalized;
  }
}
{code}
The LightWeightGSet is not a sorted structure like FoldedTreeSet was, so this method will no longer return the blocks sorted. The DirectoryScanner uses this method and needs the blocks to be sorted. The DirectoryScanner tests did not catch this, but TestFsDatasetImpl.testSortedFinalizedBlocksAreSorted did. > Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet > -- > > Key: HDFS-13671 > URL: https://issues.apache.org/jira/browse/HDFS-13671 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.0.3 >Reporter: Yiqun Lin >Assignee: Haibin Huang >Priority: Major > Attachments: HDFS-13671-001.patch > > > NameNode hung when deleting large files/blocks. 
The stack info: > {code} > "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 > tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474) > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849) > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > {code} > In the current 
deletion logic in NameNode, there are mainly two steps: > * Collect INodes and all blocks to be deleted, then delete INodes. > * Remove blocks chunk by chunk in a loop. > Actually the first step should be the more expensive operation and will take > more time. However, now we always see NN hangs during the remove-block > operation. > Looking into this, we introduced a new structure {{FoldedTreeSet}} to get > better performance in dealing with FBRs/IBRs. But compared with the early > implementation of the remove-block logic, {{FoldedTreeSet}} seems slower > since it takes additional time to rebalance tree nodes. When there are many > blocks to be removed/deleted, it looks bad. > For the get-type operations in {{DatanodeStorageInfo}}, we only provide > {{getBlockIterator}} to return a blocks iterator and no other
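The pitfall O'Donnell identifies can be shown in isolation: FoldedTreeSet iterated in sorted order, so simply copying its contents produced a sorted list for free, whereas a hash-based replica map does not, and the caller must sort explicitly. The sketch below uses plain block IDs as a stand-in for ReplicaInfo; it is an illustration of the bug and its fix, not the actual Hadoop code.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Comparator;
import java.util.List;

// Stand-in demonstration of the getSortedFinalizedBlocks bug: copying an
// unsorted set preserves iteration order, which is no longer sorted once
// the backing structure is not a tree. Callers that depend on sorted
// order (like the DirectoryScanner) need the explicit sort.
public class SortedBlocksSketch {

  // Buggy shape: returns the replicas in whatever order the set yields them.
  static List<Long> copyOnly(Collection<Long> replicas) {
    return new ArrayList<>(replicas);
  }

  // Fixed shape: sort by block ID before returning.
  static List<Long> copyAndSort(Collection<Long> replicas) {
    List<Long> finalized = new ArrayList<>(replicas);
    finalized.sort(Comparator.naturalOrder());
    return finalized;
  }
}
```

This is also why TestFsDatasetImpl.testSortedFinalizedBlocksAreSorted catches the regression while the DirectoryScanner tests do not: only the former asserts on the ordering directly.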
[jira] [Work logged] (HDFS-16041) TestErasureCodingCLI fails
[ https://issues.apache.org/jira/browse/HDFS-16041?focusedWorklogId=602198=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602198 ] ASF GitHub Bot logged work on HDFS-16041: - Author: ASF GitHub Bot Created on: 26/May/21 08:55 Start Date: 26/May/21 08:55 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3052: URL: https://github.com/apache/hadoop/pull/3052#issuecomment-848594880 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 12s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 17s | | trunk passed | | +1 :green_heart: | compile | 23m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 19m 58s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 11s | | trunk passed | | +1 :green_heart: | javadoc | 2m 14s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 19s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 53s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 10s | | the patch passed | | +1 :green_heart: | compile | 21m 38s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 21m 38s | | the patch passed | | +1 :green_heart: | compile | 19m 8s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 19m 8s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 3m 48s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 22s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 15s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 3m 25s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 6m 30s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 20s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 229m 4s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 13s | | The patch does not generate ASF License warnings. 
| | | | 444m 39s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3052/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3052 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux 85cdaf8bfac4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 518ea9c1b5efef0128a7768b89751fd23253c073 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3052/1/testReport/ | | Max. process+thread count | 2678 (vs. ulimit of 5500)
[jira] [Commented] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351641#comment-17351641 ] Wei-Chiu Chuang commented on HDFS-15916: Branch-3.3 backport PR #3054 https://github.com/apache/hadoop/pull/3054 > DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for > snapshotdiff > --- > > Key: HDFS-15916 > URL: https://issues.apache.org/jira/browse/HDFS-15916 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 3.2.2 >Reporter: Srinivasu Majeti >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h > Remaining Estimate: 0h > > Looks like when using distcp diff options between two snapshots from a hadoop > 3 cluster to hadoop 2 cluster , we get below exception and seems to be break > backward compatibility due to new API introduction > getSnapshotDiffReportListing. > > {code:java} > hadoop distcp -diff s1 s2 -update src_cluster_path dst_cluster_path > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException): > Unknown method getSnapshotDiffReportListing called on > org.apache.hadoop.hdfs.protocol.ClientProtocol protocol > {code} > > --
[jira] [Work logged] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?focusedWorklogId=602193=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602193 ] ASF GitHub Bot logged work on HDFS-15916: - Author: ASF GitHub Bot Created on: 26/May/21 08:33 Start Date: 26/May/21 08:33 Worklog Time Spent: 10m Work Description: jojochuang commented on a change in pull request #3054: URL: https://github.com/apache/hadoop/pull/3054#discussion_r639515184
## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2314,8 +2315,15 @@ private SnapshotDiffReport getSnapshotDiffReportInternal(
     List deletedList = new ChunkedArrayList<>();
     SnapshotDiffReportListing report;
     do {
-      report = dfs.getSnapshotDiffReportListing(snapshotDir, fromSnapshot,
-          toSnapshot, startPath, index);
+      try {
+        report = dfs.getSnapshotDiffReportListing(snapshotDir, fromSnapshot,
+            toSnapshot, startPath, index);
+      } catch (RpcNoSuchMethodException e) {
+        // In case the server doesn't support getSnapshotDiffReportListing,
+        // fallback to getSnapshotDiffReport.
+        DFSClient.LOG.warn("Falling back to getSnapshotDiffReport {}", e.getMessage());
Review comment: Had to use DFSClient.LOG instead of FileSystem.LOG because FileSystem in 3.3 uses commons-logging, not slf4j, and does not compile. DFSClient.LOG uses slf4j. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602193) Time Spent: 1h (was: 50m)
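The try/catch fallback reviewed above can be sketched in isolation. This is a minimal, self-contained illustration of the pattern; `SnapshotDiffClient` and `UnsupportedMethodException` are invented stand-ins for the real ClientProtocol and RpcNoSuchMethodException, which need a live Hadoop cluster to exercise:

```java
import java.util.ArrayList;
import java.util.List;

public class FallbackSketch {
    // Stand-in for RpcNoSuchMethodException: the server doesn't know the method.
    static class UnsupportedMethodException extends RuntimeException {
        UnsupportedMethodException(String msg) { super(msg); }
    }

    // Stand-in for the two ClientProtocol calls: the paged listing API
    // (Hadoop 3) and the legacy one-shot report API (Hadoop 2).
    interface SnapshotDiffClient {
        List<String> getDiffListing(int page);
        List<String> getDiffReport();
    }

    // Try the newer paged API first; if the server rejects the method,
    // degrade gracefully to the legacy API instead of failing the whole job.
    static List<String> diff(SnapshotDiffClient client) {
        List<String> all = new ArrayList<>();
        try {
            for (int page = 0; ; page++) {
                List<String> chunk = client.getDiffListing(page);
                if (chunk.isEmpty()) {
                    break;
                }
                all.addAll(chunk);
            }
            return all;
        } catch (UnsupportedMethodException e) {
            // Old server: log the downgrade, as the real patch does with
            // DFSClient.LOG.warn, then use the one-shot call.
            System.err.println("Falling back to getDiffReport: " + e.getMessage());
            return client.getDiffReport();
        }
    }
}
```

Note the catch wraps the whole paging loop: a server that lacks the method fails on the very first page, so no partial results are mixed between the two APIs.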
[jira] [Work logged] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?focusedWorklogId=602192=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602192 ] ASF GitHub Bot logged work on HDFS-15916: - Author: ASF GitHub Bot Created on: 26/May/21 08:31 Start Date: 26/May/21 08:31 Worklog Time Spent: 10m Work Description: jojochuang opened a new pull request #3054: URL: https://github.com/apache/hadoop/pull/3054

Signed-off-by: Wei-Chiu Chuang
(cherry picked from commit c6539e3289711d29f508930bbda40302f48ddf4c)
Conflicts: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602192) Time Spent: 50m (was: 40m)
[jira] [Work logged] (HDFS-15916) DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff
[ https://issues.apache.org/jira/browse/HDFS-15916?focusedWorklogId=602181=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-602181 ] ASF GitHub Bot logged work on HDFS-15916: - Author: ASF GitHub Bot Created on: 26/May/21 08:10 Start Date: 26/May/21 08:10 Worklog Time Spent: 10m Work Description: jojochuang commented on a change in pull request #2863: URL: https://github.com/apache/hadoop/pull/2863#discussion_r639498241

File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

{code:java}
@@ -2388,8 +2389,15 @@ private SnapshotDiffReport getSnapshotDiffReportInternal(
     List deletedList = new ChunkedArrayList<>();
     SnapshotDiffReportListing report;
     do {
-      report = dfs.getSnapshotDiffReportListing(snapshotDir, fromSnapshot,
-          toSnapshot, startPath, index);
+      try {
+        report = dfs.getSnapshotDiffReportListing(snapshotDir, fromSnapshot,
+            toSnapshot, startPath, index);
+      } catch (RpcNoSuchMethodException e) {
+        // In case the server doesn't support getSnapshotDiffReportListing,
+        // fall back to getSnapshotDiffReport.
+        LOG.warn("Falling back to getSnapshotDiffReport {}", e.getMessage());
{code}

Review comment: Should use DFSClient.LOG instead, because the rest of the code logs using DFSClient.LOG. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 602181) Time Spent: 40m (was: 0.5h)
[jira] [Commented] (HDFS-16040) RpcQueueTime metric counts requeued calls as unique events.
[ https://issues.apache.org/jira/browse/HDFS-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351570#comment-17351570 ] Hadoop QA commented on HDFS-16040: -- -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 15s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || trunk Compile Tests || || ||
| 0 | mvndep | 3m 52s | Maven dependency ordering for branch |
| +1 | mvninstall | 20m 58s | trunk passed |
| +1 | compile | 20m 58s | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | compile | 18m 37s | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 | checkstyle | 4m 0s | trunk passed |
| +1 | mvnsite | 3m 4s | trunk passed |
| +1 | shadedclient | 21m 25s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 7s | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | javadoc | 3m 10s | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| 0 | spotbugs | 32m 33s | Both FindBugs and SpotBugs are enabled, using SpotBugs. |
| +1 | spotbugs | 5m 52s | trunk passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 28s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 12s | the patch passed |
| +1 | compile | 22m 11s | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | javac | 22m 11s | the patch passed |
| +1 | compile | 18m 51s | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 | javac | 18m 51s | the patch passed |
| -0 | checkstyle | 3m 38s | root: The patch generated 3 new + 202 unchanged - 0 fixed = 205 total (was 202). Logfile: https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/615/artifact/out/diff-checkstyle-root.txt |
| +1 | mvnsite | 3m 7s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient |
[jira] [Commented] (HDFS-16040) RpcQueueTime metric counts requeued calls as unique events.
[ https://issues.apache.org/jira/browse/HDFS-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351568#comment-17351568 ] Hadoop QA commented on HDFS-16040: -- -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 2m 5s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || trunk Compile Tests || || ||
| 0 | mvndep | 13m 0s | Maven dependency ordering for branch |
| +1 | mvninstall | 25m 11s | trunk passed |
| +1 | compile | 27m 12s | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | compile | 22m 13s | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 | checkstyle | 4m 28s | trunk passed |
| +1 | mvnsite | 3m 52s | trunk passed |
| +1 | shadedclient | 25m 30s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 35s | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | javadoc | 3m 25s | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| 0 | spotbugs | 38m 5s | Both FindBugs and SpotBugs are enabled, using SpotBugs. |
| +1 | spotbugs | 6m 35s | trunk passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 28s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 38s | the patch passed |
| +1 | compile | 26m 15s | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | javac | 26m 15s | the patch passed |
| +1 | compile | 22m 52s | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 | javac | 22m 52s | the patch passed |
| -0 | checkstyle | 5m 6s | root: The patch generated 2 new + 202 unchanged - 0 fixed = 204 total (was 202). Logfile: https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/613/artifact/out/diff-checkstyle-root.txt |
| +1 | mvnsite | 3m 50s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient |
[jira] [Commented] (HDFS-16032) DFSClient#delete supports Trash
[ https://issues.apache.org/jira/browse/HDFS-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351543#comment-17351543 ] Ayush Saxena commented on HDFS-16032: - Trash is an FsShell concept; I would question whether it belongs in DFSClient. We could add a config to control whether it is enabled, to preserve compatibility, but with it on the behavior would be restricted to HDFS, i.e. DistributedFileSystem. In general, downstream projects using the APIs don't make assumptions specific to particular FileSystem types, so a DFS-only enhancement doesn't seem right. Maybe a util method, like FileUtils.delete(..) or FileUtils.deleteWithTrash(..), could be exposed if something of that sort isn't already there; it would be generic across all FileSystems and would keep strict control in the hands of the applications, which can adopt it if there is a need. Secondly, trash was originally meant to prevent user-level accidents; API calls are supposed to come from fairly stable applications that know what they are doing. If an application wanted to move data to trash, or take some precaution, it could have been coded that way; there are ways to do that.

> DFSClient#delete supports Trash
>
> Key: HDFS-16032
> URL: https://issues.apache.org/jira/browse/HDFS-16032
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hadoop-client, hdfs
> Affects Versions: 3.4.0
> Reporter: Xiangyi Zhu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Currently, HDFS can only move deleted data to Trash through shell commands. In actual scenarios, most data is deleted through the DFSClient API, so I think the API should support Trash.
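The util-method idea suggested in the comment above could look roughly like the sketch below. This is a hypothetical illustration using plain java.nio.file so it stands alone; a real Hadoop utility would instead take a FileSystem, Path, and Configuration and delegate to Trash.moveToAppropriateTrash, and the names here (TrashUtil, deleteWithTrash, trashRoot) are invented for the sketch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TrashUtil {
    // Hypothetical helper: instead of deleting in place, move the path into
    // a trash directory so the caller can recover it later. The application
    // opts in explicitly by calling this instead of a plain delete, keeping
    // control in the application's hands rather than changing delete semantics.
    public static Path deleteWithTrash(Path path, Path trashRoot) throws IOException {
        Files.createDirectories(trashRoot);
        // Timestamp the trash entry so repeated deletes of the same name
        // don't collide with an earlier entry.
        Path target = trashRoot.resolve(
                path.getFileName() + "." + System.currentTimeMillis());
        return Files.move(path, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

The design point mirrors the comment: trash stays an opt-in utility layered on top of the generic FileSystem API, rather than a behavior change inside a DFS-specific delete path.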
[jira] [Commented] (HDFS-16040) RpcQueueTime metric counts requeued calls as unique events.
[ https://issues.apache.org/jira/browse/HDFS-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351510#comment-17351510 ] Konstantin Shvachko commented on HDFS-16040: I checked that the test fails without the fix and passes with it. I was wondering whether the code correctly counts the queue time for the Observer, that is, whether it takes into account the time the call was requeued. It seems to me that it does; [~simbadzina] could you please double-check. I guess there will be some checkstyle warnings when Jenkins finishes.

> RpcQueueTime metric counts requeued calls as unique events.
> ---
>
> Key: HDFS-16040
> URL: https://issues.apache.org/jira/browse/HDFS-16040
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 2.10.0, 3.3.0
> Reporter: Simbarashe Dzinamarira
> Assignee: Simbarashe Dzinamarira
> Priority: Major
> Attachments: HDFS-16040.001.patch, HDFS-16040.002.patch
>
> The RpcQueueTime metric is updated every time a call is re-queued while waiting for the server state to reach the call's client's state ID. This is in contrast to RpcProcessingTime, which is only updated when the call is finally processed.
> On the Observer NameNode this can result in RpcQueueTimeNumOps being much larger than RpcProcessingTimeNumOps. The re-queueing is an internal optimization to avoid blocking and shouldn't result in an inflated metric.
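The fix described in this issue can be illustrated with a toy metric: the queue time is recorded exactly once, when the call is finally processed, and covers the full wait including any requeues. This is a simplified stand-in for the real RPC metrics code; the class and method names here are invented for the sketch:

```java
public class QueueTimeMetric {
    private long numOps;    // number of unique calls measured
    private long totalTime; // accumulated queue time across those calls

    // A call remembers when it was first enqueued. Requeuing it while the
    // Observer waits for the server state to catch up does NOT touch the
    // metric, so a call requeued N times still counts as one event.
    static class Call {
        final long enqueuedAt;
        Call(long now) { this.enqueuedAt = now; }
    }

    // Invoked only when the call is actually processed: one op per call,
    // measuring the total time spent queued (including all requeues).
    void recordProcessed(Call call, long now) {
        numOps++;
        totalTime += now - call.enqueuedAt;
    }

    long numOps() { return numOps; }
    long totalTime() { return totalTime; }
}
```

With this shape, RpcQueueTimeNumOps and RpcProcessingTimeNumOps stay in step, since both are bumped once per processed call.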