[jira] [Commented] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287582#comment-17287582 ] tomscut commented on HDFS-15808: Failed junit tests hadoop.hdfs.server.namenode.ha.TestHAAppend hadoop.hdfs.qjournal.server.TestJournalNodeSync hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks hadoop.hdfs.server.datanode.TestBlockScanner Those failed unit tests are unrelated to the change. > Add metrics for FSNamesystem read/write lock hold long time > --- > > Key: HDFS-15808 > URL: https://issues.apache.org/jira/browse/HDFS-15808 > Project: Hadoop HDFS > Issue Type: Wish > Components: hdfs >Reporter: tomscut >Assignee: tomscut >Priority: Major > Labels: hdfs, lock, metrics, pull-request-available > Attachments: ExpiredHeartbeat.png, lockLongHoldCount > > Time Spent: 4h 40m > Remaining Estimate: 0h > > To monitor how often read/write locks exceed thresholds, we can add two > metrics (ReadLockWarning/WriteLockWarning), which are exposed in JMX. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
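The proposal above (count how often a lock is held past a warning threshold and expose the count via JMX) can be sketched with a toy lock wrapper. This is an illustrative Python model, not the actual FSNamesystem lock code; the class name, field name, and threshold value are all made up:

```python
import threading
import time

class InstrumentedLock:
    """Toy model (not the HDFS implementation) of counting how often a
    lock is held longer than a warning threshold."""

    def __init__(self, threshold_ms=25):
        self._lock = threading.Lock()
        self._threshold_ms = threshold_ms
        self.lock_warning_count = 0  # the kind of value HDFS would expose via JMX
        self._acquired_at = None

    def __enter__(self):
        self._lock.acquire()
        self._acquired_at = time.monotonic()
        return self

    def __exit__(self, *exc):
        held_ms = (time.monotonic() - self._acquired_at) * 1000
        if held_ms > self._threshold_ms:
            self.lock_warning_count += 1  # one more over-threshold hold
        self._lock.release()

lock = InstrumentedLock(threshold_ms=25)
with lock:
    time.sleep(0.05)   # hold for ~50 ms, exceeding the 25 ms threshold
with lock:
    pass               # short hold, no warning
print(lock.lock_warning_count)  # → 1
```

A monitoring system can then alert on the rate of growth of this counter instead of grepping the NameNode log for lock-hold warnings.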
[jira] [Work logged] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?focusedWorklogId=555096=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-555096 ] ASF GitHub Bot logged work on HDFS-15808: - Author: ASF GitHub Bot Created on: 20/Feb/21 07:25 Start Date: 20/Feb/21 07:25 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2668: URL: https://github.com/apache/hadoop/pull/2668#issuecomment-782578219 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 18m 18s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 53s | | trunk passed | | +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 5s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 17s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 3m 7s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 5s | | trunk passed | | -0 :warning: | patch | 3m 24s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 55s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 11s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 47s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 39s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | findbugs | 3m 57s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 204m 49s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2668/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 312m 18s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestBlockScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2668/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2668 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6ac55fde21d0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git
[jira] [Commented] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287578#comment-17287578 ] Hadoop QA commented on HDFS-15841: -- **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | 0 | reexec | 0m 45s | | Docker mode activated. | _ Prechecks _ | | +1 | dupname | 0m 0s | | No case conflicting files found. | | +1 | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. 
| _ trunk Compile Tests _ | | 0 | mvndep | 1m 50s | | Maven dependency ordering for branch | | +1 | mvninstall | 24m 9s | | trunk passed | | +1 | compile | 6m 22s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 | compile | 5m 29s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 | checkstyle | 1m 30s | | trunk passed | | +1 | mvnsite | 2m 38s | | trunk passed | | +1 | shadedclient | 21m 34s | | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 1m 47s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 | javadoc | 2m 22s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | 0 | spotbugs | 3m 50s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| +1 | findbugs | 6m 52s | | trunk passed | _ Patch Compile Tests _ | | 0 | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 | mvninstall | 2m 26s | | the patch passed | | +1 | compile | 5m 53s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 | javac | 5m 53s | | the patch passed | | +1 | compile | 5m 43s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 | javac | 5m 43s | | the patch passed | | +1 | checkstyle | 1m 23s | | the patch passed | | +1 | mvnsite | 2m 23s | | the patch passed | | +1 | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 |
[jira] [Comment Edited] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287458#comment-17287458 ] tomscut edited comment on HDFS-15808 at 2/20/21, 7:18 AM: -- Thanks [~xkrogen] for your good point and [~shv] for your suggestion. I added the type value on the {{@Metric}} annotation. was (Author: tomscut): Thanks [~xkrogen] for your good point and [~shv] for your suggestion. I fixed it.
[jira] [Commented] (HDFS-15380) RBF: Could not fetch real remote IP in RouterWebHdfsMethods
[ https://issues.apache.org/jira/browse/HDFS-15380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287482#comment-17287482 ] Tomscut commented on HDFS-15380: Thanks [~elgoiri] and [~ayushtkn]. > RBF: Could not fetch real remote IP in RouterWebHdfsMethods > --- > > Key: HDFS-15380 > URL: https://issues.apache.org/jira/browse/HDFS-15380 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.1.0 >Reporter: Tomscut >Assignee: Tomscut >Priority: Major > Labels: router, webhdfs > Fix For: 3.4.0 > > Attachments: HDFS-15380.001.patch > > Original Estimate: 2h > Remaining Estimate: 2h > > We plan to add an audit log for the HDFS router. We fetch the remote IP via > Server.getRemoteIp(), but the result is "localhost/127.0.0.1". > > "REMOTE_ADDRESS" in RouterWebHdfsMethods.java is a ThreadLocal field, set in > the constructor RouterWebHdfsMethods() and in init(). When we call > Server.getRemoteIp() to fetch the remote IP, the thread has changed, so the > ThreadLocal field "REMOTE_ADDRESS" is null and is resolved to > "localhost/127.0.0.1" via InetAddress.getByName(). > > So we can change the field "REMOTE_ADDRESS" to a String value, just like > NamenodeWebHdfsMethods does. 
> > I printed the thread name and the value of "REMOTE_ADDRESS" in the log; the log is > shown below: > {code:java} > 2020-05-27 19:15:18,797 INFO router.RouterWebHdfsMethods > (RouterWebHdfsMethods.java:(138)) - RouterWebHdfsMethods > REMOTE_ADDRESS: 14.39.39.28, current thread: qtp476579021-1090 > 2020-05-27 19:15:18,827 INFO router.RouterWebHdfsMethods > (RouterWebHdfsMethods.java:init(150)) - init REMOTE_ADDRESS: 14.39.39.28, > current thread: qtp476579021-1090 > 2020-05-27 19:15:18,836 INFO router.RouterWebHdfsMethods > (RouterWebHdfsMethods.java:getRemoteAddr(170)) - getRemoteAddr > REMOTE_ADDRESS: null, current thread: IPC Server handler 75 on > 2020-05-27 19:15:18,837 INFO router.RouterWebHdfsMethods > (RouterWebHdfsMethods.java:getRemoteAddr(170)) - getRemoteAddr > REMOTE_ADDRESS: null, current thread: IPC Server handler 75 on > 2020-05-27 19:15:18,883 INFO router.RouterWebHdfsMethods > (RouterWebHdfsMethods.java:reset(164)) - reset REMOTE_ADDRESS: null, current > thread: IPC Server handler 75 on > {code}
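The log above shows the essence of the bug: a value stored in a thread-local by the web (qtp) thread is invisible to the IPC handler thread. A minimal Python model of the same behavior, with `threading.local` playing the role of Java's `ThreadLocal` (names are illustrative):

```python
import threading

ctx = threading.local()
results = []

def ipc_handler():
    # Runs on a different thread, like "IPC Server handler 75" in the log:
    # thread-local state set by the web thread is invisible here.
    results.append(getattr(ctx, "remote_address", None))

ctx.remote_address = "14.39.39.28"   # set on the "web" thread (here: main)
t = threading.Thread(target=ipc_handler)
t.start()
t.join()

print(results[0])  # → None: the thread-local value did not cross threads

# The fix amounts to capturing the value in a plain field (a String in the
# Java patch) while still on the original thread, so any later thread can
# read it:
captured = ctx.remote_address
print(captured)    # → 14.39.39.28
```

This is why falling back to InetAddress.getByName(null) then yields "localhost/127.0.0.1" on the handler thread.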
[jira] [Commented] (HDFS-15809) DeadNodeDetector doesn't remove live nodes from dead node set.
[ https://issues.apache.org/jira/browse/HDFS-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287477#comment-17287477 ] Jinglun commented on HDFS-15809: Submitted v03 to fix checkstyle and unit tests. > DeadNodeDetector doesn't remove live nodes from dead node set. > -- > > Key: HDFS-15809 > URL: https://issues.apache.org/jira/browse/HDFS-15809 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Attachments: HDFS-15809.001.patch, HDFS-15809.002.patch, > HDFS-15809.003.patch > > > We found the dead node detector might never remove alive nodes from the > dead node set in a big cluster. For example: > # 200 nodes are added to the dead node set by DeadNodeDetector. > # DeadNodeDetector#checkDeadNodes() adds 100 nodes to the > deadNodesProbeQueue because the queue's length limit is 100. > # The probe threads start working and probe 30 nodes. > # DeadNodeDetector#checkDeadNodes() is scheduled again. It iterates the dead > node set and adds 30 nodes to the deadNodesProbeQueue. But the order is the > same as last time, so the 30 nodes that have already been probed are added > to the queue again. > # Repeat 3 and 4. We always add the first 30 nodes from the dead set; if > they are all dead, the live nodes behind them can never be recovered.
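The starvation described in the steps above can be reproduced with a small simulation. This is a scaled-down illustrative model (queue limit 5 instead of 100, 3 probes per pass instead of 30), not the DeadNodeDetector code itself:

```python
from collections import deque

QUEUE_LIMIT = 5       # scaled down from 100 in the report
PROBES_PER_PASS = 3   # scaled down from 30

# Insertion-ordered "dead node set": nodes d0..d9 are genuinely dead;
# node "alive" was wrongly marked dead and should eventually be recovered.
dead_nodes = [f"d{i}" for i in range(10)] + ["alive"]
ever_probed = set()

for _ in range(100):  # many checkDeadNodes() passes
    # Each pass refills the bounded probe queue by iterating the set --
    # always in the same order, so the same front nodes are re-added.
    probe_queue = deque(dead_nodes[:QUEUE_LIMIT], maxlen=QUEUE_LIMIT)
    for _ in range(PROBES_PER_PASS):
        node = probe_queue.popleft()
        ever_probed.add(node)
        # Genuinely dead nodes stay in the set, so the iteration order
        # never changes and nodes behind the front are never reached.

print("alive" in ever_probed)  # → False: the live node is never probed
```

Any fix that breaks the fixed iteration order (e.g. skipping nodes already probed recently, or rotating the starting point) lets the probe eventually reach the live node.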
[jira] [Updated] (HDFS-15809) DeadNodeDetector doesn't remove live nodes from dead node set.
[ https://issues.apache.org/jira/browse/HDFS-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-15809: --- Attachment: HDFS-15809.003.patch
[jira] [Commented] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287458#comment-17287458 ] Tomscut commented on HDFS-15808: Thanks [~xkrogen] for your good point and [~shv] for your suggestion. I fixed it.
[jira] [Commented] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads
[ https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287457#comment-17287457 ] Yang Yun commented on HDFS-15793: - Updated to HDFS-15793.003.patch for the checkstyle issue. > Add command to DFSAdmin for Balancer max concurrent threads > > > Key: HDFS-15793 > URL: https://issues.apache.org/jira/browse/HDFS-15793 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer mover >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch, > HDFS-15793.003.patch > > > We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change the > max number of bytes per second of network bandwidth to be used by a datanode > during balancing. This patch also adds '-setBalancerMaxThreads' to dynamically change > the maximum number of threads the balancer may use concurrently for moving > blocks.
[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads
[ https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15793: Status: Open (was: Patch Available)
[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads
[ https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15793: Attachment: HDFS-15793.003.patch Status: Patch Available (was: Open)
[jira] [Commented] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287452#comment-17287452 ] Yang Yun commented on HDFS-15841: - Updated to HDFS-15841.002.patch for the checkstyle issue. > Use xattr to support delete file to trash by forced for important folder > > > Key: HDFS-15841 > URL: https://issues.apache.org/jira/browse/HDFS-15841 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15841.001.patch, HDFS-15841.002.patch > > > Deletion is a dangerous operation. > If a folder has the xattr 'user.force2trash', any deletion of this folder and > its sub-files/folders will be forced to move to trash.
[jira] [Updated] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15841: Attachment: HDFS-15841.002.patch Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15841: Status: Open (was: Patch Available)
[jira] [Comment Edited] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287449#comment-17287449 ] Yang Yun edited comment on HDFS-15841 at 2/20/21, 1:47 AM: --- Thanks [~ayushtkn] for your comment. There are some small differences from protected directories: * protected directories forbid deleting certain directories; force2trash allows the deletion but moves the data to trash so it can be recovered. * protected directories are a server-side setting that only the admin configures; force2trash is a client-side setting that any user can apply to any file/folder. was (Author: hadoop_yangyun): Thanks [~ayushtkn] for your comment. There are some small differences from protected directories: * protected directories forbid deleting certain directories; force2trash allows the deletion but moves the data to trash so it can be recovered. * protected directories are a server-side setting that only the admin configures; force2trash is a client-side setting that any user can apply to his own files/folders.
[jira] [Commented] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287449#comment-17287449 ] Yang Yun commented on HDFS-15841: - Thanks [~ayushtkn] for your comment. There are some small differences from protected directories: * protected directories forbid deleting certain directories; force2trash allows the deletion but moves the data to trash so it can be recovered. * protected directories are a server-side setting that only the admin configures; force2trash is a client-side setting that any user can apply to his own files/folders.
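As a rough illustration of the client-side semantics under discussion, here is a toy model of routing deletes to trash when 'user.force2trash' is set on the path or an ancestor. The paths, helper names, and in-memory xattr store are invented for illustration; only the xattr key comes from the proposal:

```python
from pathlib import PurePosixPath

# In-memory stand-in for per-path xattrs (HDFS would store these on inodes).
xattrs = {"/warehouse/important": {"user.force2trash": b"true"}}

def has_force2trash(path):
    """True if the path or any ancestor carries user.force2trash."""
    p = PurePosixPath(path)
    return any("user.force2trash" in xattrs.get(str(c), {})
               for c in (p, *p.parents))

def delete(path, skip_trash=False):
    # Deletions under a force2trash folder go to trash even when the
    # caller asked for a permanent (skipTrash-style) delete.
    if skip_trash and not has_force2trash(path):
        return f"deleted {path} permanently"
    return f"moved {path} to trash"

print(delete("/warehouse/important/part-0", skip_trash=True))
# → moved /warehouse/important/part-0 to trash
print(delete("/tmp/scratch", skip_trash=True))
# → deleted /tmp/scratch permanently
```

The ancestor walk is what makes the xattr protect an entire subtree rather than a single inode.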
[jira] [Work logged] (HDFS-15781) Add metrics for how blocks are moved in replaceBlock
[ https://issues.apache.org/jira/browse/HDFS-15781?focusedWorklogId=555000=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-555000 ] ASF GitHub Bot logged work on HDFS-15781: - Author: ASF GitHub Bot Created on: 20/Feb/21 00:40 Start Date: 20/Feb/21 00:40 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2704: URL: https://github.com/apache/hadoop/pull/2704#issuecomment-782481131 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 14s | | trunk passed | | +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 11s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 4s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 21s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 46s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 3m 14s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 11s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 59s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 12s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 17s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | findbugs | 3m 16s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 214m 40s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2704/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 308m 8s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2704/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2704 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 484da49dbdae 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d28b6f90c8c | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2704/4/testReport/ | | Max. process+thread
[jira] [Created] (HDFS-15842) HDFS mover to emit metrics
Leon Gao created HDFS-15842: --- Summary: HDFS mover to emit metrics Key: HDFS-15842 URL: https://issues.apache.org/jira/browse/HDFS-15842 Project: Hadoop HDFS Issue Type: Improvement Components: balancer mover Reporter: Leon Gao Assignee: Leon Gao We can emit metrics through metrics2 when running the HDFS mover, which can help monitor progress and tune mover parameters. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
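A minimal sketch of what such a metrics2 source for the mover might look like. The class name, metric names, and context below are illustrative assumptions, not the actual patch; only the metrics2 API shapes ({{@Metrics}}, {{@Metric}}, {{DefaultMetricsSystem.instance().register(...)}}) are standard Hadoop.

```java
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Hypothetical metrics source for the mover; registered once at startup.
// Annotated mutable-metric fields are instantiated by the metrics system
// when the source is registered.
@Metrics(about = "Mover activity", context = "dfs")
public class MoverMetrics {
  @Metric("Blocks scheduled for movement")
  MutableCounterLong blocksScheduled;

  @Metric("Bytes moved so far")
  MutableCounterLong bytesMoved;

  @Metric("Bytes left to move in the current iteration")
  MutableGaugeLong bytesLeftToMove;

  public static MoverMetrics create() {
    return DefaultMetricsSystem.instance()
        .register("MoverMetrics", "Metrics for the HDFS mover", new MoverMetrics());
  }
}
```

The mover loop would then call, for example, `metrics.bytesMoved.incr(blockSize)` after each successful move, and the values surface through whatever metrics2 sinks the cluster has configured.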
[jira] [Commented] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287396#comment-17287396 ] Konstantin Shvachko commented on HDFS-15808: [~tomscut] sure, we all use different systems for managing metrics. {{RpcQueueTime}} is of type {{MutableRate}}, while {{ExpiredHeartbeats}} and your new metric are just a {{@Metric}}, which makes it of type {{GAUGE}}, as Erik pointed out. In my system {{ExpiredHeartbeats}} looks like this: !ExpiredHeartbeat.png! Good point [~xkrogen] about adding {{type=Type.COUNT}} to the annotation; this should fix the problem. > Add metrics for FSNamesystem read/write lock hold long time > --- > > Key: HDFS-15808 > URL: https://issues.apache.org/jira/browse/HDFS-15808 > Project: Hadoop HDFS > Issue Type: Wish > Components: hdfs >Reporter: tomscut >Assignee: tomscut >Priority: Major > Labels: hdfs, lock, metrics, pull-request-available > Attachments: ExpiredHeartbeat.png, lockLongHoldCount > > Time Spent: 4.5h > Remaining Estimate: 0h > > To monitor how often read/write locks exceed thresholds, we can add two > metrics(ReadLockWarning/WriteLockWarning), which are exposed in JMX. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-15808: --- Attachment: ExpiredHeartbeat.png
[jira] [Updated] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-15808: --- Attachment: (was: ExpiredHeartbeat.png)
[jira] [Updated] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-15808: --- Attachment: ExpiredHeartbeat.png
[jira] [Commented] (HDFS-15422) Reported IBR is partially replaced with stored info when queuing.
[ https://issues.apache.org/jira/browse/HDFS-15422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287377#comment-17287377 ] Hadoop QA commented on HDFS-15422: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 53s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 34s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 11s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 2s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 58s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 12s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 36s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | |
[jira] [Updated] (HDFS-15422) Reported IBR is partially replaced with stored info when queuing.
[ https://issues.apache.org/jira/browse/HDFS-15422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HDFS-15422: - Attachment: HDFS-15422.001.patch > Reported IBR is partially replaced with stored info when queuing. > - > > Key: HDFS-15422 > URL: https://issues.apache.org/jira/browse/HDFS-15422 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Kihwal Lee >Assignee: Stephen O'Donnell >Priority: Critical > Attachments: HDFS-15422-branch-2.10.001.patch, HDFS-15422.001.patch > > > When queueing an IBR (incremental block report) on a standby namenode, some > of the reported information is being replaced with the existing stored > information. This can lead to false block corruption. > We had a namenode, after transitioning to active, started reporting missing > blocks with "SIZE_MISMATCH" as corrupt reason. These were blocks that were > appended and the sizes were actually correct on the datanodes. Upon further > investigation, it was determined that the namenode was queueing IBRs with > altered information. > Although it sounds bad, I am not making it blocker -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-15422) Reported IBR is partially replaced with stored info when queuing.
[ https://issues.apache.org/jira/browse/HDFS-15422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell reassigned HDFS-15422: Assignee: Stephen O'Donnell
[jira] [Updated] (HDFS-15422) Reported IBR is partially replaced with stored info when queuing.
[ https://issues.apache.org/jira/browse/HDFS-15422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HDFS-15422: - Target Version/s: 3.4.0 (was: 2.10.2)
[jira] [Commented] (HDFS-15422) Reported IBR is partially replaced with stored info when queuing.
[ https://issues.apache.org/jira/browse/HDFS-15422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287194#comment-17287194 ] Stephen O'Donnell commented on HDFS-15422: -- We have had a report of something that sounds very much like this problem: appended blocks, a failover, and then the blocks are marked corrupt. They are still readable, etc. I suspect that if we fail back over and restart the new SBNN it will clear it, but I am waiting to confirm that. This is on Cloudera CDP 7.x, which is a heavily patched 3.1 build. It looks like this problem is still there on trunk. From earlier comments, it sounds like a unit test for this is very difficult, so I will post a trunk patch with the small change [~kihwal] suggested and see what Yetus says.
[jira] [Commented] (HDFS-15809) DeadNodeDetector doesn't remove live nodes from dead node set.
[ https://issues.apache.org/jira/browse/HDFS-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287180#comment-17287180 ] Hadoop QA commented on HDFS-15809: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 45s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 36s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 31s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 53s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 13s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 55s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 26s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 4s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 5s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 57s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 57s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 50s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 50s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 16s{color} | {color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/485/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 30 unchanged - 0 fixed = 33 total (was 30) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 12s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color}
[jira] [Updated] (HDFS-15632) AbstractContractDeleteTest should set recursive parameter to true for recursive test cases.
[ https://issues.apache.org/jira/browse/HDFS-15632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HDFS-15632: -- Summary: AbstractContractDeleteTest should set recursive parameter to true for recursive test cases. (was: AbstractContractDeleteTest should set recursive peremeter to true for recursive test cases.) > AbstractContractDeleteTest should set recursive parameter to true for > recursive test cases. > --- > > Key: HDFS-15632 > URL: https://issues.apache.org/jira/browse/HDFS-15632 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.10.0 >Reporter: Konstantin Shvachko >Assignee: Anton Kutuzov >Priority: Major > Labels: newbie, pull-request-available > Fix For: 3.4.0, 3.1.5, 2.10.2, 3.2.3 > > Time Spent: 20m > Remaining Estimate: 0h > > {{AbstractContractDeleteTest.testDeleteNonexistentPathRecursive()}} should > call {{delete(path, true)}} rather than {{false}} > Also {{AbstractContractDeleteTest.testDeleteNonexistentPathNonRecursive()}} > has a wrong assert message. Should be {{"... attempting to non-recursively > delete ..."}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
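As a sketch, the fix described above amounts to flipping the recursive flag (and the assert message) in the contract test. The surrounding code below is an approximation of the test body against the FileSystem contract-test helpers, not a verbatim quote of the patch:

```java
// Approximate shape of the fixed test in AbstractContractDeleteTest.
// The second argument to FileSystem.delete() is the "recursive" flag,
// so the recursive test case should pass true, not false.
@Test
public void testDeleteNonexistentPathRecursive() throws Throwable {
  Path path = path("testDeleteNonexistentPathRecursive");
  assertPathDoesNotExist("leftover", path);
  boolean deleted = getFileSystem().delete(path, true);   // was: false
  assertFalse("Returned true attempting to non-recursively delete"
      + " a nonexistent path " + path, deleted);
}
```

The non-recursive sibling test keeps `delete(path, false)` and, per the issue, only needs its assert message corrected to say "non-recursively".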
[jira] [Commented] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time
[ https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287156#comment-17287156 ] Erik Krogen commented on HDFS-15808: I agree with [~tomscut] here, as long as the metric is marked as a {{COUNTER}}, indicating to the metrics system that it is a monotonically increasing counter. For example [inGraphs|https://engineering.linkedin.com/blog/2017/08/ingraphs--monitoring-and-unexpected-artwork] will automatically turn counter-type metrics into delta graphs. It looks like the current patch doesn't set the type, meaning it uses {{Type.DEFAULT}}, which AFAICT will end up as a {{GAUGE}} (ref [here|https://github.com/apache/hadoop/blob/1e3a6efcef2924a7966c44ca63476c853956691d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MethodMetric.java#L63]). So I think we need to adjust the {{type}} value on the {{@Metric}} annotation.
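Concretely, the suggested adjustment would look roughly like the following. The metric name, description, and backing field are illustrative, not the actual patch; note that the enum constant on the metrics2 {{@Metric}} annotation is spelled {{Metric.Type.COUNTER}}:

```java
import java.util.concurrent.atomic.LongAdder;
import org.apache.hadoop.metrics2.annotation.Metric;

// Sketch only: declaring the metric as a monotonically increasing COUNTER
// instead of relying on Type.DEFAULT (which resolves to a GAUGE).
public class FSNamesystemLockMetricsSketch {
  private final LongAdder readLockLongHoldCount = new LongAdder();

  @Metric(value = {"ReadLockLongHoldCount",
      "Times the FSNamesystem read lock was held beyond the threshold"},
      type = Metric.Type.COUNTER)
  public long getReadLockLongHoldCount() {
    return readLockLongHoldCount.sum();
  }
}
```

With the type set, downstream systems that special-case counters (like the inGraphs delta graphs mentioned above) can treat the value as cumulative rather than as an instantaneous reading.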
[jira] [Commented] (HDFS-15806) DeadNodeDetector should close all the threads when it is closed.
[ https://issues.apache.org/jira/browse/HDFS-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287127#comment-17287127 ] Ayush Saxena commented on HDFS-15806: - Yahh, I am also not very aware of the original design. v001 LGTM. +1, will commit by tomorrow if no objections. > DeadNodeDetector should close all the threads when it is closed. > > > Key: HDFS-15806 > URL: https://issues.apache.org/jira/browse/HDFS-15806 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Attachments: HDFS-15806.001.patch > > > The DeadNodeDetector doesn't close all the threads when it is closed. This > Jira tries to fix this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.
[ https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287126#comment-17287126 ] Hadoop QA commented on HDFS-15816: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 40s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 5s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 56s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 42s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 27s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 37s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 16s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green}{color} | {color:green} the patch
[jira] [Commented] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads
[ https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287123#comment-17287123 ] Hadoop QA commented on HDFS-15793:
(x) -1 overall
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 2m 42s | | Docker mode activated. |
|| Prechecks ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| 0 | buf | 0m 1s | | buf was not available. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 0m 26s | | Maven dependency ordering for branch |
| +1 | mvninstall | 21m 3s | | trunk passed |
| +1 | compile | 4m 47s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | compile | 4m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 | checkstyle | 1m 34s | | trunk passed |
| +1 | mvnsite | 2m 56s | | trunk passed |
| +1 | shadedclient | 17m 25s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 12s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javadoc | 3m 1s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| 0 | spotbugs | 1m 12s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 6m 31s | | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 26s | | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 31s | | the patch passed |
| +1 | compile | 4m 40s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 | cc | 4m 40s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/483/artifact/out/diff-compile-cc-hadoop-hdfs-project-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt | hadoop-hdfs-project-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 3 new + 90 unchanged - 3 fixed = 93 total (was 93) |
| +1 | javac | 4m 40s | | the patch passed |
| +1 | compile | 4m 23s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 | cc | 4m 23s |
[jira] [Commented] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287124#comment-17287124 ] Ayush Saxena commented on HDFS-15841:
Will protected directories not help here?
> Use xattr to support delete file to trash by forced for important folder
> Key: HDFS-15841
> URL: https://issues.apache.org/jira/browse/HDFS-15841
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Yang Yun
> Assignee: Yang Yun
> Priority: Minor
> Attachments: HDFS-15841.001.patch
>
> Deletion is a dangerous operation.
> If a folder has the xattr 'user.force2trash', any deletion of this folder and its sub-files/folders will be forcibly moved to trash.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
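The proposed semantics can be sketched with a simplified model. This is a hypothetical illustration of the ancestor-walk lookup an implementation might do, not code from the attached patch; the class, the map-based xattr store, and the helper names are invented:

```java
import java.util.Map;
import java.util.Set;

public class ForceTrashPolicyDemo {
    static final String FORCE_TRASH_XATTR = "user.force2trash";

    // Stand-in for per-inode xattrs: absolute path -> set of xattr names.
    // Returns true if the path or any of its ancestors carries
    // user.force2trash, meaning the delete must be redirected to trash.
    static boolean shouldForceTrash(String path, Map<String, Set<String>> xattrs) {
        String p = path;
        while (true) {
            Set<String> names = xattrs.get(p);
            if (names != null && names.contains(FORCE_TRASH_XATTR)) {
                return true;
            }
            if (p.equals("/")) {
                return false; // reached the root without finding the xattr
            }
            int i = p.lastIndexOf('/');
            p = (i <= 0) ? "/" : p.substring(0, i); // step up to the parent
        }
    }
}
```

In this model, setting the xattr on a single ancestor folder is enough to protect the whole subtree, which matches the description above ("this folder and its sub-files/folders").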
[jira] [Commented] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client
[ https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287122#comment-17287122 ] Ayush Saxena commented on HDFS-15839:
Thanx [~hadoop_yangyun] for the update. v001 LGTM +1. Will commit tomorrow if there are no further comments.
> RBF: Cannot get method setBalancerBandwidth on Router Client
> Key: HDFS-15839
> URL: https://issues.apache.org/jira/browse/HDFS-15839
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: rbf
> Reporter: Yang Yun
> Assignee: Yang Yun
> Priority: Major
> Attachments: HDFS-15839.001.patch, HDFS-15839.patch
>
> When calling setBalancerBandwidth, the following exception is thrown:
> {code:java}
> 02-18 14:39:59,186 [IPC Server handler 0 on default port 43545] ERROR router.RemoteMethod (RemoteMethod.java:getMethod(146)) - Cannot get method setBalancerBandwidth with types [class java.lang.Long] from ClientProtocol
> java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.protocol.ClientProtocol.setBalancerBandwidth(java.lang.Long)
> at java.lang.Class.getDeclaredMethod(Class.java:2130)
> at org.apache.hadoop.hdfs.server.federation.router.RemoteMethod.getMethod(RemoteMethod.java:140)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1312)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1250)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1221)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1194)
> at org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.setBalancerBandwidth(RouterClientProtocol.java:1188)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setBalancerBandwidth(RouterRpcServer.java:1211)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setBalancerBandwidth(ClientNamenodeProtocolServerSideTranslatorPB.java:1254)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:537)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1037)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:965)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2972){code}
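The root cause of the exception above is a standard reflection pitfall: Class.getDeclaredMethod matches parameter types exactly, so looking up a method that takes a primitive long with the boxed java.lang.Long class fails with NoSuchMethodException. A minimal self-contained reproduction (the demo class is hypothetical; only its method signature mirrors ClientProtocol.setBalancerBandwidth(long)):

```java
import java.lang.reflect.Method;

public class ReflectionBoxingDemo {
    // Mirrors ClientProtocol.setBalancerBandwidth(long bandwidth):
    // the parameter type is the primitive long, not java.lang.Long.
    public void setBalancerBandwidth(long bandwidth) { }

    // Returns true if reflective lookup with the given parameter type succeeds.
    static boolean lookupSucceeds(Class<?> paramType) {
        try {
            Method m = ReflectionBoxingDemo.class
                .getDeclaredMethod("setBalancerBandwidth", paramType);
            return m != null;
        } catch (NoSuchMethodException e) {
            // Long.class does not match a long parameter; lookup fails here.
            return false;
        }
    }
}
```

The fix on the Router side is accordingly to construct the RemoteMethod with long.class (equivalently Long.TYPE) rather than Long.class, which is presumably what the attached patch does.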
[jira] [Commented] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287099#comment-17287099 ] Hadoop QA commented on HDFS-15841:
(x) -1 overall
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 2m 3s | | Docker mode activated. |
|| Prechecks ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| 0 | mvndep | 2m 12s | | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 8s | | trunk passed |
| +1 | compile | 5m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | compile | 4m 45s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 | checkstyle | 1m 15s | | trunk passed |
| +1 | mvnsite | 2m 12s | | trunk passed |
| +1 | shadedclient | 18m 43s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javadoc | 2m 3s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| 0 | spotbugs | 3m 18s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 5m 49s | | trunk passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 21s | | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 1s | | the patch passed |
| +1 | compile | 5m 57s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javac | 5m 57s | | the patch passed |
| +1 | compile | 5m 40s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 | javac | 5m 40s | | the patch passed |
| -0 | checkstyle | 1m 17s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/482/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt | hadoop-hdfs-project: The patch generated 6 new + 57 unchanged - 0 fixed = 63 total (was 57) |
| +1 | mvnsite | 2m 21s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 |
[jira] [Commented] (HDFS-15809) DeadNodeDetector doesn't remove live nodes from dead node set.
[ https://issues.apache.org/jira/browse/HDFS-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287014#comment-17287014 ] Jinglun commented on HDFS-15809:
I hadn't dealt with the checkstyle complaint, and the patch is out of date now (cry). Re-uploading v02 to trigger Jenkins.
> DeadNodeDetector doesn't remove live nodes from dead node set.
> Key: HDFS-15809
> URL: https://issues.apache.org/jira/browse/HDFS-15809
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Jinglun
> Assignee: Jinglun
> Priority: Major
> Attachments: HDFS-15809.001.patch, HDFS-15809.002.patch
>
> We found that the dead node detector might never remove alive nodes from the dead node set in a big cluster. For example:
> # 200 nodes are added to the dead node set by DeadNodeDetector.
> # DeadNodeDetector#checkDeadNodes() adds 100 nodes to the deadNodesProbeQueue because the queue's length is limited to 100.
> # The probe threads start working and probe 30 nodes.
> # DeadNodeDetector#checkDeadNodes() is scheduled again. It iterates the dead node set and adds 30 nodes to the deadNodesProbeQueue. But the order is the same as last time, so the 30 nodes that have already been probed are added to the queue again.
> # Repeat steps 3 and 4. We always add the first 30 nodes from the dead set; if they are all dead, then the live nodes behind them can never be recovered.
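The starvation in the steps above can be reproduced with a simplified model. This is not the actual DeadNodeDetector code; the class and method are invented, and the bounded probe queue is reduced to "take the first batch of nodes in a stable iteration order each round":

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ProbeStarvationDemo {
    // Each round models one checkDeadNodes() pass: enqueue up to `batch`
    // nodes from the dead set, always iterating in the same order.
    static Set<String> probedAfterRounds(List<String> deadSet, int batch, int rounds) {
        Set<String> everProbed = new LinkedHashSet<>();
        for (int r = 0; r < rounds; r++) {
            int taken = 0;
            for (String node : deadSet) {
                if (taken == batch) {
                    break; // probe queue full for this round
                }
                taken++;
                // Because the iteration order never changes, these are the
                // same leading nodes every round; the tail is never reached.
                everProbed.add(node);
            }
        }
        return everProbed;
    }
}
```

No matter how many rounds run, only the leading batch of nodes is ever probed; a fix has to vary the selection across rounds (for example, skip nodes already probed or in flight) so the tail of the set eventually gets a probe.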
[jira] [Updated] (HDFS-15809) DeadNodeDetector doesn't remove live nodes from dead node set.
[ https://issues.apache.org/jira/browse/HDFS-15809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-15809: Attachment: HDFS-15809.002.patch
[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads
[ https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15793: Attachment: (was: HDFS-15793.002.patch)
> Add command to DFSAdmin for Balancer max concurrent threads
> Key: HDFS-15793
> URL: https://issues.apache.org/jira/browse/HDFS-15793
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: balancer mover
> Reporter: Yang Yun
> Assignee: Yang Yun
> Priority: Minor
> Attachments: HDFS-15793.001.patch, HDFS-15793.002.patch
>
> We have the DFSAdmin command '-setBalancerBandwidth' to dynamically change the max number of bytes per second of network bandwidth to be used by a datanode during balancing. This adds '-setBalancerMaxThreads' to also dynamically change the balancer maxThreads number, i.e. how many threads may be used concurrently for moving blocks.
[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads
[ https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15793: Attachment: HDFS-15793.002.patch Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15793) Add command to DFSAdmin for Balancer max concurrent threads
[ https://issues.apache.org/jira/browse/HDFS-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15793: Status: Open (was: Patch Available)
[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.
[ https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15816: Attachment: HDFS-15816.002.patch Status: Patch Available (was: Open)
> If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.
> Key: HDFS-15816
> URL: https://issues.apache.org/jira/browse/HDFS-15816
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: block placement
> Reporter: Yang Yun
> Assignee: Yang Yun
> Priority: Minor
> Attachments: HDFS-15816.001.patch, HDFS-15816.002.patch
>
> If there was NO stale node in the last choosing, chooseTarget doesn't need to retry with stale nodes.
[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.
[ https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15816: Attachment: (was: HDFS-15816.002.patch)
[jira] [Updated] (HDFS-15816) If NO stale node in last choosing, the chooseTarget don't need to retry with stale nodes.
[ https://issues.apache.org/jira/browse/HDFS-15816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15816: Status: Open (was: Patch Available)
[jira] [Commented] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client
[ https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286963#comment-17286963 ] Hadoop QA commented on HDFS-15839:
(/) +1 overall
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 0m 43s | | Docker mode activated. |
|| Prechecks ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 1s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 20m 59s | | trunk passed |
| +1 | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 | checkstyle | 0m 26s | | trunk passed |
| +1 | mvnsite | 0m 41s | | trunk passed |
| +1 | shadedclient | 14m 42s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javadoc | 0m 56s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| 0 | spotbugs | 1m 14s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 1m 11s | | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 32s | | the patch passed |
| +1 | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javac | 0m 32s | | the patch passed |
| +1 | compile | 0m 29s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 | javac | 0m 29s | | the patch passed |
| +1 | checkstyle | 0m 17s | | the patch passed |
| +1 | mvnsite | 0m 32s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 31s | | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 35s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javadoc | 0m 54s | | the
[jira] [Updated] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
[ https://issues.apache.org/jira/browse/HDFS-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15841: Attachment: HDFS-15841.001.patch Status: Patch Available (was: Open)
[jira] [Created] (HDFS-15841) Use xattr to support delete file to trash by forced for important folder
Yang Yun created HDFS-15841:
Summary: Use xattr to support delete file to trash by forced for important folder
Key: HDFS-15841
URL: https://issues.apache.org/jira/browse/HDFS-15841
Project: Hadoop HDFS
Issue Type: Improvement
Reporter: Yang Yun
Assignee: Yang Yun
Deletion is a dangerous operation. If a folder has the xattr 'user.force2trash', any deletion of this folder and its sub-files/folders will be forcibly moved to trash.
[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client
[ https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15839: Attachment: (was: HDFS-15839.001.patch)
[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client
[ https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15839: Attachment: HDFS-15839.001.patch Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15839) RBF: Cannot get method setBalancerBandwidth on Router Client
[ https://issues.apache.org/jira/browse/HDFS-15839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15839: Status: Open (was: Patch Available)