[jira] [Comment Edited] (HDFS-17038) TestDirectoryScanner.testThrottle() is still a little flaky
[ https://issues.apache.org/jira/browse/HDFS-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17729192#comment-17729192 ]

Ayush Saxena edited comment on HDFS-17038 at 6/5/23 5:56 AM:
-------------------------------------------------------------

Ok, it does log; from the jenkins result:
{noformat}
2023-06-03 08:14:01,387 [main] INFO datanode.TestDirectoryScanner (TestDirectoryScanner.java:testThrottling(787)) - RATIO: 2.4101796
{noformat}
Ran locally:
{noformat}
2023-06-05 11:22:13,585 [main] INFO datanode.TestDirectoryScanner (TestDirectoryScanner.java:testThrottling(793)) - RATIO: 2.314121
{noformat}
Maybe change it to 2 or 2.2.

was (Author: ayushtkn):
Ok, it does logs, from the jenkins result
{noformat}
2023-06-03 08:14:01,387 [main] INFO datanode.TestDirectoryScanner (TestDirectoryScanner.java:testThrottling(787)) - RATIO: 2.4101796
{noformat}
ran locally
{noformat}
2023-06-05 11:22:13,585 [main] INFO datanode.TestDirectoryScanner (TestDirectoryScanner.java:testThrottling(793)) - RATIO: 2.314121
{noformat}
may be change it to: 2 or 2.2

> TestDirectoryScanner.testThrottle() is still a little flaky
> -----------------------------------------------------------
>
>                 Key: HDFS-17038
>                 URL: https://issues.apache.org/jira/browse/HDFS-17038
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ayush Saxena
>            Priority: Major
>
> Failing every now and then
> {noformat}
> java.lang.AssertionError: Throttle is too permissive
> 	at org.junit.Assert.fail(Assert.java:89)
> 	at org.junit.Assert.assertTrue(Assert.java:42)
> 	at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:789)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {noformat}
> [https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1247/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testThrottling/]

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17038) TestDirectoryScanner.testThrottle() is still a little flaky
[ https://issues.apache.org/jira/browse/HDFS-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-17038: Priority: Minor (was: Major) > TestDirectoryScanner.testThrottle() is still a little flaky > --- > > Key: HDFS-17038 > URL: https://issues.apache.org/jira/browse/HDFS-17038 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ayush Saxena >Priority: Minor > > Failing every now and then > {noformat} > java.lang.AssertionError: Throttle is too permissive > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.assertTrue(Assert.java:42) > at > org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:789) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat} > [https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1247/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testThrottling/] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17038) TestDirectoryScanner.testThrottle() is still a little flaky
[ https://issues.apache.org/jira/browse/HDFS-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-17038: Issue Type: Test (was: Bug) > TestDirectoryScanner.testThrottle() is still a little flaky > --- > > Key: HDFS-17038 > URL: https://issues.apache.org/jira/browse/HDFS-17038 > Project: Hadoop HDFS > Issue Type: Test >Reporter: Ayush Saxena >Priority: Minor > > Failing every now and then > {noformat} > java.lang.AssertionError: Throttle is too permissive > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.assertTrue(Assert.java:42) > at > org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:789) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat} > [https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1247/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testThrottling/] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17038) TestDirectoryScanner.testThrottle() is still a little flaky
[ https://issues.apache.org/jira/browse/HDFS-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17729192#comment-17729192 ]

Ayush Saxena commented on HDFS-17038:
-------------------------------------

Ok, it does log; from the jenkins result:
{noformat}
2023-06-03 08:14:01,387 [main] INFO datanode.TestDirectoryScanner (TestDirectoryScanner.java:testThrottling(787)) - RATIO: 2.4101796
{noformat}
Ran locally:
{noformat}
2023-06-05 11:22:13,585 [main] INFO datanode.TestDirectoryScanner (TestDirectoryScanner.java:testThrottling(793)) - RATIO: 2.314121
{noformat}
Maybe change it to 2 or 2.2.

> TestDirectoryScanner.testThrottle() is still a little flaky
> -----------------------------------------------------------
>
>                 Key: HDFS-17038
>                 URL: https://issues.apache.org/jira/browse/HDFS-17038
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ayush Saxena
>            Priority: Major
>
> Failing every now and then
> {noformat}
> java.lang.AssertionError: Throttle is too permissive
> 	at org.junit.Assert.fail(Assert.java:89)
> 	at org.junit.Assert.assertTrue(Assert.java:42)
> 	at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:789)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {noformat}
> [https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1247/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testThrottling/]
[jira] [Commented] (HDFS-16946) RBF: top real owners metrics can't been parsed json string
[ https://issues.apache.org/jira/browse/HDFS-16946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729188#comment-17729188 ] ASF GitHub Bot commented on HDFS-16946: --- ayushtkn commented on code in PR #5696: URL: https://github.com/apache/hadoop/pull/5696#discussion_r1217529624 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java: ## @@ -712,13 +715,22 @@ public long getCurrentTokensCount() { @Override public String getTopTokenRealOwners() { Review Comment: TestRouterHDFSContractDelegationToken contract test aren't a good place, can explore ``TestRouterSecurityManager`` > RBF: top real owners metrics can't been parsed json string > -- > > Key: HDFS-16946 > URL: https://issues.apache.org/jira/browse/HDFS-16946 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Max Xie >Assignee: Nishtha Shah >Priority: Minor > Labels: pull-request-available > Attachments: image-2023-03-09-22-24-39-833.png > > > After HDFS-15447, Add top real owners metrics for delegation tokens. But the > metrics can't been parsed json string. > RBFMetrics$getTopTokenRealOwners method just return > `org.apache.hadoop.metrics2.util.Metrics2Util$NameValuePair@1` > !image-2023-03-09-22-24-39-833.png! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17038) TestDirectoryScanner.testThrottle() is still a little flaky
[ https://issues.apache.org/jira/browse/HDFS-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17729187#comment-17729187 ]

Ayush Saxena commented on HDFS-17038:
-------------------------------------

The last time, they increased the value itself; maybe we can explore the same. The most important thing to do is to at least put the ratio in the assertion message.

> TestDirectoryScanner.testThrottle() is still a little flaky
> -----------------------------------------------------------
>
>                 Key: HDFS-17038
>                 URL: https://issues.apache.org/jira/browse/HDFS-17038
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ayush Saxena
>            Priority: Major
>
> Failing every now and then
> {noformat}
> java.lang.AssertionError: Throttle is too permissive
> 	at org.junit.Assert.fail(Assert.java:89)
> 	at org.junit.Assert.assertTrue(Assert.java:42)
> 	at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:789)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {noformat}
> [https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1247/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testThrottling/]
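[Editor's sketch] The two suggestions in the comments above — relax the threshold to around 2.2 and surface the measured ratio in the assertion message — can be sketched as follows. The class and method names here are hypothetical illustrations, not the actual TestDirectoryScanner code:

```java
public class ThrottleAssertSketch {
    // Hypothetical helper mirroring the suggested change: include the
    // measured ratio in the failure message so a flaky run can be
    // diagnosed from the Jenkins log alone, without rerunning the test.
    static String failureMessage(float ratio) {
        return "Throttle is too permissive, ratio: " + ratio;
    }

    // The relaxed check proposed in the comment: pass while the measured
    // slowdown ratio stays at or below 2.2 (the old bound was tighter).
    static boolean throttleAcceptable(float ratio) {
        return ratio <= 2.2f;
    }
}
```

With the ratios quoted in the thread, 2.314121 and 2.4101796 would still fail under a 2.2 bound, which is why the comment also floats plain 2 vs 2.2 as candidates.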
[jira] [Created] (HDFS-17038) TestDirectoryScanner.testThrottle() is still a little flaky
Ayush Saxena created HDFS-17038: --- Summary: TestDirectoryScanner.testThrottle() is still a little flaky Key: HDFS-17038 URL: https://issues.apache.org/jira/browse/HDFS-17038 Project: Hadoop HDFS Issue Type: Bug Reporter: Ayush Saxena Failing every now and then {noformat} java.lang.AssertionError: Throttle is too permissive at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.assertTrue(Assert.java:42) at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:789) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){noformat} [https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1247/testReport/junit/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testThrottling/] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16946) RBF: top real owners metrics can't been parsed json string
[ https://issues.apache.org/jira/browse/HDFS-16946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729185#comment-17729185 ] ASF GitHub Bot commented on HDFS-16946: --- NishthaShah commented on code in PR #5696: URL: https://github.com/apache/hadoop/pull/5696#discussion_r1217519201 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java: ## @@ -712,13 +715,22 @@ public long getCurrentTokensCount() { @Override public String getTopTokenRealOwners() { Review Comment: @ayushtkn Let me know if adding it in TestRouterHDFSContractDelegationToken, sounds good or we should add it in some more suitable place. (Initially had tried to add a test in TestRBFMetrics, but this.router.getRpcServer(), is failing with NullPointerException) > RBF: top real owners metrics can't been parsed json string > -- > > Key: HDFS-16946 > URL: https://issues.apache.org/jira/browse/HDFS-16946 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Max Xie >Assignee: Nishtha Shah >Priority: Minor > Labels: pull-request-available > Attachments: image-2023-03-09-22-24-39-833.png > > > After HDFS-15447, Add top real owners metrics for delegation tokens. But the > metrics can't been parsed json string. > RBFMetrics$getTopTokenRealOwners method just return > `org.apache.hadoop.metrics2.util.Metrics2Util$NameValuePair@1` > !image-2023-03-09-22-24-39-833.png! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17028) RBF: Optimize debug logs of class ConnectionPool and other related class.
[ https://issues.apache.org/jira/browse/HDFS-17028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729184#comment-17729184 ] ASF GitHub Bot commented on HDFS-17028: --- hfutatzhanghb commented on code in PR #5694: URL: https://github.com/apache/hadoop/pull/5694#discussion_r1217515845 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java: ## @@ -286,8 +286,9 @@ public synchronized List removeConnections(int num) { } this.connections = tmpConnections; } -LOG.debug("Expected to remove {} connection and actually removed {} connections", -num, removed.size()); +LOG.debug("Expected to remove {} connection and actually removed {} connections " + +"for connectionPool: {}", +num, removed.size(), connectionPoolId); Review Comment: Thanks sir, have fixed formatting issue. > RBF: Optimize debug logs of class ConnectionPool and other related class. > - > > Key: HDFS-17028 > URL: https://issues.apache.org/jira/browse/HDFS-17028 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.3.4 >Reporter: farmmamba >Priority: Minor > Labels: pull-request-available > > When we change the log level of RouterRpcClient from INFO to DEBUG to figure > out which connection an user is using. 
We found logs below: > > {code:java} > 2023-05-29 09:46:09,033 DEBUG > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone > NN ANN:8020 is using connection > ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x3 > 2023-05-29 09:46:09,037 DEBUG > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone > NN ANN:8020 is using connection > ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x1 > 2023-05-29 09:46:09,037 DEBUG > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone > NN ANN:8020 is using connection > ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x2 > 2023-05-29 09:46:09,037 DEBUG > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone > NN ANN:8020 is using connection > ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x3 > 2023-05-29 09:46:09,042 DEBUG > org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone > NN ANN:8020 is using connection > ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x0 {code} > It seems not very clear for us to figure out which connection user is using. > Therefore, i think we should optimize the toString method of class > ConnectionContext. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16946) RBF: top real owners metrics can't been parsed json string
[ https://issues.apache.org/jira/browse/HDFS-16946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729182#comment-17729182 ] ASF GitHub Bot commented on HDFS-16946: --- NishthaShah commented on code in PR #5696: URL: https://github.com/apache/hadoop/pull/5696#discussion_r1217512371 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java: ## @@ -712,13 +715,22 @@ public long getCurrentTokensCount() { @Override public String getTopTokenRealOwners() { Review Comment: Sure @ayushtkn, Let me add it in the same class (TestRouterHDFSContractDelegationToken) where I added to test > RBF: top real owners metrics can't been parsed json string > -- > > Key: HDFS-16946 > URL: https://issues.apache.org/jira/browse/HDFS-16946 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Max Xie >Assignee: Nishtha Shah >Priority: Minor > Labels: pull-request-available > Attachments: image-2023-03-09-22-24-39-833.png > > > After HDFS-15447, Add top real owners metrics for delegation tokens. But the > metrics can't been parsed json string. > RBFMetrics$getTopTokenRealOwners method just return > `org.apache.hadoop.metrics2.util.Metrics2Util$NameValuePair@1` > !image-2023-03-09-22-24-39-833.png! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17037) Consider nonDfsUsed when running balancer
[ https://issues.apache.org/jira/browse/HDFS-17037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17729172#comment-17729172 ]

ASF GitHub Bot commented on HDFS-17037:
---------------------------------------

Hexiaoqiao commented on code in PR #5715:
URL: https://github.com/apache/hadoop/pull/5715#discussion_r1217483230

##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/BalancingPolicy.java:
##########

@@ -104,21 +104,21 @@
   void accumulateSpaces(DatanodeStorageReport r) {
     for(StorageReport s : r.getStorageReports()) {
       final StorageType t = s.getStorage().getStorageType();
       totalCapacities.add(t, s.getCapacity());
-      totalUsedSpaces.add(t, s.getDfsUsed());
+      totalUsedSpaces.add(t, (s.getCapacity() - s.getRemaining()));
     }
   }

   @Override
   Double getUtilization(DatanodeStorageReport r, final StorageType t) {
     long capacity = 0L;
-    long dfsUsed = 0L;
+    long totalUsed = 0L;
     for(StorageReport s : r.getStorageReports()) {
       if (s.getStorage().getStorageType() == t) {
         capacity += s.getCapacity();
-        dfsUsed += s.getDfsUsed();
+        totalUsed += (s.getCapacity() - s.getRemaining());
       }
     }
-    return capacity == 0L? null: dfsUsed*100.0/capacity;
+    return capacity == 0L? null: totalUsed*100.0/capacity;
   }

Review Comment (on the return statement): should fix the codestyle.

Review Comment (on accumulateSpaces): IMO, BalancingPolicy.Pool also has the same issue.

Review Comment (on the totalUsedSpaces change): It may not hold when using `s.getCapacity() - s.getRemaining()` directly. For instance, the value will be negative if one storage has a total disk capacity of 10MB but only 1MB configured as capacity for HDFS; if 8MB remain after running for a while, then `s.getCapacity() - s.getRemaining()` will be -7*1024*1024.

> Consider nonDfsUsed when running balancer
> -----------------------------------------
>
>                 Key: HDFS-17037
>                 URL: https://issues.apache.org/jira/browse/HDFS-17037
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Shuyan Zhang
>            Assignee: Shuyan Zhang
>            Priority: Major
>              Labels: pull-request-available
>
> When we run balancer with `BalancingPolicy.Node` policy, our goal is to make
> each datanode storage balanced. But in the current implementation, the
> balancer doesn't account for storage used by non-dfs on the datanodes, which
> can make the situation worse for datanodes that are already strained on
> storage.
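[Editor's sketch] The failure mode Hexiaoqiao describes — `capacity - remaining` going negative when the configured HDFS capacity is smaller than the space still free on the volume — can be made concrete with a small standalone sketch. This is not the actual Balancer code; the zero clamp is just one possible guard, shown to illustrate the arithmetic:

```java
public class UtilizationSketch {
    // Per-storage utilization in percent, counting non-DFS usage via
    // capacity - remaining. Clamped at zero because getCapacity() returns
    // the *configured* HDFS capacity, which can be smaller than the
    // remaining space reported for the volume (e.g. 10MB disk, 1MB
    // configured for HDFS, 8MB remaining -> raw difference is negative).
    static double utilizationPercent(long capacity, long remaining) {
        if (capacity == 0L) {
            return 0.0; // no configured capacity for this storage type
        }
        long totalUsed = Math.max(0L, capacity - remaining);
        return totalUsed * 100.0 / capacity;
    }
}
```

Without the clamp, the 1MB-configured / 8MB-remaining example would report roughly -700% utilization and skew the balancing decision.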
[jira] [Commented] (HDFS-17028) RBF: Optimize debug logs of class ConnectionPool and other related class.
[ https://issues.apache.org/jira/browse/HDFS-17028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17729165#comment-17729165 ]

ASF GitHub Bot commented on HDFS-17028:
---------------------------------------

ayushtkn commented on code in PR #5694:
URL: https://github.com/apache/hadoop/pull/5694#discussion_r1217473748

##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java:
##########

@@ -286,8 +286,9 @@
       }
       this.connections = tmpConnections;
     }
-    LOG.debug("Expected to remove {} connection and actually removed {} connections",
-        num, removed.size());
+    LOG.debug("Expected to remove {} connection and actually removed {} connections " +
+        "for connectionPool: {}",
+        num, removed.size(), connectionPoolId);

Review Comment: formatting issue
```
LOG.debug("Expected to remove {} connection and actually removed {} connections "
    + "for connectionPool: {}", num, removed.size(), connectionPoolId);
```

> RBF: Optimize debug logs of class ConnectionPool and other related class.
> --------------------------------------------------------------------------
>
>                 Key: HDFS-17028
>                 URL: https://issues.apache.org/jira/browse/HDFS-17028
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: rbf
>    Affects Versions: 3.3.4
>            Reporter: farmmamba
>            Priority: Minor
>              Labels: pull-request-available
>
> When we change the log level of RouterRpcClient from INFO to DEBUG to figure
> out which connection an user is using. We found logs below:
>
> {code:java}
> 2023-05-29 09:46:09,033 DEBUG org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone NN ANN:8020 is using connection ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x3
> 2023-05-29 09:46:09,037 DEBUG org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone NN ANN:8020 is using connection ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x1
> 2023-05-29 09:46:09,037 DEBUG org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone NN ANN:8020 is using connection ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x2
> 2023-05-29 09:46:09,037 DEBUG org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone NN ANN:8020 is using connection ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x3
> 2023-05-29 09:46:09,042 DEBUG org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient: User someone NN ANN:8020 is using connection ClientNamenodeProtocolTranslatorPB@ANN/ANN_IP:8020x0
> {code}
> It seems not very clear for us to figure out which connection an user is
> using. Therefore, I think we should optimize the toString method of class
> ConnectionContext.
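[Editor's sketch] The JIRA's end goal — a more informative `ConnectionContext#toString()` — could look like the sketch below. The fields and output format here are hypothetical, chosen only to show how the ambiguous `...@ANN/ANN_IP:8020x3` suffix in the debug logs could become self-explanatory; the real ConnectionContext class has different internals:

```java
public class ConnectionContextSketch {
    private final String protocolName;    // e.g. ClientNamenodeProtocolTranslatorPB
    private final String namenodeAddress; // e.g. ANN:8020
    private final int threadCount;        // threads currently sharing this connection

    ConnectionContextSketch(String protocolName, String namenodeAddress, int threadCount) {
        this.protocolName = protocolName;
        this.namenodeAddress = namenodeAddress;
        this.threadCount = threadCount;
    }

    // Spell out what each component means instead of the bare "x3" suffix,
    // so a DEBUG log line identifies the connection unambiguously.
    @Override
    public String toString() {
        return protocolName + " to " + namenodeAddress
            + " [activeThreads=" + threadCount + "]";
    }
}
```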
[jira] [Commented] (HDFS-16946) RBF: top real owners metrics can't been parsed json string
[ https://issues.apache.org/jira/browse/HDFS-16946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729164#comment-17729164 ] ASF GitHub Bot commented on HDFS-16946: --- ayushtkn commented on code in PR #5696: URL: https://github.com/apache/hadoop/pull/5696#discussion_r1217470177 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java: ## @@ -712,13 +715,22 @@ public long getCurrentTokensCount() { @Override public String getTopTokenRealOwners() { Review Comment: @NishthaShah Where is this test added, can you add a test in the PR as well? > RBF: top real owners metrics can't been parsed json string > -- > > Key: HDFS-16946 > URL: https://issues.apache.org/jira/browse/HDFS-16946 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Max Xie >Assignee: Nishtha Shah >Priority: Minor > Labels: pull-request-available > Attachments: image-2023-03-09-22-24-39-833.png > > > After HDFS-15447, Add top real owners metrics for delegation tokens. But the > metrics can't been parsed json string. > RBFMetrics$getTopTokenRealOwners method just return > `org.apache.hadoop.metrics2.util.Metrics2Util$NameValuePair@1` > !image-2023-03-09-22-24-39-833.png! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
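[Editor's sketch] The underlying HDFS-16946 bug is that `getTopTokenRealOwners` ends up stringifying `Metrics2Util$NameValuePair` objects, whose default `Object#toString()` yields `...NameValuePair@1` rather than JSON. A minimal hand-rolled sketch of the kind of serialization the fix needs is below; `Map.Entry` stands in for NameValuePair, and the actual PR may well delegate to a JSON library instead:

```java
import java.util.List;
import java.util.Map;

public class TopOwnersJsonSketch {
    // Serialize (owner, tokenCount) pairs to a JSON array string so the
    // metric value is machine-parseable instead of Object#toString() output.
    static String toJson(List<Map.Entry<String, Long>> owners) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < owners.size(); i++) {
            Map.Entry<String, Long> e = owners.get(i);
            if (i > 0) {
                sb.append(',');
            }
            sb.append("{\"name\":\"").append(e.getKey())
              .append("\",\"value\":").append(e.getValue()).append('}');
        }
        return sb.append(']').toString();
    }
}
```

Note the sketch assumes owner names need no JSON escaping; real user names containing quotes or backslashes would need proper escaping, which is another reason to prefer a JSON library.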
[jira] [Commented] (HDFS-17029) Support getECPolices API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729154#comment-17729154 ] ASF GitHub Bot commented on HDFS-17029: --- zhtttylz commented on PR #5698: URL: https://github.com/apache/hadoop/pull/5698#issuecomment-1575992353 @ayushtkn Would you mind to take another reviews? Thanks. > Support getECPolices API in WebHDFS > --- > > Key: HDFS-17029 > URL: https://issues.apache.org/jira/browse/HDFS-17029 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Attachments: image-2023-05-29-23-55-09-224.png > > > WebHDFS should support getEcPolicies: > !image-2023-05-29-23-55-09-224.png|width=817,height=234! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17029) Support getECPolices API in WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-17029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729144#comment-17729144 ] ASF GitHub Bot commented on HDFS-17029: --- zhtttylz commented on PR #5698: URL: https://github.com/apache/hadoop/pull/5698#issuecomment-1575957222 @ayushtkn Would you mind to take another reviews? Thanks. > Support getECPolices API in WebHDFS > --- > > Key: HDFS-17029 > URL: https://issues.apache.org/jira/browse/HDFS-17029 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 3.4.0 >Reporter: Hualong Zhang >Assignee: Hualong Zhang >Priority: Major > Labels: pull-request-available > Attachments: image-2023-05-29-23-55-09-224.png > > > WebHDFS should support getEcPolicies: > !image-2023-05-29-23-55-09-224.png|width=817,height=234! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16757) Add a new method copyBlockCrossNamespace to DataNode
[ https://issues.apache.org/jira/browse/HDFS-16757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729139#comment-17729139 ] ASF GitHub Bot commented on HDFS-16757: --- ZanderXu closed pull request #4888: HDFS-16757. [FastCopy] Add a new method copyBlockCrossNamespace to DataNode URL: https://github.com/apache/hadoop/pull/4888 > Add a new method copyBlockCrossNamespace to DataNode > > > Key: HDFS-16757 > URL: https://issues.apache.org/jira/browse/HDFS-16757 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: ZanderXu >Assignee: Haiyang Hu >Priority: Minor > Labels: pull-request-available > > Add a new method copyBlockCrossNamespace in DataTransferProtocol at the > DataNode Side. > This method will copy a source block from one namespace to a target block > from a different namespace. If the target DN is the same with the current DN, > this method will copy the block via HardLink. If the target DN is different > with the current DN, this method will copy the block via TransferBlock. > This method will contains some parameters: > * ExtendedBlock sourceBlock > * Token sourceBlockToken > * ExtendedBlock targetBlock > * Token targetBlockToken > * DatanodeInfo targetDN -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13507) RBF: Remove update functionality from routeradmin's add cmd
[ https://issues.apache.org/jira/browse/HDFS-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729138#comment-17729138 ] ASF GitHub Bot commented on HDFS-13507: --- ZanderXu commented on PR #4990: URL: https://github.com/apache/hadoop/pull/4990#issuecomment-1575942276 The failed UT `hadoop.hdfs.server.datanode.TestDirectoryScanner` is not caused by this PR. > RBF: Remove update functionality from routeradmin's add cmd > --- > > Key: HDFS-13507 > URL: https://issues.apache.org/jira/browse/HDFS-13507 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Gang Li >Priority: Minor > Labels: incompatible, pull-request-available > Attachments: HDFS-13507-HDFS-13891.003.patch, > HDFS-13507-HDFS-13891.004.patch, HDFS-13507.000.patch, HDFS-13507.001.patch, > HDFS-13507.002.patch, HDFS-13507.003.patch > > > Follow up the discussion in HDFS-13326. We should remove the "update" > functionality from routeradmin's add cmd, to make it consistent with RPC > calls. > Note that: this is an incompatible change. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17037) Consider nonDfsUsed when running balancer
[ https://issues.apache.org/jira/browse/HDFS-17037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729107#comment-17729107 ]

ASF GitHub Bot commented on HDFS-17037:
---------------------------------------

hadoop-yetus commented on PR #5715:
URL: https://github.com/apache/hadoop/pull/5715#issuecomment-1575680956

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 53s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 37m 22s | | trunk passed |
| +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 1m 10s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 1m 7s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 18s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 3m 15s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 29s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 5s | | the patch passed |
| +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 1m 8s | | the patch passed |
| +1 :green_heart: | compile | 1m 2s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 1m 2s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 53s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5715/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 141 unchanged - 1 fixed = 142 total (was 142) |
| +1 :green_heart: | mvnsite | 1m 6s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 3m 2s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 14s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 242m 36s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5715/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
| | | 347m 19s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5715/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5715 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux c001c4c9aad4 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / ee0d67c3e4908bea7d955e95f7859125521e449f |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private |
[jira] [Commented] (HDFS-17037) Consider nonDfsUsed when running balancer
[ https://issues.apache.org/jira/browse/HDFS-17037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729066#comment-17729066 ]

ASF GitHub Bot commented on HDFS-17037:
---------------------------------------

zhangshuyan0 opened a new pull request, #5715:
URL: https://github.com/apache/hadoop/pull/5715

### Description of PR
When we run the balancer with the `BalancingPolicy.Node` policy, the goal is to balance the storage used on each datanode. But the current implementation does not account for non-DFS storage used on the datanodes, which can make the situation worse for datanodes that are already strained on storage.

### How was this patch tested?
Added a new unit test.

> Consider nonDfsUsed when running balancer
> -----------------------------------------
>
>                 Key: HDFS-17037
>                 URL: https://issues.apache.org/jira/browse/HDFS-17037
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Shuyan Zhang
>            Assignee: Shuyan Zhang
>            Priority: Major
>
> When we run the balancer with the `BalancingPolicy.Node` policy, the goal is
> to balance the storage used on each datanode. But the current implementation
> does not account for non-DFS storage used on the datanodes, which can make
> the situation worse for datanodes that are already strained on storage.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
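To make the problem concrete, here is a minimal sketch of why ignoring non-DFS usage misranks a datanode. All names below (`UtilizationSketch`, `dfsUtilization`, `totalUtilization`) are illustrative, not the actual Balancer code; the real implementation lives in the HDFS `Balancer`/`BalancingPolicy` classes.

```java
// Illustrative sketch only -- not the actual HDFS Balancer implementation.
// Shows how a datanode with heavy non-DFS usage can look under-utilized
// when utilization is computed from DFS-used bytes alone.
public class UtilizationSketch {

    /** Utilization as dfsUsed / capacity, ignoring non-DFS bytes. */
    static double dfsUtilization(long dfsUsed, long capacity) {
        return (double) dfsUsed / capacity;
    }

    /** Utilization as (dfsUsed + nonDfsUsed) / capacity, counting non-DFS bytes. */
    static double totalUtilization(long dfsUsed, long nonDfsUsed, long capacity) {
        return (double) (dfsUsed + nonDfsUsed) / capacity;
    }

    public static void main(String[] args) {
        long gib = 1024L * 1024 * 1024;
        long capacity = 100 * gib;    // 100 GiB disk
        long dfsUsed = 20 * gib;      // 20 GiB of HDFS blocks
        long nonDfsUsed = 60 * gib;   // 60 GiB of non-HDFS data on the same disk

        // By DFS usage alone the node looks only 20% full and is an
        // attractive move target for the balancer...
        System.out.printf("dfs-only utilization: %.2f%n",
                dfsUtilization(dfsUsed, capacity));
        // ...but counting non-DFS bytes it is actually 80% full, so moving
        // more blocks onto it would strain its storage further.
        System.out.printf("total utilization:    %.2f%n",
                totalUtilization(dfsUsed, nonDfsUsed, capacity));
    }
}
```

Under this reading, the fix is to rank datanodes by the second formula (or equivalently by remaining usable space) so that nodes already strained by non-DFS data are not chosen as move targets.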
[jira] [Updated] (HDFS-17037) Consider nonDfsUsed when running balancer
[ https://issues.apache.org/jira/browse/HDFS-17037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-17037:
----------------------------------
    Labels: pull-request-available  (was: )

> Consider nonDfsUsed when running balancer
> -----------------------------------------
>
>                 Key: HDFS-17037
>                 URL: https://issues.apache.org/jira/browse/HDFS-17037
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Shuyan Zhang
>            Assignee: Shuyan Zhang
>            Priority: Major
>              Labels: pull-request-available
>
> When we run the balancer with the `BalancingPolicy.Node` policy, the goal is
> to balance the storage used on each datanode. But the current implementation
> does not account for non-DFS storage used on the datanodes, which can make
> the situation worse for datanodes that are already strained on storage.
[jira] [Created] (HDFS-17037) Consider nonDfsUsed when running balancer
Shuyan Zhang created HDFS-17037:
-----------------------------------

             Summary: Consider nonDfsUsed when running balancer
                 Key: HDFS-17037
                 URL: https://issues.apache.org/jira/browse/HDFS-17037
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Shuyan Zhang
            Assignee: Shuyan Zhang

When we run the balancer with the `BalancingPolicy.Node` policy, the goal is to balance the storage used on each datanode. But the current implementation does not account for non-DFS storage used on the datanodes, which can make the situation worse for datanodes that are already strained on storage.