[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=783689&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783689 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 22/Jun/22 04:11
            Start Date: 22/Jun/22 04:11
    Worklog Time Spent: 10m

Work Description: tomscut commented on PR #4467:
URL: https://github.com/apache/hadoop/pull/4467#issuecomment-1162616141

The failed unit tests are unrelated to the change.

Issue Time Tracking
-------------------
    Worklog Id:     (was: 783689)
    Time Spent: 3h 10m  (was: 3h)

> Dynamically adjust slow peer report size on JMX metrics
> -------------------------------------------------------
>
>                 Key: HDFS-16634
>                 URL: https://issues.apache.org/jira/browse/HDFS-16634
>             Project: Hadoop HDFS
>          Issue Type: Task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> On a busy cluster, it sometimes takes a while for the slow-node entry of a
> node that has been removed from the cluster to disappear from the slow peer
> JSON report in the Namenode's JMX metrics. In the meantime, the user should
> be able to browse more entries in the report by reconfiguring
> "dfs.datanode.max.nodes.to.report" at runtime, so that the list size can be
> adjusted without having to bounce the active Namenode just for this purpose.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
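For readers following along, the runtime reconfiguration described in the issue is typically driven from `hdfs-site.xml` plus `dfsadmin`. The snippet below is an illustrative sketch, not taken from the patch: the value `20` is an arbitrary example, and it assumes this change registers `dfs.datanode.max.nodes.to.report` as a reconfigurable NameNode property.

```xml
<!-- hdfs-site.xml on the active NameNode host (illustrative value). -->
<!-- Caps how many slow-peer entries appear in the JMX report. -->
<property>
  <name>dfs.datanode.max.nodes.to.report</name>
  <value>20</value>
</property>
```

After editing the file, the changed property can usually be applied live with `hdfs dfsadmin -reconfig namenode <host:port> start` (host and port are deployment-specific placeholders) and monitored with the matching `status` subcommand, so no NameNode restart is required.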
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=783684&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783684 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 22/Jun/22 04:03
            Start Date: 22/Jun/22 04:03
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4467:
URL: https://github.com/apache/hadoop/pull/4467#issuecomment-1162612112

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 43s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ branch-3.3 Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 22s | | branch-3.3 passed |
| +1 :green_heart: | compile | 1m 30s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 1m 13s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 1m 40s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 1m 54s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 3m 35s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 26m 49s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 27s | | the patch passed |
| +1 :green_heart: | compile | 1m 16s | | the patch passed |
| +1 :green_heart: | javac | 1m 16s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 52s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 22s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 32s | | the patch passed |
| +1 :green_heart: | spotbugs | 3m 20s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 8s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 202m 10s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 12s | | The patch does not generate ASF License warnings. |
| | | 309m 33s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService |
| | hadoop.hdfs.server.balancer.TestBalancer |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4467 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 697028a69eca 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 189d54292075458def54da0aa57527a186e6a691 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/6/testReport/ |
| Max. process+thread count | 2903 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/6/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
-------------------
    Worklog Id:     (was: 783684)
    Time Spent: 3h  (was: 2h 50m)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=783606&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783606 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 21/Jun/22 23:05
            Start Date: 21/Jun/22 23:05
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4467:
URL: https://github.com/apache/hadoop/pull/4467#issuecomment-1162446944

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 10m 7s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ branch-3.3 Compile Tests _ |
| +1 :green_heart: | mvninstall | 37m 52s | | branch-3.3 passed |
| +1 :green_heart: | compile | 1m 23s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 1m 5s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 1m 33s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 1m 45s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 3m 41s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 28m 18s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 10s | | the patch passed |
| +1 :green_heart: | javac | 1m 10s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 46s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 19s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 24s | | the patch passed |
| +1 :green_heart: | spotbugs | 3m 27s | | the patch passed |
| +1 :green_heart: | shadedclient | 27m 54s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 217m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | 338m 19s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4467 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 5a21ce093036 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 189d54292075458def54da0aa57527a186e6a691 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/4/testReport/ |
| Max. process+thread count | 1967 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/4/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
-------------------
    Worklog Id:     (was: 783606)
    Time Spent: 2h 50m  (was: 2h 40m)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=783560&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783560 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 21/Jun/22 20:49
            Start Date: 21/Jun/22 20:49
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4467:
URL: https://github.com/apache/hadoop/pull/4467#issuecomment-1162339393

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 6m 54s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ branch-3.3 Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 33s | | branch-3.3 passed |
| +1 :green_heart: | compile | 1m 31s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 1m 12s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 1m 42s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 1m 56s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 3m 35s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 26m 51s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 18s | | the patch passed |
| +1 :green_heart: | compile | 1m 13s | | the patch passed |
| +1 :green_heart: | javac | 1m 13s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 49s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 17s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 31s | | the patch passed |
| +1 :green_heart: | spotbugs | 3m 14s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 9s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 89m 46s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +0 :ok: | asflicense | 0m 52s | | ASF License check generated no output? |
| | | 202m 59s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDFSStorageStateRecovery |
| | hadoop.hdfs.TestSafeModeWithStripedFile |
| | hadoop.hdfs.TestErasureCodingPolicies |
| | hadoop.hdfs.TestDecommissionWithStriped |
| | hadoop.hdfs.TestDFSStripedOutputStreamUpdatePipeline |
| | hadoop.hdfs.TestDFSStripedInputStream |
| | hadoop.hdfs.TestFileAppend2 |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
| | hadoop.hdfs.TestDatanodeDeath |
| | hadoop.hdfs.TestErasureCodingMultipleRacks |
| | hadoop.hdfs.TestReadStripedFileWithDecoding |
| | hadoop.hdfs.TestLocatedBlocksRefresher |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4467 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 5ba6db501893 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 189d54292075458def54da0aa57527a186e6a691 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/5/testReport/ |
| Max. process+thread count | 2219 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/5/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=783257&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-783257 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 21/Jun/22 08:35
            Start Date: 21/Jun/22 08:35
    Worklog Time Spent: 10m

Work Description: tomscut commented on PR #4467:
URL: https://github.com/apache/hadoop/pull/4467#issuecomment-1161437185

Hi @virajjasani, could you push an empty commit to retrigger the Jenkins build? Thanks.

Issue Time Tracking
-------------------
    Worklog Id:     (was: 783257)
    Time Spent: 2.5h  (was: 2h 20m)
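Retriggering CI this way relies on `git commit --allow-empty`. The sketch below demonstrates the command in a throwaway repository; the scratch-repo setup and the demo identity are illustrative scaffolding, and on a real PR branch one would run only the empty commit followed by a `git push`.

```shell
# Demonstrate `git commit --allow-empty` in a scratch repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
# The empty commit that, once pushed to the PR branch, retriggers CI:
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Empty commit to retrigger Jenkins"
git rev-list --count HEAD   # prints 2
```

The commit carries no file changes, so it retriggers the build without touching the patch under review.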
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782935 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 20/Jun/22 10:37
            Start Date: 20/Jun/22 10:37
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4467:
URL: https://github.com/apache/hadoop/pull/4467#issuecomment-1160284165

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 10m 12s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ branch-3.3 Compile Tests _ |
| +1 :green_heart: | mvninstall | 38m 7s | | branch-3.3 passed |
| +1 :green_heart: | compile | 1m 23s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 1m 5s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 1m 35s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 1m 45s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 3m 41s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 28m 24s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 14s | | the patch passed |
| +1 :green_heart: | javac | 1m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 49s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 20s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed |
| +1 :green_heart: | spotbugs | 3m 24s | | the patch passed |
| +1 :green_heart: | shadedclient | 27m 38s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 223m 35s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 57s | | The patch does not generate ASF License warnings. |
| | | 345m 9s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.cli.TestHDFSCLI |
| | hadoop.hdfs.TestRollingUpgrade |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4467 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux e4565a89b2d6 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 189d54292075458def54da0aa57527a186e6a691 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/1/testReport/ |
| Max. process+thread count | 2211 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4467/1/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

Issue Time Tracking
-------------------
    Worklog Id:     (was: 782935)
    Time Spent: 2h 20m  (was: 2h 10m)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782783&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782783 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 20/Jun/22 04:51
            Start Date: 20/Jun/22 04:51
    Worklog Time Spent: 10m

Work Description: virajjasani opened a new pull request, #4467:
URL: https://github.com/apache/hadoop/pull/4467

branch-3.3 backport PR of #4448

Issue Time Tracking
-------------------
    Worklog Id:     (was: 782783)
    Time Spent: 2h  (was: 1h 50m)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782784&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782784 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 20/Jun/22 04:51
            Start Date: 20/Jun/22 04:51
    Worklog Time Spent: 10m

Work Description: virajjasani commented on PR #4467:
URL: https://github.com/apache/hadoop/pull/4467#issuecomment-1159969878

FYI @tomscut

Issue Time Tracking
-------------------
    Worklog Id:     (was: 782784)
    Time Spent: 2h 10m  (was: 2h)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782752&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782752 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 20/Jun/22 01:25
            Start Date: 20/Jun/22 01:25
    Worklog Time Spent: 10m

Work Description: tomscut commented on PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#issuecomment-1159867414

Hi @virajjasani, could you please submit another PR for branch-3.3, since there are some conflicts when cherry-picking? Thanks.

Issue Time Tracking
-------------------
    Worklog Id:     (was: 782752)
    Time Spent: 1h 50m  (was: 1h 40m)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782751&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782751 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 20/Jun/22 01:21
            Start Date: 20/Jun/22 01:21
    Worklog Time Spent: 10m

Work Description: tomscut merged PR #4448:
URL: https://github.com/apache/hadoop/pull/4448

Issue Time Tracking
-------------------
    Worklog Id:     (was: 782751)
    Time Spent: 1h 40m  (was: 1.5h)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782750&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782750 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 20/Jun/22 01:20
            Start Date: 20/Jun/22 01:20
    Worklog Time Spent: 10m

Work Description: tomscut commented on PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#issuecomment-1159864599

Thanks @virajjasani for your contribution!

Issue Time Tracking
-------------------
    Worklog Id:     (was: 782750)
    Time Spent: 1.5h  (was: 1h 20m)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782653 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 19/Jun/22 00:32
            Start Date: 19/Jun/22 00:32
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#issuecomment-1159589225

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 51s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 40m 3s | | trunk passed |
| +1 :green_heart: | compile | 1m 40s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 1m 33s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 22s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 42s | | trunk passed |
| -1 :x: | javadoc | 1m 20s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 45s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 51s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 22s | | the patch passed |
| +1 :green_heart: | compile | 1m 29s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 1m 29s | | the patch passed |
| +1 :green_heart: | compile | 1m 21s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 21s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 1s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 27s | | the patch passed |
| -1 :x: | javadoc | 0m 59s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/3/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 31s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 37s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 332m 4s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | 448m 47s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4448 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 04a98445fc19 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / cba2bb882e864e063ef704d75e0a3a8869eda49c |
| Default Java | Private |
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782633=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782633 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 18/Jun/22 14:09
Start Date: 18/Jun/22 14:09
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#issuecomment-1159471779

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 19s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |

_ trunk Compile Tests _

| +1 :green_heart: | mvninstall | 44m 50s | | trunk passed |
| +1 :green_heart: | compile | 2m 2s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 1m 46s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 3s | | trunk passed |
| -1 :x: | javadoc | 1m 40s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 1m 56s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 4m 23s | | trunk passed |
| +1 :green_heart: | shadedclient | 29m 42s | | branch has no errors when building and testing our client artifacts. |

_ Patch Compile Tests _

| +1 :green_heart: | mvninstall | 1m 38s | | the patch passed |
| +1 :green_heart: | compile | 1m 38s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 1m 38s | | the patch passed |
| +1 :green_heart: | compile | 1m 33s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 9s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 44s | | the patch passed |
| -1 :x: | javadoc | 1m 10s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 1m 40s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 54s | | the patch passed |
| +1 :green_heart: | shadedclient | 28m 20s | | patch has no errors when building and testing our client artifacts. |

_ Other Tests _

| -1 :x: | unit | 396m 33s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 19s | | The patch does not generate ASF License warnings. |
| | | 529m 19s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.TestHostsFiles |
| | hadoop.hdfs.server.namenode.TestAuditLogger |
| | hadoop.hdfs.server.namenode.TestFileContextAcl |
| | hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4448 |
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782585=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782585 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 18/Jun/22 05:16
Start Date: 18/Jun/22 05:16
Worklog Time Spent: 10m

Work Description: virajjasani commented on code in PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#discussion_r900699519

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowPeerTracker.java ##

```diff
@@ -80,7 +80,7 @@ public class SlowPeerTracker {
    * Number of nodes to include in JSON report. We will return nodes with
    * the highest number of votes from peers.
    */
-  private final int maxNodesToReport;
+  private int maxNodesToReport;
```

Review Comment: Yeah, I think it's fine; it's not a big concern either way, so let me make this change.

Issue Time Tracking
-------------------

Worklog Id: (was: 782585)
Time Spent: 1h (was: 50m)

> Dynamically adjust slow peer report size on JMX metrics
> -------------------------------------------------------
>
>                 Key: HDFS-16634
>                 URL: https://issues.apache.org/jira/browse/HDFS-16634
>             Project: Hadoop HDFS
>          Issue Type: Task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> On a busy cluster, it can take a while for a deleted node's "slow node report"
> to be removed from the slow peer JSON report in Namenode JMX metrics. In the
> meantime, the user should be able to browse more entries in the report by
> reconfiguring "dfs.datanode.max.nodes.to.report", so that the list size can be
> adjusted without having to bounce the active Namenode just for this purpose.

--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782584=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782584 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 18/Jun/22 05:05
Start Date: 18/Jun/22 05:05
Worklog Time Spent: 10m

Work Description: tomscut commented on code in PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#discussion_r900698693

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowPeerTracker.java ##

```diff
@@ -80,7 +80,7 @@ public class SlowPeerTracker {
    * Number of nodes to include in JSON report. We will return nodes with
    * the highest number of votes from peers.
    */
-  private final int maxNodesToReport;
+  private int maxNodesToReport;
```

Review Comment: This field is almost always read-only and rarely changed, and for read-only operations it doesn't have much impact. WDYT?

Issue Time Tracking
-------------------

Worklog Id: (was: 782584)
Time Spent: 50m (was: 40m)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782561=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782561 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 18/Jun/22 01:47
Start Date: 18/Jun/22 01:47
Worklog Time Spent: 10m

Work Description: virajjasani commented on code in PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#discussion_r900656012

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowPeerTracker.java ##

```diff
@@ -80,7 +80,7 @@ public class SlowPeerTracker {
    * Number of nodes to include in JSON report. We will return nodes with
    * the highest number of votes from peers.
    */
-  private final int maxNodesToReport;
+  private int maxNodesToReport;
```

Review Comment: Yes, you make a good point @tomscut: this could be done to stay in line with the other reconfig changes. However, it might cause a bit of a performance issue for the JMX metrics API overall, hence I was a bit reluctant to make the change. But if you have a strong preference, I can make it. WDYT?

Issue Time Tracking
-------------------

Worklog Id: (was: 782561)
Time Spent: 40m (was: 0.5h)
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782508=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782508 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 17/Jun/22 18:42
Start Date: 17/Jun/22 18:42
Worklog Time Spent: 10m

Work Description: tomscut commented on code in PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#discussion_r900424477

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowPeerTracker.java ##

```diff
@@ -80,7 +80,7 @@ public class SlowPeerTracker {
    * Number of nodes to include in JSON report. We will return nodes with
    * the highest number of votes from peers.
    */
-  private final int maxNodesToReport;
+  private int maxNodesToReport;
```

Review Comment: Please set this to `volatile`. Although it doesn't make a big difference here, I think it's better to be consistent with the other reconfig changes. What do you think?

Issue Time Tracking
-------------------

Worklog Id: (was: 782508)
Time Spent: 0.5h (was: 20m)
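The review thread above debates dropping `final` on `maxNodesToReport` and marking it `volatile` so a reconfiguration thread can update it while JMX report threads read it. The following is a minimal, hypothetical sketch of that pattern (a standalone class, not Hadoop's actual `SlowPeerTracker`), illustrating why `volatile` is sufficient here:

```java
// Hypothetical sketch of the read-mostly reconfigurable field discussed in
// the review: many threads read the limit on every JMX report, while an
// admin-triggered reconfiguration occasionally writes a new value.
public class SlowPeerReportLimit {
  // volatile guarantees that a write by the reconfiguration thread is
  // immediately visible to reader threads; a single int write is atomic,
  // so no lock is needed.
  private volatile int maxNodesToReport;

  public SlowPeerReportLimit(int initial) {
    this.maxNodesToReport = initial;
  }

  // Hot path: called whenever the slow peer JSON report is generated.
  public int getMaxNodesToReport() {
    return maxNodesToReport;
  }

  // Cold path: called by the (hypothetical) reconfiguration handler.
  public void setMaxNodesToReport(int max) {
    this.maxNodesToReport = max;
  }
}
```

Because a `volatile int` read is cheap, this addresses tomscut's consistency point without the JMX-path performance cost virajjasani was concerned about; full synchronization would only be required if several related fields had to change together.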
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782345=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782345 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 17/Jun/22 09:31
Start Date: 17/Jun/22 09:31
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#issuecomment-1158690095

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 9s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |

_ trunk Compile Tests _

| +1 :green_heart: | mvninstall | 42m 6s | | trunk passed |
| +1 :green_heart: | compile | 1m 50s | | trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | compile | 1m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 21s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 40s | | trunk passed |
| -1 :x: | javadoc | 1m 21s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in trunk failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 51s | | branch has no errors when building and testing our client artifacts. |

_ Patch Compile Tests _

| +1 :green_heart: | mvninstall | 1m 25s | | the patch passed |
| +1 :green_heart: | compile | 1m 27s | | the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 |
| +1 :green_heart: | javac | 1m 27s | | the patch passed |
| +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 18s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 1s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 27s | | the patch passed |
| -1 :x: | javadoc | 0m 58s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) | hadoop-hdfs in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1. |
| +1 :green_heart: | javadoc | 1m 31s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 34s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 51s | | patch has no errors when building and testing our client artifacts. |

_ Other Tests _

| -1 :x: | unit | 377m 10s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. |
| | | 496m 47s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNode |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4448 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux b57abcc008d2
[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics
[ https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782203=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782203 ]

ASF GitHub Bot logged work on HDFS-16634:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 17/Jun/22 01:13
Start Date: 17/Jun/22 01:13
Worklog Time Spent: 10m

Work Description: virajjasani opened a new pull request, #4448:
URL: https://github.com/apache/hadoop/pull/4448

### Description of PR

On a busy cluster, it can take a while for a deleted node's "slow node report" to be removed from the slow peer JSON report in Namenode JMX metrics. In the meantime, the user should be able to browse more entries in the report by reconfiguring "dfs.datanode.max.nodes.to.report", so that the list size can be adjusted without having to bounce the active Namenode just for this purpose.

### How was this patch tested?

On a dev cluster and with a unit test.

### For code changes:

- [X] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?

Issue Time Tracking
-------------------

Worklog Id: (was: 782203)
Remaining Estimate: 0h
Time Spent: 10m
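The PR description above hinges on runtime reconfiguration: an admin pushes a new value for a key such as "dfs.datanode.max.nodes.to.report" and the daemon applies it without a restart. A rough, hypothetical sketch of that flow (class and method names are illustrative; this is not Hadoop's actual `ReconfigurableBase`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of runtime reconfiguration: validate the new value
// first, then apply it to the live configuration in place, so an invalid
// value never clobbers the running setting and no process restart is needed.
public class ReconfigSketch {
  private static final String KEY = "dfs.datanode.max.nodes.to.report";

  // Thread-safe map standing in for the daemon's live configuration.
  private final Map<String, String> liveConf = new ConcurrentHashMap<>();

  public ReconfigSketch(int initialMaxNodes) {
    liveConf.put(KEY, Integer.toString(initialMaxNodes));
  }

  // Called by an admin-triggered reconfiguration request.
  public void reconfigureProperty(String key, String newVal) {
    if (!KEY.equals(key)) {
      throw new IllegalArgumentException("not reconfigurable: " + key);
    }
    int parsed = Integer.parseInt(newVal); // rejects non-numeric input
    if (parsed < 0) {
      throw new IllegalArgumentException("must be non-negative: " + newVal);
    }
    liveConf.put(key, newVal); // applied immediately, no restart
  }

  // Read by the report-generating path to cap the slow peer list size.
  public int getMaxNodesToReport() {
    return Integer.parseInt(liveConf.get(KEY));
  }
}
```

The validate-then-apply ordering is the important design choice: a bad value raises an exception back to the caller while the daemon keeps serving reports with the old limit.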