[ https://issues.apache.org/jira/browse/HDFS-10309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15250176#comment-15250176 ]
Hadoop QA commented on HDFS-10309:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 22s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 23s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 26s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 229m 10s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.hdfs.TestHFlush |
| | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
| | hadoop.hdfs.server.blockmanagement.TestBlockManager |
| | hadoop.hdfs.TestDataTransferKeepalive |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
| | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.web.TestWebHdfsTokens |
| | hadoop.hdfs.TestFileCreationDelete |
| | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| | hadoop.fs.TestSymlinkHdfsFileContext |
| | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
| | hadoop.hdfs.TestEncryptionZonesWithKMS |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| | hadoop.hdfs.TestMissingBlocksAlert |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12799731/HDFS-10309.01.patch |
| JIRA Issue | HDFS-10309 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux c6e83e3b9d81 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / af9bdbe |
| Default Java | 1.7.0_95 |
| Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15216/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15216/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/15216/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15216/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt |
| JDK v1.7.0_95 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15216/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15216/console |
| Powered by | Apache Yetus 0.2.0 http://yetus.apache.org |
This message was automatically generated.
> HDFS Balancer doesn't honor dfs.blocksize value defined with suffix k(kilo), m(mega), g(giga)
> ---------------------------------------------------------------------------------------------
>
> Key: HDFS-10309
> URL: https://issues.apache.org/jira/browse/HDFS-10309
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: balancer & mover
> Affects Versions: 2.8.0
> Reporter: Amit Anand
> Assignee: Amit Anand
> Fix For: 2.8.0
>
> Attachments: HDFS-10309.01.patch
>
>
> While running the HDFS Balancer I get the error shown below when {{dfs.blocksize}} is defined with a suffix ({{k}} for kilo, {{m}} for mega, {{g}} for giga) in {{hdfs-site.xml}}. In my deployment {{dfs.blocksize}} is set to {{128m}}.
> {code}
> hdfs@bcpc-vm1:/home/ubuntu$ hdfs balancer
> 16/04/19 08:49:51 INFO balancer.Balancer: namenodes = [hdfs://Test-Laptop]
> 16/04/19 08:49:51 INFO balancer.Balancer: parameters = Balancer.BalancerParameters [BalancingPolicy.Node, threshold = 10.0, max idle iteration = 5, #excluded nodes = 0, #included nodes = 0, #source nodes = 0, #blockpools = 0, run during upgrade = false]
> 16/04/19 08:49:51 INFO balancer.Balancer: included nodes = []
> 16/04/19 08:49:51 INFO balancer.Balancer: excluded nodes = []
> 16/04/19 08:49:51 INFO balancer.Balancer: source nodes = []
> Time Stamp  Iteration#  Bytes Already Moved  Bytes Left To Move  Bytes Being Moved
> 16/04/19 08:49:52 INFO balancer.KeyManager: Block token params received from NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 16/04/19 08:49:52 INFO block.BlockTokenSecretManager: Setting block keys
> 16/04/19 08:49:52 INFO balancer.KeyManager: Update block keys every 2hrs, 30mins, 0sec
> 16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.movedWinWidth = 5400000 (default=5400000)
> 16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.moverThreads = 1000 (default=1000)
> 16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.dispatcherThreads = 200 (default=200)
> 16/04/19 08:49:52 INFO balancer.Balancer: dfs.datanode.balance.max.concurrent.moves = 5 (default=5)
> 16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.getBlocks.size = 2147483648 (default=2147483648)
> 16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.getBlocks.min-block-size = 10485760 (default=10485760)
> 16/04/19 08:49:52 INFO block.BlockTokenSecretManager: Setting block keys
> 16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.max-size-to-move = 10737418240 (default=10737418240)
> Apr 19, 2016 8:49:52 AM  Balancing took 1.408 seconds
> 16/04/19 08:49:52 ERROR balancer.Balancer: Exiting balancer due an exception
> java.lang.NumberFormatException: For input string: "128m"
>     at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Long.parseLong(Long.java:589)
>     at java.lang.Long.parseLong(Long.java:631)
>     at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1311)
>     at org.apache.hadoop.hdfs.server.balancer.Balancer.getLong(Balancer.java:221)
>     at org.apache.hadoop.hdfs.server.balancer.Balancer.<init>(Balancer.java:281)
>     at org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:660)
>     at org.apache.hadoop.hdfs.server.balancer.Balancer$Cli.run(Balancer.java:774)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.hdfs.server.balancer.Balancer.main(Balancer.java:903)
> {code}
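> The stack trace shows the value reaching {{Configuration.getLong}}, which hands the raw string to {{Long.parseLong}} and therefore rejects {{128m}}. Hadoop's {{Configuration.getLongBytes}} does understand the {{k}}/{{m}}/{{g}} binary suffixes; the sketch below only illustrates that difference and is not necessarily the change made in HDFS-10309.01.patch.
> {code}
> import org.apache.hadoop.conf.Configuration;
>
> // Minimal, self-contained illustration; the property name "dfs.blocksize"
> // and the 128 MB value mirror the deployment described above.
> public class BlockSizeSuffixDemo {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration(false);
>     conf.set("dfs.blocksize", "128m");
>
>     // getLongBytes() parses binary suffixes and prints 134217728 for "128m".
>     System.out.println(conf.getLongBytes("dfs.blocksize", 134217728L));
>
>     // getLong() passes the raw string to Long.parseLong(), which is the
>     // NumberFormatException path shown in the balancer log.
>     try {
>       conf.getLong("dfs.blocksize", 134217728L);
>     } catch (NumberFormatException e) {
>       System.out.println("getLong failed: " + e.getMessage());
>     }
>   }
> }
> {code}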
> However, a workaround is to run {{hdfs balancer}} and pass a plain numeric value for {{dfs.blocksize}} on the command line, or to change the value in {{hdfs-site.xml}}:
> {code}
> hdfs balancer -Ddfs.blocksize=134217728
> {code}
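> (For reference, {{134217728}} is {{128 * 1024 * 1024}}, i.e. the plain-byte equivalent of {{128m}}.)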
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)