[
https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101242#comment-15101242
]
Hadoop QA commented on HDFS-9624:
---------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m
0s {color} | {color:green} The patch appears to include 1 new or modified test
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s
{color} | {color:red} Patch generated 1 new checkstyle issues in
hadoop-hdfs-project/hadoop-hdfs (total was 543, now 543). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 49s {color}
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 55s {color}
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m
22s {color} | {color:green} Patch does not generate ASF License warnings.
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 21s {color}
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests |
hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
| JDK v1.7.0_91 Failed junit tests |
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL |
https://issues.apache.org/jira/secure/attachment/12782424/HDFS-9624.007.patch |
| JIRA Issue | HDFS-9624 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite
unit findbugs checkstyle xml |
| uname | Linux 1f71e79b4350 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh
|
| git revision | trunk / 1da762c |
| Default Java | 1.7.0_91 |
| Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_66
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| findbugs | v3.0.0 |
| checkstyle |
https://builds.apache.org/job/PreCommit-HDFS-Build/14130/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
|
| unit |
https://builds.apache.org/job/PreCommit-HDFS-Build/14130/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
|
| unit |
https://builds.apache.org/job/PreCommit-HDFS-Build/14130/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
|
| unit test logs |
https://builds.apache.org/job/PreCommit-HDFS-Build/14130/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt
https://builds.apache.org/job/PreCommit-HDFS-Build/14130/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt
|
| JDK v1.7.0_91 Test Results |
https://builds.apache.org/job/PreCommit-HDFS-Build/14130/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U:
hadoop-hdfs-project/hadoop-hdfs |
| Max memory used | 76MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org |
| Console output |
https://builds.apache.org/job/PreCommit-HDFS-Build/14130/console |
This message was automatically generated.
> DataNode start slowly due to the initial DU command operations
> --------------------------------------------------------------
>
> Key: HDFS-9624
> URL: https://issues.apache.org/jira/browse/HDFS-9624
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.7.1
> Reporter: Lin Yiqun
> Assignee: Lin Yiqun
> Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch,
> HDFS-9624.003.patch, HDFS-9624.004.patch, HDFS-9624.005.patch,
> HDFS-9624.006.patch, HDFS-9624.007.patch
>
>
> The DataNode starts very slowly after I finish migrating the datanodes and
> restart them. Looking at the DN logs:
> {code}
> 2016-01-06 16:05:08,118 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added
> new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4
> 2016-01-06 16:05:08,118 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added
> volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK
> 2016-01-06 16:05:08,176 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
> Registered FSDatasetState MBean
> 2016-01-06 16:05:08,177 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544
> 2016-01-06 16:05:08,178 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data2/current...
> 2016-01-06 16:05:08,179 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data3/current...
> 2016-01-06 16:05:08,179 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data4/current...
> 2016-01-06 16:05:08,179 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data5/current...
> 2016-01-06 16:05:08,180 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data6/current...
> 2016-01-06 16:05:08,180 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data7/current...
> 2016-01-06 16:05:08,180 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data8/current...
> 2016-01-06 16:05:08,180 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data9/current...
> 2016-01-06 16:05:08,181 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data10/current...
> 2016-01-06 16:05:08,181 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data11/current...
> 2016-01-06 16:05:08,181 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume
> /home/data/data/hadoop/dfs/data/data12/current...
> 2016-01-06 16:09:49,646 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data7/current: 281466ms
> 2016-01-06 16:09:54,235 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data9/current: 286054ms
> 2016-01-06 16:09:57,859 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data2/current: 289680ms
> 2016-01-06 16:10:00,333 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data5/current: 292153ms
> 2016-01-06 16:10:05,696 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data8/current: 297516ms
> 2016-01-06 16:10:11,229 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data6/current: 303049ms
> 2016-01-06 16:10:28,075 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data12/current: 319894ms
> 2016-01-06 16:10:33,017 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data4/current: 324838ms
> 2016-01-06 16:10:40,177 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data10/current: 331996ms
> 2016-01-06 16:10:44,882 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data3/current: 336703ms
> 2016-01-06 16:11:14,241 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on
> /home/data/data/hadoop/dfs/data/data11/current: 366060ms
> 2016-01-06 16:11:14,242 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total
> time to scan all replicas for block pool
> BP-1942012336-xx.xx.xx.xx-1406726500544: 366065ms
> {code}
> Scanning the blocks on each volume and then calculating dfsUsed costs most of
> the time. Because my datanodes' migration took so long, the cached dfsUsed
> value had expired, so a full du operation had to run. But this is actually
> unnecessary, because no operations happened on these datanodes in the
> meantime. The relevant comment is:
> {code}
> /**
> * Read in the cached DU value and return it if it is less than 600 seconds
> * old (DU update interval). Slight imprecision of dfsUsed is not critical
> * and skipping DU can significantly shorten the startup time. If the cached
> * value is not available or too old, -1 is returned.
> */
> {code}
> The 600-second value is dead code, and it does not look suitable here.
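The cached-DU logic that the comment above describes can be sketched roughly as follows. This is a simplified illustration, not the actual FsDatasetImpl code: the class name, cache-file format, and the DEFAULT_CACHED_DU_INTERVAL_MS constant are hypothetical, with only the 600-second default taken from the javadoc quoted above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class CachedDfsUsed {
    // Hypothetical constant matching the "600 seconds" in the quoted javadoc.
    static final long DEFAULT_CACHED_DU_INTERVAL_MS = 600_000L;

    /**
     * Returns the cached dfsUsed value if the cache file exists, parses
     * cleanly, and is newer than maxAgeMs; otherwise returns -1, which
     * forces a fresh (and potentially very slow) du scan of the volume.
     */
    static long loadCachedDfsUsed(Path cacheFile, long maxAgeMs) {
        try {
            if (!Files.exists(cacheFile)) {
                return -1L;
            }
            long mtime = Files.getLastModifiedTime(cacheFile).toMillis();
            if (System.currentTimeMillis() - mtime > maxAgeMs) {
                return -1L; // cache too old: rescan with du
            }
            String content = new String(Files.readAllBytes(cacheFile)).trim();
            return Long.parseLong(content);
        } catch (IOException | NumberFormatException e) {
            return -1L; // unreadable or corrupt cache: rescan
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("dfsUsed", null);
        Files.write(tmp, "123456789".getBytes());
        // Freshly written cache file is well within the interval,
        // so the cached value is reused and no du scan is needed.
        System.out.println(loadCachedDfsUsed(tmp, DEFAULT_CACHED_DU_INTERVAL_MS));
        // An ancient modification time makes the cache stale: -1.
        Files.setLastModifiedTime(tmp, FileTime.fromMillis(0L));
        System.out.println(loadCachedDfsUsed(tmp, DEFAULT_CACHED_DU_INTERVAL_MS));
        Files.delete(tmp);
    }
}
```

The point of the patch discussion is that a hard-coded staleness window like this cannot account for a long migration during which no data changed; making the interval configurable (or trusting the cache when the DataNode was simply offline) avoids the expensive per-volume du on startup.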
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)