[
https://issues.apache.org/jira/browse/HDFS-14318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921156#comment-16921156
]
Hadoop QA commented on HDFS-14318:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m
0s{color} | {color:red} The patch doesn't appear to include any new or modified
tests. Please justify why no new tests are needed for this patch. Also please
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
14m 10s{color} | {color:green} branch has no errors when building and testing
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m
54s{color} | {color:blue} Used deprecated FindBugs config; considering
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch
generated 1 new + 155 unchanged - 0 fixed = 156 total (was 155) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m
0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
14m 0s{color} | {color:green} patch has no errors when building and testing
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0
unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 31s{color}
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m
35s{color} | {color:green} The patch does not generate ASF License warnings.
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 53s{color} |
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Possible doublecheck on
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskThread in
org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread() At
DataNode.java:org.apache.hadoop.hdfs.server.datanode.DataNode.startCheckDiskThread()
At DataNode.java:[lines 2211-2213] |
| | Null pointer dereference of DataNode.errorDisk in
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError() Dereferenced
at DataNode.java:in
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError() Dereferenced
at DataNode.java:[line 3486] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
| | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.tools.TestJMXGet |
| | hadoop.hdfs.TestDFSOutputStream |
| | hadoop.hdfs.TestFileChecksum |
| | hadoop.hdfs.TestFileChecksumCompositeCrc |
| | hadoop.hdfs.TestDatanodeReport |
| | hadoop.hdfs.TestBlockStoragePolicy |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
| | hadoop.hdfs.TestAppendSnapshotTruncate |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
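The first FindBugs warning above ("Possible doublecheck on DataNode.checkDiskThread") flags the classic double-checked locking hazard: without a memory barrier, a second thread can see a non-null but not-yet-fully-constructed reference. A minimal sketch of the standard fix, assuming only the field and method names quoted in the report; the stand-in class, the counter, and the no-op runnable are invented for illustration and are not the actual DataNode code:

```java
import java.util.concurrent.atomic.AtomicInteger;

class DiskCheckerSketch {
    // 'volatile' is what makes double-checked locking safe on the JVM;
    // without it FindBugs reports the "Possible doublecheck" pattern.
    private volatile Thread checkDiskThread;
    // Illustration-only counter so the single-creation property is observable.
    final AtomicInteger created = new AtomicInteger();

    void startCheckDiskThread() {
        if (checkDiskThread == null) {            // fast path, no lock taken
            synchronized (this) {
                if (checkDiskThread == null) {    // re-check under the lock
                    checkDiskThread = new Thread(() -> { /* disk checks */ });
                    created.incrementAndGet();
                    // a real implementation would call checkDiskThread.start()
                }
            }
        }
    }
}
```

The second warning (null pointer dereference of DataNode.errorDisk) would typically be addressed the same way: copy the field to a local variable, or re-check it for null under the lock, before dereferencing.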
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base:
https://builds.apache.org/job/hadoop-multibranch/job/PR-1104/10/artifact/out/Dockerfile
|
| GITHUB PR | https://github.com/apache/hadoop/pull/1104 |
| JIRA Issue | HDFS-14318 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite
unit shadedclient findbugs checkstyle |
| uname | Linux b71802a82e3e 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 915cbc9 |
| Default Java | 1.8.0_222 |
| checkstyle |
https://builds.apache.org/job/hadoop-multibranch/job/PR-1104/10/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
|
| findbugs |
https://builds.apache.org/job/hadoop-multibranch/job/PR-1104/10/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
|
| unit |
https://builds.apache.org/job/hadoop-multibranch/job/PR-1104/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
|
| Test Results |
https://builds.apache.org/job/hadoop-multibranch/job/PR-1104/10/testReport/ |
| Max. process+thread count | 2717 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U:
hadoop-hdfs-project/hadoop-hdfs |
| Console output |
https://builds.apache.org/job/hadoop-multibranch/job/PR-1104/10/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
This message was automatically generated.
> dn cannot be recognized and must be restarted to recognize the Repaired disk
> ----------------------------------------------------------------------------
>
> Key: HDFS-14318
> URL: https://issues.apache.org/jira/browse/HDFS-14318
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: hunshenshi
> Assignee: hunshenshi
> Priority: Major
> Attachments: HDFS-14318.patch
>
>
> The DataNode detected that disk a had failed. After disk a is repaired, the
> DataNode cannot recognize it and must be restarted before it picks up the
> repaired disk.
>
> I made a patch so that the DataNode recognizes the repaired disk without a
> restart.
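The idea described above, re-probing a previously failed disk and restoring it without restarting the DataNode, can be sketched as a periodic re-check pass. Everything here (class and method names, the probe predicate) is invented for illustration; it is not the attached HDFS-14318.patch:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class RepairedDiskWatcherSketch {
    // Disks that previously failed a health check, and disks currently in use.
    final Set<String> failedDisks = ConcurrentHashMap.newKeySet();
    final Set<String> activeDisks = ConcurrentHashMap.newKeySet();

    // Stand-in for a real health probe (e.g. a test read/write on the volume).
    boolean probe(String disk) {
        return !disk.endsWith(".bad");
    }

    // One pass of the re-check loop; a background thread would run this
    // periodically, sleeping between passes.
    void recheckOnce() {
        for (String disk : failedDisks) {
            if (probe(disk)) {          // disk passes again: bring it back
                failedDisks.remove(disk);
                activeDisks.add(disk);
            }
        }
    }
}
```

With this shape, a repaired disk moves from the failed set back to the active set on the next pass, with no process restart required.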
--
This message was sent by Atlassian Jira
(v8.3.2#803003)