[ https://issues.apache.org/jira/browse/HDFS-11821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16009865#comment-16009865 ]
Hadoop QA commented on HDFS-11821:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 39s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestMaintenanceState |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11821 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867997/HDFS-11821-2.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 9009754900a3 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6600abb |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/19430/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19430/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19430/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19430/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> BlockManager.getMissingReplOneBlocksCount() does not report correct value if
> corrupt file with replication factor of 1 gets deleted
> -----------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-11821
> URL: https://issues.apache.org/jira/browse/HDFS-11821
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 2.6.0, 3.0.0-alpha2
> Reporter: Wellington Chevreuil
> Assignee: Wellington Chevreuil
> Priority: Minor
> Attachments: HDFS-11821-1.patch, HDFS-11821-2.patch
>
>
> *BlockManager* keeps a separate metric for the number of missing blocks with
> a replication factor of 1. It is currently returned by the
> *BlockManager.getMissingReplOneBlocksCount()* method, and it is what is shown
> in the attribute below in the *dfsadmin -report* output (in the example
> below, there is one corrupt block that belongs to a file with a replication
> factor of 1):
> {noformat}
> ...
> Missing blocks (with replication factor 1): 1
> ...
> {noformat}
> However, if the related file gets deleted (for instance, using the *hdfs
> fsck -delete* option), this metric is never updated, and *dfsadmin -report*
> keeps reporting a missing block even though the file no longer exists. The
> only available workaround is to restart the NN, which clears the metric.
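> For illustration, the same counter can also be read programmatically from a
> client. Below is a minimal sketch, assuming the
> *DistributedFileSystem#getMissingReplOneBlocksCount()* client call (the
> counterpart of the *dfsadmin -report* line above); the NameNode URI is a
> placeholder:
> {code:java}
> import java.net.URI;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
>
> public class MissingReplOneCheck {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Placeholder URI; replace with the cluster's fs.defaultFS.
>     try (FileSystem fs =
>         FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
>       DistributedFileSystem dfs = (DistributedFileSystem) fs;
>       // The counter behind "Missing blocks (with replication factor 1)".
>       System.out.println("Missing repl-1 blocks: "
>           + dfs.getMissingReplOneBlocksCount());
>     }
>   }
> }
> {code}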
> This can be easily reproduced by forcing the corruption of a file with
> replication factor 1, as follows:
> 1) Put a file into HDFS with replication factor 1:
> {noformat}
> $ hdfs dfs -Ddfs.replication=1 -put test_corrupt /
> $ hdfs dfs -ls /
> -rw-r--r-- 1 hdfs supergroup 19 2017-05-10 09:21 /test_corrupt
> {noformat}
> 2) Find the block that backs the file and delete it from the DN:
> {noformat}
> $ hdfs fsck /test_corrupt -files -blocks -locations
> ...
> /test_corrupt 19 bytes, 1 block(s): OK
> 0. BP-782213640-172.31.113.82-1494420317936:blk_1073742742_1918 len=19
> Live_repl=1
> [DatanodeInfoWithStorage[172.31.112.178:20002,DS-a0dc0b30-a323-4087-8c36-26ffdfe44f46,DISK]]
> Status: HEALTHY
> ...
> $ find /dfs/dn/ -name blk_1073742742*
> /dfs/dn/current/BP-782213640-172.31.113.82-1494420317936/current/finalized/subdir0/subdir3/blk_1073742742
> /dfs/dn/current/BP-782213640-172.31.113.82-1494420317936/current/finalized/subdir0/subdir3/blk_1073742742_1918.meta
> $ rm -rf
> /dfs/dn/current/BP-782213640-172.31.113.82-1494420317936/current/finalized/subdir0/subdir3/blk_1073742742
> $ rm -rf
> /dfs/dn/current/BP-782213640-172.31.113.82-1494420317936/current/finalized/subdir0/subdir3/blk_1073742742_1918.meta
> {noformat}
> 3) Running fsck will report the corruption as expected:
> {noformat}
> $ hdfs fsck /test_corrupt -files -blocks -locations
> ...
> /test_corrupt 19 bytes, 1 block(s):
> /test_corrupt: CORRUPT blockpool BP-782213640-172.31.113.82-1494420317936
> block blk_1073742742
> MISSING 1 blocks of total size 19 B
> ...
> Total blocks (validated): 1 (avg. block size 19 B)
> ********************************
> UNDER MIN REPL'D BLOCKS: 1 (100.0 %)
> dfs.namenode.replication.min: 1
> CORRUPT FILES: 1
> MISSING BLOCKS: 1
> MISSING SIZE: 19 B
> CORRUPT BLOCKS: 1
> ...
> {noformat}
> 4) The same goes for *dfsadmin -report*:
> {noformat}
> $ hdfs dfsadmin -report
> ...
> Under replicated blocks: 1
> Blocks with corrupt replicas: 0
> Missing blocks: 1
> Missing blocks (with replication factor 1): 1
> ...
> {noformat}
> 5) Running fsck with the *-delete* option does make fsck report the correct
> information about the corrupt block, but *dfsadmin -report* still shows it:
> {noformat}
> $ hdfs fsck /test_corrupt -delete
> ...
> $ hdfs fsck /
> ...
> The filesystem under path '/' is HEALTHY
> ...
> $ hdfs dfsadmin -report
> ...
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 1
> ...
> {noformat}
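> The sequence above also translates naturally into a regression test. Below
> is a rough sketch using the standard HDFS test utilities (*MiniDFSCluster*,
> *DFSTestUtil*); it is illustrative only and not necessarily the test code
> included in the attached patch:
> {code:java}
> import static org.junit.Assert.assertEquals;
>
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataInputStream;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.DFSTestUtil;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
> import org.apache.hadoop.hdfs.HdfsConfiguration;
> import org.apache.hadoop.hdfs.MiniDFSCluster;
> import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
> import org.junit.Test;
>
> public class TestMissingReplOneCountOnDelete {
>   @Test
>   public void testCounterClearedWhenFileDeleted() throws Exception {
>     Configuration conf = new HdfsConfiguration();
>     MiniDFSCluster cluster =
>         new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
>     try {
>       cluster.waitActive();
>       DistributedFileSystem fs = cluster.getFileSystem();
>       Path file = new Path("/test_corrupt");
>       DFSTestUtil.createFile(fs, file, 19L, (short) 1, 0L);
>
>       // Delete the only replica's block file on the DN (mirrors step 2).
>       ExtendedBlock blk = DFSTestUtil.getFirstBlock(fs, file);
>       cluster.corruptBlockOnDataNodesByDeletingBlockFile(blk);
>
>       // Read the file so the missing replica is reported to the NN (step 3).
>       try (FSDataInputStream in = fs.open(file)) {
>         in.readFully(new byte[19]);
>       } catch (IOException expected) {
>         // Read failure is expected: the only replica is gone.
>       }
>
>       // Wait until the NN accounts for the missing repl-1 block (step 4).
>       while (fs.getMissingReplOneBlocksCount() != 1) {
>         Thread.sleep(100);
>       }
>
>       // Deleting the file should clear the counter (step 5); before the
>       // fix it stays stuck at 1.
>       fs.delete(file, false);
>       assertEquals(0, fs.getMissingReplOneBlocksCount());
>     } finally {
>       cluster.shutdown();
>     }
>   }
> }
> {code}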
> The problem seems to be in the *BlockManager.removeBlock()* method, which in
> turn uses the util class *LowRedundancyBlocks*, which classifies blocks
> according to their current replication level, including blocks currently
> marked as corrupt. The metric shown by *dfsadmin -report* for corrupt blocks
> with replication factor 1 is tracked inside this *LowRedundancyBlocks*.
> Whenever a block is marked as corrupt and has a replication factor of 1, the
> related metric is updated. When removing the block, though,
> *BlockManager.removeBlock()* calls *LowRedundancyBlocks.remove(BlockInfo
> block, int priLevel)*, which does not check whether the given block was
> previously marked as corrupt with a replication factor of 1, the case that
> requires updating the metric. A simplified sketch of the two paths follows.
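> To make the distinction concrete, here is a compilable paraphrase of the two
> *remove()* paths as described above. It is not the actual Hadoop source:
> *BlockInfo* is a stand-in and *getPriority()* is reduced to the one case
> that matters here:
> {code:java}
> import java.util.ArrayList;
> import java.util.HashSet;
> import java.util.List;
> import java.util.Set;
>
> // Simplified paraphrase of LowRedundancyBlocks, for illustration only.
> class LowRedundancyBlocksSketch {
>   static class BlockInfo { } // stand-in for the real HDFS class
>
>   static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;
>
>   private final List<Set<BlockInfo>> priorityQueues = new ArrayList<>();
>   // Backs "Missing blocks (with replication factor 1)".
>   private int corruptReplOneBlocks;
>
>   LowRedundancyBlocksSketch() {
>     for (int i = 0; i <= QUEUE_WITH_CORRUPT_BLOCKS; i++) {
>       priorityQueues.add(new HashSet<BlockInfo>());
>     }
>   }
>
>   // Overload used by BlockManager.removeBlock() before the fix: it removes
>   // the block from the given priority queue but never touches
>   // corruptReplOneBlocks, so a corrupt repl-1 block leaves a stale count.
>   boolean remove(BlockInfo block, int priLevel) {
>     return priorityQueues.get(priLevel).remove(block);
>   }
>
>   // Counting overload: recomputes the priority from the replica counts and
>   // decrements the repl-1 counter when a corrupt repl-1 block is removed.
>   boolean remove(BlockInfo block, int oldReplicas, int oldReadOnlyReplicas,
>       int outOfServiceReplicas, int oldExpectedReplicas) {
>     int priLevel = getPriority(oldReplicas);
>     if (priLevel == QUEUE_WITH_CORRUPT_BLOCKS && oldExpectedReplicas == 1) {
>       corruptReplOneBlocks--; // the decrement the priLevel overload skips
>     }
>     return remove(block, priLevel);
>   }
>
>   // Reduced stand-in for the real priority computation.
>   private int getPriority(int oldReplicas) {
>     return oldReplicas == 0 ? QUEUE_WITH_CORRUPT_BLOCKS : 0;
>   }
> }
> {code}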
> I am shortly proposing a patch that seems to fix this by making
> *BlockManager.removeBlock()* call *LowRedundancyBlocks.remove(BlockInfo
> block, int oldReplicas, int oldReadOnlyReplicas, int outOfServiceReplicas,
> int oldExpectedReplicas)* instead, which does update the metric properly;
> a rough call-site sketch follows.
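> At the call site, the change would look roughly like the sketch below. The
> replica-count helpers shown (*countNodes()* and the *NumberReplicas*
> accessors) are illustrative; see HDFS-11821-2.patch for the actual diff:
> {code:java}
> // Hypothetical call-site sketch inside BlockManager.removeBlock();
> // names are illustrative, see HDFS-11821-2.patch for the real change.
>
> // Before the patch: priority-level overload, repl-1 counter can go stale.
> //   neededReconstruction.remove(block, priLevel);
>
> // With the patch: counting overload, driven by the block's replica counts,
> // so the corrupt repl-1 counter is decremented when appropriate.
> //   NumberReplicas replicas = countNodes(block);
> //   neededReconstruction.remove(block, replicas.liveReplicas(),
> //       replicas.readOnlyReplicas(), replicas.outOfServiceReplicas(),
> //       getExpectedRedundancyNum(block));
> {code}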
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]