[
https://issues.apache.org/jira/browse/HDFS-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937194#comment-16937194
]
Hadoop QA commented on HDFS-10348:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m
0s{color} | {color:green} The patch appears to include 1 new or modified test
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
14m 5s{color} | {color:green} branch has no errors when building and testing
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch
generated 2 new + 112 unchanged - 0 fixed = 114 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
13m 3s{color} | {color:green} patch has no errors when building and testing
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m
26s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 45s{color}
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m
33s{color} | {color:green} The patch does not generate ASF License warnings.
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 13s{color} |
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Redundant nullcheck of storageInfo, which is known to be non-null in
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.markBlockAsCorrupt(BlockToMarkCorrupt,
DatanodeStorageInfo, DatanodeDescriptor) Redundant null check at
BlockManager.java:is known to be non-null in
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.markBlockAsCorrupt(BlockToMarkCorrupt,
DatanodeStorageInfo, DatanodeDescriptor) Redundant null check at
BlockManager.java:[line 1777] |
| Failed junit tests |
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
| | hadoop.hdfs.server.namenode.TestRedudantBlocks |
| | hadoop.cli.TestHDFSCLI |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-10348 |
| JIRA Patch URL |
https://issues.apache.org/jira/secure/attachment/12981226/HDFS-10348.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall
mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c3d5a44e89a7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f16cf87 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle |
https://builds.apache.org/job/PreCommit-HDFS-Build/27955/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
|
| whitespace |
https://builds.apache.org/job/PreCommit-HDFS-Build/27955/artifact/out/whitespace-eol.txt
|
| findbugs |
https://builds.apache.org/job/PreCommit-HDFS-Build/27955/artifact/out/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
|
| unit |
https://builds.apache.org/job/PreCommit-HDFS-Build/27955/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
|
| Test Results |
https://builds.apache.org/job/PreCommit-HDFS-Build/27955/testReport/ |
| Max. process+thread count | 3471 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U:
hadoop-hdfs-project/hadoop-hdfs |
| Console output |
https://builds.apache.org/job/PreCommit-HDFS-Build/27955/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated.
> Namenode report bad block method doesn't check whether the block belongs to
> datanode before adding it to corrupt replicas map.
> ------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-10348
> URL: https://issues.apache.org/jira/browse/HDFS-10348
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.1.2
> Reporter: Rushabh S Shah
> Assignee: Rushabh S Shah
> Priority: Major
> Attachments: HDFS-10348-1.patch, HDFS-10348.003.patch,
> HDFS-10348.004.patch, HDFS-10348.patch
>
>
> Namenode (via the report bad block method) doesn't check whether the block
> belongs to the datanode before adding it to the corrupt replicas map.
> In one of our clusters we found 3 lingering corrupt blocks.
> It happened in the following order.
> 1. Two clients called getBlockLocations for a particular file.
> 2. Client C1 tried to open the file, encountered a checksum error from
> node N3, and reported the bad block (blk1) to the namenode.
> 3. Namenode added node N3 and block blk1 to the corrupt replicas map and
> asked one of the good nodes (one of the other 2 nodes) to replicate the block
> to another node N4.
> 4. After receiving the block, N4 sent an IBR (with RECEIVED_BLOCK) to the
> namenode.
> 5. Namenode removed the block and node N3 from the corrupt replicas map.
> It also removed N3's storage from the triplets and queued an invalidate
> request for N3.
> 6. In the meantime, client C2 tried to open the file and the request went to
> node N3.
> C2 also encountered the checksum exception and reported the bad block to the
> namenode.
> 7. Namenode added the corrupt block blk1 and node N3 to the corrupt replicas
> map without confirming whether node N3 had the block or not.
> After deleting the block, N3 sent an IBR (with DELETED) and the namenode
> simply ignored the report, since N3's storage was no longer in the
> triplets (from step 5).
> We took the node out of rotation, but the block was still present in the
> corruptReplicasMap.
> This is because, on removing a node, we only go through the blocks that are
> present in the triplets for that datanode.
> [~kshukla]'s patch fixed this bug via
> https://issues.apache.org/jira/browse/HDFS-9958.
> But I think the following check should be made in the
> BlockManager#markBlockAsCorrupt instead of
> BlockManager#findAndMarkBlockAsCorrupt.
> {noformat}
> if (storage == null) {
>   storage = storedBlock.findStorageInfo(node);
> }
> if (storage == null) {
>   blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
>       blk, dn);
>   return;
> }
> {noformat}
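The race in steps 1-7 and the effect of the proposed check can be sketched as a tiny, self-contained model. This is not Hadoop's actual BlockManager: the class and method names (CorruptReplicaSketch, addReplica, removeReplica) and the use of plain string ids for blocks and nodes are illustrative assumptions; the point is only that a corruption report is ignored when the reporting node no longer holds a replica, mirroring the null-storage check above.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical, simplified model of the proposed fix: before a
 * (block, node) pair is added to the corrupt-replicas map, verify
 * that the reporting datanode actually stores a replica of the block.
 */
public class CorruptReplicaSketch {
  // block id -> nodes known (via the triplets) to store a replica
  private final Map<String, Set<String>> replicas = new HashMap<>();
  // block id -> nodes whose replica is marked corrupt
  private final Map<String, Set<String>> corruptReplicasMap = new HashMap<>();

  /** Register a replica of a block on a node (e.g. after RECEIVED_BLOCK). */
  void addReplica(String block, String node) {
    replicas.computeIfAbsent(block, b -> new HashSet<>()).add(node);
  }

  /** Drop a node's replica (invalidation), including any corrupt marking. */
  void removeReplica(String block, String node) {
    Set<String> stored = replicas.get(block);
    if (stored != null) {
      stored.remove(node);
    }
    Set<String> corrupt = corruptReplicasMap.get(block);
    if (corrupt != null) {
      corrupt.remove(node);
    }
  }

  /**
   * Handle a client's bad-block report. Returns true if accepted.
   * The guard here plays the role of the "storage == null" check:
   * a report from a node that no longer holds the block is ignored,
   * so a stale report (step 6 above) cannot re-pollute the map.
   */
  boolean markBlockAsCorrupt(String block, String node) {
    Set<String> stored = replicas.get(block);
    if (stored == null || !stored.contains(node)) {
      return false; // node does not hold the block: ignore the report
    }
    corruptReplicasMap.computeIfAbsent(block, b -> new HashSet<>()).add(node);
    return true;
  }

  /** Query whether a (block, node) pair is currently marked corrupt. */
  boolean isCorrupt(String block, String node) {
    Set<String> corrupt = corruptReplicasMap.get(block);
    return corrupt != null && corrupt.contains(node);
  }
}
```

In this model, replaying the scenario shows the difference: the first report from N3 is accepted, but after N3's replica is invalidated, a second report for the same block is rejected and nothing lingers in the map.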
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]