[ https://issues.apache.org/jira/browse/HDFS-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16407187#comment-16407187 ]
genericqa commented on HDFS-12487:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 53s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-12487 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888415/HDFS-12487.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 80435b134fab 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fe224ff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23573/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23573/testReport/ |
| Max. process+thread count | 3546 (vs. ulimit of 10000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23573/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> FsDatasetSpi.isValidBlock() lacks a null pointer check inside, and so do
> the callers
> --------------------------------------------------------------------------------------
>
> Key: HDFS-12487
> URL: https://issues.apache.org/jira/browse/HDFS-12487
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: balancer & mover, diskbalancer
> Affects Versions: 3.0.0
> Environment: CentOS 6.8 x64
> CPU:4 core
> Memory:16GB
> Hadoop: Release 3.0.0-alpha4
> Reporter: liumi
> Assignee: liumi
> Priority: Major
> Attachments: HDFS-12487.002.patch, HDFS-12487.003.patch
>
> Original Estimate: 0h
> Remaining Estimate: 0h
>
> BlockIteratorImpl.nextBlock() looks for blocks in the source volume; once no
> blocks remain, it returns null, which propagates up to
> DiskBalancer.getBlockToCopy(). DiskBalancer.getBlockToCopy() then checks
> whether the returned block is valid by calling FsDatasetSpi.isValidBlock().
> Looking into FsDatasetSpi.isValidBlock(), I found that it does not check for
> a null pointer. The block must be checked for null first, otherwise a
> NullPointerException is thrown.
> This bug is hard to hit, because the DiskBalancer rarely copies all the data
> of one volume to other volumes. Even when it does, the copy has usually
> already finished by the time the bug occurs.
> However, when we try to copy all the data of two or more volumes to other
> volumes in more than one step, the worker thread is shut down, which is
> caused by the bug described above.
> The bug can be fixed in two ways (a sketch of both options follows the
> quoted description):
> 1) Check for null before the call to FsDatasetSpi.isValidBlock()
> 2) Check for null inside the implementation of FsDatasetSpi.isValidBlock()
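Below is a minimal Java sketch of the two options described above. The names FsDatasetSpi, ExtendedBlock, isValidBlock() and getBlockToCopy() are taken from the issue description, but the types here are simplified stand-ins written only for illustration; the real interfaces in hadoop-hdfs have different signatures and far more validation logic, and the actual patch may differ from what is shown.

{code:java}
// Simplified stand-ins for illustration only; the real FsDatasetSpi and
// ExtendedBlock live in hadoop-hdfs and carry far more state and methods.
class ExtendedBlock {
  private final long blockId;

  ExtendedBlock(long blockId) {
    this.blockId = blockId;
  }

  long getBlockId() {
    return blockId;
  }
}

interface FsDatasetSpi {
  /** Returns true if the block is present and usable in this dataset. */
  boolean isValidBlock(ExtendedBlock b);
}

class FsDatasetSketch implements FsDatasetSpi {
  @Override
  public boolean isValidBlock(ExtendedBlock b) {
    // Option 2: guard against the null that BlockIteratorImpl.nextBlock()
    // hands back once the source volume has no more blocks. Without this
    // check, dereferencing b below would throw a NullPointerException and
    // kill the DiskBalancer worker thread.
    if (b == null) {
      return false;
    }
    // The real implementation looks the replica up and verifies its state;
    // here any non-null block with a sane id is treated as valid.
    return b.getBlockId() >= 0;
  }
}

class DiskBalancerSketch {
  // Option 1: the caller checks for null before asking the dataset whether
  // the block is valid, so an exhausted volume is skipped cleanly.
  static ExtendedBlock getBlockToCopy(FsDatasetSpi dataset, ExtendedBlock candidate) {
    if (candidate != null && dataset.isValidBlock(candidate)) {
      return candidate;
    }
    return null; // nothing left to copy on this volume
  }

  public static void main(String[] args) {
    FsDatasetSpi dataset = new FsDatasetSketch();
    // Exhausted volume: nextBlock() returned null; neither path throws now.
    System.out.println(dataset.isValidBlock(null));                  // false
    System.out.println(getBlockToCopy(dataset, null));               // null
    System.out.println(dataset.isValidBlock(new ExtendedBlock(42))); // true
  }
}
{code}

Of the two options, a guard inside isValidBlock() also protects any other caller that might pass a null block, which is why the sketch places the check in the dataset implementation as well as in the caller.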
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]