[ https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060504#comment-14060504 ]
Hadoop QA commented on HADOOP-10480:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12655502/HADOOP-10480.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.balancer.TestBalancer

{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4261//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4261//console

This message is automatically generated.
> Fix new findbugs warnings in hadoop-hdfs
> ----------------------------------------
>
> Key: HADOOP-10480
> URL: https://issues.apache.org/jira/browse/HADOOP-10480
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Haohui Mai
> Assignee: Akira AJISAKA
> Labels: newbie
> Attachments: HADOOP-10480.patch
>
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
> [INFO] BugInstance size is 14
> [INFO] Error size is 0
> [INFO] Total bugs: 14
> [INFO] Redundant nullcheck of curPeer, which is known to be non-null in
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp()
> ["org.apache.hadoop.hdfs.BlockReaderFactory"] At
> BlockReaderFactory.java:[lines 68-808]
> [INFO] Increment of volatile field
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
> ["org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer"] At
> DFSOutputStream.java:[lines 308-1492]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
> DataInputStream, DataOutputStream, String, DataTransferThrottler,
> DatanodeInfo[]): new java.io.FileWriter(File)
> ["org.apache.hadoop.hdfs.server.datanode.BlockReceiver"] At
> BlockReceiver.java:[lines 66-905]
> [INFO] b must be nonnull but is marked as nullable
> ["org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2"] At
> DatanodeJspHelper.java:[lines 546-549]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
> File, boolean): new java.util.Scanner(File)
> ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At
> BlockPoolSlice.java:[lines 58-427]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
> new java.util.Scanner(File)
> ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At
> BlockPoolSlice.java:[lines 58-427]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
> new java.io.FileWriter(File)
> ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At
> BlockPoolSlice.java:[lines 58-427]
> [INFO] Redundant nullcheck of f, which is known to be non-null in
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
> Block[])
> ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl"] At
> FsDatasetImpl.java:[lines 60-1910]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.server.namenode.FSImageUtil.<static initializer for
> FSImageUtil>(): String.getBytes()
> ["org.apache.hadoop.hdfs.server.namenode.FSImageUtil"] At
> FSImageUtil.java:[lines 34-89]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String,
> byte[], boolean): new String(byte[])
> ["org.apache.hadoop.hdfs.server.namenode.FSNamesystem"] At
> FSNamesystem.java:[lines 301-7701]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
> new java.io.PrintWriter(OutputStream, boolean)
> ["org.apache.hadoop.hdfs.server.namenode.INode"] At INode.java:[lines 51-744]
> [INFO] Redundant nullcheck of fos, which is known to be non-null in
> org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
> HdfsFileStatus, LocatedBlocks)
> ["org.apache.hadoop.hdfs.server.namenode.NamenodeFsck"] At
> NamenodeFsck.java:[lines 94-710]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
> new java.io.PrintWriter(File)
> ["org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB"] At
> OfflineImageViewerPB.java:[lines 45-181]
> [INFO] Found reliance on default encoding in
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
> new java.io.PrintWriter(OutputStream)
> ["org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB"] At
> OfflineImageViewerPB.java:[lines 45-181]
> {noformat}
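The warnings above fall into three recurring categories: reliance on the platform default encoding, increment of a volatile field, and redundant null checks. A minimal Java sketch of the standard fix for each pattern follows; the class and method names here are hypothetical illustrations, not code from the actual HADOOP-10480 patch.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative fixes for the three findbugs warning categories listed above.
public class FindbugsFixes {

    // 1. "Found reliance on default encoding": new FileWriter(File) picks up
    //    the platform default charset; name the charset explicitly instead.
    static Writer openWriter(File f) throws IOException {
        // was: new FileWriter(f);
        return new OutputStreamWriter(new FileOutputStream(f), StandardCharsets.UTF_8);
    }

    // Same category for String conversions (String.getBytes(), new String(byte[])).
    static byte[] encode(String s) {
        // was: s.getBytes();  // platform-dependent result
        return s.getBytes(StandardCharsets.UTF_8);
    }

    // 2. "Increment of volatile field": ++ on a volatile is a non-atomic
    //    read-modify-write, so concurrent increments can be lost; an
    //    AtomicInteger makes the increment atomic.
    private final AtomicInteger restartingNodeIndex = new AtomicInteger(0);

    int bumpRestartingNodeIndex() {
        // was: volatile int restartingNodeIndex; ... restartingNodeIndex++;
        return restartingNodeIndex.incrementAndGet();
    }

    // 3. "Redundant nullcheck of X, which is known to be non-null": drop the
    //    check on a value the compiler can prove is never null at that point.
    static int trimmedLength(String s) {
        String t = s.trim();  // t is provably non-null here
        // was: if (t != null) { return t.length(); } return 0;
        return t.length();
    }
}
```

These rewrites preserve behavior while silencing the warnings: the encoding fixes make file and string I/O deterministic across locales, and the AtomicInteger swap also closes a real (if narrow) race rather than merely appeasing findbugs.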
--
This message was sent by Atlassian JIRA
(v6.2#6252)