[ https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Haohui Mai updated HADOOP-10480:
--------------------------------
    Comment: was deleted

(was: Looking at the log of jenkins:
{quote}
/home/jenkins/tools/maven/latest/bin/mvn clean test javadoc:javadoc -DskipTests -Pdocs -DHadoopPatchProcess > /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/patchJavadocWarnings.txt 2>&1

There appear to be 26 javadoc warnings before the patch and 26 javadoc warnings after applying the patch.
{quote}
Is the proposed fix a JDK-specific issue?)

> Fix new findbugs warnings in hadoop-hdfs
> ----------------------------------------
>
>                 Key: HADOOP-10480
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10480
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Haohui Mai
>            Assignee: Akira AJISAKA
>              Labels: newbie
>         Attachments: HADOOP-10480.patch
>
>
> The following findbugs warnings need to be fixed:
> {noformat}
> [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
> [INFO] BugInstance size is 14
> [INFO] Error size is 0
> [INFO] Total bugs: 14
> [INFO] Redundant nullcheck of curPeer, which is known to be non-null in org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() ["org.apache.hadoop.hdfs.BlockReaderFactory"] At BlockReaderFactory.java:[lines 68-808]
> [INFO] Increment of volatile field org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery() ["org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer"] At DFSOutputStream.java:[lines 308-1492]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream, DataInputStream, DataOutputStream, String, DataTransferThrottler, DatanodeInfo[]): new java.io.FileWriter(File) ["org.apache.hadoop.hdfs.server.datanode.BlockReceiver"] At BlockReceiver.java:[lines 66-905]
> [INFO] b must be nonnull but is marked as nullable ["org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2"] At DatanodeJspHelper.java:[lines 546-549]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap, File, boolean): new java.util.Scanner(File) ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At BlockPoolSlice.java:[lines 58-427]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed(): new java.util.Scanner(File) ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At BlockPoolSlice.java:[lines 58-427]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed(): new java.io.FileWriter(File) ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice"] At BlockPoolSlice.java:[lines 58-427]
> [INFO] Redundant nullcheck of f, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String, Block[]) ["org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl"] At FsDatasetImpl.java:[lines 60-1910]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.FSImageUtil.<static initializer for FSImageUtil>(): String.getBytes() ["org.apache.hadoop.hdfs.server.namenode.FSImageUtil"] At FSImageUtil.java:[lines 34-89]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, byte[], boolean): new String(byte[]) ["org.apache.hadoop.hdfs.server.namenode.FSNamesystem"] At FSNamesystem.java:[lines 301-7701]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream): new java.io.PrintWriter(OutputStream, boolean) ["org.apache.hadoop.hdfs.server.namenode.INode"] At INode.java:[lines 51-744]
> [INFO] Redundant nullcheck of fos, which is known to be non-null in org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String, HdfsFileStatus, LocatedBlocks) ["org.apache.hadoop.hdfs.server.namenode.NamenodeFsck"] At NamenodeFsck.java:[lines 94-710]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]): new java.io.PrintWriter(File) ["org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB"] At OfflineImageViewerPB.java:[lines 45-181]
> [INFO] Found reliance on default encoding in org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]): new java.io.PrintWriter(OutputStream) ["org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB"] At OfflineImageViewerPB.java:[lines 45-181]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
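Most of the 14 warnings above fall into two patterns: "Found reliance on default encoding" (constructing a `FileWriter`, `Scanner`, `PrintWriter`, or `String`/`getBytes()` without an explicit charset) and "Increment of volatile field" (a non-atomic `++` on a volatile int). The sketch below shows the usual fixes for both; it is illustrative only and is not the attached HADOOP-10480.patch, and the class and method names are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicInteger;

public class FindbugsFixPatterns {

    // Pattern 1: "Found reliance on default encoding".
    // new String(byte[]) and String.getBytes() use the platform default
    // charset, which varies between JVMs; passing StandardCharsets.UTF_8
    // makes the encoding explicit and silences the warning. The same idea
    // applies to FileWriter/Scanner/PrintWriter: use a constructor or
    // wrapper that takes a charset.
    static byte[] encode(String s) {
        return s.getBytes(StandardCharsets.UTF_8);    // was: s.getBytes()
    }

    static String decode(byte[] b) {
        return new String(b, StandardCharsets.UTF_8); // was: new String(b)
    }

    // Pattern 2: "Increment of volatile field". A ++ on a volatile int is a
    // read-modify-write, so concurrent increments can be lost even though
    // each read and write is individually visible. AtomicInteger makes the
    // whole increment atomic.
    private final AtomicInteger restartingNodeIndex = new AtomicInteger(-1);

    int bumpIndex() {
        // was: ++restartingNodeIndex on a volatile int field
        return restartingNodeIndex.incrementAndGet();
    }

    public static void main(String[] args) {
        FindbugsFixPatterns f = new FindbugsFixPatterns();
        System.out.println(decode(encode("héllo")));
        System.out.println(f.bumpIndex());
    }
}
```

The "Redundant nullcheck" warnings are simpler still: the flagged variable is provably non-null at the check (e.g. it was just dereferenced or assigned from a constructor), so the `if (x != null)` guard can be removed or the surrounding control flow tightened.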