[ https://issues.apache.org/jira/browse/HDFS-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266852#comment-13266852 ]

Hadoop QA commented on HDFS-3332:
---------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525315/HDFS-3332.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    -1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    -1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) warning.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2361//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/2361//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2361//console

This message is automatically generated.
                
> NullPointerException in DN when directoryscanner is trying to report bad blocks
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-3332
>                 URL: https://issues.apache.org/jira/browse/HDFS-3332
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 3.0.0
>         Environment: HDFS
>            Reporter: amith
>            Assignee: amith
>             Fix For: 3.0.0
>
>         Attachments: HDFS-3332.patch
>
>
> The setup has one NN and one DN (the NN is started with an HA configuration).
> I corrupted one block and found the following:
> {code}
> 2012-04-27 09:59:01,214 INFO  datanode.DataNode (BPServiceActor.java:blockReport(401)) - BlockReport of 2 blocks took 0 msec to generate and 5 msecs for RPC and NN processing
> 2012-04-27 09:59:01,214 INFO  datanode.DataNode (BPServiceActor.java:blockReport(420)) - sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@3b756db3
> 2012-04-27 09:59:01,726 INFO  datanode.DirectoryScanner (DirectoryScanner.java:scan(390)) - BlockPool BP-2087868617-10.18.40.95-1335500488012 Total blocks: 2, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:1
> 2012-04-27 09:59:01,727 WARN  impl.FsDatasetImpl (FsDatasetImpl.java:checkAndUpdate(1366)) - Updating size of block -4466699320171028643 from 1024 to 1034
> 2012-04-27 09:59:01,727 WARN  impl.FsDatasetImpl (FsDatasetImpl.java:checkAndUpdate(1374)) - Reporting the block blk_-4466699320171028643_1004 as corrupt due to length mismatch
> 2012-04-27 09:59:01,728 DEBUG ipc.Client (Client.java:sendParam(807)) - IPC Client (1957050620) connection to /10.18.40.95:8020 from root sending #257
> 2012-04-27 09:59:01,730 DEBUG ipc.Client (Client.java:receiveResponse(848)) - IPC Client (1957050620) connection to /10.18.40.95:8020 from root got value #257
> 2012-04-27 09:59:01,730 DEBUG ipc.ProtobufRpcEngine (ProtobufRpcEngine.java:invoke(193)) - Call: reportBadBlocks 2
> 2012-04-27 09:59:01,731 ERROR datanode.DirectoryScanner (DirectoryScanner.java:run(288)) - Exception during DirectoryScanner execution - will continue next cycle
> java.lang.NullPointerException
>       at org.apache.hadoop.hdfs.protocol.DatanodeID.<init>(DatanodeID.java:66)
>       at org.apache.hadoop.hdfs.protocol.DatanodeInfo.<init>(DatanodeInfo.java:87)
>       at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.reportBadBlocks(BPServiceActor.java:238)
>       at org.apache.hadoop.hdfs.server.datanode.BPOfferService.reportBadBlocks(BPOfferService.java:187)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.reportBadBlocks(DataNode.java:559)
>       at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkAndUpdate(FsDatasetImpl.java:1377)
>       at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:318)
>       at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:284)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>       at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>       at java.lang.Thread.run(Thread.java:619)
> {code}
> Here, when the DirectoryScanner tried to report the bad block, we got an NPE.
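>
> A minimal sketch of one way this NPE can arise (an assumption for illustration, not taken from the attached patch): the stack trace shows the DatanodeID constructor failing while a DatanodeInfo is built for the bad-block report, which is consistent with a copy constructor dereferencing a registration object that is still null when the DirectoryScanner fires. The classes below are simplified, hypothetical stand-ins, not the real Hadoop classes:
> {code}
> // Illustrative stand-in for the DatanodeID copy constructor seen in the stack trace.
> class FakeDatanodeID {
>     private final String ipAddr;
>
>     FakeDatanodeID(String ipAddr) {
>         this.ipAddr = ipAddr;
>     }
>
>     // Copy constructor with no null check: throws NPE when 'from' is null.
>     FakeDatanodeID(FakeDatanodeID from) {
>         this.ipAddr = from.getIpAddr();
>     }
>
>     String getIpAddr() {
>         return ipAddr;
>     }
> }
>
> public class ReportBadBlocksNpeSketch {
>     // Stands in for the actor's registration, which may still be unset
>     // if the DataNode has not finished registering with the NN (assumption).
>     static FakeDatanodeID bpRegistration = null;
>
>     static void reportBadBlocks() {
>         // Defensive guard that avoids the NPE: skip the report until registered.
>         if (bpRegistration == null) {
>             System.out.println("Not registered yet; skipping bad-block report this cycle");
>             return;
>         }
>         FakeDatanodeID self = new FakeDatanodeID(bpRegistration);
>         System.out.println("Reporting bad block from " + self.getIpAddr());
>     }
>
>     public static void main(String[] args) {
>         reportBadBlocks();                                  // guarded: no NPE
>         bpRegistration = new FakeDatanodeID("10.18.40.95");
>         reportBadBlocks();                                  // reports normally
>     }
> }
> {code}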

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
