[ https://issues.apache.org/jira/browse/HDFS-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13038592#comment-13038592 ]
Hadoop QA commented on HDFS-1951:
---------------------------------

-1 overall.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12480247/HDFS-1951.patch
  against trunk revision 1126795.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 4 new or modified tests.

    -1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/614//console

This message is automatically generated.

> Null pointer exception when namenode recovery happens, there is no response from the client to the NN for longer than the hard limit for NN recovery, and the current block is larger than the previous block size on the NN
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1951
>                 URL: https://issues.apache.org/jira/browse/HDFS-1951
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20-append
>            Reporter: ramkrishna.s.vasudevan
>             Fix For: 0.20-append
>
>         Attachments: HDFS-1951.patch
>
>
> A null pointer exception occurs when namenode recovery happens, the client has not responded to the NN for longer than the hard limit that triggers NN recovery, and the current block is larger than the previous block size recorded on the NN.
> Steps to reproduce:
> 1. Write to 2 datanodes using a client.
> 2. Kill one datanode and allow pipeline recovery.
> 3. Write some more data to the same block.
> 4. In parallel, allow the namenode recovery to happen.
> The null pointer exception is thrown in the addStoredBlock API.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
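For readers following the reproduction steps quoted above, the sketch below models the race they describe: a datanode reports a replica that is larger than the size the namenode has recorded, while namenode (lease) recovery has already run in parallel and detached the block from its file, so an unguarded dereference throws the NPE. This is a minimal, self-contained Java illustration only; the StoredBlock class, the fileName field, and the simplified addStoredBlock(long, long) signature are stand-ins invented for this sketch, not the 0.20-append FSNamesystem/BlocksMap code, and it does not reflect the contents of the attached HDFS-1951.patch.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class AddStoredBlockSketch {

    /** Minimal stand-in for a block entry tracked by the namenode. */
    static class StoredBlock {
        final long blockId;
        long numBytes;
        String fileName; // null once recovery has detached the block from its file

        StoredBlock(long blockId, long numBytes, String fileName) {
            this.blockId = blockId;
            this.numBytes = numBytes;
            this.fileName = fileName;
        }
    }

    private final Map<Long, StoredBlock> blocksMap = new HashMap<>();

    void track(StoredBlock b) {
        blocksMap.put(b.blockId, b);
    }

    /**
     * Reconcile a replica report from a datanode with the namenode's view.
     * Mirrors the failure mode in the issue description: the reported replica
     * is larger than the stored size, but recovery has already run in parallel
     * and cleared the file reference.
     */
    void addStoredBlock(long blockId, long reportedBytes) {
        StoredBlock stored = blocksMap.get(blockId);
        if (stored == null) {
            System.out.println("block " + blockId + " is not tracked; ignoring report");
            return;
        }
        if (reportedBytes > stored.numBytes) {
            // Without this guard, dereferencing fileName would throw the NPE
            // described in the issue when recovery already removed the file.
            if (stored.fileName == null) {
                System.out.println("block " + blockId
                        + " no longer belongs to a file; skipping size update");
                return;
            }
            System.out.println("growing block " + blockId + " of " + stored.fileName
                    + " from " + stored.numBytes + " to " + reportedBytes + " bytes");
            stored.numBytes = reportedBytes;
        }
    }

    public static void main(String[] args) {
        AddStoredBlockSketch ns = new AddStoredBlockSketch();
        // Block whose file reference was already cleared by namenode recovery.
        ns.track(new StoredBlock(42L, 1024L, null));
        // Datanode reports a larger replica because the client kept writing.
        ns.addStoredBlock(42L, 2048L);
    }
}
{code}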