[ 
https://issues.apache.org/jira/browse/HDFS-7567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14257753#comment-14257753
 ] 

Jing Zhao commented on HDFS-7567:
---------------------------------

Normally we will not hit a QuotaExceededException while replaying the editlog,
since all the checks were already done when the request was first served (i.e.,
no editlog entry was written unless the op succeeded at that time). If newFile
is null, it usually means the editlog is corrupted. But I agree that instead of
throwing an NPE directly, we could add an extra check and throw an exception
containing more information here.
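A minimal sketch of the extra check being suggested (the helper name and message wording are hypothetical, not from the Hadoop source): rather than letting the setAccessTime/setModificationTime calls fail with a bare NPE, verify the inode up front and raise an exception that names the path, so editlog corruption is easier to diagnose.

```java
import java.io.IOException;

public class EditLogNullCheckSketch {
    /**
     * Hypothetical guard: if the replayed op refers to a file that no longer
     * resolves to an INode, fail with a descriptive message instead of an NPE.
     * A null here usually indicates editlog corruption, since the op must
     * have succeeded when it was originally served.
     */
    static void checkFilePresent(Object newFile, String path) throws IOException {
        if (newFile == null) {
            throw new IOException(
                "Edit log op refers to a non-existent file: " + path);
        }
    }

    public static void main(String[] args) throws IOException {
        // Non-null inode passes the check silently.
        checkFilePresent(new Object(), "/user/foo/present");

        // Null inode surfaces the offending path in the exception message.
        boolean threw = false;
        try {
            checkFilePresent(null, "/user/foo/missing");
        } catch (IOException e) {
            threw = e.getMessage().contains("/user/foo/missing");
        }
        if (!threw) {
            throw new AssertionError("expected IOException naming the path");
        }
        System.out.println("ok");
    }
}
```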

> Potential null dereference in FSEditLogLoader#applyEditLogOp()
> --------------------------------------------------------------
>
>                 Key: HDFS-7567
>                 URL: https://issues.apache.org/jira/browse/HDFS-7567
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Priority: Minor
>
> {code}
>       INodeFile oldFile = INodeFile.valueOf(iip.getLastINode(), path, true);
>       if (oldFile != null && addCloseOp.overwrite) {
> ...
>       INodeFile newFile = oldFile;
> ...
>       // Update the salient file attributes.
>       newFile.setAccessTime(addCloseOp.atime, Snapshot.CURRENT_STATE_ID);
>       newFile.setModificationTime(addCloseOp.mtime, Snapshot.CURRENT_STATE_ID);
> {code}
> The last two setter calls are not protected by a null check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)