[ 
https://issues.apache.org/jira/browse/HDFS-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8428:
----------------------------
       Resolution: Fixed
    Fix Version/s: HDFS-7285
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Committed the patch. Thanks Yi for the contribution!

> Erasure Coding: Fix the NullPointerException when deleting file
> ---------------------------------------------------------------
>
>                 Key: HDFS-8428
>                 URL: https://issues.apache.org/jira/browse/HDFS-8428
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>             Fix For: HDFS-7285
>
>         Attachments: HDFS-8428-HDFS-7285.001.patch
>
>
> In HDFS, when a file is removed, the NN also removes all of its blocks from 
> {{BlocksMap}} and sends {{DNA_INVALIDATE}} (invalidate blocks) commands to 
> the datanodes.  After the datanodes successfully delete the block replicas, 
> they report {{DELETED_BLOCK}} back to the NameNode.
> The relevant logic in {{BlockManager#processIncrementalBlockReport}} is as 
> follows:
> {code}
> case DELETED_BLOCK:
>         removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);
>         ...
> {code}
> {code}
> private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
>       DatanodeDescriptor node) {
>     if (shouldPostponeBlocksFromFuture &&
>         namesystem.isGenStampInFuture(block)) {
>       queueReportedBlock(storageInfo, block, null,
>           QUEUE_REASON_FUTURE_GENSTAMP);
>       return;
>     }
>     removeStoredBlock(getStoredBlock(block), node);
>   }
> {code}
> In the EC branch, we added {{getStoredBlock}}. A {{NullPointerException}} 
> occurs when handling the {{DELETED_BLOCK}} case of an incremental block report 
> from a DataNode after a file has been deleted: the block has already been 
> removed from {{BlocksMap}}, so {{getStoredBlock}} returns null and we need to 
> check for that before using the result.
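The shape of the fix can be sketched as below. This is a minimal, self-contained illustration, not the actual patch: `DeletedBlockReportSketch`, the `Map`-backed stand-in for `BlocksMap`, and the `String` stand-in for the stored block are all hypothetical simplifications of the real HDFS classes. It only shows the principle that `removeStoredBlock` must tolerate a null result from `getStoredBlock` once the file (and thus the block) has already been deleted.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins for BlocksMap / BlockInfo, illustrating only the
// null check: after a file is deleted, the block is gone from the map, so
// getStoredBlock() returns null and removeStoredBlock() must bail out
// instead of dereferencing it (which is what raised the NPE).
public class DeletedBlockReportSketch {
    static final Map<Long, String> blocksMap = new HashMap<>();

    static String getStoredBlock(long blockId) {
        return blocksMap.get(blockId); // null once the block was removed
    }

    static boolean removeStoredBlock(String storedBlock) {
        if (storedBlock == null) {
            // Block already removed when the file was deleted; treat the
            // DELETED_BLOCK report as a no-op rather than throwing an NPE.
            return false;
        }
        blocksMap.values().remove(storedBlock); // simplified removal
        return true;
    }

    public static void main(String[] args) {
        blocksMap.put(1L, "blk_1");
        // Normal path: block is still known, removal succeeds.
        System.out.println(removeStoredBlock(getStoredBlock(1L)));
        // File already deleted: lookup yields null, no NPE is thrown.
        System.out.println(removeStoredBlock(getStoredBlock(1L)));
    }
}
```

The same guard, placed at the top of the EC branch's `removeStoredBlock(DatanodeStorageInfo, Block, DatanodeDescriptor)`, is what the attached patch needs to add.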



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
