[ https://issues.apache.org/jira/browse/HDFS-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14056523#comment-14056523 ]

Jing Zhao commented on HDFS-6647:
---------------------------------

In HDFS-6527 we do not allow users to get an additional block if the file has 
been deleted (even if it still exists in a snapshot). Maybe here we should also 
fail the {{updatePipeline}} call, to be consistent?

That said, I think in the future it will be better to weaken the dependency 
between the states of blocks and files, e.g., by letting RPC calls like 
{{updatePipeline}} update and check only the state of blocks. This would also 
make work such as separating block management out into its own service 
(HDFS-5477) easier.

> Edit log corruption when pipeline recovery occurs for deleted file present in 
> snapshot
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-6647
>                 URL: https://issues.apache.org/jira/browse/HDFS-6647
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode, snapshots
>    Affects Versions: 2.4.1
>            Reporter: Aaron T. Myers
>            Priority: Blocker
>         Attachments: HDFS-6647-failing-test.patch
>
>
> I've encountered a situation wherein an OP_UPDATE_BLOCKS can appear in the 
> edit log for a file after an OP_DELETE has previously been logged for that 
> file. Such an edit log sequence cannot then be successfully read by the 
> NameNode.
> More details in the first comment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)