dhruba borthakur updated HADOOP-4663:
-------------------------------------
Attachment: handleTmp1.patch
This patch causes all blocks that are part of a client write request to be
stored in the blocksBeingWritten directory. Blocks that are part of replication
requests are created in the blocksBeingReplicated directory.
On restart, a datanode removes all blocks from the "tmp" and blocksBeingReplicated
directories and moves all blocks from the blocksBeingWritten directory into the
real block directory.
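
A minimal sketch of that restart behaviour follows. This is not the actual
DataNode code from handleTmp1.patch; the directory names match the description
above, but the BlockDirRecovery class, the recoverBlocks() entry point, and the
"current" target directory are assumptions used only for illustration.

// Sketch of the restart-time handling described above (hypothetical names).
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

public class BlockDirRecovery {

    /** On datanode restart: drop partial replication copies, keep client writes. */
    static void recoverBlocks(File dataDir) throws IOException {
        // Blocks created for replication requests are copies of blocks that
        // already exist on other datanodes, so partial ones are discarded.
        deleteRecursively(new File(dataDir, "tmp"));
        deleteRecursively(new File(dataDir, "blocksBeingReplicated"));

        // Blocks created for client writes may be the only copy, so they are
        // promoted into the real block directory instead of being deleted.
        File beingWritten = new File(dataDir, "blocksBeingWritten");
        File current = new File(dataDir, "current");
        File[] blocks = beingWritten.listFiles();
        if (blocks != null) {
            for (File b : blocks) {
                Files.move(b.toPath(), new File(current, b.getName()).toPath(),
                           StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    static void deleteRecursively(File dir) {
        File[] children = dir.listFiles();
        if (children != null) {
            for (File c : children) {
                deleteRecursively(c);
            }
        }
        dir.delete();
    }
}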
> Datanode should delete files under tmp when upgraded from 0.17
> --------------------------------------------------------------
>
> Key: HADOOP-4663
> URL: https://issues.apache.org/jira/browse/HADOOP-4663
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.18.0
> Reporter: Raghu Angadi
> Assignee: dhruba borthakur
> Priority: Blocker
> Fix For: 0.18.3
>
> Attachments: deleteTmp.patch, deleteTmp2.patch, deleteTmp_0.18.patch,
> handleTmp1.patch
>
>
> Before 0.18, when the Datanode restarts, it deletes files under the data-dir/tmp
> directory since these files are not valid anymore. But in 0.18 it moves these
> files to the normal directory, incorrectly making them valid blocks. One of the
> following would work:
> - remove the tmp files during upgrade, or
> - if the files under /tmp are in pre-18 format (i.e. no generation stamp),
> delete them.
> Currently the effect of this bug is that these files end up failing block
> verification and eventually get deleted, but they cause incorrect
> over-replication at the namenode before that.
> Also it looks like our policy for treating files under tmp needs to be
> defined better. Right now there are probably one or two more bugs with it.
> Dhruba, please file them if you remember.
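
For the second option listed in the quoted description (deleting tmp files that
are still in pre-0.18 format), a minimal sketch is shown below. It assumes that
0.18-format metadata files carry a generation stamp in their names
(blk_<id>_<genstamp>.meta) while pre-0.18 files do not (blk_<id>.meta); the
class and method names are hypothetical and this is not the committed fix.

// Sketch of upgrade-time cleanup of old-format tmp blocks (hypothetical names).
import java.io.File;
import java.util.regex.Pattern;

public class TmpBlockCleanup {

    // Pre-0.18 metadata files: no generation stamp between block id and ".meta".
    private static final Pattern PRE_018_META =
        Pattern.compile("blk_-?\\d+\\.meta");

    static void deletePre018TmpBlocks(File tmpDir) {
        File[] files = tmpDir.listFiles();
        if (files == null) {
            return;
        }
        for (File f : files) {
            String name = f.getName();
            if (PRE_018_META.matcher(name).matches()) {
                // Old-format metadata: delete it and its companion block file.
                String blockName = name.substring(0, name.length() - ".meta".length());
                new File(tmpDir, blockName).delete();
                f.delete();
            }
        }
    }
}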