[ https://issues.apache.org/jira/browse/HADOOP-4663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12651105#action_12651105 ]
Raghu Angadi commented on HADOOP-4663:
--------------------------------------

> Why "datanode to not know which files under tmp are invalid since 0.18"?

It just does not. That's why it moves them to the normal directory (this jira and HADOOP-4702), incorrectly 'recovers' them (HADOOP-3574), etc. Why? Because it has nothing to distinguish the valid files from the invalid ones. Note that before 0.18, everything there is invalid.

Since you guys implemented it, it is better for you to explain :-), I guess.

> Datanode should delete files under tmp when upgraded from 0.17
> --------------------------------------------------------------
>
>                 Key: HADOOP-4663
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4663
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.18.3
>
>         Attachments: deleteTmp.patch
>
>
> Before 0.18, when the Datanode restarted, it deleted the files under the data-dir/tmp directory, since those files were no longer valid. But in 0.18 it moves these files into the normal directory, incorrectly making them valid blocks. Either of the following would work:
> - remove the tmp files during upgrade, or
> - if the files under tmp are in the pre-0.18 format (i.e. no generation stamp), delete them.
> Currently the effect of this bug is that these files end up failing block verification and eventually get deleted, but they cause incorrect over-replication at the namenode before that.
> Also, it looks like our policy regarding the treatment of files under tmp needs to be defined better. Right now there are probably one or two more bugs with it. Dhruba, please file them if you remember.
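To make the second option above concrete, here is a minimal, hypothetical sketch of what deleting pre-0.18-format files under a datanode's tmp directory could look like. It assumes the pre-0.18 meta-file naming blk_<id>.meta (no generation stamp in the name); the class and method names are illustrative and this is not the attached deleteTmp.patch.

{code:java}
import java.io.File;
import java.util.regex.Pattern;

// Illustrative sketch only (not the actual patch): delete blocks under
// data-dir/tmp whose meta files use the pre-0.18 naming "blk_<id>.meta",
// i.e. carry no generation stamp.
public class TmpBlockCleaner {

  // Pre-0.18 meta file names: "blk_<id>.meta" with no "_<genstamp>" part.
  private static final Pattern PRE_18_META =
      Pattern.compile("blk_-?\\d+\\.meta");

  public static void deletePre18TmpFiles(File tmpDir) {
    File[] files = tmpDir.listFiles();
    if (files == null) {
      return; // tmp directory is missing or unreadable
    }
    for (File f : files) {
      if (PRE_18_META.matcher(f.getName()).matches()) {
        String name = f.getName();
        // The block file shares the meta file's name minus ".meta".
        File blockFile =
            new File(tmpDir, name.substring(0, name.length() - ".meta".length()));
        f.delete();         // stale pre-0.18 meta file
        blockFile.delete(); // and its block file, if present
      }
    }
  }
}
{code}

A 0.18-format meta file (blk_<id>_<genstamp>.meta) would not match the pattern, so under this assumed naming scheme only the pre-upgrade leftovers would be removed.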