[ https://issues.apache.org/jira/browse/HADOOP-4663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12668209#action_12668209 ]
Raghu Angadi commented on HADOOP-4663:
--------------------------------------

> If such an old block checks in with the NN via a block report, it will get deleted. Does it sound right?

So we do lose that block. Isn't that a problem?

> Datanode should delete files under tmp when upgraded from 0.17
> --------------------------------------------------------------
>
>                 Key: HADOOP-4663
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4663
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Raghu Angadi
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.19.1
>
>         Attachments: deleteTmp.patch, deleteTmp2.patch, deleteTmp_0.18.patch, handleTmp1.patch
>
>
> Before 0.18, when the Datanode restarts, it deletes the files under the data-dir/tmp directory, since these files are no longer valid. But in 0.18 it moves these files to the normal directory, incorrectly making them valid blocks. One of the following would work:
> - remove the tmp files during upgrade, or
> - if the files under tmp are in the pre-0.18 format (i.e. no generation stamp), delete them.
> Currently the effect of this bug is that these files end up failing block verification and eventually get deleted, but they cause incorrect over-replication at the namenode before that.
> Also, it looks like our policy regarding the treatment of files under tmp needs to be defined better. Right now there are probably one or two more bugs with it. Dhruba, please file them if you remember.
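For illustration, here is a minimal sketch of the second option. This is not the attached patch; the class and method names are hypothetical, and it assumes the naming convention where 0.18 meta files carry a generation stamp (blk_<id>_<genstamp>.meta) while pre-0.18 meta files do not (blk_<id>.meta):

{code}
import java.io.File;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TmpBlockCleanup {

  // 0.18-format meta files carry a generation stamp: blk_<id>_<genstamp>.meta
  private static final Pattern POST_18_META =
      Pattern.compile("blk_(-?\\d+)_(\\d+)\\.meta");
  // Pre-0.18 meta files have no generation stamp: blk_<id>.meta
  private static final Pattern PRE_18_META =
      Pattern.compile("blk_(-?\\d+)\\.meta");
  // The block data file itself looks the same in both layouts.
  private static final Pattern BLOCK_FILE =
      Pattern.compile("blk_(-?\\d+)");

  /**
   * Delete files under tmpDir that are in the pre-0.18 format: meta
   * files without a generation stamp, and block files that have no
   * generation-stamped meta companion.
   */
  static void deletePre18TmpFiles(File tmpDir) {
    File[] files = tmpDir.listFiles();
    if (files == null) {
      return; // tmp is missing or not a directory; nothing to do
    }
    // First pass: collect block ids that have a 0.18-style meta file.
    Set<String> idsWithGenStamp = new HashSet<String>();
    for (File f : files) {
      Matcher m = POST_18_META.matcher(f.getName());
      if (m.matches()) {
        idsWithGenStamp.add(m.group(1));
      }
    }
    // Second pass: delete everything that is not in the 0.18 format.
    for (File f : files) {
      String name = f.getName();
      if (POST_18_META.matcher(name).matches()) {
        continue; // current format: leave it for normal recovery handling
      }
      Matcher block = BLOCK_FILE.matcher(name);
      boolean pre18 = PRE_18_META.matcher(name).matches()
          || (block.matches() && !idsWithGenStamp.contains(block.group(1)));
      if (pre18 && !f.delete()) {
        System.err.println("Unable to delete " + f + " during upgrade");
      }
    }
  }
}
{code}

Deleting only the pre-0.18-format files, rather than everything under tmp, leaves blocks written by 0.18 itself in place, so whatever policy governs in-progress blocks on restart still applies to them.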