DataNode should clean up temporary files when writeBlock fails.
---------------------------------------------------------------

                 Key: HADOOP-3015
                 URL: https://issues.apache.org/jira/browse/HADOOP-3015
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.15.3
            Reporter: Raghu Angadi



When a datanode starts receiving a block but fails to complete the transfer, it 
leaves the temporary block files in its temp directory. Because of this, the 
same block cannot be written to this node for the next hour. 

DataNode should delete these temporary files on failure so that the next write 
attempt can proceed.
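A minimal sketch of the suggested fix (the names below, such as receiveBlock and the ".tmp" suffix, are illustrative only and not the real DataNode API): on any failure while receiving a block, delete the partial temp file instead of leaving it behind.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

/**
 * Hypothetical sketch of the cleanup suggested in this issue; not
 * Hadoop's actual DataNode code. If receiving a block fails partway,
 * the partial temp file is deleted so a retry is not blocked.
 */
public class TmpBlockCleanup {

    /** Writes block data to a temp file; on failure, removes the partial file. */
    static boolean receiveBlock(File tmpDir, String blockId,
                                byte[] data, boolean simulateFailure) {
        File tmpFile = new File(tmpDir, blockId + ".tmp");
        try (FileOutputStream out = new FileOutputStream(tmpFile)) {
            out.write(data);
            if (simulateFailure) {
                // Stand-in for a network or disk error mid-transfer.
                throw new IOException("transfer aborted mid-block");
            }
            return true;
        } catch (IOException e) {
            // The suggested fix: delete the partial temp file rather than
            // leaving it, so the same block can be re-written immediately.
            tmpFile.delete();
            return false;
        }
    }

    public static void main(String[] args) {
        File tmpDir = new File(System.getProperty("java.io.tmpdir"));
        boolean ok = receiveBlock(tmpDir, "blk_123", new byte[]{1, 2, 3}, true);
        File leftover = new File(tmpDir, "blk_123.tmp");
        // After a failed transfer, no temp file should remain.
        System.out.println(!ok && !leftover.exists());
    }
}
```

Without the delete in the catch block, the leftover .tmp file would make the datanode reject a retry of the same block until the stale temp entry expires.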

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
