[ https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13233148#comment-13233148 ]
Milind Bhandarkar commented on HDFS-3107:
-----------------------------------------

What if a user accidentally deletes a directory? You never supported me when I asked for file-by-file deletion, which could be aborted in time to save 70 percent of a user's time, right? Instead you have always supported directory deletion with a single misdirected RPC. Anyway, to answer your question: if a user accidentally truncates, he or she can always append again without losing any efficiency. Can we have a mature discussion on this JIRA, please?

--
Milind Bhandarkar
Chief Architect, Greenplum Labs,
Data Computing Division, EMC
+1-650-523-3858 (W)
+1-408-666-8483 (C)

> HDFS truncate
> -------------
>
>                 Key: HDFS-3107
>                 URL: https://issues.apache.org/jira/browse/HDFS-3107
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: data-node, name-node
>            Reporter: Lei Chang
>         Attachments: HDFS_truncate_semantics_Mar15.pdf
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the
> underlying storage when a transaction is aborted. Currently HDFS does not
> support truncate (a standard POSIX operation), which is the reverse operation
> of append. This forces upper-layer applications to use ugly workarounds (such
> as keeping track of the discarded byte range per file in a separate metadata
> store and periodically running a vacuum process to rewrite compacted files)
> to overcome this limitation of HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
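
For context, a minimal sketch of the workaround pattern the issue description alludes to (recording a per-file "logical length" in a separate metadata store and later running a vacuum that rewrites the file down to that length) might look like the code below. The class, method names, and the in-memory map standing in for the metadata store are illustrative assumptions only; this is not code from the JIRA or its attachment.

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical illustration: application-level "truncate" on top of an HDFS
// that lacks the operation. Names are made up for this sketch.
public class LogicalTruncate {

  // Stand-in for the "separate metadata store" mentioned in the description.
  private final Map<String, Long> logicalLength = new ConcurrentHashMap<>();
  private final FileSystem fs;

  public LogicalTruncate(Configuration conf) throws IOException {
    this.fs = FileSystem.get(conf);
  }

  /** "Truncate" by recording the valid length; the discarded bytes stay on disk. */
  public void truncate(Path file, long newLength) {
    logicalLength.put(file.toString(), newLength);
  }

  /** Vacuum: rewrite the file up to its logical length, then swap it into place. */
  public void vacuum(Path file) throws IOException {
    Long len = logicalLength.get(file.toString());
    if (len == null) {
      return; // no truncation recorded, file is fully valid
    }
    Path tmp = new Path(file.getParent(), file.getName() + ".compact");
    byte[] buf = new byte[64 * 1024];
    try (FSDataInputStream in = fs.open(file);
         FSDataOutputStream out = fs.create(tmp, true)) {
      long remaining = len;
      while (remaining > 0) {
        int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
        if (n < 0) {
          break; // file shorter than recorded length; stop copying
        }
        out.write(buf, 0, n);
        remaining -= n;
      }
    }
    // Replace the original with the compacted copy and clear the metadata.
    fs.delete(file, false);
    fs.rename(tmp, file);
    logicalLength.remove(file.toString());
  }
}
{code}

A native truncate would make both the side metadata and the full rewrite in vacuum() unnecessary, which is the inefficiency the issue description is pointing at.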