[ https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155367#comment-14155367 ]
Konstantin Shvachko commented on HDFS-3107:
-------------------------------------------

[~cmccabe] It feels like some clarification is needed from you here, so that we can avoid misunderstandings like in the other jira.
# I assume that your [veto of Sept 17, conditional on the design document|https://issues.apache.org/jira/browse/HDFS-3107?focusedCommentId=14137882&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14137882], has been addressed, now with the subsequent corrections made in line with your suggestions.
# [Earlier this week you stated|https://issues.apache.org/jira/browse/HDFS-3107?focusedCommentId=14152094&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14152094] ??we should hold off on committing to trunk until we figure out the snapshot story.?? Given the design in place, the subtask HDFS-7056 opened, and the general acceptance of the approach by other contributors, the snapshot story seems to be clear. So if your statement is a veto, could you please clarify the reason(s)? If not, please say so, so that people can proceed with the subtasks.

I hope you can reply by tomorrow.

> HDFS truncate
> -------------
>
>                 Key: HDFS-3107
>                 URL: https://issues.apache.org/jira/browse/HDFS-3107
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, namenode
>            Reporter: Lei Chang
>            Assignee: Plamen Jeliazkov
>         Attachments: HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf, editsStored
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the underlying storage when a transaction is aborted. Currently HDFS does not support truncate (a standard POSIX operation), the reverse of append. This forces upper-layer applications to use awkward workarounds (such as keeping track of the discarded byte range per file in a separate metadata store, and periodically running a vacuum process to rewrite compacted files) to overcome this limitation of HDFS.
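To make the append/truncate pairing described above concrete, below is a minimal sketch of how a transactional client could roll back an aborted append once truncate is available. It assumes the truncate(Path, long) client API proposed in the attached design documents; the path, lengths, and recovery behaviour shown here are illustrative assumptions, not a description of the committed implementation.

{code:java}
// A minimal sketch, assuming the FileSystem#truncate(Path, long) API
// proposed in the attached design. Paths and lengths are hypothetical.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateRollbackExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path log = new Path("/tx/wal.log");                 // hypothetical transaction log file
    long committedLength = fs.getFileStatus(log).getLen();

    // ... append records for an in-flight transaction here ...

    // On abort, undo the appended bytes by truncating back to the last
    // committed length instead of rewriting or vacuuming the whole file.
    boolean done = fs.truncate(log, committedLength);
    if (!done) {
      // Per the proposal, truncating off a block boundary may require
      // recovery of the last block before the file can be reopened.
      System.out.println("Truncate scheduled; waiting for block recovery to complete.");
    }
  }
}
{code}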