[ https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14148430#comment-14148430 ]
Plamen Jeliazkov commented on HDFS-3107:
----------------------------------------

Attaching a new patch. I have included a separate file called "editsStored" in the attachments. This is a binary editLog segment that belongs in the test/resources directory; without that binary file in place, we can expect TestOfflineEditsViewer.testStored() to fail. I cannot seem to include a binary file in a patch.

It turns out the following tests also needed to be fixed up to account for the new FSEditLogOp:
# TestNameNodeRetryCache
# TestRetryCacheWithHA

I added in [~jingzhao]'s catch for the snapshot check; I now make use of INode.isInLatestSnapshot() instead.

> HDFS truncate
> -------------
>
>                 Key: HDFS-3107
>                 URL: https://issues.apache.org/jira/browse/HDFS-3107
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, namenode
>            Reporter: Lei Chang
>            Assignee: Plamen Jeliazkov
>         Attachments: HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the underlying storage when a transaction is aborted. HDFS currently does not support truncate, a standard POSIX operation and the reverse of append. This forces upper-layer applications to use ugly workarounds (such as keeping track of the discarded byte range per file in a separate metadata store, and periodically running a vacuum process to rewrite compacted files) to overcome this limitation of HDFS.
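A note on the snapshot check mentioned above: a minimal sketch of how a truncate precondition could consult INode.isInLatestSnapshot(). Only that method name comes from the comment; the helper, its parameters, and the exception text are illustrative assumptions, not code from the attached patch.

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.server.namenode.INodeFile;
import org.apache.hadoop.hdfs.server.namenode.INodesInPath;

// Hypothetical helper -- structure and names are assumptions for
// illustration; only INode.isInLatestSnapshot() is taken from the comment.
class TruncateChecks {
  static void checkTruncateAllowed(INodeFile file, INodesInPath iip)
      throws IOException {
    // Ask the inode whether it is captured by the latest snapshot of its
    // path, rather than probing snapshot state some other way.
    if (file.isInLatestSnapshot(iip.getLatestSnapshotId())) {
      throw new IOException(
          "Cannot truncate a file that is in the latest snapshot");
    }
  }
}
{code}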
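Why the binary fixture matters: a new FSEditLogOp extends the on-disk edit log format, so the stored segment ("editsStored") must be regenerated, and tests that replay edits such as TestNameNodeRetryCache and TestRetryCacheWithHA must recognize the new opcode. Below is a self-contained sketch of the kind of fields a truncate record would plausibly persist; the field set and layout are assumptions, not the patch's actual wire format.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative only: a real FSEditLogOp subclass carries its own opcode,
// serialization helpers, and retry-cache fields (client id / call id).
public class TruncateRecordSketch {
  static byte[] serialize(String src, long newLength, long timestamp)
      throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeUTF(src);        // path being truncated
    out.writeLong(newLength); // length the file is truncated to
    out.writeLong(timestamp); // modification time of the change
    out.flush();
    return bytes.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] record =
        serialize("/user/plamen/data.log", 4096L, System.currentTimeMillis());
    System.out.println("serialized truncate record: " + record.length + " bytes");
  }
}
{code}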
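On the feature described in the issue: truncate as the reverse of append implies a client call of roughly the following shape. This sketch assumes a FileSystem#truncate(Path, long) method that returns true when the file is immediately ready and false while recovery of the last block is still in progress; the exact public API was still being settled when this comment was written, and the path and length below are made up.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of an upper-layer "abort transaction" path built on truncate,
// assuming the FileSystem#truncate(Path, long) shape discussed here.
public class TruncateOnAbort {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/txlog/segment-0001");  // hypothetical path
    long committedLength = 1048576L; // last durable offset known to the app

    // Discard everything written past the last committed offset instead of
    // tracking a discarded byte range in a separate metadata store.
    boolean done = fs.truncate(file, committedLength);
    if (!done) {
      // The last block is under recovery; the file is not writable again
      // until recovery completes.
      System.out.println("truncate in progress; waiting for block recovery");
    }
  }
}
{code}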