[ https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14518741#comment-14518741 ]
Neeta Garimella commented on HDFS-3107:
---------------------------------------
Thanks Yi. I will get the latest.
> HDFS truncate
> -------------
>
> Key: HDFS-3107
> URL: https://issues.apache.org/jira/browse/HDFS-3107
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, namenode
> Reporter: Lei Chang
> Assignee: Plamen Jeliazkov
> Fix For: 2.7.0
>
> Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch,
> HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch,
> HDFS-3107.15_branch2.patch, HDFS-3107.patch, HDFS-3107.patch,
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch,
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch,
> HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate.pdf,
> HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf,
> HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf,
> HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml
>
> Original Estimate: 1,344h
> Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the
> underlying storage when a transaction is aborted. HDFS currently does not
> support truncate (a standard POSIX operation and the reverse of append), so
> upper-layer applications must resort to awkward workarounds, such as
> tracking the discarded byte range of each file in a separate metadata store
> and periodically running a vacuum process to rewrite compacted files.
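
Below is a minimal sketch of how a client application might call the truncate
operation delivered by this issue, assuming the FileSystem#truncate(Path, long)
API available as of Hadoop 2.7.0; the file path and target length used here are
illustrative only.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/tmp/example.log"); // hypothetical file
    long newLength = 1024L;                   // hypothetical target length

    // truncate() returns true when the file has been truncated to newLength
    // and is immediately available for further writes (e.g. append); false
    // means recovery of the last block is still in progress, and callers
    // should wait for it to finish before updating the file again.
    boolean done = fs.truncate(file, newLength);
    System.out.println("truncate completed immediately: " + done);

    fs.close();
  }
}
{code}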