[
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196336#comment-14196336
]
Plamen Jeliazkov commented on HDFS-3107:
----------------------------------------
[~cmccabe],
At the time [~shv] mentioned my new patch, nothing had been posted to
HDFS-7056 other than Konstantin's design doc.
We only uploaded the newer patches yesterday around noon.
Please be careful not to confuse [~shv] with [~cos].
The snapshot-support patch (for HDFS-7056) was not yet ready when [~cos] made
his comment.
We don't have to commit HDFS-3107 on its own.
There is the option to treat the combined patch HDFS-3107-&-7056 as the first
patch, which accounts for upgrade and rollback functionality as well as
snapshot support, as demonstrated in the unit tests.
This should address your comment: "My reasoning is that if the first patch
breaks rollback, it's tough to see it getting into trunk."
I am not objecting to working on a branch, but I am unsure it is necessary,
given that the combined patch appears to meet the support requirements
requested for this work.
I'll investigate the FindBugs warnings.
> HDFS truncate
> -------------
>
> Key: HDFS-3107
> URL: https://issues.apache.org/jira/browse/HDFS-3107
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: datanode, namenode
> Reporter: Lei Chang
> Assignee: Plamen Jeliazkov
> Attachments: HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch,
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch,
> HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch,
> HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate.pdf,
> HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf,
> editsStored, editsStored.xml
>
> Original Estimate: 1,344h
> Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the
> underlying storage when a transaction is aborted. HDFS currently does not
> support truncate, a standard POSIX operation that is the reverse of append.
> This forces upper-layer applications to use ugly workarounds (such as
> keeping track of the discarded byte range per file in a separate metadata
> store, and periodically running a vacuum process to rewrite compacted files)
> to overcome this limitation of HDFS.
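For context on the semantics being requested: POSIX truncate shrinks a file in
place to a given length, the inverse of append. A minimal local-filesystem
sketch using Python's os.truncate (purely illustrative; this is ordinary POSIX
behavior, not the HDFS patch itself):

```python
import os
import tempfile

# Illustrative sketch of POSIX truncate semantics (the operation this issue
# asks HDFS to support). This exercises the local filesystem, not HDFS.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"0123456789")   # file is 10 bytes long
os.truncate(path, 4)         # shrink in place to the first 4 bytes
with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(data)                  # b'0123'
```

The hard part for HDFS is not this local call but doing the equivalent safely
across replicated blocks, which is why rollback and snapshot interactions
dominate the discussion above.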
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)