[
https://issues.apache.org/jira/browse/HDFS-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796338#comment-16796338
]
Daniel Templeton commented on HDFS-14381:
-----------------------------------------
That's a really good point. I'll update the description accordingly.
> Add option to hdfs dfs -cat to ignore corrupt blocks
> ----------------------------------------------------
>
> Key: HDFS-14381
> URL: https://issues.apache.org/jira/browse/HDFS-14381
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: tools
> Affects Versions: 3.2.0
> Reporter: Daniel Templeton
> Priority: Minor
>
> If I have a file in HDFS that contains 100 blocks, and I happen to lose the
> first block (for whatever obscure/unlikely/dumb reason), I can no longer
> read the file at all, even though 99% of its data is still intact. For some
> data formats (e.g. text), the remaining data may still be useful. It would
> be nice to have a way to extract the remaining data without having to
> manually reassemble the file contents from the block files. Something like
> {{hdfs dfs -cat -ignoreCorrupt <file>}}. It could insert a marker to show
> where the missing blocks are.
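For illustration only, here is a rough client-side sketch of the behavior described above, not a patch for this issue. It cats a file and, when a read fails with {{BlockMissingException}}, prints a marker and seeks to the start of the next block. The class name {{CatIgnoreCorrupt}} and the marker text are made up for the example, and it assumes the input stream allows seeking past the damaged block.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.BlockMissingException;

public class CatIgnoreCorrupt {
  public static void main(String[] args) throws IOException {
    Path path = new Path(args[0]);
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(path.toUri(), conf);
         FSDataInputStream in = fs.open(path)) {
      FileStatus status = fs.getFileStatus(path);
      long blockSize = status.getBlockSize();
      long fileLen = status.getLen();
      byte[] buf = new byte[64 * 1024];

      while (in.getPos() < fileLen) {
        try {
          int n = in.read(buf, 0, buf.length);
          if (n < 0) {
            break;
          }
          System.out.write(buf, 0, n);
        } catch (BlockMissingException e) {
          if (blockSize <= 0) {
            throw e;  // can't compute a block boundary, give up
          }
          // The read failed somewhere inside the current block; emit a
          // marker in the output and jump to the start of the next block.
          long failedAt = in.getPos();
          long nextBlock = ((failedAt / blockSize) + 1) * blockSize;
          System.out.println("<<< missing block at offset " + failedAt + " >>>");
          if (nextBlock >= fileLen) {
            break;
          }
          in.seek(nextBlock);
        }
      }
      System.out.flush();
    }
  }
}
{code}

A real implementation would presumably live in the FsShell {{-cat}} command behind the proposed flag rather than in a standalone class like this.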