[
https://issues.apache.org/jira/browse/HDFS-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796322#comment-16796322
]
Íñigo Goiri commented on HDFS-14381:
------------------------------------
I'm guessing this issue makes sense when you have multiple blocks.
In a conservative case (64MB blocks), that means a file of at least 64MB.
Not sure how common it is to cat files larger than 64MB (that's a lot of text).
We should target copyToLocal or similar (I'm fine with also supporting cat, but
I think the main target should be copyToLocal).
> Add option to hdfs dfs -cat to ignore corrupt blocks
> ----------------------------------------------------
>
> Key: HDFS-14381
> URL: https://issues.apache.org/jira/browse/HDFS-14381
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: tools
> Affects Versions: 3.2.0
> Reporter: Daniel Templeton
> Priority: Minor
>
> If I have a file in HDFS that contains 100 blocks, and I happen to lose the
> first block (for whatever obscure/unlikely/dumb reason), I can no longer
> access the 99% of the file that's still intact. In the case of some data
> formats (e.g. text), the remaining data may still be useful. It would be
> nice to have a way to extract the remaining data without having to manually
> reassemble the file contents from the block files. Something like
> {{hdfs dfs -cat -ignoreCorrupt <file>}}. It could insert a marker to show
> where the missing blocks are.
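For illustration, here is a minimal client-side sketch of the skipping behavior the description asks for, built on the public {{FileSystem}} API. The {{catIgnoringCorrupt}} helper, the marker text, and the seek-past-the-bad-block strategy are assumptions made for this sketch, not an existing shell option:

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.BlockMissingException;

/**
 * Sketch: cat an HDFS file to an output stream, skipping blocks whose
 * replicas are all unavailable and emitting a marker in their place.
 */
public class SkipCorruptCat {

  // Placeholder marker; a real option could make this configurable.
  private static final byte[] MARKER =
      "\n<<< missing block skipped >>>\n".getBytes(StandardCharsets.UTF_8);

  public static void catIgnoringCorrupt(FileSystem fs, Path file,
      OutputStream out) throws IOException {
    FileStatus status = fs.getFileStatus(file);
    long blockSize = status.getBlockSize();
    long fileLen = status.getLen();
    byte[] buf = new byte[8192];

    try (FSDataInputStream in = fs.open(file)) {
      long pos = 0;
      while (pos < fileLen) {
        try {
          int n = in.read(buf);
          if (n < 0) {
            break; // end of file
          }
          out.write(buf, 0, n);
          pos = in.getPos();
        } catch (BlockMissingException e) {
          // All replicas of the current block are unavailable: emit the
          // marker and seek to the start of the next block.
          out.write(MARKER);
          long nextBlockStart = ((pos / blockSize) + 1) * blockSize;
          if (nextBlockStart >= fileLen) {
            break;
          }
          in.seek(nextBlockStart);
          pos = nextBlockStart;
        }
      }
    }
  }
}
{code}

A real implementation would presumably live in the FsShell commands (cat/copyToLocal) rather than in client code like this, and would also have to consider erasure-coded files, where a "missing block" is not a simple contiguous byte range.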