[
https://issues.apache.org/jira/browse/HDFS-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290087#comment-16290087
]
Lei (Eddy) Xu commented on HDFS-12923:
--
[~elgoiri] yes, sure. {{FSDirConcatOp#verifySrcFiles()}} already checks that the
EC policy IDs of the source and target files match, so a
{{HadoopIllegalArgumentException}} is thrown when they differ.
{code:title=FSDirConcatOp.java}
private static INodeFile[] verifySrcFiles(...) {
  // Inside the loop over the source paths: each source INode's EC policy ID
  // must match the concat target's, otherwise the request is rejected.
  if (srcINodeFile.getErasureCodingPolicyID() !=
      targetINode.getErasureCodingPolicyID()) {
    throw new HadoopIllegalArgumentException("Source file " + src
        + " and target file " + targetIIP.getPath()
        + " have different erasure coding policy");
  }
}
{code}
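For reference, a minimal client-side sketch (the paths and cluster layout are
hypothetical) of how this check surfaces to a caller of
{{DistributedFileSystem#concat}}: the NameNode-side
{{HadoopIllegalArgumentException}} arrives at the client wrapped in a
{{RemoteException}}, which is an {{IOException}}.
{code:title=ConcatEcCheckExample.java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ConcatEcCheckExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    // Hypothetical layout: the target sits under a directory with an EC
    // policy, while the source is a plain replicated file, so their EC
    // policy IDs differ.
    Path target = new Path("/ec/target");
    Path[] srcs = { new Path("/replicated/src") };

    try {
      dfs.concat(target, srcs);
    } catch (IOException e) {
      // Expected: the verifySrcFiles() check rejects the request.
      System.err.println("concat rejected: " + e.getMessage());
    }
  }
}
{code}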
> DFS.concat should throw an exception if files have different EC policies.
> ---
>
> Key: HDFS-12923
> URL: https://issues.apache.org/jira/browse/HDFS-12923
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: erasure-coding
> Affects Versions: 3.0.0
> Reporter: Lei (Eddy) Xu
> Priority: Critical
> Fix For: 3.0.0
>
>
> {{DFS#concat}} appends the blocks of multiple source files to a single target
> file. However, if these files have different EC policies, or mix replicated
> and EC files, the resulting file would be problematic to read, because the EC
> codec is defined on the INode rather than on each block.
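A short sketch illustrating the point about the codec living on the INode
(directory and file names here are hypothetical): the EC policy is a per-file
attribute, so a single file cannot mix codecs across its blocks.
{code:title=PerInodeEcPolicy.java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class PerInodeEcPolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    // Files created under /ec inherit the directory's EC policy; the policy
    // is stored on the file's INode, not on individual blocks.
    dfs.mkdirs(new Path("/ec"));
    dfs.setErasureCodingPolicy(new Path("/ec"), "RS-6-3-1024k");

    // For a replicated file this returns null; for an EC file it returns the
    // single policy that applies to every block of that file.
    ErasureCodingPolicy policy =
        dfs.getErasureCodingPolicy(new Path("/ec/some-file"));
    System.out.println(policy == null ? "replicated" : policy.getName());
  }
}
{code}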