[
https://issues.apache.org/jira/browse/HDFS-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146254#comment-16146254
]
Andrew Wang commented on HDFS-12349:
------------------------------------
Thanks for working on this, Eddy; nice supportability improvement. I ran the
changed test in TestDFSStripedOSWF and saw this modified log message:
{noformat}
java.io.IOException: File /TestDFSStripedOutputStreamWithFailure/ecfile could
only be replicated to 5 nodes instead of required data units for RS-6-3-1024k
(=6). There are 5 datanode(s) running and no node(s) are excluded in this
operation.
{noformat}
One comment: "replicated" is inaccurate for EC files, so maybe say "written"
to be more generic? I'd also put the 6 closer to the 5, e.g. "could only be
written to 5 of the 6 required locations for RS-6-3-1024k."
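For concreteness, the suggested wording could be assembled along these lines. This is just an illustrative sketch; the variable names and values are hypothetical and not taken from the patch:

```java
public class EcMessageSketch {
  public static void main(String[] args) {
    // Hypothetical inputs; in the real code these would come from the
    // block allocation result and the file's erasure coding policy.
    String src = "/TestDFSStripedOutputStreamWithFailure/ecfile";
    int allocated = 5;                 // locations actually obtained
    int required = 6;                  // data units required by the policy
    String policy = "RS-6-3-1024k";

    // Suggested phrasing: keep the two counts adjacent for readability.
    String msg = String.format(
        "File %s could only be written to %d of the %d required locations for %s.",
        src, allocated, required, policy);
    System.out.println(msg);
  }
}
```

This keeps the "5 of the 6" comparison in one place instead of separating the required count into a trailing parenthetical.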
Could you provide a sample of the other modified log message? One typo: "no
enough" -> "not enough"
> Improve log message when it could not alloc enough blocks for EC
> -----------------------------------------------------------------
>
> Key: HDFS-12349
> URL: https://issues.apache.org/jira/browse/HDFS-12349
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: erasure-coding
> Affects Versions: 3.0.0-alpha3
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Priority: Minor
> Attachments: HDFS-12349.00.patch, HDFS-12349.01.patch
>
>
> When an EC output stream cannot allocate enough blocks for its parity blocks, it
> logs this warning:
> {code}
> if (blocks[i] == null) {
>   LOG.warn("Failed to get block location for parity block, index=" + i);
> }
> {code}
> We should clarify the cause of this warning message.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)