[ https://issues.apache.org/jira/browse/HDFS-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lei (Eddy) Xu updated HDFS-12349:
---------------------------------
    Attachment: HDFS-12349.02.patch

Hi, [~andrew.wang]

Thanks for the review. I have addressed your comments accordingly.

The following are a few of the message changes (a rough code sketch follows the list):

* {{"Failed to get block location for parity block, index=3"}}  to 
{{"Cannot allocate parity block(index=3, policy=RS-6-3-1024k), Not enough 
datanodes? Exclude nodes=None)"}}

* {{"Failed to get following block, i=3"}}  to {{"Failed to allocate parity 
block, index=3"}}

* For a 3x replicated file: {{"File /foo could only be written to 1 of the 2 
minReplication nodes. There are 5 datanode(s) running and no node(s) are 
excluded in this operation."}}

* For an EC file: {{"File /bar could only be written to 5 of the 6 required 
nodes for RS-6-3-1024k. There are 5 datanode(s) running and no node(s) are 
excluded in this operation."}}
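
For context, here is a rough sketch of how these messages might be produced. 
This is illustrative only, not the exact patch code; {{ecPolicy}}, 
{{excludedNodes}}, and the other helper identifiers are assumptions:

{code}
// Illustrative sketch only, not the exact patch code.
// 1) The clarified parity-block warning, assuming SLF4J parameterized
//    logging and that ecPolicy/excludedNodes are in scope:
if (blocks[i] == null) {
  LOG.warn("Cannot allocate parity block(index={}, policy={}). "
      + "Not enough datanodes? Exclude nodes={}",
      i, ecPolicy.getName(), excludedNodes);
}

// 2) One way to branch the "could only be written to" message between
//    replicated and EC files (all identifiers here are illustrative):
final String message = isStriped
    ? String.format("File %s could only be written to %d of the %d required "
        + "nodes for %s. There are %d datanode(s) running and %s node(s) "
        + "are excluded in this operation.",
        src, chosenCount, requiredCount, ecPolicy.getName(),
        liveDataNodeCount, excludedDescription)
    : String.format("File %s could only be written to %d of the %d "
        + "minReplication nodes. There are %d datanode(s) running and %s "
        + "node(s) are excluded in this operation.",
        src, chosenCount, minReplication, liveDataNodeCount,
        excludedDescription);
{code}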



> Improve log message when it could not alloc enough blocks for EC 
> -----------------------------------------------------------------
>
>                 Key: HDFS-12349
>                 URL: https://issues.apache.org/jira/browse/HDFS-12349
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: erasure-coding
>    Affects Versions: 3.0.0-alpha3
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>            Priority: Minor
>         Attachments: HDFS-12349.00.patch, HDFS-12349.01.patch, 
> HDFS-12349.02.patch
>
>
> When an EC output stream cannot allocate enough blocks for parity blocks, it 
> logs the following warning:
> {code}
> if (blocks[i] == null) {
>   LOG.warn("Failed to get block location for parity block, index=" + i);
> }
> {code}
> We should clarify the cause of this warning message.


