[
https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042537#comment-15042537
]
Hadoop QA commented on HDFS-9373:
---------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} | {color:red} HDFS-9373 does not apply to trunk. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12771711/HDFS-9373-001.patch |
| JIRA Issue | HDFS-9373 |
| Powered by | Apache Yetus http://yetus.apache.org |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13780/console |
This message was automatically generated.
> Show friendly information to user when client succeeds the writing with some
> failed streamers
> ---------------------------------------------------------------------------------------------
>
> Key: HDFS-9373
> URL: https://issues.apache.org/jira/browse/HDFS-9373
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: erasure-coding
> Affects Versions: 3.0.0
> Reporter: Li Bo
> Assignee: Li Bo
> Attachments: HDFS-9373-001.patch
>
>
> When no more than PARITY_NUM streamers fail for a block group, the client
> may still write the data successfully. But several exceptions are thrown to
> the user, who then has to check the reasons. The friendlier approach is simply
> to inform the user that some streamers failed while writing a block group.
> It's not necessary to show the details of the exceptions, because a small
> number of streamer failures is not fatal to the client's write.
> When only DATA_NUM streamers succeed, the block group is at high risk,
> because the corruption of any one block will cause the data of all six data
> blocks to be lost. We should give the user an obvious warning when this occurs.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)