[ 
https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-9373:
------------------------
    Attachment: HDFS-9373-002.patch

Thanks to Zhe and Daniel for the review. I have updated the patch according to the 
latest trunk code.
The failed block IDs can be obtained from other log information, so we only need 
to tell the user which block groups contain the corrupt blocks.
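For illustration only, a minimal sketch of the kind of summary message the patch is aiming 
for; the class name, the method, and the failedBlockGroupIds parameter are assumptions for 
this sketch, not names from the actual patch:

{code:java}
// Sketch only: report the affected block groups in one friendly line instead of
// surfacing each streamer's exception to the caller. All names here are hypothetical.
import java.util.List;
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StripedWriteSummary {
  private static final Logger LOG =
      LoggerFactory.getLogger(StripedWriteSummary.class);

  static void logFailedGroups(List<Long> failedBlockGroupIds) {
    if (failedBlockGroupIds.isEmpty()) {
      return; // nothing to report, the write was fully healthy
    }
    String groups = failedBlockGroupIds.stream()
        .map(String::valueOf)
        .collect(Collectors.joining(", "));
    // One friendly line; the individual failed block IDs are already in other logs.
    LOG.warn("Some streamers failed while writing block group(s) [{}]; "
        + "the file was still written successfully.", groups);
  }
}
{code}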


> Show friendly information to the user when the client succeeds in writing with some 
> failed streamers
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-9373
>                 URL: https://issues.apache.org/jira/browse/HDFS-9373
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: erasure-coding
>    Affects Versions: 3.0.0
>            Reporter: Li Bo
>            Assignee: Li Bo
>         Attachments: HDFS-9373-001.patch, HDFS-9373-002.patch
>
>
> When no more than PARITY_NUM streamers fail for a block group, the client 
> may still succeed in writing the data. However, several exceptions are thrown to 
> the user, who then has to check the reasons. The friendlier way is simply to inform 
> the user that some streamers failed while writing a block group. It is not necessary to 
> show the details of the exceptions, because a small number of streamer failures is 
> not fatal to the client write.
> When only DATA_NUM streamers succeed, the block group is at high risk, 
> because the corruption of any block will cause the data of all six blocks to be lost. We 
> should give the user an obvious warning when this occurs. 
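The warning policy described above could be sketched roughly as below; the constants, class, 
and method names are placeholders rather than the ones used in HDFS, and an RS(6,3) layout 
with DATA_NUM = 6 and PARITY_NUM = 3 is assumed:

{code:java}
// Sketch of the risk-based warning policy described in the issue.
// DATA_NUM, PARITY_NUM, and all names below are assumptions for illustration.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BlockGroupHealthCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(BlockGroupHealthCheck.class);
  private static final int DATA_NUM = 6;    // data blocks per group (RS-6-3 assumed)
  private static final int PARITY_NUM = 3;  // parity blocks per group (RS-6-3 assumed)

  static void warnIfAtRisk(long blockGroupId, int failedStreamers) {
    if (failedStreamers == 0) {
      return; // fully healthy write, nothing to report
    }
    int survivingStreamers = DATA_NUM + PARITY_NUM - failedStreamers;
    if (survivingStreamers > DATA_NUM) {
      // Some parity redundancy remains; a brief note is enough.
      LOG.warn("Block group {} was written with {} failed streamer(s).",
          blockGroupId, failedStreamers);
    } else if (survivingStreamers == DATA_NUM) {
      // Only the data blocks survived: losing any one of them loses data.
      LOG.warn("Block group {} has no surviving parity blocks; "
          + "corruption of any remaining block will cause data loss.",
          blockGroupId);
    }
    // Fewer than DATA_NUM survivors means the write itself failed,
    // which is outside the scope of this sketch.
  }
}
{code}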



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
