Xiao Chen created HDFS-12933:
--------------------------------
Summary: Improve logging when DFSStripedOutputStream failed to
read some blocks
Key: HDFS-12933
URL: https://issues.apache.org/jira/browse/HDFS-12933
Project: Hadoop HDFS
Issue Type: Improvement
Components: erasure-coding
Reporter: Xiao Chen
Priority: Minor
Currently, if there are fewer DataNodes than the erasure coding policy requires (# of
data blocks + # of parity blocks), the client sees this:
{noformat}
17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Cannot allocate parity
block(index=13, policy=RS-10-4-1024k). Not enough datanodes? Exclude nodes=[]
17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Block group <1> has 1
corrupt blocks.
{noformat}
The 1st line is good. The 2nd line may be confusing to end users, since the
blocks were never allocated rather than corrupted. We should investigate the
error and make the message more general / accurate - maybe something like
'failed to read x blocks'.
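To illustrate the suggested wording, here is a minimal sketch (not the actual
DFSStripedOutputStream code; the class and method names are hypothetical) of
how the client could report how many blocks it failed to allocate, instead of
calling them corrupt:

```java
// Hypothetical sketch, not Hadoop's real implementation: build a clearer
// warning when fewer DataNodes are available than the EC policy needs.
public class EcAllocationWarning {

  // e.g. RS-10-4-1024k => 10 data blocks + 4 parity blocks, 14 nodes needed
  static String buildWarning(String policy, int dataBlocks, int parityBlocks,
                             int availableNodes) {
    int required = dataBlocks + parityBlocks;
    int missing = Math.max(0, required - availableNodes);
    if (missing == 0) {
      return null; // enough DataNodes, nothing to warn about
    }
    // Suggested phrasing from this issue: say how many blocks failed,
    // rather than the misleading "Block group <1> has 1 corrupt blocks."
    return String.format(
        "Failed to allocate %d block(s) (policy=%s): only %d DataNode(s) "
            + "available, %d required",
        missing, policy, availableNodes, required);
  }

  public static void main(String[] args) {
    // 13 DataNodes for an RS-10-4 policy => 1 block cannot be allocated
    System.out.println(buildWarning("RS-10-4-1024k", 10, 4, 13));
  }
}
```

With 13 DataNodes and RS-10-4 (14 blocks needed), this prints a message about
1 unallocated block, matching the first warning in the log excerpt above.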