[
https://issues.apache.org/jira/browse/HDFS-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297535#comment-16297535
]
Xiao Chen commented on HDFS-9023:
---------------------------------
Hi [~surendrasingh],
Thanks for creating the jira and expressing your thoughts.
I ran into the same (lack of) logging issue while debugging some EC stuff, and
wrote a patch locally. Git-blaming led me to HDFS-8946, where I saw your comment
and followed the trail to this jira. I like your idea about the extra info, and
implemented it in HDFS-12726. Would you have time to take a look? I can also
post the patch here and close HDFS-12726 as a dup if you don't mind. Sorry I
didn't find this jira earlier.
> When NN is not able to identify DN for replication, reason behind it can be
> logged
> ----------------------------------------------------------------------------------
>
> Key: HDFS-9023
> URL: https://issues.apache.org/jira/browse/HDFS-9023
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client, namenode
> Affects Versions: 2.7.1
> Reporter: Surendra Singh Lilhore
> Assignee: Surendra Singh Lilhore
> Priority: Critical
>
> When the NN is not able to identify a DN for replication, the reason can be
> logged (at least the critical information about why DNs were not chosen, e.g.
> disk is full). At present the user is expected to enable debug logging.
> For example, the reason for the error below appears to be that all 7 DNs are
> busy with data writes, but neither the client-side nor the NN-side log message
> gives any hint.
> {noformat}
> File /tmp/logs/spark/logs/application_1437051383180_0610/xyz-195_26009.tmp
> could only be replicated to 0 nodes instead of minReplication (=1). There
> are 7 datanode(s) running and no node(s) are excluded in this operation.
> at
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1553)
>
> {noformat}
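> As a workaround today, the reason can be surfaced by raising the log level of
> the block placement policy logger (a sketch assuming the default log4j setup;
> add to the NN's log4j.properties and restart, or set it at runtime):
> {noformat}
> log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG
> {noformat}
> This makes the NN log the per-node rejection reasons when target selection
> fails, but it is debug-only and verbose, hence the improvement proposed here.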
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)