[ https://issues.apache.org/jira/browse/HDFS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035138#comment-13035138 ]

Ted Yu commented on HDFS-1332:
------------------------------

This time the log is clearer:
{code}
2011-05-17 16:20:28,920 INFO  ipc.Server (Server.java:run(1434)) - IPC Server 
handler 2 on 64506, call addBlock(/filestatus.dat, 
DFSClient_NONMAPREDUCE_-2042930756_1, null, 
[Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@27cc7f4b), rpc version=1, 
client version=67, methodsFingerPrint=-1645111634 from 127.0.0.1:64512: error: 
java.io.IOException: File /filestatus.dat could only be replicated to 0 nodes, 
instead of 1. There are 1 datanode(s) running but 1 node(s) are excluded in 
this operation.
{code}
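
For context, a minimal sketch of the bookkeeping the description below asks for (class and method names are illustrative only, not taken from HDFS-1332-concise.patch): record a reason for each node the placement policy rejects, then append those reasons to the "Not able to place enough replicas" warning:
{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class PlacementExclusionLog {
  // Maps each rejected datanode to the reason it was not a "good target".
  // Hypothetical helper illustrating the idea; not the actual patch code.
  private final Map<String, String> reasons = new LinkedHashMap<String, String>();

  public void exclude(String datanode, String reason) {
    reasons.put(datanode, reason);
  }

  // Builds the placement-failure warning, listing why each node was refused.
  public String buildWarning(int chosen, int required) {
    StringBuilder sb = new StringBuilder("Not able to place enough replicas, still in need of ")
        .append(required - chosen).append(" to reach ").append(required).append(".");
    for (Map.Entry<String, String> e : reasons.entrySet()) {
      sb.append("\n  ").append(e.getKey()).append(": ").append(e.getValue());
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    PlacementExclusionLog log = new PlacementExclusionLog();
    // Hypothetical reason string for the single-datanode case in the log above.
    log.exclude("127.0.0.1:64510", "node is in the excludedNodes list passed by the client");
    System.out.println(log.buildWarning(0, 1));
  }
}
{code}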

> When unable to place replicas, BlockPlacementPolicy should log reasons nodes 
> were excluded
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1332
>                 URL: https://issues.apache.org/jira/browse/HDFS-1332
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: Todd Lipcon
>            Assignee: Ted Yu
>            Priority: Minor
>              Labels: newbie
>             Fix For: 0.23.0
>
>         Attachments: HDFS-1332-concise.patch
>
>
> Whenever the block placement policy determines that a node is not a "good 
> target", it could add the reason for exclusion to a list, and then when we log 
> "Not able to place enough replicas" we could say why each node was refused. 
> This would help new users who are having issues in pseudo-distributed mode 
> (e.g. because their data dir is on /tmp and /tmp is full). Right now it's very 
> difficult to figure out the issue.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
