[ https://issues.apache.org/jira/browse/HDFS-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17358091#comment-17358091 ]

Ayush Saxena commented on HDFS-15815:
-------------------------------------

[~hadoop_yangyun] recently saw a cluster with this patch. It had only 2 
datanodes, but its logs were filled with
{noformat}
INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not 
enough replicas was chosen. Reason:{NO_REQUIRED_STORAGE_TYPE=1}{noformat}
I guess this may be because the cluster has only a single rack (I suppose). The 
log frequency was too high, the whole of the log was filled with this message, 
and it wasn't carrying any useful information either. Can you reduce the log 
level to debug and also add some identifier to the log?
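
For context, a minimal sketch of what such a guarded, debug-level log with an 
identifier could look like; it uses the SLF4J style Hadoop already uses, but the 
class and the srcPath/reasons parameters are illustrative placeholders, not the 
actual HDFS-15815 code:
{noformat}
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlacementLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(PlacementLoggingSketch.class);

  // Hypothetical helper: 'srcPath' stands in for whatever identifier
  // (file, block, or source node) the placement code has at hand.
  static void logNotEnoughReplicas(String srcPath, Map<String, Integer> reasons) {
    if (LOG.isDebugEnabled()) {
      // Debug level plus an identifier, instead of an unconditional INFO line
      // that repeats the same reason map with no context.
      LOG.debug("Not enough replicas chosen for {}. Reasons: {}", srcPath, reasons);
    }
  }
}
{noformat}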

 

In case you are aware of some configuration mismatch which can lead to this, do 
let me know; otherwise I plan to revert this.
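
Not claiming this is what happened on that cluster, but one hypothetical 
configuration mismatch that could lead to this reason is a storage policy 
requiring a storage type that no datanode volume is tagged with, for example:
{noformat}
# hdfs-site.xml on every datanode tags all volumes as DISK (the default):
#   <property>
#     <name>dfs.datanode.data.dir</name>
#     <value>[DISK]/data/dn</value>
#   </property>
#
# ...while a path is pinned to an SSD-only policy:
hdfs storagepolicies -setStoragePolicy -path /hot-data -policy ALL_SSD

# Block placement under /hot-data then asks for SSD storage that no node
# provides, so the placement policy can end up reporting
# NO_REQUIRED_STORAGE_TYPE for the candidate datanodes (falling back to DISK
# where the policy allows it).
{noformat}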

>  if required storageType are unavailable, log the failed reason during 
> choosing Datanode
> ----------------------------------------------------------------------------------------
>
>                 Key: HDFS-15815
>                 URL: https://issues.apache.org/jira/browse/HDFS-15815
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: block placement
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.3.1, 3.4.0, 3.2.3
>
>         Attachments: HDFS-15815.001.patch, HDFS-15815.002.patch, 
> HDFS-15815.003.patch
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> For better debugging, if the required storageType is unavailable, log the 
> failure reason "NO_REQUIRED_STORAGE_TYPE" when choosing a Datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
