[ https://issues.apache.org/jira/browse/HDFS-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16509492#comment-16509492 ]

Brahma Reddy Battula commented on HDFS-13670:
---------------------------------------------

It looks like the file is still open, so this block will not be replicated 
until the file is closed; you need to close that file. You can list open files 
with "hdfs fsck / -files -blocks -locations -openforwrite".
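
For example, to show only the paths that are open for write (a minimal sketch; 
the grep assumes fsck's usual OPENFORWRITE marker in its per-file output):

    hdfs fsck / -files -blocks -locations -openforwrite | grep OPENFORWRITE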

Which Hadoop version are you using?

If it's Hadoop 2.7+, you can use "hdfs debug recoverLease -path <openfilepath>"; 
for versions below 2.7, use the recoverLease API.
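
For the API route, here is a minimal Java sketch (the class name and argument 
handling are illustrative, not part of HDFS); it calls 
DistributedFileSystem#recoverLease, which returns immediately and may need to 
be retried until it reports true:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class RecoverLease {
        public static void main(String[] args) throws Exception {
            // Path of the open file, e.g. as reported by fsck -openforwrite.
            Path openFile = new Path(args[0]);
            FileSystem fs = FileSystem.get(openFile.toUri(), new Configuration());
            // Assumes the path lives on HDFS, so the cast is safe.
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // true means the lease was recovered and the file is now closed;
            // false means recovery is still in progress, so retry later.
            boolean closed = dfs.recoverLease(openFile);
            System.out.println("file closed: " + closed);
        }
    }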

By the way, the NameNode will trigger lease recovery on its own after one hour 
(the hard lease limit).

You can also simply delete that file if you don't need it.
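
For example (the path is illustrative):

    hdfs dfs -rm /path/to/openfile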

> Decommissioning datanode never ends
> ------------------------------------
>
>                 Key: HDFS-13670
>                 URL: https://issues.apache.org/jira/browse/HDFS-13670
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: zhangzhuo
>            Priority: Major
>
> In my cluster, there is one datanode whose decommissioning never ends. On the 
> web UI, I can see this datanode has one under-replicated block. How can I 
> force this datanode to decommissioned status, or what can I do to make this 
> block satisfy the replication factor?


