[ https://issues.apache.org/jira/browse/AMBARI-20785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chen He updated AMBARI-20785:
-----------------------------
    Priority: Critical  (was: Major)

> Ambari reports datanode decommissioned but datanode is still decommissioning
> -----------------------------------------------------------------------------
>
>                 Key: AMBARI-20785
>                 URL: https://issues.apache.org/jira/browse/AMBARI-20785
>             Project: Ambari
>          Issue Type: Bug
>          Components: infra
>    Affects Versions: 2.4.0
>            Reporter: Chen He
>            Priority: Critical
>
> If we decommission an HDFS datanode through an Ambari REST API call, Ambari 
> creates a new request at http://ambari_server:8080/api/v1/clusters/cluster_name/requests/
> However, the request reports "COMPLETED" as soon as the given datanode has 
> been added to dfs.exclude; it does not block until the datanode is fully 
> decommissioned. It should block until decommissioning is complete. 
> org.apache.ambari.groovy.client.decommissionDataNode() decommissions 
> datanodes the same way. This can cause data loss if the cluster shuts the 
> node down immediately after the decommission request returns, as shown in 
> the sketch below. 
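> A minimal sketch of the blocking behavior we expect, polling the NameNode 
> through the stock Hadoop client API (DistributedFileSystem.getDataNodeStats() 
> and DatanodeInfo.isDecommissionInProgress()); the target hostname argument 
> and the 10-second poll interval are illustrative assumptions, not part of 
> the current Ambari code:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
> import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
> import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
>
> public class WaitForDecommission {
>     public static void main(String[] args) throws Exception {
>         String targetHost = args[0]; // datanode being decommissioned (illustrative)
>         Configuration conf = new Configuration(); // reads core-site.xml/hdfs-site.xml
>         try (DistributedFileSystem dfs =
>                 (DistributedFileSystem) FileSystem.get(conf)) {
>             boolean inProgress = true;
>             while (inProgress) {
>                 inProgress = false;
>                 // Ask the NameNode for the admin state of every datanode.
>                 for (DatanodeInfo dn : dfs.getDataNodeStats(DatanodeReportType.ALL)) {
>                     if (dn.getHostName().equals(targetHost)
>                             && dn.isDecommissionInProgress()) {
>                         inProgress = true; // blocks are still being re-replicated
>                     }
>                 }
>                 if (inProgress) {
>                     Thread.sleep(10_000L); // assumed poll interval
>                 }
>             }
>             // Only now is it safe to mark the request COMPLETED and shut the node down.
>         }
>     }
> }
> {code}
> The request handler should only transition to COMPLETED after a wait like 
> this observes the target node leave the "Decommission In Progress" state.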



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
