Abhishek Sakhuja created HDFS-13139:
---------------------------------------

             Summary: Default HDFS as Azure WASB tries rebalancing datanode 
data to HDFS (0% capacity) and fails
                 Key: HDFS-13139
                 URL: https://issues.apache.org/jira/browse/HDFS-13139
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode, fs/azure, hdfs
    Affects Versions: 2.7.3
            Reporter: Abhishek Sakhuja


Created a Hadoop cluster and configured Azure WASB storage as the default HDFS 
location, which means the local HDFS capacity is 0. Default replication is set 
to 1, but when I try to decommission a node, the datanode attempts to rebalance 
about 28 KB of data to another available datanode. Since local HDFS has 0 
capacity, decommissioning fails with the error below:
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not allowed
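
For context, pointing the default filesystem at WASB is typically done in core-site.xml roughly as follows. This is a minimal sketch of the kind of configuration involved; the container and account names are illustrative placeholders, not the actual cluster values:
{code:xml}
<!-- core-site.xml: make Azure WASB the default filesystem.
     Container/account names below are illustrative placeholders. -->
<property>
  <name>fs.defaultFS</name>
  <value>wasb://mycontainer@myaccount.blob.core.windows.net</value>
</property>
<property>
  <name>fs.azure.account.key.myaccount.blob.core.windows.net</name>
  <value>ACCESS_KEY</value>
</property>
{code}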

Querying the cluster metrics shows that the default local HDFS still holds a 
few KB of data (which is what gets rebalanced), even though the reported 
available capacity is 0:
{code:json}
"CapacityRemaining" : 0,
"CapacityTotal" : 0,
"CapacityUsed" : 131072,
"DeadNodes" : "{}",
"DecomNodes" : "{}",
"HeapMemoryMax" : 1060372480,
"HeapMemoryUsed" : 147668152,
"NonDfsUsedSpace" : 0,
"NonHeapMemoryMax" : -1,
"NonHeapMemoryUsed" : 75319744,
"PercentRemaining" : 0.0,
"PercentUsed" : 100.0,
"Safemode" : "",
"StartTime" : 1518241019502,
"TotalFiles" : 1,
"UpgradeFinalized" : true,{code}
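
To illustrate why the decommission is rejected, here is a minimal sketch (not Hadoop's actual implementation; `can_decommission` is a hypothetical helper) of the capacity check implied by the error message, using the metrics above:

```python
# Minimal sketch of the capacity check implied by the error message.
# Not Hadoop's real code; values come from the cluster metrics above.
metrics = {
    "CapacityRemaining": 0,   # no local DFS capacity reported
    "CapacityTotal": 0,
    "CapacityUsed": 131072,   # a few KB still tracked in local HDFS
}

bytes_to_move = 28672  # data the decommissioning datanode must offload


def can_decommission(bytes_needed, remaining_capacity):
    # Decommissioning requires enough free DFS capacity on the
    # remaining datanodes to absorb the moved blocks.
    return remaining_capacity >= bytes_needed


allowed = can_decommission(bytes_to_move, metrics["CapacityRemaining"])
print(allowed)  # prints False: 0 bytes of capacity cannot absorb 28672 bytes
```

With the default filesystem on WASB the local DFS reports 0 capacity, so any nonzero amount of locally tracked data makes this check fail.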



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
