Hi, I'm running a single-node test HDFS cluster (replication factor 1, in case that matters).
Here's my problem:

* "hdfs dfsadmin -report" shows 10G for "DFS Used".
* But "du -sh" on the data directory (/var/hadoop_data/hdfs/datanode) shows 30G.

I also noticed that the dfsUsed cache file under the block pool directory reports ~30G:

    $ pwd
    /var/hadoop_data/hdfs/datanode/current/BP-<XYZ>/current
    $ cat dfsUsed
    32516294656 1474677109034

(32516294656 bytes is roughly 30G; the second field looks like a millisecond timestamp.)

So I'm wondering whether dfsadmin is not picking this value up correctly. The end result is that a script that's supposed to archive old data does nothing, because it thinks there is more free space than there actually is.

Can someone please help me here?

Thanks in advance,
-deepak
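In case it helps anyone reproduce the comparison, here is a minimal sketch of the check my script effectively does: read the first field of the DataNode's dfsUsed cache file and compare it against what du reports for the data directory. The function name and both paths are hypothetical; substitute your own block-pool directory (the BP-<XYZ> one above) and data directory.

    # check_dfsused: print the cached dfsUsed value (bytes) next to what
    # du reports for the data directory. Both arguments are paths on the
    # DataNode host; names here are illustrative, not a real Hadoop tool.
    # Usage: check_dfsused <block-pool-current-dir> <datanode-data-dir>
    check_dfsused() {
      bp_dir=$1
      data_dir=$2
      # The dfsUsed file holds two fields: bytes used, then the timestamp
      # (ms) of when the DataNode last refreshed the cache.
      cached=$(awk '{print $1}' "$bp_dir/dfsUsed")
      # du -sb gives total bytes for the data directory (GNU du).
      actual=$(du -sb "$data_dir" | awk '{print $1}')
      echo "cached dfsUsed: $cached bytes"
      echo "du reports:     $actual bytes"
    }

On my node the two numbers disagree by about 20G, which is the discrepancy I'm asking about.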