How do I un-InconsistentFSStateException my development environment?

I did an ASAN build, then deleted all of the invalid CMake files (*). When
I did another build and tried to run testdata/bin/run-all.sh, the datanodes
wouldn't start and I got the following error message:

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
Directory /home/jbapple/Impala/testdata/cluster/cdh5/node-2/data/dfs/dn is
in an inconsistent state: Can't format the storage directory because the
current/ directory is not empty.
        at
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:480)
        at
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:585)
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:279)
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:418)
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:397)
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:575)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1486)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1446)
        at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:313)
        at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:219)
        at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
        at java.lang.Thread.run(Thread.java:745)
2016-10-06 15:03:43,300 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
Block pool <registering> (Datanode Uuid unassigned) service to localhost/
127.0.0.1:20500. Exiting.
java.io.IOException: All specified directories are failed to load.
        at
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:576)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1486)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1446)
        at
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:313)
        at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:219)
        at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
        at java.lang.Thread.run(Thread.java:745)


That message is
from ./testdata/cluster/cdh5/node-2/var/log/hadoop-hdfs/hdfs-datanode.log.

Should I clobber all of the stuff
in ./testdata/cluster/cdh5/node-2/data/dfs/dn/current?
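If clobbering is the right fix, here is a minimal sketch of what I mean, assuming the per-node layout shown in the error message above (node paths are taken from the log, not from any official cleanup script):

```shell
#!/bin/sh
# Hedged sketch, not an official Impala helper: remove each DataNode's
# current/ storage directory so HDFS can format it cleanly on the next
# cluster start. The node-* paths are assumed from the error message.
for node in testdata/cluster/cdh5/node-*; do
  rm -rf "$node/data/dfs/dn/current"
done
```

This lines up with the error text: the DataNode wants to format its storage directory but refuses while current/ is non-empty.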

(*) find -iname '*cmake*' -not -name CMakeLists.txt | grep -v -e cmake_module | grep -v -e thirdparty | xargs rm -Rf
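For anyone repeating this cleanup, a dry-run variant of the same pipeline (my filters unchanged, with `xargs rm -Rf` dropped so it only prints the candidates) can be used to double-check what would be deleted:

```shell
#!/bin/sh
# Dry run of the CMake cleanup from the footnote: list what the real
# command would remove instead of removing it. Same name filters as the
# destructive version; only the trailing `xargs rm -Rf` is omitted.
find . -iname '*cmake*' -not -name CMakeLists.txt \
  | grep -v -e cmake_module \
  | grep -v -e thirdparty
```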
