Hi,

I tried to shut down and restart a namenode, but it throws an NPE on startup.
The last few lines of the log are:
10/12/18 15:39:07 WARN hdfs.StateChange: DIR* FSDirectory.unprotectedSetTimes: failed to setTimes /hbase/inrdb_ris_update_rrc00/fe5090c366e326cf2b123502e2d4bcce/data/1350525083587292896 because source does not exist
10/12/18 15:39:07 WARN hdfs.StateChange: DIR* FSDirectory.unprotectedSetTimes: failed to setTimes /hbase/inrdb_ris_update_rrc00/fe5090c366e326cf2b123502e2d4bcce/meta/4413022065008239343 because source does not exist
10/12/18 15:39:07 DEBUG namenode.FSNamesystem: 0: /hbase/.logs/w2r1.inrdb.ripe.net,60020,1292333234919/w2r1.inrdb.ripe.net%3A60020.1292336839737 numblocks : 0 clientHolder DFSClient_131715208 clientMachine 193.0.23.32
10/12/18 15:39:07 DEBUG hdfs.StateChange: DIR* FSDirectory.unprotectedDelete: failed to remove /hbase/.logs/w2r1.inrdb.ripe.net,60020,1292333234919/w2r1.inrdb.ripe.net%3A60020.1292336839737 because it does not exist
10/12/18 15:39:07 ERROR namenode.NameNode: java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1088)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1100)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addNode(FSDirectory.java:1003)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedAddFile(FSDirectory.java:206)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:637)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1039)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:845)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:379)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:343)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:317)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:214)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:394)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1148)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1157)

10/12/18 15:39:07 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at m1r1.inrdb.ripe.net/193.0.23.51
************************************************************/

It looks like the array populated by INodeDirectoryWithQuota#getExistingPathINodes(...)
contains a null entry somewhere that FSDirectory#addChild(...) does not expect.
I tried to dig a bit deeper, but the cause is not immediately obvious to me from
the code.
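
To illustrate what I think is happening, here is a minimal standalone sketch of
the failure mode as I understand it. This is NOT the actual FSDirectory code:
the class and method names (Dir, resolvePath, addChild) are my own, and it only
mimics the shape of getExistingPathINodes(...) followed by addChild(...). It
shows how replaying an edit log record for a file whose parent directory no
longer exists in the image would leave a null in the resolved inode array and
then NPE when the child is added:

import java.util.HashMap;
import java.util.Map;

// Standalone sketch, not Hadoop code: only mimics the resolve-then-addChild shape.
public class EditReplayNpeSketch {

    static class Dir {
        final Map<String, Dir> children = new HashMap<String, Dir>();
    }

    // Resolve each path component; components that do not exist stay null,
    // analogous to the array filled by getExistingPathINodes(...).
    static Dir[] resolvePath(Dir root, String[] components) {
        Dir[] inodes = new Dir[components.length + 1];
        inodes[0] = root;
        Dir cur = root;
        for (int i = 0; i < components.length && cur != null; i++) {
            cur = cur.children.get(components[i]);
            inodes[i + 1] = cur; // stays null once a directory is missing
        }
        return inodes;
    }

    // Same shape as addChild: attach the new node under the resolved parent.
    // Without a null check on the parent slot, a missing parent means NPE.
    static void addChild(Dir[] inodes, int pos, String name) {
        inodes[pos - 1].children.put(name, new Dir()); // NPE if parent is null
    }

    public static void main(String[] args) {
        Dir root = new Dir();
        // The image no longer contains the parent directory, but the edit log
        // still holds an add for a file underneath it (illustrative path only).
        String[] path = { "hbase", ".logs", "regionserver-dir", "hlog-file" };
        Dir[] inodes = resolvePath(root, path);
        addChild(inodes, path.length, path[path.length - 1]); // throws NPE here
    }
}

If that guess is right, it would also be consistent with the "failed to remove
... because it does not exist" warnings logged just before the NPE.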

I also tried going back to an earlier checkpoint, but the problem is present there as well.

Help is very much appreciated.


Thanks,
Friso
