I am seeing the following exception when I attempt to start my Hadoop namenode.
This namenode had been working, and the filesystem holds a few TB of files.
Yesterday we attempted to upgrade it to a more recent version, and at the time
everything appeared to be working.

2009-03-20 10:14:02,667 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = proxy-1.t2.ucsd.edu/169.228.130.63
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.19.2-dev
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/tags/release-0.19.1 -r 748415; compiled by 'sl1-user' on Tue Mar 17 10:49:06 CDT 2009
************************************************************/
2009-03-20 10:14:02,772 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
2009-03-20 10:14:02,776 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: proxy-1.t2.ucsd.edu/169.228.130.63:9000
2009-03-20 10:14:02,778 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=Hadoop
2009-03-20 10:14:02,783 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-03-20 10:14:02,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
2009-03-20 10:14:02,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2009-03-20 10:14:02,832 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2009-03-20 10:14:02,839 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-03-20 10:14:02,840 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2009-03-20 10:14:02,866 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1435
2009-03-20 10:14:03,061 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 3
2009-03-20 10:14:03,065 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 361956 loaded in 0 seconds.
2009-03-20 10:14:03,074 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1006)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addNode(FSDirectory.java:982)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedAddFile(FSDirectory.java:194)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:613)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:973)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:793)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:352)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:309)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:288)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)

2009-03-20 10:14:03,075 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at proxy-1.t2.ucsd.edu/169.228.130.63
************************************************************/

Could this be some sort of corruption on loading the image? From the log, the
image file itself appears to load fine (361956 bytes, "loaded in 0 seconds"),
and the NullPointerException happens while replaying the edits log
(FSEditLog.loadFSEdits), so perhaps it is the edits file that is corrupted?
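
In case it helps, here is the sanity check I was thinking of running on the
dfs.name.dir storage directory before attempting anything destructive. The
helper name and paths are just illustrative (the current/fsimage and
current/edits layout is the 0.19-era default); I would back up the whole
directory first either way:

```shell
#!/bin/sh
# check_edits: quick, read-only sanity checks on a namenode storage
# directory (the value of dfs.name.dir), before attempting any recovery.
check_edits() {
    name_dir=$1
    # The 0.19-era layout keeps fsimage, edits, fstime and VERSION here.
    ls -l "$name_dir/current/"
    # A run of NUL bytes at the tail of edits is a common sign of a
    # truncated or half-written edits log.
    tail -c 64 "$name_dir/current/edits" | od -c | tail
}

# Back up the whole directory before touching anything, e.g.:
#   cp -a /path/to/name /path/to/name.bak
# then inspect it:
#   check_edits /path/to/name
```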

Terrence
