It seems the exception occurs while the NameNode is replaying the edit log (the stack trace points at FSEditLog.loadFSEdits). Make sure the edit log file exists and is intact, or debug the application to see what exactly goes wrong.
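As a first step you could verify that the fsimage/edits files are where the NameNode expects them and take a backup before any recovery attempt. The commands below are only a sketch: the path is a placeholder, so substitute whatever dfs.name.dir points to in your hdfs-site.xml:

    # back up the whole name directory before touching anything (path is an example)
    cp -a /path/to/dfs.name.dir /root/dfs-name-backup-$(date +%F)

    # these are the files the NameNode reads at startup
    ls -l /path/to/dfs.name.dir/current/
    # you should see fsimage, edits, fstime and VERSION here; a missing,
    # zero-length or truncated edits file is a common reason for the load
    # to fail at this point

If the edits file turns out to be damaged and you still have a recent checkpoint on the SecondaryNameNode (under fs.checkpoint.dir), restoring from that checkpoint (for example via the -importCheckpoint startup option) is usually the fastest way to get the NameNode running again, at the cost of the transactions written after the last checkpoint.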
On Thu, Dec 23, 2010 at 2:01 AM, daniel sikar <[email protected]> wrote:
> I can't help but with hindsight - it's advisable to snapshot your
> namenodes as HDFS dies with them.
>
> On 22 December 2010 15:03, Bjoern Schiessle <[email protected]> wrote:
> > Hi,
> >
> > After a Kernel update and a reboot the namenode doesn't start. I run the
> > Cloudera cdh3 Hadoop distribution. I have already searched for a solution.
> > It looks like I'm not the only one with such a problem. Sadly I could only
> > find descriptions of similar problems, but no solutions...
> >
> > This is the error message from the namenode log file:
> >
> > 2010-12-22 16:13:04,830 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting NameNode
> > STARTUP_MSG:   host = pcube/129.69.216.24
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 0.20.2+737
> > STARTUP_MSG:   build = -r 98c55c28258aa6f42250569bd7fa431ac657bdbd; compiled by 'root' on Mon Oct 11 17:21:30 UTC 2010
> > ************************************************************/
> > 2010-12-22 16:13:05,001 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> > 2010-12-22 16:13:05,007 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
> > 2010-12-22 16:13:05,036 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hdfs
> > 2010-12-22 16:13:05,036 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> > 2010-12-22 16:13:05,036 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false
> > 2010-12-22 16:13:05,040 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> > 2010-12-22 16:13:05,335 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
> > 2010-12-22 16:13:05,336 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
> > 2010-12-22 16:13:05,361 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 72
> > 2010-12-22 16:13:05,374 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 3
> > 2010-12-22 16:13:05,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 8822 loaded in 0 seconds.
> > 2010-12-22 16:13:05,377 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1088)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1100)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addNode(FSDirectory.java:1003)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedAddFile(FSDirectory.java:206)
> >         at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:637)
> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1034)
> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:845)
> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:379)
> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:343)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:317)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:214)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:394)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1148)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1157)
> >
> > 2010-12-22 16:13:05,377 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down NameNode at pcube/129.69.216.24
> > ************************************************************/
> >
> > Any idea what could be wrong and how I can get my namenode up running again?
> >
> > Thanks a lot!
> > Björn
> >
>

--
-----李平
