What OS are you starting this on? Are you able to run the command "df -k /tmp/hadoop-hadoop/dfs/name/" as user "hadoop"?
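For context: the FATAL in the quoted log below is raised by the NameNode's resource checker, which probes each configured name directory by running df through org.apache.hadoop.fs.DF (see the NameNodeResourceChecker and DF frames in the trace). If df cannot be run successfully against that path by the user the NameNode runs as, startup aborts with exactly this Shell$ExitCodeException. A rough, untested Java sketch of an equivalent standalone probe follows; the class name and the 3000 ms refresh interval are just illustrative:

import java.io.File;

import org.apache.hadoop.fs.DF;

public class NameDirDfProbe {
  public static void main(String[] args) throws Exception {
    // Same directory the NameNode's resource checker inspects at startup.
    // Adjust the path if dfs.namenode.name.dir points elsewhere.
    File nameDir = new File("/tmp/hadoop-hadoop/dfs/name");

    // DF resolves filesystem and mount info by invoking the platform's
    // df command, which is the call that fails in the stack trace.
    DF df = new DF(nameDir, 3000L); // 3000 ms refresh interval (arbitrary)

    System.out.println("filesystem: " + df.getFilesystem());
    System.out.println("mount:      " + df.getMount());
    System.out.println("available:  " + df.getAvailable() + " bytes");
  }
}

If this probe (or the plain df -k command above) fails as user "hadoop", the NameNode failure is expected, so that is the first thing to rule out.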
On Wed, Aug 28, 2013 at 12:53 AM, orahad bigdata <[email protected]> wrote:
> Hi All,
>
> I'm new to Hadoop administration; can someone please help me?
>
> Hadoop version: 2.0.5-alpha, using QJM
>
> I'm getting the below error messages while starting Hadoop HDFS using
> 'start-dfs.sh':
>
> 2013-01-23 03:25:43,208 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 121 loaded in 0 seconds.
> 2013-01-23 03:25:43,209 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /tmp/hadoop-hadoop/dfs/name/current/fsimage_0000000000000000000
> 2013-01-23 03:25:43,217 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
> 2013-01-23 03:25:43,217 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 1692 msecs
> 2013-01-23 03:25:43,552 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
> 2013-01-23 03:25:43,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
> 2013-01-23 03:25:43,699 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
> 2013-01-23 03:25:43,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
> 2013-01-23 03:25:43,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
> 2013-01-23 03:25:43,824 INFO org.apache.hadoop.ipc.Server: Stopping server on 8020
> 2013-01-23 03:25:43,829 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
> 2013-01-23 03:25:43,831 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
> 2013-01-23 03:25:43,832 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
> 2013-01-23 03:25:43,835 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> org.apache.hadoop.util.Shell$ExitCodeException:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>         at org.apache.hadoop.util.Shell.run(Shell.java:129)
>         at org.apache.hadoop.fs.DF.getFilesystem(DF.java:108)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:683)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:484)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:448)
>
> Thanks

--
Harsh J
