One more thing: I ran "hadoop namenode" and it says that the namenode has not 
been formatted! But I ran classification commands a few days ago, and the data 
directory is nearly 60 GB and contains my data.

So why does it say that the namenode has not been formatted? Please see the output:


$ hadoop namenode
14/03/27 17:39:37 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = tiger/192.168.1.5
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_51
************************************************************/
14/03/27 17:39:38 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
14/03/27 17:39:38 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
14/03/27 17:39:38 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
14/03/27 17:39:38 INFO impl.MetricsSystemImpl: NameNode metrics system started
14/03/27 17:39:38 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
14/03/27 17:39:38 INFO impl.MetricsSourceAdapter: MBean for source jvm registered.
14/03/27 17:39:38 INFO impl.MetricsSourceAdapter: MBean for source NameNode registered.
14/03/27 17:39:38 INFO util.GSet: Computing capacity for map BlocksMap
14/03/27 17:39:38 INFO util.GSet: VM type       = 64-bit
14/03/27 17:39:38 INFO util.GSet: 2.0% max memory = 1005060096
14/03/27 17:39:38 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/03/27 17:39:38 INFO util.GSet: recommended=2097152, actual=2097152
14/03/27 17:39:38 INFO namenode.FSNamesystem: fsOwner=hadoop
14/03/27 17:39:38 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/27 17:39:38 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/27 17:39:38 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/03/27 17:39:38 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/27 17:39:38 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
14/03/27 17:39:38 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/03/27 17:39:38 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/03/27 17:39:38 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
14/03/27 17:39:38 ERROR namenode.NameNode: java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

14/03/27 17:39:38 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at tiger/192.168.1.5
************************************************************/
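
For reference: a formatted Hadoop 1.x name directory contains a current/
subdirectory holding VERSION, fsimage, edits and fstime. If dfs.name.dir is not
set in hdfs-site.xml, it defaults to ${hadoop.tmp.dir}/dfs/name, i.e.
/tmp/hadoop-<user>/dfs/name, and many systems clear /tmp on reboot. That would
explain "NameNode is not formatted" even though the data directory still holds
~60 GB. A quick check (the path below assumes the default; adjust it to your
actual dfs.name.dir):

$ ls /tmp/hadoop-hadoop/dfs/name/current
# a formatted 1.x name directory should list: VERSION  edits  fsimage  fstime
# if the directory is missing or empty, the metadata is gone, which is
# exactly what the error above reports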



 
Regards,
Mahmood



On Thursday, March 27, 2014 5:37 PM, Mahmood Naderan <[email protected]> 
wrote:
 
Here is some info. I grepped for the port numbers instead of LISTEN. Please 
note that I am using Hadoop 1.2.1.

$ netstat -an | grep 54310
$ netstat -an | grep 54311
tcp        0      0 ::ffff:127.0.0.1:54311      :::*                        LISTEN
tcp        0      0 ::ffff:127.0.0.1:57479      ::ffff:127.0.0.1:54311      ESTABLISHED
tcp        0      0 ::ffff:127.0.0.1:54311      ::ffff:127.0.0.1:57479      ESTABLISHED
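
So the JobTracker port (54311) is listening, but the grep for the NameNode
port (54310) returned nothing. A narrower check that lists only listening TCP
sockets (assuming a Linux netstat):

$ netstat -ltn | grep 5431
# in a healthy pseudo-distributed setup both 54310 (NameNode RPC) and
# 54311 (JobTracker RPC) should appear; here only 54311 would show up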


$ hadoop dfsadmin -report
14/03/27 17:35:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/03/27 17:35:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)


$ hadoop fsck /
14/03/27 17:36:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.ConnectException: Connection refused
Exception in thread "main" java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
    at sun.net.www.http.HttpClient.New(HttpClient.java:308)
    at sun.net.www.http.HttpClient.New(HttpClient.java:326)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1300)
    at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:142)
    at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:109)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:109)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:183)


 
Regards,
Mahmood



On Thursday, March 27, 2014 5:11 PM, John Lilley <[email protected]> 
wrote:
 
Does “netstat -an | grep LISTEN” show these ports being listened on?
 
Can you check the status of HDFS from the command line, e.g.:
 
hdfs dfsadmin -report
hdfs fsck /
hdfs dfs -ls /
 
Also, check out /var/log/hadoop or /var/log/hdfs for more details.
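 
If those paths don't exist (e.g. a plain tarball install), the daemon logs
usually land under $HADOOP_HOME/logs instead, one file per daemon, for example:
 
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log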
 
john
 
From: Mahmood Naderan [mailto:[email protected]] 
Sent: Thursday, March 27, 2014 5:04 AM
To: [email protected]
Subject: ipc.Client: Retrying connect to server
 
Hi,
I don't know what mistake I made, but now I get this error:


 
  INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
  INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)




I saw the wiki page for that message, and all the other resources state that 
the namenode has not been started yet. However, I have tried 
"stop-all.sh && start-all.sh" multiple times, and in fact I do see Java 
processes belonging to Hadoop.
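
A more precise check than looking for generic Java processes is the JDK's jps
tool; in a pseudo-distributed Hadoop 1.x setup it should list NameNode,
DataNode, SecondaryNameNode, JobTracker and TaskTracker (plus Jps itself):

$ jps

If NameNode is missing from that list, it matches the connection-refused
errors above.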
 
More info:
 
core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>



mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
</property>
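
hdfs-site.xml is not shown above. If dfs.name.dir and dfs.data.dir are not set
there, both default under hadoop.tmp.dir in /tmp, which many systems wipe on
reboot. A sketch that pins them to a persistent location (the paths below are
only illustrative):

hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/dfs/data</value>
</property>

Note that after pointing dfs.name.dir at a new, empty location, the namenode
must be formatted once with "hadoop namenode -format", which creates a fresh,
empty filesystem.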


Any more ideas on that?


 
Regards,
Mahmood
