Hello Again:

I extracted Hadoop and changed the XML files as shown in the tutorial, but now it 
seems it cannot get a connection. I am using PuTTY to SSH into the server, and I 
changed the config files to set it up in pseudo-distributed mode as shown below.

conf/core-site.xml:
<configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
     </property>
</configuration>

conf/hdfs-site.xml:
<configuration>
     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
     <property>
         <name>dfs.http.address</name>
         <value>0.0.0.0:3500</value>
     </property>
</configuration>
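
On a related note, I have not set dfs.name.dir, so I assume the namenode image goes 
to the default location under /tmp. Would adding a property like the following to 
hdfs-site.xml help keep the formatted directory in place? (The path below is only an 
example, not something I have actually tried.)

     <property>
         <!-- example path only: somewhere outside /tmp that survives reboots -->
         <name>dfs.name.dir</name>
         <value>/home/w1153435/hadoop/dfs/name</value>
     </property>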


conf/mapred-site.xml:
<configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>localhost:9001</value>
     </property>
     <property>
         <name>mapred.job.tracker.http.address</name>
         <value>0.0.0.0:3501</value>
     </property>
</configuration>

I tried to format the namenode and started all the processes, but I noticed that 
when I stop them, it says the namenode was not running. When I try to run the 
example jar, it keeps timing out while connecting to 127.0.0.1:port#. I used various 
port numbers and tried replacing localhost with the server's hostname, but it still 
times out. 
It also prints a long address, name.server.ac.uk/161.74.12.97:3000, which seems 
redundant since name.server.ac.uk already resolves to 161.74.12.97. The console 
output is shown below. I was also having problems where it did not want to format 
the namenode.

Is something wrong with how I am connecting to the namenode, and what would cause it 
not to format?
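
For reference, these are roughly the commands I am running from the Hadoop install 
directory (from memory, so the exact order may not be quite right):

# format HDFS (only once, before the first start)
bin/hadoop namenode -format
# start all daemons in pseudo-distributed mode
bin/start-all.sh
# check which daemons actually came up
jps
# run the example jar (the pi job is just one example)
bin/hadoop jar hadoop-0.20.2-examples.jar pi 2 10
# stop everything again
bin/stop-all.sh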


2011-08-11 05:49:13,529 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = name.server.ac.uk/161.74.12.97
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = 
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; 
compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-08-11 05:49:13,663 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: 
Initializing RPC Metrics with hostName=NameNode, port=3000
2011-08-11 05:49:13,669 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
Namenode up at: name.server.ac.uk/161.74.12.97:3000
2011-08-11 05:49:13,672 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
Initializing JVM Metrics with processName=NameNode, sessionId=null
2011-08-11 05:49:13,674 INFO 
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing 
NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2011-08-11 05:49:13,755 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: 
fsOwner=w1153435,users,cluster_login
2011-08-11 05:49:13,755 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-08-11 05:49:13,756 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2011-08-11 05:49:13,768 INFO 
org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: 
Initializing FSNamesystemMetrics using context 
object:org.apache.hadoop.metrics.spi.NullContext
2011-08-11 05:49:13,770 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered 
FSNamesystemStatusMBean
2011-08-11 05:49:13,812 ERROR 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
initialization failed.
java.io.IOException: NameNode is not formatted.
    at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
    at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2011-08-11 05:49:13,813 INFO org.apache.hadoop.ipc.Server: Stopping server on 
3000
2011-08-11 05:49:13,814 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: 
java.io.IOException: NameNode is not formatted.
    at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
    at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

2011-08-11 05:49:13,814 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at name.server.ac.uk/161.74.12.97
************************************************************/

Thank you,
A Df




>________________________________
>From: Harsh J <[email protected]>
>To: A Df <[email protected]>
>Sent: Wednesday, 10 August 2011, 15:13
>Subject: Re: Where is web interface in stand alone operation?
>
>A Df,
>
>On Wed, Aug 10, 2011 at 7:28 PM, A Df <[email protected]> wrote:
>>
>> Hello Harsh:
>> See inline at *
>>
>> ________________________________
>> From: Harsh J <[email protected]>
>> To: [email protected]; A Df <[email protected]>
>> Sent: Wednesday, 10 August 2011, 14:44
>> Subject: Re: Where is web interface in stand alone operation?
>>
>> A Df,
>>
>> The web UIs are a feature of the daemons JobTracker and NameNode. In
>> standalone/'local'/'file:///' modes, these daemons aren't run
>> (actually, no daemon is run at all), and hence there would be no 'web'
>> interface.
>>
>> *ok, but is there any other way to check the performance in this mode such
>> as time to complete etc? I am trying to compare performance between the two.
>> And also for the pseudo mode how would I change the ports for the web
>> interface because I have to connect to a remote server which only allows
>> certain ports to be accessed from the web?
>
>The ports Kai mentioned above are sourced from the configs:
>dfs.http.address (hdfs-site.xml) and mapred.job.tracker.http.address
>(mapred-site.xml). You can change them to bind to a host:port of your
>preference.
>
>
>-- 
>Harsh J
>
>
>
