I see it is picking up other parameters from the config, so my hypothesis is that in 0.19 the file system is listening on 8020 by default. Preempting the question: I went back to 0.18.3 and did not change the HDFS port this time, so I am fine for now.

Mark
On Thu, Feb 12, 2009 at 1:40 AM, Rasit OZDAS <[email protected]> wrote:
> Hi, Mark
>
> Try to add an extra property to that file, and try to examine if
> hadoop recognizes it.
> This way you can find out if hadoop uses your configuration file.
>
> 2009/2/10 Jeff Hammerbacher <[email protected]>:
> > Hey Mark,
> >
> > In NameNode.java, the DEFAULT_PORT specified for NameNode RPC is 8020.
> > From my understanding of the code, your fs.default.name setting should
> > have overridden this port to be 9000. It appears your Hadoop
> > installation has not picked up the configuration settings
> > appropriately. You might want to see if you have any Hadoop processes
> > running and terminate them (bin/stop-all.sh should help) and then
> > restart your cluster with the new configuration to see if that helps.
> >
> > Later,
> > Jeff
> >
> > On Mon, Feb 9, 2009 at 9:48 PM, Amar Kamat <[email protected]> wrote:
> >> Mark Kerzner wrote:
> >>>
> >>> Hi,
> >>>
> >>> why is hadoop suddenly telling me
> >>>
> >>> Retrying connect to server: localhost/127.0.0.1:8020
> >>>
> >>> with this configuration
> >>>
> >>> <configuration>
> >>>   <property>
> >>>     <name>fs.default.name</name>
> >>>     <value>hdfs://localhost:9000</value>
> >>>   </property>
> >>>   <property>
> >>>     <name>mapred.job.tracker</name>
> >>>     <value>localhost:9001</value>
> >>
> >> Shouldn't this be
> >>
> >> <value>hdfs://localhost:9001</value>
> >>
> >> Amar
> >>>
> >>>   </property>
> >>>   <property>
> >>>     <name>dfs.replication</name>
> >>>     <value>1</value>
> >>>   </property>
> >>> </configuration>
> >>>
> >>> and both this http://localhost:50070/dfshealth.jsp and this
> >>> http://localhost:50030/jobtracker.jsp links work fine?
> >>>
> >>> Thank you,
> >>> Mark
>
> --
> M. Raşit ÖZDAŞ
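
For reference, a minimal sketch of Rasit's suggestion, assuming a 0.18/0.19-era install with the Hadoop jars and the conf directory on the classpath (the class name ConfProbe and the probe.marker property are made up for this example): load the configuration the same way the daemons do and print what it actually resolves, which shows whether hadoop-site.xml is being read at all.

    import org.apache.hadoop.conf.Configuration;

    public class ConfProbe {
        public static void main(String[] args) {
            // In 0.18/0.19, new Configuration() loads hadoop-default.xml and
            // hadoop-site.xml from the classpath.
            Configuration conf = new Configuration();

            // Should print hdfs://localhost:9000 if the site file is picked up;
            // anything else means this process is falling back to defaults.
            System.out.println("fs.default.name = " + conf.get("fs.default.name"));

            // Rasit's "extra property" test: add a marker property such as
            // probe.marker to hadoop-site.xml and see whether it shows up here.
            System.out.println("probe.marker = " + conf.get("probe.marker", "<not set>"));
        }
    }

If fs.default.name does not come back as hdfs://localhost:9000, the process is not reading the same hadoop-site.xml that was edited, which matches Jeff's explanation of clients falling back to the built-in 8020 NameNode port.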
