Hi Jeff,

Do you have another suggestion?  I think the problem is that somewhere I have a 
URL that is in the form file:// instead of hdfs://.  I have done some fgrep'ing 
and I see several possibilities, but nothing jumps out at me.

This is most likely a classpath issue: the SecondaryNameNode is not picking up 
whatever you configured in core-site.xml. In Hadoop 1.x the built-in default 
for fs.default.name is file:///, which is exactly the value in your error, so 
the daemon is reading the defaults rather than your file. Please check the 
classpath; you may have a Hadoop conf directory on it that is different from 
the one under the directory where you are starting the daemon.
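
If I remember right, bin/hadoop classpath prints the classpath the wrapper 
script builds, so something like this may help to verify (the paths below are 
a guess based on the prompt in your mails, not something you posted):

hduser@master:~$ echo $HADOOP_CONF_DIR
hduser@master:~$ /usr/local/hadoop-1.0.3/bin/hadoop classpath | tr ':' '\n' | fgrep conf

The tr splits the classpath on ':' so any conf directory entries stand out; 
every core-site.xml under those directories is a candidate for the one the 
daemon is actually reading.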

If the NameNode and DataNode were on the same machine they would not start 
either, so I think they are on different machines.

Please correct me if I am wrong.


Thanks and regards,

Brahma Reddy


________________________________
From: Jeffrey Silverman [jeffsilver...@google.com]
Sent: Thursday, June 28, 2012 2:14 AM
To: hdfs-user@hadoop.apache.org
Subject: Re: Problems starting secondarynamenode in hadoop 1.0.3


Varun,

I tried what you suggested and I am still having the same problem:

hduser@master:/usr/local/hadoop-1.0.3/conf$ tail /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-master.out
Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:198)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:222)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:161)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:129)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:567)
hduser@master:/usr/local/hadoop-1.0.3/conf$


hduser@master:/usr/local/hadoop-1.0.3/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hduser/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <!-- The instructions at
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#conf-site-xml
say that the value should be hdfs://localhost:54310, but I got an answer on
the hadoop mailing list that said to explicitly name the host -->
  <value>hdfs://master:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
hduser@master:/usr/local/hadoop-1.0.3/conf$

Do you have another suggestion?  I think the problem is that somewhere I have a 
URL that is in the form file:// instead of hdfs://.  I have done some fgrep'ing 
and I see several possibilities, but nothing jumps out at me.
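
Roughly the kind of search I mean, for reference (the conf path matches my 
install; the exact invocations are illustrative, not a transcript): -rn makes 
fgrep recurse and print line numbers, so any stray file:// URI or extra 
fs.default.name setting shows up with its file and line:

hduser@master:~$ fgrep -rn 'file://' /usr/local/hadoop-1.0.3/conf
hduser@master:~$ fgrep -rn 'fs.default.name' /usr/local/hadoop-1.0.3/conf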

Thank you


Jeff


On Tue, Jun 26, 2012 at 7:44 PM, varun kumar <varun....@gmail.com> wrote:
Hi Jeff,

Instead of localhost, mention the hostname of the primary NameNode.


On Wed, Jun 27, 2012 at 3:46 AM, Jeffrey Silverman <jeffsilver...@google.com> wrote:
I am working with hadoop for the first time, and I am following the instructions at 
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

I am having problems starting the secondarynamenode daemon.  The error message 
in /var/log/hadoop/hduser/hadoop-hduser-secondarynamenode-master.out  is

Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:198)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:222)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:161)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:129)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:567)



I googled the error message and came across HDFS-2515, which says that I might 
get that error message if the fs.default.name property had an incorrect value, 
but I think my value is okay.

My core-site.xml file is :

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hduser/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>

Does anybody have a suggestion for how to further troubleshoot this problem, 
please?


Thank you,


Jeff Silverman




--
Regards,
Varun Kumar.P

