I have configured the XML configuration files correctly. I tried to repeat
the installation process with a different user id, and now I get the
following when I execute start-all.sh:



Warning: $HADOOP_HOME is deprecated.

chown: changing ownership of `/usr/libexec/../logs': Operation not permitted
starting namenode, logging to
/usr/libexec/../logs/hadoop-maddy-namenode-mandeep-System-Product-Name.out
/usr/sbin/hadoop-daemon.sh: line 135:
/usr/libexec/../logs/hadoop-maddy-namenode-mandeep-System-Product-Name.out:
Permission denied
head: cannot open
`/usr/libexec/../logs/hadoop-maddy-namenode-mandeep-System-Product-Name.out'
for reading: No such file or directory
localhost: chown: changing ownership of `/usr/libexec/../logs': Operation
not permitted
localhost: starting datanode, logging to
/usr/libexec/../logs/hadoop-maddy-datanode-mandeep-System-Product-Name.out
localhost: /usr/sbin/hadoop-daemon.sh: line 135:
/usr/libexec/../logs/hadoop-maddy-datanode-mandeep-System-Product-Name.out:
Permission denied
localhost: head: cannot open
`/usr/libexec/../logs/hadoop-maddy-datanode-mandeep-System-Product-Name.out'
for reading: No such file or directory
localhost: chown: changing ownership of `/usr/libexec/../logs': Operation
not permitted
localhost: starting secondarynamenode, logging to
/usr/libexec/../logs/hadoop-maddy-secondarynamenode-mandeep-System-Product-Name.out
localhost: /usr/sbin/hadoop-daemon.sh: line 135:
/usr/libexec/../logs/hadoop-maddy-secondarynamenode-mandeep-System-Product-Name.out:
Permission denied
localhost: head: cannot open
`/usr/libexec/../logs/hadoop-maddy-secondarynamenode-mandeep-System-Product-Name.out'
for reading: No such file or directory
chown: changing ownership of `/usr/libexec/../logs': Operation not permitted
starting jobtracker, logging to
/usr/libexec/../logs/hadoop-maddy-jobtracker-mandeep-System-Product-Name.out
/usr/sbin/hadoop-daemon.sh: line 135:
/usr/libexec/../logs/hadoop-maddy-jobtracker-mandeep-System-Product-Name.out:
Permission denied
head: cannot open
`/usr/libexec/../logs/hadoop-maddy-jobtracker-mandeep-System-Product-Name.out'
for reading: No such file or directory
localhost: chown: changing ownership of `/usr/libexec/../logs': Operation
not permitted
localhost: starting tasktracker, logging to
/usr/libexec/../logs/hadoop-maddy-tasktracker-mandeep-System-Product-Name.out
localhost: /usr/sbin/hadoop-daemon.sh: line 135:
/usr/libexec/../logs/hadoop-maddy-tasktracker-mandeep-System-Product-Name.out:
Permission denied
localhost: head: cannot open
`/usr/libexec/../logs/hadoop-maddy-tasktracker-mandeep-System-Product-Name.out'
for reading: No such file or directory
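
From the output above, it looks like the new user cannot write to the log
directory that hadoop-daemon.sh resolves to (/usr/libexec/../logs). Here is
what I was planning to try, in case that is the right direction; the user
and group names and the alternative path below are just guesses from my own
setup, not something the tutorial prescribes:

# Option 1: give the new user ownership of the existing log directory
# (assuming the daemons run as user 'maddy' and a 'hadoop' group exists)
sudo chown -R maddy:hadoop /usr/libexec/../logs

# Option 2: point Hadoop at a log directory the new user can write to,
# e.g. in conf/hadoop-env.sh (the path here is only an example)
export HADOOP_LOG_DIR=/home/maddy/hadoop-logs

Does one of these look like the right fix, or am I missing something?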




Below are the contents of my configuration files:


core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>


</configuration>
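
Since hadoop.tmp.dir points at /app/hadoop/tmp and I re-ran the installation
as a different user, I suspect that directory also has to be re-owned. This
is the sketch I had in mind, adapted from the tutorial; the user and group
names are assumptions for my setup:

# let the new user own the Hadoop temp/data directory
sudo mkdir -p /app/hadoop/tmp
sudo chown maddy:hadoop /app/hadoop/tmp
sudo chmod 750 /app/hadoop/tmp
# after changing the owner, I assume the namenode has to be formatted again
hadoop namenode -format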






mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

</configuration>







hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is
created.
  The default is used if replication is not specified in create time.
  </description>
</property>



</configuration>
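
Regarding the earlier "Does not contain a valid host:port authority:
file:///" error in the datanode log quoted below, my understanding is that
it means fs.default.name was never picked up, i.e. the daemons were not
reading this core-site.xml at all. This is how I am checking which conf
directory the scripts actually use; the fallback locations are only my guess
for a packaged install:

# where do the scripts think the configuration lives?
echo $HADOOP_CONF_DIR
# if it is empty, I assume they fall back to something like
# $HADOOP_HOME/conf or /etc/hadoop
ls -l $HADOOP_CONF_DIR/core-site.xml $HADOOP_CONF_DIR/hdfs-site.xml \
      $HADOOP_CONF_DIR/mapred-site.xml
# confirm the value that should override the file:/// default
grep -A1 fs.default.name $HADOOP_CONF_DIR/core-site.xml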












On Sun, Apr 29, 2012 at 4:05 AM, Marcos Ortiz <mlor...@uci.cu> wrote:

> Look here
> http://search-hadoop.com/m/NRMV72pWYVM1/ERROR+org.apache.hadoop.hdfs.server.datanode.DataNode%5C%3A+java.lang.IllegalArgumentException%5C%3A+Does+not+contain+a+valid+host%5C%3Aport+authority%5C%3A+/v=threaded
>
> To solve this, you can check your config files: core-site.xml,
> mapred-site.xml and hdfs-site.xml
>
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#conf-site-xml
>
> Of course, you can also check the official documentation for 1.0.1:
>
> http://hadoop.apache.org/common/docs/r1.0.1/single_node_setup.html
>
> Regards
>
>
> On 4/28/2012 3:19 PM, maddy gulati wrote:
>
>> Folks,
>>
>> I am trying to set up a single-node cluster for Hadoop (version 1.0.1) on
>> Ubuntu 11.04. After configuring everything, I could not get all the
>> components to start.
>>
>> My installation directory for hadoop is /usr/local/hadoop
>> I followed this tutorial for the setup:
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>> Below is the snapshot of my log file:
>>
>> mandeep@mandeep-System-Product-Name:/var/log/hadoop/mandeep$ cat
>> hadoop-mandeep-datanode-mandeep-System-Product-Name.log
>> 2012-04-29 01:29:37,240 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /****************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = mandeep-System-Product-Name/127.0.1.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 1.0.1
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>> 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
>> ****************************************************************/
>> 2012-04-29 01:29:37,398 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
>> loaded properties from hadoop-metrics2.properties
>> 2012-04-29 01:29:37,446 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>> MetricsSystem,sub=Stats registered.
>> 2012-04-29 01:29:37,449 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
>> period at 10 second(s).
>> 2012-04-29 01:29:37,449 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
>> system
>> started
>> 2012-04-29 01:29:37,583 INFO
>> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
>> ugi
>> registered.
>> 2012-04-29 01:29:37,586 WARN
>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
>> already
>> exists!
>> 2012-04-29 01:29:37,615 INFO org.apache.hadoop.util.NativeCodeLoader:
>> Loaded the native-hadoop library
>> 2012-04-29 01:29:37,701 ERROR
>> org.apache.hadoop.hdfs.server.datanode.DataNode:
>> java.lang.IllegalArgumentException: Does not contain a valid host:port
>> authority: file:///
>>  at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
>>  at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:198)
>>  at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
>>  at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:222)
>>  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:337)
>>  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>>  at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>>  at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>  at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>>  at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>>  at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>>
>> 2012-04-29 01:29:37,718 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /****************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at mandeep-System-Product-Name/
>> 127.0.1.1
>> ****************************************************************/
>> 2012-04-29 01:33:26,907 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /****************************************************************
>>
>>
>>
> --
> Marcos Luis Ortíz Valmaseda (@marcosluis2186)
>  Data Engineer at UCI
>  http://marcosluis2186.posterous.com
>
> 10mo. ANIVERSARIO DE LA CREACION DE LA UNIVERSIDAD DE LAS CIENCIAS
> INFORMATICAS...
> CONECTADOS AL FUTURO, CONECTADOS A LA REVOLUCION
>
> http://www.uci.cu
> http://www.facebook.com/universidad.uci
> http://www.flickr.com/photos/universidad_uci
>



-- 
Mandeep Singh Gulati
System Analyst | WorldWide Risk and Information Management
American Express India Pvt. Ltd.
Gurgaon, India
