Hi Macek,

    hadoop.tmp.dir actually belongs in core-site.xml, so it would be better
to move it there.
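
For example (just a sketch reusing the value from your hdfs-site.xml; keep
whatever path fits your setup), core-site.xml would then also contain:

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
    </property>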

On Friday, July 20, 2012, Björn-Elmar Macek <ma...@cs.uni-kassel.de> wrote:
> Hi Mohammad,
>
> Thanks for your fast reply. Here they are:
>
> _____________hadoop-env.sh___
> I added those 2 lines:
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/opt/jdk1.6.0_01/
> export JAVA_OPTS="-Djava.net.preferIPv4Stack=true $JAVA_OPTS"
>
>
> _____________core-site.xml_____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>     <property>
>         <name>fs.default.name</name>
>         <value>hdfs://its-cs100:9005</value>
>     </property>
> </configuration>
>
>
> _____________hdfs-site.xml____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- configure data paths for masters and slaves -->
>
> <configuration>
>     <property>
>         <name>dfs.name.dir</name>
>         <value>/home/work/bmacek/hadoop/master</value>
>     </property>
>     <!-- maybe one cannot configure masters and slaves with the same file -->
>     <property>
>         <name>dfs.data.dir</name>
>         <value>/home/work/bmacek/hadoop/hdfs/slave</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
>     </property>
>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
> </configuration>
>
>
> _______mapred-site.xml____
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>     <!-- master -->
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>its-cs100:9004</value>
>     </property>
>     <!-- datanode -->
>     <property>
>         <name>dfs.hosts</name>
>         <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
>     </property>
>
>     <property>
>         <name>mapred.hosts</name>
>         <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
>     </property>
> </configuration>
>
> _______masters____
> its-cs101
>
> _______slaves______
> its-cs102
> its-cs103
>
>
> That's about it, I think. I hope I didn't forget anything.
>
> Regards,
> Björn-Elmar
>
> On 20.07.2012 16:58, Mohammad Tariq wrote:
>
> Hello sir,
>
>        If possible, could you please paste your config files?
>
> Regards,
>      Mohammad Tariq
>
>
> On Fri, Jul 20, 2012 at 8:24 PM, Björn-Elmar Macek
> <ma...@cs.uni-kassel.de> wrote:
>
> Hi all,
>
> well, I just stumbled upon this post:
> http://ankitasblogger.blogspot.de/2012/01/error-that-occured-in-hadoop-and-its.html
>
> And it says:
> "Problem: Hadoop-datanode job failed or datanode not running:
> java.io.IOException: File ../mapred/system/jobtracker.info could only be
> replicated to 0 nodes, instead of 1.
> ...
> Cause: You may also get this message due to permissions. May be JobTracker
> can not create jobtracker.info on startup."
>
> Since the file does not exist, I think this might be a probable reason for
> my errors. But why should the JobTracker not be able to create that file?
> It created several other directories on this node with ease via the
> slaves.sh script that I started with the very same user that calls
> start-all.sh.
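>
> For what it's worth, a quick check here (just a sketch; the HDFS path is an
> assumption based on mapred.system.dir defaulting to
> ${hadoop.tmp.dir}/mapred/system):
>
> # are the local dirs from hdfs-site.xml writable by the user running the daemons?
> ls -ld /home/work/bmacek/hadoop/hdfs/tmp /home/work/bmacek/hadoop/master
> # does the jobtracker's system dir exist in HDFS, and who owns it?
> bin/hadoop fs -ls /home/work/bmacek/hadoop/hdfs/tmp/mapred/system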
>
> Any help would be really appreciated.
>
>
> On 20.07.2012 16:15, Björn-Elmar Macek wrote:
>
> Hi Srinivas,
>
> thanks for your reply! I have been following your link and idea and been
> playing around a lot, but I still have problems with the connection
> (though they are different now):
>
> _______ JAVA VERSION_________
> "which java" tells me it is 1.6.0_01. If I got it right, version 1.7 has
> problems with ssh.
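>
> To double-check which version the daemons actually pick up (a sketch,
> using the JAVA_HOME set in hadoop-env.sh):
>
> # prints the exact JVM version for the configured JAVA_HOME
> $JAVA_HOME/bin/java -version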
>
> _______MY TESTS_____________
> Following your suggestion to look for processes running on that port, I
> changed ports a lot:
> When I wrote the first post of this thread, I was using port 999 for the
> namenode and port 1000 for the jobtracker.
> For some reason, commands like "lsof -i" etc. don't give me any output
> when used in the cluster environment, so I started looking for ports that
> are in general unused by programs.
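>
> The kind of check I mean, as a sketch (assuming netstat is available on
> the nodes, unlike lsof):
>
> # list listening TCP sockets and see whether a candidate port is taken
> netstat -tln | grep 9005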
> When I changed the ports to 9004 and 9005, I got different errors which
> look very much like the ones you posted at the beginning of this year in
> the lucene section (
> http://lucene.472066.n3.nabble.com/Unable-to-start-hadoop-0-20-2-but-able-to-start-hadoop-0-20-203-cluster-td2991350.html
> ).
>
> It seems as if a DataNode can not communicate with the NameNode.
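>
> To verify the basic connectivity (a sketch; assuming netcat is installed
> on the slaves), one could run from its-cs102:
>
> # does the namenode's RPC port on the master accept connections?
> nc -z -v its-cs100 9005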
>
> The logs look like the following:
>
> _______TEST RESULTS__________
> ########## A DataNode #############
> 2012-07-20 14:47:59,536 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = its-cs102.its.uni-kassel.de/141.51.205.12
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 1.0.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
> 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
> ************************************************************/
> 2012-07-20 14:47:59,824 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
> loaded properties from hadoop-metrics2.properties
> 2012-07-20 14:47:59,841 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 2012-07-20 14:47:59,843 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2012-07-20 14:47:59,844 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
> started
> 2012-07-20 14:47:59,969 INFO
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 2012-07-20 14:48:26,792 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: its-cs100/141.51.205.10:9005. Already tried 0 time(s).
> 2012-07-20 14:48:26,889 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2012-07-20 14:48:26,934 I

-- 
Regards,
    Mohammad Tariq
