Hi Harsh,

Thanks for the reply. These are my configuration properties:
// mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop-master-2:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop-master-2:19888</value>
  </property>
</configuration>

// yarn-site.xml

<configuration>
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
      $HADOOP_CONF_DIR,
      $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
      $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
      $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
      $YARN_HOME/*,$YARN_HOME/lib/*
    </value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop-master-2:8040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop-master-2:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop-master-2:8141</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop-master-2:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/srv/storage/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/srv/storage/yarn/log</value>
  </property>
</configuration>

On Sun, Jul 29, 2012 at 2:30 PM, abhiTowson cal
<abhishek.dod...@gmail.com> wrote:
> Hi All,
>
> I am having a problem where the job runs in the LocalJobRunner rather
> than in the cluster environment. When I run the job, I am not able to
> see the job id in the ResourceManager UI.
>
> Can you please go through the issues and let me know ASAP.
>
> sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 1000000 /benchmark/teragen/input
>
> 12/07/29 13:35:59 WARN conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
> 12/07/29 13:35:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
> 12/07/29 13:35:59 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> 12/07/29 13:35:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> Generating 1000000 using 1 maps with step of 1000000
> 12/07/29 13:35:59 INFO mapred.JobClient: Running job: job_local_0001
> 12/07/29 13:35:59 INFO mapred.LocalJobRunner: OutputCommitter set in config null
> 12/07/29 13:35:59 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
> 12/07/29 13:35:59 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
> 12/07/29 13:35:59 INFO util.ProcessTree: setsid exited with exit code 0
> 12/07/29 13:35:59 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@47c297a3
> 12/07/29 13:36:00 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
> 12/07/29 13:36:00 INFO mapred.MapTask: numReduceTasks: 0
> 12/07/29 13:36:00 INFO mapred.JobClient:  map 0% reduce 0%
> 12/07/29 13:36:01 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
> 12/07/29 13:36:01 INFO mapred.LocalJobRunner:
> 12/07/29 13:36:01 INFO mapred.Task: Task attempt_local_0001_m_000000_0 is allowed to commit now
> 12/07/29 13:36:01 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_m_000000_0' to hdfs://hadoop-master-1/benchmark/teragen/input
> 12/07/29 13:36:01 INFO mapred.LocalJobRunner:
> 12/07/29 13:36:01 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
> 12/07/29 13:36:02 INFO mapred.JobClient:  map 100% reduce 0%
> 12/07/29 13:36:02 INFO mapred.JobClient: Job complete: job_local_0001
> 12/07/29 13:36:02 INFO mapred.JobClient: Counters: 19
> 12/07/29 13:36:02 INFO mapred.JobClient:   File System Counters
> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of bytes read=142686
> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of bytes written=220956
> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of read operations=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of large read operations=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     FILE: Number of write operations=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of bytes read=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of bytes written=100000000
> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of read operations=1
> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of large read operations=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     HDFS: Number of write operations=2
> 12/07/29 13:36:02 INFO mapred.JobClient:   Map-Reduce Framework
> 12/07/29 13:36:02 INFO mapred.JobClient:     Map input records=1000000
> 12/07/29 13:36:02 INFO mapred.JobClient:     Map output records=1000000
> 12/07/29 13:36:02 INFO mapred.JobClient:     Input split bytes=82
> 12/07/29 13:36:02 INFO mapred.JobClient:     Spilled Records=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     CPU time spent (ms)=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
> 12/07/29 13:36:02 INFO mapred.JobClient:     Total committed heap usage (bytes)=124715008
> 12/07/29 13:36:02 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
>
> Regards
> Abhishek
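One thing that may be worth ruling out first (a guess on my part, not a confirmed diagnosis): the LocalJobRunner output in the quoted log usually means the JVM that submits the job never reads mapreduce.framework.name=yarn, for example because the submitting host points at a different conf directory. A minimal, self-contained sketch of the check; it writes a sample mapred-site.xml into a temp directory so it can run anywhere, and the /etc/hadoop/conf default is an assumption, adjust to wherever HADOOP_CONF_DIR points on your client host:

```shell
# Stand-in for the real conf dir on the submitting host (often /etc/hadoop/conf,
# but that path is an assumption; use your actual $HADOOP_CONF_DIR).
CONF_DIR=$(mktemp -d)

# Recreate the relevant fragment of mapred-site.xml for the demo.
cat > "$CONF_DIR/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

# The actual check: does the client-side mapred-site.xml really set yarn?
grep -A1 'mapreduce.framework.name' "$CONF_DIR/mapred-site.xml"
```

On a real cluster you would run the final grep against the conf directory the `sudo -u hdfs hadoop jar ...` command actually picks up, since sudo can change which environment (and hence which HADOOP_CONF_DIR) is in effect.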
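For what it's worth, the job id in the log is itself the tell: `job_local_0001` is minted by LocalJobRunner and will never appear in the ResourceManager UI, while a job that really went through YARN gets an id of the form job_&lt;RM-start-timestamp&gt;_&lt;sequence&gt;. A tiny sketch of that distinction (the timestamp value below is made up for illustration):

```shell
# Classify a MapReduce job id by its prefix.
classify() {
  case "$1" in
    job_local*) echo "LocalJobRunner" ;;  # never reaches the RM UI
    job_[0-9]*) echo "YARN" ;;            # job_<RM-start-ms>_<seq>
    *)          echo "unknown" ;;
  esac
}

classify job_local_0001           # the id from the quoted log
classify job_1343556789000_0001   # hypothetical id of a real YARN submission
```

So as long as the client prints job_local_*, there is nothing for the RM UI to show, regardless of how the ResourceManager itself is configured.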