Forgot to mention:
Hadoop version: Hadoop 2.0.0-cdh4.0.0

On Wed, Jun 13, 2012 at 12:16 PM, anil gupta <anilgupt...@gmail.com> wrote:

> Hi All
>
> I am using CDH4 to run an HBase cluster on CentOS 6.0. I have 5 nodes
> in my cluster (2 admin nodes and 3 DNs).
> My ResourceManager is up and running and shows that all three DNs are
> running the NodeManager. HDFS is also working fine and shows 3 DNs.
>
> But when I run the pi example job, it starts in local mode (note the
> LocalJobRunner and job_local_0001 lines in the output below). Here is
> the console output:
> sudo -u hdfs yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-
> examples.jar pi 10 1000000000
> Number of Maps  = 10
> Samples per Map = 1000000000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Wrote input for Map #5
> Wrote input for Map #6
> Wrote input for Map #7
> Wrote input for Map #8
> Wrote input for Map #9
> Starting Job
> 12/06/13 12:03:27 WARN conf.Configuration: session.id is deprecated.
> Instead, use dfs.metrics.session-id
> 12/06/13 12:03:27 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> processName=JobTracker, sessionId=
> 12/06/13 12:03:27 INFO util.NativeCodeLoader: Loaded the native-hadoop
> library
> 12/06/13 12:03:27 WARN mapred.JobClient: Use GenericOptionsParser for
> parsing the arguments. Applications should implement Tool for the
> same.
> 12/06/13 12:03:28 INFO mapred.FileInputFormat: Total input paths to
> process : 10
> 12/06/13 12:03:29 INFO mapred.JobClient: Running job: job_local_0001
> 12/06/13 12:03:29 INFO mapred.LocalJobRunner: OutputCommitter set in
> config null
> 12/06/13 12:03:29 INFO mapred.LocalJobRunner: OutputCommitter is
> org.apache.hadoop.mapred.FileOutputCommitter
> 12/06/13 12:03:29 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
> 12/06/13 12:03:29 INFO util.ProcessTree: setsid exited with exit code
> 0
> 12/06/13 12:03:29 INFO mapred.Task:  Using ResourceCalculatorPlugin :
> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3d46e381
> 12/06/13 12:03:29 WARN mapreduce.Counters: Counter name
> MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group
> name and  BYTES_READ as counter name instead
> 12/06/13 12:03:29 INFO mapred.MapTask: numReduceTasks: 1
> 12/06/13 12:03:29 INFO mapred.MapTask: io.sort.mb = 100
> 12/06/13 12:03:30 INFO mapred.MapTask: data buffer = 79691776/99614720
> 12/06/13 12:03:30 INFO mapred.MapTask: record buffer = 262144/327680
> 12/06/13 12:03:30 INFO mapred.JobClient:  map 0% reduce 0%
> 12/06/13 12:03:35 INFO mapred.LocalJobRunner: Generated 95735000
> samples.
> 12/06/13 12:03:36 INFO mapred.JobClient:  map 100% reduce 0%
> 12/06/13 12:03:38 INFO mapred.LocalJobRunner: Generated 151872000
> samples.
>
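> To confirm from the cluster side, I believe the ResourceManager should
> list the application if the job were really submitted to YARN (command
> as I understand the Hadoop 2 yarn CLI; I have not verified it on
> cdh4.0.0). The RM web UI at ihub-an-g1:8088 shows the same view:
>
> # list applications known to the ResourceManager (assumed CLI usage)
> sudo -u hdfs yarn application -list
>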
> Here is the content of yarn-site.xml:
>
> <configuration>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce.shuffle</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>   </property>
>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>
>   <property>
>     <description>List of directories to store localized files in.</description>
>     <name>yarn.nodemanager.local-dirs</name>
>     <value>/disk/yarn/local</value>
>   </property>
>
>   <property>
>     <description>Where to store container logs.</description>
>     <name>yarn.nodemanager.log-dirs</name>
>     <value>/disk/yarn/logs</value>
>   </property>
>
>   <property>
>     <description>Where to aggregate logs to.</description>
>     <name>yarn.nodemanager.remote-app-log-dir</name>
>     <value>/var/log/hadoop-yarn/apps</value>
>   </property>
>
>   <property>
>     <description>Classpath for typical applications.</description>
>      <name>yarn.application.classpath</name>
>      <value>
>         $HADOOP_CONF_DIR,
>         $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
>         $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
>         $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
>         $YARN_HOME/*,$YARN_HOME/lib/*
>      </value>
>   </property>
> <property>
>         <name>yarn.resourcemanager.resource-tracker.address</name>
>         <value>ihub-an-g1:8025</value>
> </property>
> <property>
>         <name>yarn.resourcemanager.address</name>
>         <value>ihub-an-g1:8040</value>
> </property>
> <property>
>         <name>yarn.resourcemanager.scheduler.address</name>
>         <value>ihub-an-g1:8030</value>
> </property>
> <property>
>         <name>yarn.resourcemanager.admin.address</name>
>         <value>ihub-an-g1:8141</value>
> </property>
> <property>
>         <name>yarn.resourcemanager.webapp.address</name>
>         <value>ihub-an-g1:8088</value>
> </property>
> <property>
>         <name>mapreduce.jobhistory.intermediate-done-dir</name>
>         <value>/disk/mapred/jobhistory/intermediate/done</value>
> </property>
> <property>
>         <name>mapreduce.jobhistory.done-dir</name>
>         <value>/disk/mapred/jobhistory/done</value>
> </property>
> </configuration>
>
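> My understanding (which may be wrong) is that the MapReduce client only
> submits jobs to YARN when mapreduce.framework.name is set to "yarn" in
> mapred-site.xml; otherwise it falls back to the LocalJobRunner, which
> would explain the job_local_0001 output above. A minimal sketch of that
> setting (an assumption on my side, not my current config):
>
> <configuration>
>   <!-- Assumed fix: make the MR client submit jobs to YARN instead of
>        running them in-process with the LocalJobRunner. -->
>   <property>
>     <name>mapreduce.framework.name</name>
>     <value>yarn</value>
>   </property>
> </configuration>
>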
> Can anyone tell me what the problem is here? I appreciate your
> help.
> Thanks,
> Anil Gupta
>
>


-- 
Thanks & Regards,
Anil Gupta
