If you have a 0.23 or 2.0 cluster all set up, then yes, it comes with the MapReduce ApplicationMaster, and you run MapReduce very much the way you did before on 1.x. Have you run MapReduce before on older versions of Hadoop?
The queue can be specified the same way, with "-Dmapred.job.queue.name=default".

You can run the wordcount example like this, where input is the file you put into your /user directory in HDFS and output is the output directory for wordcount:

hadoop jar hadoop-mapreduce-examples.jar wordcount -Dmapred.job.queue.name=default input output

You use the "mapred" command to check on job status:

mapred job -list

I don't think the 2.x docs are all the way there yet, so you might also check out the MapReduce tutorial for Hadoop 1.x:
http://hadoop.apache.org/common/docs/r1.0.2/mapred_tutorial.html

The 2.x/0.23 docs:
http://hadoop.apache.org/common/docs/r0.23.1/

There is also a rough sketch of setting the queue from Java below the quoted message.

Tom

On 4/12/12 4:07 AM, "Dominik Wiernicki" <d...@touk.pl> wrote:

> On 12.04.2012 09:17, Dominik Wiernicki wrote:
>> Hi,
>>
>> I want to run a MapReduce job using the ResourceManager. If I understood
>> correctly, I need to create an ApplicationMaster and a Client for this purpose?
>>
>> I was trying to read the Distributed Shell example, but it looks too complex
>> and complicated, and it doesn't explain how to run a MapReduce job.
>>
>> Also, IMO writing an ApplicationMaster is too low-level - it should be easy
>> to create simple applications.
>>
>> Is there any easy way to run a MapReduce job as a certain user, using a
>> specified queue?
>>
>> Greetings,
>> Dominik
>>
>>
> I had a wrong understanding of the ApplicationMaster, so I am changing my
> question.
>
> I assume there is an ApplicationMaster somewhere for running MapReduce jobs,
> but I can't find it in the documentation.
>
> How do I run a MapReduce job on a specified queue (as a specified user)?
> How do I run a DAG of jobs?
>
> Greetings,
> Dominik
>
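For completeness, here is a minimal sketch of doing the same thing programmatically with the org.apache.hadoop.mapreduce Job API instead of -D on the command line. This is only an illustration, not code from the examples jar: the class name QueuedWordCount is made up, and the library TokenCounterMapper/IntSumReducer classes are just stand-ins for whatever mapper and reducer you actually use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class QueuedWordCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same effect as passing -Dmapred.job.queue.name=default on the command line.
    conf.set("mapred.job.queue.name", "default");

    Job job = Job.getInstance(conf, "wordcount");
    job.setJarByClass(QueuedWordCount.class);
    job.setMapperClass(TokenCounterMapper.class); // library word-count mapper, as a stand-in
    job.setReducerClass(IntSumReducer.class);     // library sum reducer, as a stand-in
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. "input" in your /user dir
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. "output"

    // Submit and wait; the job shows up in "mapred job -list" like any other.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The queue is just a configuration property, so whichever way you set it (command line or Configuration), the job ends up in that queue when the ResourceManager schedules it.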