We only allow 0 or 1 reducers in local mode: the LocalJobRunner ignores any larger value passed to job.setNumReduceTasks(). To actually get 10 reduce tasks, run the job against a real or pseudo-distributed cluster.
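You can check which mode you are in from mapred.job.tracker: when it is "local", the LocalJobRunner is used and at most one reduce task runs, no matter what setNumReduceTasks() asked for. A minimal sketch using the old mapred API (the ReducerCountCheck class name is just for illustration, it is not part of your datajoin job):

import org.apache.hadoop.mapred.JobConf;

public class ReducerCountCheck {
    public static void main(String[] args) {
        JobConf conf = new JobConf(ReducerCountCheck.class);
        conf.setNumReduceTasks(10);  // request 10 reducers

        // "local" selects the LocalJobRunner, which runs 0 or 1 reduce tasks
        // regardless of what setNumReduceTasks() requested.
        String tracker = conf.get("mapred.job.tracker", "local");
        if ("local".equals(tracker)) {
            System.out.println("Local mode: at most one reducer will run.");
        } else {
            System.out.println("JobTracker at " + tracker + ": up to "
                    + conf.getNumReduceTasks() + " reduce tasks.");
        }
    }
}

Regarding the point about keys, there is a short note after the quoted log below.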

2012/5/15 Abhishek Pratap Singh <manu.i...@gmail.com>

> AFAIK the number of reducers depends on the keys generated after the mappers
> are done. Maybe the join is producing only one key.
>
> Regards,
> Abhishek
> On Sun, May 13, 2012 at 10:48 PM, anwar shaikh <anwardsha...@gmail.com> wrote:
>
> > Hi Everybody,
> >
> > I am executing a MapReduce job to execute JOIN operation using
> > org.apache.hadoop.contrib.utils.join
> >
> > Four files are given as Input.
> >
> > I think four map tasks are running (based on the highlighted map attempt in
> > the log below).
> >
> > I have also set the number of reducers to 10 using
> > job.setNumReduceTasks(10).
> >
> > But only one reduce task is performed (the highlighted reduce attempt
> > below).
> >
> > So, could you please suggest how I can increase the number of reducers?
> >
> > Below are some of the last lines from the log.
> >
> >
> >
> -----------------------------------------------------------------------------------------------------------------------------------------------------
> > 12/05/14 10:32:46 INFO mapred.Task: Task '*attempt_local_0001_m_000003_0*'
> > done.
> > 12/05/14 10:32:46 INFO mapred.LocalJobRunner:
> > 12/05/14 10:32:46 INFO mapred.Merger: Merging 4 sorted segments
> > 12/05/14 10:32:46 INFO mapred.Merger: Down to the last merge-pass, with 4
> > segments left of total size: 8018 bytes
> > 12/05/14 10:32:46 INFO mapred.LocalJobRunner:
> > 12/05/14 10:32:46 INFO datajoin.job: key: 1 this.largestNumOfValues: 48
> > 12/05/14 10:32:46 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is
> > done. And is in the process of commiting
> > 12/05/14 10:32:46 INFO mapred.LocalJobRunner:
> > 12/05/14 10:32:46 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is
> > allowed to commit now
> > 12/05/14 10:32:46 INFO mapred.FileOutputCommitter: Saved output of task
> > 'attempt_local_0001_r_000000_0' to
> > file:/home/anwar/workspace/JoinLZOPfiles/OutLarge
> > 12/05/14 10:32:49 INFO mapred.LocalJobRunner: actuallyCollectedCount 86
> > collectedCount 86
> > groupCount 25
> >  > reduce
> > 12/05/14 10:32:49 INFO mapred.Task: Task '*attempt_local_0001_r_000000_0*'
> > done.
> > 12/05/14 10:32:50 INFO mapred.JobClient:  map 100% reduce 100%
> > 12/05/14 10:32:50 INFO mapred.JobClient: Job complete: job_local_0001
> > 12/05/14 10:32:50 INFO mapred.JobClient: Counters: 17
> > 12/05/14 10:32:50 INFO mapred.JobClient:   File Input Format Counters
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Bytes Read=1666
> > 12/05/14 10:32:50 INFO mapred.JobClient:   File Output Format Counters
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Bytes Written=2421
> > 12/05/14 10:32:50 INFO mapred.JobClient:   FileSystemCounters
> > 12/05/14 10:32:50 INFO mapred.JobClient:     FILE_BYTES_READ=22890
> > 12/05/14 10:32:50 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=194702
> > 12/05/14 10:32:50 INFO mapred.JobClient:   Map-Reduce Framework
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Map output materialized
> > bytes=8034
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Map input records=106
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Reduce shuffle bytes=0
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Spilled Records=212
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Map output bytes=7798
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Map input bytes=1666
> > 12/05/14 10:32:50 INFO mapred.JobClient:     SPLIT_RAW_BYTES=472
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Combine input records=0
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Reduce input records=106
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Reduce input groups=25
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Combine output records=0
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Reduce output records=86
> > 12/05/14 10:32:50 INFO mapred.JobClient:     Map output records=106
> >
> > --
> > Mr. Anwar Shaikh
> > Delhi Technological University, Delhi
> > +91 92 50 77 12 44
> >
>
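One more note on the earlier point about keys: the number of reduce tasks is fixed by the job configuration; the keys only decide which of those tasks each record is routed to, via the partitioner. The default HashPartitioner uses the formula shown below (the PartitionDemo class is hypothetical, just to illustrate the arithmetic):

import org.apache.hadoop.io.Text;

public class PartitionDemo {
    // Same formula as Hadoop's default HashPartitioner.getPartition()
    static int partitionFor(Text key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // With 10 reduce tasks configured, keys spread over partitions 0..9;
        // in local mode they all still end up in the single reducer.
        for (String k : new String[] { "1", "2", "3" }) {
            System.out.println(k + " -> partition " + partitionFor(new Text(k), 10));
        }
    }
}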



-- 
Regards
Junyong
