I am running the wordcount example from hadoop-examples, giving it a bunch of
test files as input. In the output below, I noticed that reduce starts while the
map phase is only at 23%. I thought reducers would start only after mapping is
complete, i.e., once map reaches 100%. Why are the reducers starting when the
map is still at 23%?

13/04/11 21:10:32 INFO mapred.JobClient:  map 0% reduce 0%
13/04/11 21:10:56 INFO mapred.JobClient:  map 1% reduce 0%
13/04/11 21:10:59 INFO mapred.JobClient:  map 2% reduce 0%
13/04/11 21:11:02 INFO mapred.JobClient:  map 3% reduce 0%
13/04/11 21:11:05 INFO mapred.JobClient:  map 4% reduce 0%
13/04/11 21:11:08 INFO mapred.JobClient:  map 6% reduce 0%
13/04/11 21:11:11 INFO mapred.JobClient:  map 7% reduce 0%
13/04/11 21:11:17 INFO mapred.JobClient:  map 8% reduce 0%
13/04/11 21:11:23 INFO mapred.JobClient:  map 10% reduce 0%
13/04/11 21:11:26 INFO mapred.JobClient:  map 12% reduce 0%
13/04/11 21:11:32 INFO mapred.JobClient:  map 14% reduce 0%
13/04/11 21:11:44 INFO mapred.JobClient:  map 23% reduce 0%
13/04/11 21:11:50 INFO mapred.JobClient:  map 23% reduce 1%
13/04/11 21:11:53 INFO mapred.JobClient:  map 33% reduce 7%
13/04/11 21:12:02 INFO mapred.JobClient:  map 42% reduce 7%

Please shed some light on this.
Thanks
Sai