I was running multiple MapReduce jobs at the same time for testing, and I 
noticed that all of the map tasks across all of the jobs would run before 
any reduce tasks started. Then, when the reduce tasks did start, they all 
started at the same time. I would have thought that once one job's reduce 
tasks were spread out across the slaves, the next job would start mapping 
with the remaining free slots.
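
For reference, here is roughly how I am kicking the jobs off. This is just 
a sketch -- the class name, paths, and job names are placeholders, not my 
real test jobs -- but it shows the non-blocking submitJob() call that gets 
both jobs queued at the JobTracker together (runJob() would block on the 
first job instead):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class ConcurrentJobs {
    public static void main(String[] args) throws Exception {
        JobConf confA = new JobConf(ConcurrentJobs.class);
        confA.setJobName("test-job-a");
        confA.setInputPath(new Path("/user/billy/inputA"));   // placeholder path
        confA.setOutputPath(new Path("/user/billy/outputA")); // must not exist yet

        JobConf confB = new JobConf(ConcurrentJobs.class);
        confB.setJobName("test-job-b");
        confB.setInputPath(new Path("/user/billy/inputB"));
        confB.setOutputPath(new Path("/user/billy/outputB"));

        JobClient client = new JobClient(confA);
        RunningJob a = client.submitJob(confA); // returns immediately
        RunningJob b = client.submitJob(confB); // queued behind job A

        // Poll progress: what I see is both jobs' map progress reaching
        // 100% before either job's reduce progress moves at all.
        while (!a.isComplete() || !b.isComplete()) {
            System.out.printf("A: map %3.0f%% reduce %3.0f%% | B: map %3.0f%% reduce %3.0f%%%n",
                    a.mapProgress() * 100, a.reduceProgress() * 100,
                    b.mapProgress() * 100, b.reduceProgress() * 100);
            Thread.sleep(5000);
        }
    }
}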

This would not be a good thing on a cluster that receives a lot of MapReduce 
jobs all day long, since all the maps would run first and users would have 
to wait for every map task to finish before any reduce tasks started. That 
could take hours or even days, depending on the jobs.

Please let me know whether this is a bug or intended behavior for some 
reason I am not aware of.

I am running version 0.14.1.

Billy


