Hi,

I want to change the cluster's reduce-slot capacity on a per-job basis.
Originally I have 8 reduce slots per tasktracker.
I did:

conf.set("mapred.tasktracker.reduce.tasks.maximum", "4");
...
Job job = new Job(conf, ...)
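
For context, the 8 slots come from my cluster-wide tasktracker configuration, which (as a sketch, with my assumed value) looks like this in mapred-site.xml:

    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>8</value>
    </property>

It is this per-tasktracker maximum that I am trying to lower to 4 for a single job.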


And in the web UI I can see that for this job the max reduce tasks value is
exactly 4, as I set it. However, Hadoop still launches 8 reducers per
datanode. Why is this?

How can I achieve this?
-- 
JU Han

Software Engineer Intern @ KXEN Inc.
UTC   -  Université de Technologie de Compiègne
GI06 - Fouille de Données et Décisionnel (Data Mining and Decision Support)

+33 0619608888
