Hi, I am using the Partitioner and Grouper classes in my program. Let's say the data I want to process with MapReduce varies in size: it can be as small as 10 MB or as large as 10 GB. Do I need to set the number of reducers based on the data size every time I run a MapReduce job, or does the Hadoop framework automatically invoke the required number of reducers at run time? A minimal driver sketch follows to make the question concrete.
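
Here is roughly what my driver looks like (a sketch only, assuming the Hadoop 2.x org.apache.hadoop.mapreduce API; ReducerCountQuestion and the commented-out MyPartitioner/MyGrouper names are placeholders, not my actual classes):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

public class ReducerCountQuestion {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "reducer-count-question");

        // In my real job I plug in custom classes here, e.g. (hypothetical names):
        // job.setPartitionerClass(MyPartitioner.class);
        // job.setGroupingComparatorClass(MyGrouper.class);
        job.setPartitionerClass(HashPartitioner.class);

        // This is the setting I am asking about: do I have to pick this value
        // myself depending on whether the input is 10 MB or 10 GB, or will the
        // framework choose it for me at run time?
        job.setNumReduceTasks(4); // 4 is just an arbitrary example value

        // (Mapper/Reducer classes and input/output paths omitted for brevity.)
    }
}

In other words, should the argument to setNumReduceTasks (or the equivalent job configuration property) be recomputed for every run?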
--
Regards,
Piyush Kansal