Shailesh, there's a lot that goes into distributing work across
tasks/nodes. It's not just distributing the work itself; fault tolerance,
data locality, etc., also come into play. It might be good to refer to
the Apache Hadoop docs or Tom White's Hadoop: The Definitive Guide.
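To give a rough feel for the data-locality part only, here is a minimal
toy sketch in plain Java (not Hadoop's actual scheduler; class and field
names like Split and hostsWithData are made up for illustration): each
input split is assigned to a worker that already holds a replica of its
data when possible, falling back to any worker otherwise.

import java.util.*;

// Toy illustration only: assign input splits to workers, preferring a
// worker that already holds the split's data (the "data locality" idea).
// Hadoop's real scheduling is far more involved (speculative execution,
// heartbeats, rack awareness, failure handling, ...).
public class LocalityAwareAssigner {

    static class Split {
        final String id;
        final Set<String> hostsWithData; // nodes holding a replica of this split
        Split(String id, String... hosts) {
            this.id = id;
            this.hostsWithData = new HashSet<>(Arrays.asList(hosts));
        }
    }

    // Returns a split-id -> worker assignment.
    static Map<String, String> assign(List<Split> splits, List<String> workers) {
        Map<String, String> assignment = new LinkedHashMap<>();
        int rr = 0; // round-robin fallback when no local worker exists
        for (Split s : splits) {
            String chosen = null;
            for (String w : workers) {
                if (s.hostsWithData.contains(w)) { chosen = w; break; } // local read
            }
            if (chosen == null) {
                chosen = workers.get(rr++ % workers.size()); // remote read
            }
            assignment.put(s.id, chosen);
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<Split> splits = Arrays.asList(
            new Split("split-0", "node1", "node2"),
            new Split("split-1", "node3"),
            new Split("split-2", "node4")); // no worker holds this one locally
        List<String> workers = Arrays.asList("node1", "node2", "node3");
        // e.g. {split-0=node1, split-1=node3, split-2=node1}
        System.out.println(assign(splits, workers));
    }
}

The docs above cover the parts this sketch ignores: how splits are
computed from HDFS blocks, how task failures are retried, and how the
framework moves the computation to the data rather than the other way
around.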

Sent from my iPhone

On Apr 23, 2012, at 11:03 AM, Shailesh Samudrala <shailesh2...@gmail.com> wrote:

> Hello,
>
> I am trying to design my own MapReduce implementation, and I want to know
> how Hadoop is able to distribute its workload across multiple computers.
> Can anyone shed more light on this? Thanks!
