Trying to figure out how Hadoop actually achieves its speed. Assuming that data locality is central to Hadoop's efficiency, how does the magic actually happen, given that data still gets moved all over the network to reach the reducers?
For example, suppose I have 1 GB of logs spread across 10 data nodes and, for the sake of argument, I use the identity mapper. Then 90% of the data still needs to move across the network, so how does the network not become saturated? What am I missing?... Thanks, D.S.
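Here's the back-of-envelope arithmetic behind my 90% figure, as a sketch. It assumes uniform hash partitioning of map output, one reducer per data node, and an identity mapper (so map output size equals input size); those are my assumptions for the example, not anything Hadoop guarantees:

```python
def shuffle_fraction(num_nodes):
    # Under uniform hash partitioning with one reducer per node,
    # a record stays on its own node only if it happens to be
    # partitioned to the local reducer: probability 1/num_nodes.
    # Everything else crosses the network during the shuffle.
    return 1 - 1 / num_nodes

total_gb = 1.0
nodes = 10
moved = shuffle_fraction(nodes) * total_gb
print(f"{moved:.2f} GB of {total_gb:.0f} GB crosses the network")
# prints "0.90 GB of 1 GB crosses the network"
```

So with 10 nodes, roughly 9/10 of the identity-mapped output has a remote reducer as its destination, which is where my saturation worry comes from.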
