Hi,
     I am a new user of Hadoop. This project looks cool.

     I have a question about MapReduce. I want to process a big
file. To my understanding, Hadoop partitions a big file into blocks and
assigns each block to a worker. How does Hadoop decide where to cut those
big files? Does it guarantee that each line of the input file is assigned
to exactly one block, so that no line ends up split into two parts in
different blocks?
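
     To make the question concrete: with the default TextInputFormat, I
imagine my mapper would look roughly like the sketch below (a minimal
sketch using the standard org.apache.hadoop.mapreduce Mapper API; the
class name and output key are just illustrative). What I am asking is
whether each map() call is guaranteed to see one whole line.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LineCountMapper
        extends Mapper<LongWritable, Text, Text, LongWritable> {

    private static final LongWritable ONE = new LongWritable(1);

    @Override
    protected void map(LongWritable byteOffset, Text line, Context context)
            throws IOException, InterruptedException {
        // My assumption: each call receives one complete line, keyed by the
        // byte offset of that line in the file, even if the line happens to
        // span an HDFS block boundary.
        context.write(new Text("lines"), ONE);
    }
}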

Lei
