Hi, Lei Chen:

  You can take a look at org.apache.hadoop.mapred.InputFormatBase; I
think it will help you.
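
To answer the line-boundary question directly: splits are cut at fixed byte
offsets, so a cut can fall in the middle of a line, but line-oriented readers
follow a convention that makes every line belong to exactly one split. Below is
a minimal, self-contained sketch of that convention (not Hadoop's actual code;
the class and method names are made up for illustration): a reader whose split
does not start at offset 0 skips ahead to the first newline, assuming the
previous split's reader finishes that line, and every reader keeps reading past
its split's end until it completes its last line.

```java
import java.util.ArrayList;
import java.util.List;

public class SplitDemo {
    // Return the lines "owned" by the byte range [start, start + len).
    // Convention: skip the first partial line unless the split starts at
    // offset 0, and read past the split end to finish the last line.
    static List<String> readSplit(byte[] data, int start, int len) {
        List<String> lines = new ArrayList<>();
        int pos = start;
        if (start != 0) {
            // The tail of this line belongs to the previous split's reader.
            while (pos < data.length && data[pos] != '\n') pos++;
            pos++; // step past the newline
        }
        int end = start + len;
        // A line is ours if it *starts* before our split's end offset.
        while (pos < data.length && pos < end) {
            int lineStart = pos;
            while (pos < data.length && data[pos] != '\n') pos++;
            lines.add(new String(data, lineStart, pos - lineStart));
            pos++; // step past the newline
        }
        return lines;
    }

    public static void main(String[] args) {
        byte[] file = "alpha\nbeta\ngamma\ndelta\n".getBytes();
        int splitSize = 8; // deliberately cuts mid-line
        List<String> all = new ArrayList<>();
        for (int off = 0; off < file.length; off += splitSize) {
            all.addAll(readSplit(file, off,
                    Math.min(splitSize, file.length - off)));
        }
        System.out.println(all); // [alpha, beta, gamma, delta]
    }
}
```

Even though the 8-byte cuts land inside "beta" and "gamma", each line comes out
whole and exactly once, because the reader that starts a line is the one that
finishes it.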

On 4/20/06, Lei Chen <[EMAIL PROTECTED]> wrote:
> Hi,
>      I am a new user of hadoop. This project looks cool.
>
>      There is one question about MapReduce. I want to process a big
> file. To my understanding, Hadoop will partition a big file into blocks, and
> each block is assigned to a worker. Then, how does Hadoop decide where to
> cut those big files? Does it guarantee that each line in the input file will
> be assigned to one block, and that no line will be divided into two parts in
> different blocks?
>
> Lei
>
>