In HDFS, a file will be divided into blocks based on the dfs.blocksize configuration.
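
For example, the block size is a client-side parameter picked up when the file is created. Here is a minimal sketch, assuming an HDFS cluster is reachable with the default configuration (the 128 MB value and the /tmp/example.txt path are only for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // dfs.blocksize decides how the file is cut into blocks at write time.
    // 128 MB is only an example value here.
    conf.setLong("dfs.blocksize", 128L * 1024 * 1024);
    FileSystem fs = FileSystem.get(conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"));
    out.writeBytes("hello hdfs\n");
    out.close();
    fs.close();
  }
}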
You can see the code in DFSClient, where the client writes the blocks of
data. You can also post to [email protected] to get more details
on the MapReduce side.
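
Without digging into the DFSClient internals, you can also observe how a file was divided into blocks through the public FileSystem API. Here is a minimal sketch (ListBlocks is just a placeholder class name; the file path comes from args[0]):

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlocks {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    FileStatus status = fs.getFileStatus(new Path(args[0]));
    // One BlockLocation per block: its offset and length in the file,
    // plus the datanodes that hold replicas of it.
    BlockLocation[] blocks =
        fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation b : blocks) {
      System.out.println("offset=" + b.getOffset()
          + " length=" + b.getLength()
          + " hosts=" + Arrays.toString(b.getHosts()));
    }
    fs.close();
  }
}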
In MapReduce, TaskTrackers run on each DataNode machine, and the JobTracker
assigns the work to them. Each map task then reads its blocks and processes
them; see the sketch below.
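
On the reading side, each map task is handed one input split, and with FileInputFormat an input split usually corresponds to one HDFS block; the split computation itself is in FileInputFormat.getSplits(). Here is a minimal mapper sketch (LineCountMapper is just a placeholder name) that counts the lines of its split:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LineCountMapper
    extends Mapper<LongWritable, Text, Text, LongWritable> {
  private static final Text LINES = new Text("lines");
  private static final LongWritable ONE = new LongWritable(1);

  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    // The framework feeds this mapper the records of one input split;
    // with TextInputFormat that split normally maps to one HDFS block.
    context.write(LINES, ONE);
  }
}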

I would suggest you read Hadoop: The Definitive Guide to get a better
understanding of the system.

Regards,
Uma
________________________________________
From: Hadoop Sai [[email protected]]
Sent: Thursday, November 24, 2011 10:28 PM
To: [email protected]
Subject: help in learning hadoop

How will the file be divided into blocks, and how does MapReduce read those
blocks of data? Can anyone point out this piece of code?
