Dear All,


I have a question about HDFS that I cannot find the answer to in the
documentation on the Apache website. I have a cluster of 4 machines: one is
the namenode and the other 3 are datanodes. When I put 6 files, each 430 MB,
into HDFS, the 6 files are split into 42 blocks (64 MB each). But what policies
are used to assign these blocks to the datanodes? In my case, machine1 got 14
blocks, machine2 got 12 blocks, and machine3 got 16 blocks.
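
For reference, here is the quick arithmetic I did to get the 42-block figure
(assuming the default 64 MB block size; replication is not counted here):

    import math

    FILE_SIZE_MB = 430   # size of each file put into HDFS
    BLOCK_SIZE_MB = 64   # default HDFS block size (assumed)
    NUM_FILES = 6

    blocks_per_file = math.ceil(FILE_SIZE_MB / BLOCK_SIZE_MB)  # 7 (6 full blocks + 1 partial)
    total_blocks = blocks_per_file * NUM_FILES                 # 42
    print(blocks_per_file, total_blocks)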



Could anyone help me with this? Or is there any documentation I could read
to clarify it?



Thanks a lot!



Boyu Zhang



Ph. D. Student

Computer and Information Sciences Department

University of Delaware
