If you set the replication factor to one and create 10 files on each
datanode, you will get the layout you are after.

By default, when a file is written from a machine hosting a datanode, that
datanode receives the first replica of each block, and it is responsible
for forwarding the block data to the next replica in the pipeline, if any.
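As a minimal sketch of that approach: the loop below would be run once on each of the 10 datanode hosts. `hadoop fs -D dfs.replication=1 -put` is the standard generic-option syntax; the local file names (`part_1` … `part_10`) and the HDFS target directory `/data` are placeholders, not anything from your cluster. Here the commands are collected and printed rather than executed, so you can review them first.

```shell
# Sketch (file names and paths are assumptions): with dfs.replication=1,
# the single replica of each block lands on the local datanode, because
# the writer-local datanode gets the first (and here, only) replica.
host=$(hostname)
cmds=""
for i in $(seq 1 10); do
  # One put command per file; -D dfs.replication=1 overrides the default
  # replication factor for this write only.
  cmds="${cmds}hadoop fs -D dfs.replication=1 -put part_${i} /data/${host}_part_${i}\n"
done
printf "%b" "$cmds"
```

To make each file occupy exactly one block, also keep each file's size at or below the block size (settable per write with `-D dfs.blocksize=...` on recent versions).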


On Thu, Oct 15, 2009 at 1:39 PM, Huang Qian <[email protected]> wrote:

> Hi everyone. I am working on a project with Hadoop and I have run into a
> problem. How can I deploy 100 files, each consisting of one block (by
> setting the block size and controlling the file size), onto 10 datanodes,
> so that each datanode holds exactly 10 blocks? I know the file system
> places blocks automatically, but I want to make sure the assigned files
> are distributed evenly. How can I do this with the Hadoop tools or API?
>
> Huang Qian(黄骞)
> Institute of Remote Sensing and GIS,Peking University
> Phone: (86-10) 5276-3109
> Mobile: (86) 1590-126-8883
> Address:Rm.554,Building 1,ChangChunXinYuan,Peking
> Univ.,Beijing(100871),CHINA
>



-- 
Pro Hadoop, a book to guide you from beginner to hadoop mastery,
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals