The block size change will not affect existing files; it is only used when
new files are written to HDFS. The block size is ultimately a property of the
file itself: the HDFS config file merely specifies a default for files that
are created without an explicit block size.
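For example, a minimal hdfs-site.xml sketch of the default (using the
0.20-era property name dfs.block.size; newer releases spell it
dfs.blocksize):

  <property>
    <name>dfs.block.size</name>
    <value>268435456</value>  <!-- 256 MB, in bytes -->
  </property>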
If you want the change to affect existing files, you will have to write a
script that copies them to a temp location and back. I know of the shell
command that sets the replication factor (hadoop fs -setrep), but I don't
know of an equivalent for block size. It should be easy to write a script or
some DFS client code, though.
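For example, a minimal DFS-client sketch of the copy-to-temp-and-back idea
(the class name and the 256 MB figure are just placeholders; this relies on
the DFS client taking the block size for new files from its own conf):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.FileUtil;
  import org.apache.hadoop.fs.Path;

  public class RewriteBlockSize {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // New files written by this client get 256 MB blocks.
      conf.setLong("dfs.block.size", 256L * 1024 * 1024);
      FileSystem fs = FileSystem.get(conf);

      Path src = new Path(args[0]);          // existing file to rewrite
      Path tmp = new Path(args[0] + ".tmp"); // temporary copy

      // The copy is written with the new block size; then swap it in.
      FileUtil.copy(fs, src, fs, tmp, false, conf);
      fs.delete(src, false);
      fs.rename(tmp, src);
    }
  }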

There is also a FileSystem API, create(Path f, boolean overwrite, int
bufferSize, short replication, long blockSize), which lets you specify the
block size when you create a file.
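For instance (same imports as the sketch above, plus FSDataOutputStream; the
path and replication factor are placeholders):

  Configuration conf = new Configuration();
  FileSystem fs = FileSystem.get(conf);
  FSDataOutputStream out = fs.create(
      new Path("/user/rita/bigfile"), // hypothetical path
      true,                           // overwrite if it exists
      4096,                           // io buffer size
      (short) 3,                      // replication factor (example)
      256L * 1024 * 1024);            // block size in bytes
  out.close();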
 -Ayon

________________________________
From: Rita <rmorgan...@gmail.com>
To: hdfs-user@hadoop.apache.org
Sent: Sun, February 6, 2011 8:50:11 AM
Subject: Re: changing the block size

Neither one worked.

Is there anything else I can do? I always have problems like this in HDFS.
It seems even experts are guessing at the answers :-/



On Thu, Feb 3, 2011 at 11:45 AM, Ayon Sinha <ayonsi...@yahoo.com> wrote:

> conf/hdfs-site.xml
>
> Restart DFS. I believe it should be sufficient to restart the namenode
> only, but others can confirm.
>
> -Ayon
>
> ________________________________
> From: Rita <rmorgan...@gmail.com>
> To: hdfs-user@hadoop.apache.org
> Sent: Thu, February 3, 2011 4:35:09 AM
> Subject: changing the block size
>
> Currently I am using the default block size of 64 MB. I would like to
> change it for my cluster to 256 MB since I deal with large files (over
> 2 GB). What is the best way to do this?
>
> What file do I have to make the change on? Does it have to be applied on
> the namenode or on each individual datanode? What has to get restarted:
> namenode, datanode, or both?
>
> --
> --- Get your facts first, then you can distort them as you please.--

-- 
--- Get your facts first, then you can distort them as you please.--