Sorry, that's the replication factor, not the block size. I think you need to copy the files.
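For reference, the copy can be done with the normal shell tools while overriding the block size for the destination. This is only a sketch: the paths and the 128 MB value are example assumptions, not anything from this thread.

```shell
# Rewrite a single file with a new block size (128 MB here) by copying it;
# dfs.block.size only applies to files as they are written.
hadoop fs -D dfs.block.size=134217728 -cp /data/old/file /data/new/file

# For large directory trees, distcp does the same thing in parallel:
hadoop distcp -D dfs.block.size=134217728 hdfs:///data/old hdfs:///data/new
```

After the copy you would delete the originals and rename the new paths into place.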

Sent from my iPhone

On Jun 6, 2011, at 12:09 PM, "J. Ryan Earl" <o...@jryanearl.us> wrote:

> Hello,
> 
> So I have a question about changing dfs.block.size in 
> $HADOOP_HOME/conf/hdfs-site.xml.  I understand that when files are created, 
> their block size can be set to something other than the default.  What 
> happens if you change the default block size on an existing HDFS cluster?  
> Do newly created files get the new default while old files keep their 
> original block size?  Is there a way to change the block size of existing 
> files?  I'm assuming you could write a MapReduce job to do it, but are 
> there any built-in facilities?
> 
> Thanks,
> -JR
> 
> 
