The block size is a per-file property, so changing it will only affect newly created files. If you want to change the block size of the 'legacy' files, you'll need to recreate them, for example with the distcp command (here for a new block size of 512M):

  hadoop distcp -D dfs.block.size=536870912 <path-to-old-file> <path-to-new-file>

and then rm the old file.
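In case it helps, here is a rough end-to-end sketch of that workflow (untested; the path /user/data/old.dat is just a placeholder, and dfs.block.size is the 0.20-era property name):

  # copy the file, writing the new copy with a 512M block size
  hadoop distcp -D dfs.block.size=536870912 /user/data/old.dat /user/data/old.dat.new

  # sanity-check the block layout of the new copy
  hadoop fsck /user/data/old.dat.new -files -blocks

  # swap the new copy into place
  hadoop fs -rm /user/data/old.dat
  hadoop fs -mv /user/data/old.dat.new /user/data/old.dat

Files written after you change dfs.block.size in hdfs-site.xml will pick up the new default automatically; only pre-existing files need the copy.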
--
Alex Kozlov
Solutions Architect
Cloudera, Inc
twitter: alexvk2009

Hadoop World 2010, October 12, New York City - Register now:
http://www.cloudera.com/company/press-center/hadoop-world-nyc/

On Tue, Sep 7, 2010 at 8:03 PM, Jeff Zhang <zjf...@gmail.com> wrote:
> Those legacy files won't change block size (the NameNode keeps the
> mapping between blocks and files); only the newly added files will get
> the new block size.
>
> On Tue, Sep 7, 2010 at 7:27 PM, Gang Luo <lgpub...@yahoo.com.cn> wrote:
> > Hi all,
> > I need to change the block size (from 128m to 64m) and have to shut
> > down the cluster first. I was wondering what will happen to the
> > current files on HDFS (with 128M block size). Are they still there
> > and usable? If so, what is the block size of those legacy files?
> >
> > Thanks,
> > -Gang
>
> --
> Best Regards
> Jeff Zhang