Hello,

So I have a question about changing dfs.block.size in
$HADOOP_HOME/conf/hdfs-site.xml.  I understand that the block size can be
set per file at creation time, overriding the default.  What happens if you
change the default block size on an existing HDFS cluster?  Do newly
created files get the new default while existing files keep their old block
size?  Also, is there a way to change the block size of existing files?
I'm assuming you could write a MapReduce job to do it, but are there any
built-in facilities?

Thanks,
-JR
