Aaron Kimball wrote:
Blocks already written to HDFS will remain at their current size; blocks are
immutable objects. That procedure would set the size used for all
subsequently-written blocks. I don't think you can change the block size
while the cluster is running, because that would require the NameNode and
DataNodes to re-read their configurations, which they only do at startup.
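For instance, you can verify that existing files keep their original block
size with the fsck tool (the path here is just an example):

    hadoop fsck /user/foo/data -files -blocks

The per-file listing shows each block's length, so files written before and
after a config change can be compared directly.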
- Aaron

Block size is a client-side configuration; the NameNode and DataNodes don't need to be restarted.
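To illustrate (a minimal sketch; dfs.block.size is the config key of this
era, and the paths and sizes are made up): a client can set its own default
block size in its Configuration, or pass an explicit block size for a single
file at create() time:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side default for all files this client writes from now on.
        conf.setLong("dfs.block.size", 32 * 1024 * 1024); // 32 MB

        FileSystem fs = FileSystem.get(conf);

        // Or override the block size for one file at create() time; this
        // overload takes (path, overwrite, bufferSize, replication, blockSize).
        FSDataOutputStream out = fs.create(new Path("/tmp/demo"), true,
            4096, (short) 3, 64L * 1024 * 1024); // 64 MB for this file only
        out.writeBytes("hello");
        out.close();
        fs.close();
      }
    }

No NameNode or DataNode restart is involved in either case.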

In this particular case, even if the client's config is changed while a job is running, MapReduce may not use the new value for maps or reducers that have already been launched.
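If the goal is a uniform block size for one job's output, a workaround (a
sketch using the old JobConf API; it assumes tasks create their output
FileSystem from the job conf, which is the usual case) is to pin the value in
the job configuration before submitting:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SubmitWithBlockSize {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(SubmitWithBlockSize.class);
        // Pin the block size in the job conf so every task of this job
        // writes with the same value, regardless of later config changes.
        job.setLong("dfs.block.size", 64L * 1024 * 1024); // 64 MB
        // ... configure mapper, reducer, input/output paths, etc. ...
        JobClient.runJob(job);
      }
    }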

Raghu.

On Sun, Apr 12, 2009 at 6:08 AM, Rakhi Khatwani <rakhi.khatw...@gmail.com> wrote:

Hi,
I would like to know if it is feasible to change the block size of Hadoop
while map-reduce jobs are executing, and if not, would the following work?
 1. Stop map-reduce
 2. Stop HBase
 3. Stop Hadoop
 4. Change hadoop-site.xml to reduce the block size (see the snippet below)
 5. Restart all
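For step 4, the relevant property in hadoop-site.xml would look something like
this (the 32 MB value is purely illustrative, and per the replies above it
only affects blocks written after the change):

    <property>
      <name>dfs.block.size</name>
      <value>33554432</value><!-- 32 MB -->
    </property>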
Will the data in the HBase tables be safe, and will it automatically split,
after changing the block size of Hadoop?

Thanks,
Raakhi


