Also note that the block size in recent releases is actually called
"dfs.blocksize" rather than "dfs.block.size", and that you can set it per
job as well. In that scenario, just pass it as an argument when submitting
your job (e.g. hadoop jar yourjob.jar -D dfs.blocksize=134217728, i.e.
128 MB).
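As a rough sketch, a per-job or per-file override could look like the
following (the jar name, class name, and paths are placeholders; the -D
generic option is picked up when the job's driver goes through ToolRunner):

```shell
# Submit a job with a 128 MB block size for the files it writes
# (myjob.jar, com.example.MyJob, and the paths are placeholders).
hadoop jar myjob.jar com.example.MyJob \
  -D dfs.blocksize=134217728 \
  /input/path /output/path

# Or override the block size for a single file at upload time:
hadoop fs -D dfs.blocksize=134217728 -put localfile /hdfs/path
```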


Regards


From: David Sinclair [mailto:[email protected]] 
Sent: Friday, January 03, 2014 10:47 AM
To: [email protected]
Subject: Re: Block size


Change dfs.block.size in hdfs-site.xml to the value you would like if you
want all new files to use a different block size.
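For example, a minimal hdfs-site.xml entry might look like this (the
134217728 value, i.e. 128 MB, is just an illustration; use whichever size
you need, and note that recent releases call the property dfs.blocksize):

```xml
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
  <description>Default block size for newly created files (128 MB).</description>
</property>
```

This only affects files written after the change; existing files keep the
block size they were written with.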


On Fri, Jan 3, 2014 at 11:37 AM, Kurt Moesky <[email protected]> wrote:

I see the default block size for HDFS is 64 MB. Is this a value that can be
changed easily?

