XG,

The newer default is 128 MB [HDFS-4053]. The minimum, however, can be as low as io.bytes.per.checksum (default: 512 bytes) if the user so wishes. To administratively enforce a floor and prevent very low values from being used, see the config introduced via HDFS-4305.
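For illustration, a minimal hdfs-site.xml sketch of the two knobs above; the minimum-block-size property name is my recollection of the HDFS-4305 config and should be verified against your release's hdfs-default.xml:

```xml
<!-- Illustrative sketch only, not copied from any release's defaults -->
<configuration>
  <!-- Default block size for new files: 128 MB (134217728 bytes) -->
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
  <!-- Administrative floor on client-requested block sizes; property name
       assumed from HDFS-4305, verify against your release's docs -->
  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>1048576</value>
  </property>
</configuration>
```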
On Sat, Jan 4, 2014 at 11:38 AM, Zhao, Xiaoguang <[email protected]> wrote:
> As I am new to hdfs, I was told that the minimum block size is 64M, is it
> correct?
>
> XG
>
> On Jan 4, 2014, at 3:12, "German Florez-Larrahondo" <[email protected]> wrote:
>
> Also note that the block size in recent releases is actually called
> “dfs.blocksize” as opposed to “dfs.block.size”, and that you can set it per
> job as well. In that scenario, just pass it as an argument to your job (e.g.
> hadoop bla -D dfs.blocksize=134217728)
>
> Regards
>
> From: David Sinclair [mailto:[email protected]]
> Sent: Friday, January 03, 2014 10:47 AM
> To: [email protected]
> Subject: Re: Block size
>
> Change the dfs.block.size in hdfs-site.xml to be the value you would like if
> you want to have all new files have a different block size.
>
> On Fri, Jan 3, 2014 at 11:37 AM, Kurt Moesky <[email protected]> wrote:
>
> I see the default block size for HDFS is 64 MB, is this a value that can be
> changed easily?

-- 
Harsh J
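As a quick sanity check on the value used in the thread, 134217728 is exactly 128 MB; the per-job override can be sketched as follows (the jar, class, and paths are placeholders, and the hadoop invocation assumes a working installation):

```shell
# 128 MB expressed in bytes -- the value passed to dfs.blocksize
echo $((128 * 1024 * 1024))

# Hypothetical per-job override (placeholders; needs a Hadoop install):
# hadoop jar myjob.jar com.example.MyJob -D dfs.blocksize=134217728 /in /out
```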
