Many thanks to David and Sean Busbey, I got it.

On Nov 9, 2014, at 10:58, Sean Busbey <[email protected]> wrote:

> On Sat, Nov 8, 2014 at 8:13 PM, mail list <[email protected]> wrote:
> 
>> Hi David,
>> 
>> Thanks for your reply.
>> The default block size is 64K, so do you mean that after I write enough
>> rows to exceed the block size, the put command will not succeed?
>> 
> What version of HDFS? The default block size has been 128MB for all of
> Hadoop 2.x and was 64MB for Hadoop 1.x.
> 
> The write-ahead log will attempt to roll once it reaches 95% of this size
> (unless you have changed the default block size or configured an
> alternative write-ahead log block size). Rolling requires interaction
> with the NameNode.
> 
> -- 
> Sean

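For reference, here is a minimal sketch of the settings Sean describes, assuming the standard property names from hdfs-default.xml and hbase-default.xml (the values shown are the usual defaults; confirm the exact keys against your version's defaults files):

    <!-- hdfs-site.xml: HDFS block size (Hadoop 2.x key; 128 MB default) -->
    <property>
      <name>dfs.blocksize</name>
      <value>134217728</value>
    </property>

    <!-- hbase-site.xml: WAL block size, if you want it to differ from the
         HDFS block size (by default it follows the HDFS value) -->
    <property>
      <name>hbase.regionserver.hlog.blocksize</name>
      <value>134217728</value>
    </property>

    <!-- hbase-site.xml: roll the WAL once it reaches this fraction of the
         block size; 0.95 matches the 95% behavior Sean mentions -->
    <property>
      <name>hbase.regionserver.logroll.multiplier</name>
      <value>0.95</value>
    </property>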